Add some benchmarks #1658
base: master
Conversation
Codecov Report
@@ Coverage Diff @@
## master #1658 +/- ##
=======================================
Coverage 88.51% 88.51%
=======================================
Files 17 17
Lines 2255 2255
=======================================
Hits 1996 1996
Misses 259 259
Continue to review full report at Codecov.
self.f.close()

def time_virtual_dataset(self):
    self.f.create_virtual_dataset('vdata', self.layout, fillvalue=-1)
I haven't looked at all of these in detail, but does it make sense to time only this bit, leaving the layout/source bits in setup? For that matter, does it make sense to benchmark creating virtual datasets at all? I can probably make a good case that it's not particularly important for performance.
The more general point is that we need to design benchmarks that can tell us something interesting, which isn't always easy.
We don't seem to run the benchmarks as part of the CI. I think we should think about how/where/when we want to run these.
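On the point about timing only the `create_virtual_dataset` call: asv runs `setup()` outside the timed region, so only the body of each `time_*` method is measured. A minimal sketch of that split, using a stdlib file-write as a stand-in for the h5py operation (the class and method names here are illustrative, not from the PR):

```python
import os
import tempfile


class TimeFileCreation:
    """asv-style benchmark: setup() and teardown() are untimed,
    so only the body of time_* methods is measured."""

    def setup(self):
        # Untimed: prepare inputs here (analogous to building the
        # layout and source selections before timing the VDS call).
        self.dir = tempfile.mkdtemp()
        self.data = b"x" * 1024

    def time_write_file(self):
        # Timed: only the operation of interest.
        with open(os.path.join(self.dir, "out.bin"), "wb") as f:
            f.write(self.data)

    def teardown(self):
        # Untimed cleanup after each measurement.
        for name in os.listdir(self.dir):
            os.remove(os.path.join(self.dir, name))
        os.rmdir(self.dir)
```

Keeping expensive preparation in `setup()` is what makes the resulting number meaningful: the benchmark then answers "how long does the timed operation take" rather than "how long does the whole fixture take to build".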
I probably know how to do it. I'll try it in the next few days, and maybe we should also add an item to the Sphinx documentation.
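For context, setting asv up usually starts with an `asv.conf.json` at the repository root. A minimal sketch is below; the values (project name, directories, environment type) are illustrative assumptions, not the actual configuration from this PR:

```json
{
    "version": 1,
    "project": "h5py",
    "repo": ".",
    "branches": ["master"],
    "environment_type": "virtualenv",
    "benchmark_dir": "benchmarks",
    "env_dir": ".asv/env",
    "results_dir": ".asv/results",
    "html_dir": ".asv/html"
}
```

With this in place, `asv run` discovers benchmark classes under `benchmark_dir` and stores results per-commit under `results_dir`.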
I think benchmarks on CI are kind of tricky - you really want to run them on a consistent system with no other significant work competing for the CPU, memory or disk access, so that one run can be compared to the next. I don't think that's what any of the free CI services will offer, for obvious reasons. I've got a Raspberry Pi sitting around not doing much. Maybe I can get that set up to run benchmarks regularly.
You are right. I think I should stop moving forward on this, because I realize that
Raspberry Pi YES 🚀
Force-pushed from c95dc62 to 05b0294
move test cases into benchmarks directory
I wonder if we want to move the benchmarks to their own repository, as that would make it easier to run the latest set of benchmarks against different commits without having to copy files around?
I think asv already takes care of running the same benchmark code against library code from different commits. It seems I can write a new benchmark and then run it against older commits, in any case. But I don't particularly have a good argument for having them in the same repo, if you think a separate repo is better.
Cool, I didn't realise asv could do that.
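The cross-commit workflow mentioned above is driven from the asv command line. A rough session sketch (the revision names are illustrative, not from this PR):

```
$ asv run v2.10.0..master        # run the current benchmark suite against a range of commits
$ asv continuous master my-branch  # benchmark two revisions and report regressions
$ asv compare master my-branch     # tabulate results between two revisions
```

Because asv checks out and builds each revision into its own environment while always using the benchmark code from the working tree, newly written benchmarks can be applied retroactively to older commits.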
major changes:
TODO:
CC: @takluyver