Benchmark #863
Comments
Another approach we could take is to check the duration of one of our pytests (https://howchoo.com/python/how-to-measure-unit-test-execution-times-in-pytest) and then fail if it exceeds some threshold. Kind of like a time-based regression test.
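A minimal sketch of what such a time-based regression test could look like. The `sphinx_build_factory` fixture usage and the threshold value are assumptions for illustration, not the actual test in the suite:

```python
import time

# Hypothetical threshold in seconds; would need tuning against CI hardware.
MAX_BUILD_SECONDS = 30


def test_build_is_fast_enough(sphinx_build_factory):
    """Fail if a basic docs build takes longer than the chosen threshold."""
    start = time.perf_counter()
    sphinx_build_factory("base").build()  # assumed fixture usage, not the real test body
    elapsed = time.perf_counter() - start
    assert elapsed < MAX_BUILD_SECONDS, (
        f"build took {elapsed:.1f}s (limit {MAX_BUILD_SECONDS}s)"
    )
```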
We have something like that in MNE; I can implement it here if you want. If you can point to which test(s) you want timed, that would help.
I think this one is the first and simplest build: `tests/test_build.py`, line 55 (at fb0a4e7).
So maybe after running our test suite, we just run that one like 5 times, take the median, and then check it hasn't exceeded some amount?
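A rough sketch of that idea, shelling out to pytest for the single build test and comparing the median wall-clock time against a limit. The test id and the threshold here are placeholders:

```python
import statistics
import subprocess
import time

N_RUNS = 5
THRESHOLD_SECONDS = 60  # hypothetical limit; would need tuning on CI hardware

# Run the single build test several times and take the median to smooth out noise.
timings = []
for _ in range(N_RUNS):
    start = time.perf_counter()
    subprocess.run(
        ["pytest", "tests/test_build.py::test_build_html", "-q"],  # placeholder test id
        check=True,
    )
    timings.append(time.perf_counter() - start)

median = statistics.median(timings)
print(f"median: {median:.1f}s over {N_RUNS} runs")
if median > THRESHOLD_SECONDS:
    raise SystemExit(f"median build-test time {median:.1f}s exceeded {THRESHOLD_SECONDS}s")
```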
To more quickly identify performance regressions we should add some benchmarking tests. This has been suggested before (see #381 (comment) for example). And maybe we could have caught #855 earlier in the release process for 0.10.
We could build a package's documentation (e.g., NetworkX, since it is pure Python) with the main branch every week or so (if there have been any commits since the last run of the benchmarking suite). @MridulS has been looking into benchmarking for NetworkX, so he may have some input (or may be willing to help).
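One possible shape for what that weekly job would run, assuming the NetworkX docs can be built with a plain `sphinx-build` call and that total wall-clock time is the metric we care about. The paths are placeholders:

```python
import subprocess
import time

# Hypothetical paths: a weekly CI job would check out NetworkX fresh and install
# the theme from the main branch before running this.
SOURCE_DIR = "networkx/doc"
BUILD_DIR = "networkx/doc/_build/html"

start = time.perf_counter()
subprocess.run(["sphinx-build", "-b", "html", SOURCE_DIR, BUILD_DIR], check=True)
elapsed = time.perf_counter() - start

# Recording the timing as a CI artifact (rather than failing outright) would let
# us compare week over week and spot regressions like #855 sooner.
print(f"NetworkX docs built in {elapsed:.1f}s")
```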