
Add basic performance testing to the gate #1450

Open · kgriffs opened this issue Feb 20, 2019 · 4 comments · May be fixed by #1825

@kgriffs (Member) commented Feb 20, 2019

Create a basic performance test based on historical trends and add it as a Travis job. We should consider failing the build when a performance regression is detected.

Our first pass at this can be very basic; we can always improve it over time.
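
For illustration, a minimal sketch of what such a gate could look like (this is not Falcon's actual tooling; the resource, baseline, and tolerance below are made-up placeholders, and it uses the Falcon 3 `falcon.App` spelling):

```python
import sys
import timeit

import falcon
from falcon import testing


class ItemsResource:
    def on_get(self, req, resp):
        resp.media = {'items': []}


app = falcon.App()
app.add_route('/items', ItemsResource())
client = testing.TestClient(app)

# Hypothetical baseline derived from historical trends (seconds per
# 1,000 simulated requests); both numbers are placeholders.
BASELINE = 0.5
TOLERANCE = 1.2  # allow 20% noise before failing the build

# Take the best of several runs to reduce scheduler noise.
elapsed = min(timeit.repeat(
    lambda: client.simulate_get('/items'), repeat=5, number=1000))

print(f'1000 requests took {elapsed:.3f}s (baseline {BASELINE}s)')
if elapsed > BASELINE * TOLERANCE:
    sys.exit('Performance regression detected; failing the gate.')
```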

@kgriffs added the `needs contributor` and `maintenance` labels Feb 20, 2019
@kgriffs added this to the Version 3.0 milestone Feb 20, 2019
@vytas7 (Member) commented Feb 20, 2019

Just a side note: I think this is both a great initiative 👍 and, at the same time, a tricky task, as Travis instances are known for rather wild fluctuations in throughput.

(See also travis-ci/travis-ci#352 -- I'm not sure whether the problem is still relevant to the same extent today, as the thread is a bit old, but the discussion there has some interesting ideas, btw.)

@kgriffs (Member, Author) commented May 5, 2019

That's true. We'd probably have to spin up our own boxes.

@kgriffs modified the milestones: Version 3.0 → Version 3.1 May 5, 2019
@vytas7 mentioned this issue Nov 2, 2019
@vytas7 self-assigned this Dec 12, 2020
@vytas7 added the `in progress` label and removed the `needs contributor` label Dec 12, 2020
@vytas7 (Member) commented Dec 13, 2020

@njsmith has kindly pointed me to this article: https://pythonspeed.com/articles/consistent-benchmarking-in-ci/
I did some cursory experiments, and (after getting past some initial gotchas) it looked promising.

We could establish a handful of basic metrics with relatively high precision and add one or more CI gates to track them, using either raw instruction counts as reported by Valgrind or adjusted scores as per https://github.com/pythonspeed/cachegrind-benchmarking.

We could probably start with CPython and CPython+Cython checks. The PyPy JIT did not fluctuate as much as I feared either, but our micro-optimizations are largely geared towards CPython anyway; PyPy checks could be added as a further improvement.
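
To make the instruction-count idea concrete, here is a rough sketch of a Valgrind-based check (not the actual implementation in the linked PR; the benchmark script name, baseline file, and 2% tolerance are assumptions for illustration):

```python
import re
import subprocess
import sys

# Run a hypothetical benchmark script under Cachegrind; the summary
# (including instruction counts) is printed to stderr.
result = subprocess.run(
    ['valgrind', '--tool=cachegrind', 'python', 'benchmark.py'],
    capture_output=True, text=True)

# Cachegrind prints a summary line like "==12345== I refs: 1,234,567"
match = re.search(r'I\s+refs:\s+([\d,]+)', result.stderr)
instructions = int(match.group(1).replace(',', ''))

with open('cachegrind-baseline.txt') as f:  # assumed baseline file
    baseline = int(f.read())

print(f'{instructions} instructions (baseline {baseline})')

# Instruction counts are far more stable than wall-clock time on shared
# CI runners, so even a tight 2% tolerance should be realistic.
if instructions > baseline * 1.02:
    sys.exit('Instruction count regression; failing the gate.')
```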

@CaselIT (Member) commented Dec 13, 2020

The SQLAlchemy test suite contains something along these lines, focused only on function call counts.

I can point to the relevant bits if that would be useful.
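
For anyone curious, the general shape of such a check (a minimal sketch, not SQLAlchemy's actual harness, which is more elaborate; the budget of 50 calls is a made-up placeholder) could be:

```python
import cProfile
import pstats


def code_under_test():
    # Stand-in for the framework code path being guarded.
    return sum(range(100))


# Count function calls along the code path; unlike wall-clock timing,
# call counts are deterministic across machines and CI runners.
profiler = cProfile.Profile()
profiler.enable()
code_under_test()
profiler.disable()

stats = pstats.Stats(profiler)
assert stats.total_calls <= 50, (
    f'call count grew to {stats.total_calls}; possible regression')
```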

@vytas7 linked a pull request Dec 20, 2020 that will close this issue
@vytas7 modified the milestones: Version 3.1 → Version 3.2 Mar 13, 2022
@vytas7 modified the milestones: Version 4.1 → Version 4.0 Mar 30, 2022