
Continuous benchmark #102

Open
breezewish opened this issue Mar 3, 2022 · 3 comments

@breezewish (Member)

As we care a lot about performance, it may be necessary to run benchmarks for each PR submission.

@breezewish breezewish added this to the GA milestone Mar 3, 2022
@breezewish (Member, Author)

We could try utilizing GitHub Actions for continuous benchmarking, comparing each run against previous results to detect relative regressions. Ref ethereumjs/ethereumjs-monorepo#897
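For illustration, a relative-regression check on GitHub Actions might look like the sketch below, using the `benchmark-action/github-action-benchmark` action (the threshold value and benchmark command are assumptions, not decisions from this thread):

```yaml
# Hypothetical workflow sketch: run cargo benchmarks on each PR and
# alert when a result regresses relative to earlier runs.
name: Continuous benchmark
on:
  pull_request:

jobs:
  bench:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Run benchmarks
        run: cargo bench | tee bench-output.txt
      - name: Compare against previous results
        uses: benchmark-action/github-action-benchmark@v1
        with:
          tool: 'cargo'
          output-file-path: bench-output.txt
          # Flag a regression if a benchmark is 1.5x slower than before
          alert-threshold: '150%'
          fail-on-alert: true
          github-token: ${{ secrets.GITHUB_TOKEN }}
```

Because this compares a PR's numbers against the same repository's historical runs on similar runners, it sidesteps absolute-timing noise to some degree, though runner-to-runner variance remains a concern.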

@taqtiqa-mark (Contributor)

High priority IMO. Regressions are devilishly difficult to identify after the fact. The criterion author had a crate targeting CI use cases. Let me try to dig it up.

@taqtiqa-mark (Contributor)

IMO the approach to take is to create benchmarks using iai rather than criterion (not that you can't do both).
However, I believe that if you want to accept/reject/evaluate the performance impact of a PR using GitHub Actions or similar tooling, you need to abstract away the hardware/virtualization, which leaves one looking at iai.

Not sure if there are alternatives.
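For context on the iai suggestion: iai runs each benchmark once under Cachegrind and reports instruction counts rather than wall-clock time, which is why its results are largely independent of the (virtualized) hardware a CI runner happens to provide. A minimal sketch, closely following iai's own documentation (`fibonacci` is a placeholder workload; this requires the `iai` crate and Valgrind on the runner):

```rust
// Minimal iai benchmark sketch; `fibonacci` is a placeholder workload.
// iai executes each benchmark function under Cachegrind and reports
// instruction counts instead of timings.
use iai::black_box;

fn fibonacci(n: u64) -> u64 {
    match n {
        0 => 0,
        1 => 1,
        _ => fibonacci(n - 1) + fibonacci(n - 2),
    }
}

fn bench_fibonacci() -> u64 {
    // black_box keeps the compiler from constant-folding the call away
    fibonacci(black_box(20))
}

iai::main!(bench_fibonacci);
```

The trade-off is that instruction counts are a proxy: they won't capture cache- or branch-predictor-sensitive regressions the way criterion's wall-clock measurements can, which is one reason to run both.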
