
feat: Add initial benchmarking setup #3120

Open · wants to merge 8 commits into mybatis:master from epochcoder:feature/mybatis-benchmarks-setup
Conversation

@epochcoder (Contributor) commented Mar 25, 2024

Will execute on pull requests, run all benchmarks against master, and post a comparison as a PR comment.

This will allow the maintainers of MyBatis to have concrete results for important flows on every pull request, measured against master. I believe this would greatly help with PRs that say, "I've improved performance".

Thanks to @gavlyukovskiy for the benchmarking pipeline setup

willie added 2 commits March 25, 2024 19:06
Will execute on pull requests, and run all benchmarks against master and post a comparison as a PR comment
@coveralls commented Mar 25, 2024

Coverage Status

coverage: 87.176% (+0.009%) from 87.167% when pulling cefe77b on epochcoder:feature/mybatis-benchmarks-setup into e9e9a29 on mybatis:master.

@epochcoder (Contributor Author) commented Mar 25, 2024

It ran the initial benchmarks on this PR, but failed to post the comments due to permissions.

I changed the event to pull_request_target, so it will only start running on PRs after this is merged (since the workflow needs to be in master).

The reason for the event change is explained here

This is what it would have posted after the initial commit:

Note

These results are affected by shared workloads on GitHub runners. Use the results only to detect possible regressions, but always rerun on a more stable machine before drawing any conclusions!

Benchmark results (pull-request, 1339c93)

Benchmark                                                           Mode  Cnt    Score   Error  Units
BasicBlogBenchmark.retrieveSingleBlog                               avgt    5    0.588 ± 0.003  us/op
BasicBlogBenchmark.retrieveSingleBlogUsingConstructorWithResultMap  avgt    5  104.569 ± 1.134  us/op

See https://github.com/mybatis/mybatis-3/pull/3120/checks?sha=1339c93d34e870d0794ec6de460841adee5a903c for where it tried to post ;-)
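For context, a JMH benchmark producing avgt results like the ones in the table above might look roughly like the minimal sketch below. The config path, statement id, and benchmark body are illustrative assumptions, not the actual code in this PR.

    import java.util.concurrent.TimeUnit;

    import org.apache.ibatis.io.Resources;
    import org.apache.ibatis.session.SqlSession;
    import org.apache.ibatis.session.SqlSessionFactory;
    import org.apache.ibatis.session.SqlSessionFactoryBuilder;
    import org.openjdk.jmh.annotations.*;

    // Minimal sketch of a JMH average-time benchmark; names and paths are illustrative.
    @State(Scope.Benchmark)
    @BenchmarkMode(Mode.AverageTime)
    @OutputTimeUnit(TimeUnit.MICROSECONDS)
    @Warmup(iterations = 3)
    @Measurement(iterations = 5)
    @Fork(1)
    public class BasicBlogBenchmark {

        private SqlSessionFactory sqlSessionFactory;

        @Setup(Level.Trial)
        public void setUp() throws Exception {
            // Config/mapper parsing happens once per trial, so only the mapping work is measured.
            sqlSessionFactory = new SqlSessionFactoryBuilder()
                    .build(Resources.getResourceAsStream("mybatis-config.xml"));
        }

        @Benchmark
        public Object retrieveSingleBlog() {
            try (SqlSession session = sqlSessionFactory.openSession()) {
                // Hypothetical mapped statement; the real benchmark in this PR may differ.
                return session.selectOne("selectBlog", 1);
            }
        }
    }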

@harawata (Member) commented Mar 25, 2024

@epochcoder ,

Thank you for the PR, but I don't like the idea very much.

In terms of performance, our focus is mostly on the mapping part (i.e. setting parameters, getting column values).
Parsing config/mappers is a heavy task, but it occurs only once, at startup, in a production environment.

So, this benchmark will not reflect the effect on production usage correctly and could encourage micro-optimization (which, in most cases, is not a good thing).
If a PR is aimed at improving performance, the effect should be measured against particular (and realistic) usage scenarios (which usually requires a fair amount of data).

@epochcoder (Contributor Author)

Thank you for the feedback @harawata; I should have been clearer. The included benchmark is just a dummy; this PR was meant to showcase the possibility of including benchmarks in MyBatis that matter, whatever that might mean for the project.

I've gone over a few past PRs and seen benchmarks here and there, and it's a pity that they cannot stay within the code base and demonstrate particular improvements (or, most importantly, regressions).

The benchmark I have on my branch of #101 reflects a more real scenario and definitely helped me ensure we do not affect current result mapping performance.

I do agree that we definitely do not want micro-optimizations! That was also not meant to be the purpose here.

But feel free to close if you think this was not a good idea ;-) and thanks for taking a look!

@harawata (Member)

@epochcoder ,

Thank you for the follow-up.
If it's measured against realistic usages, it's not necessarily a bad idea.
I'm still unsure about running resource-consuming benchmarks, but GitHub is strong enough, maybe?

And, in case we adopt this, I would also like to see memory usage, which is just as important as execution speed (I assume it's not included in the current setup, but correct me if I am wrong).

@hazendaz @kazuki43zoo Any thoughts?

@gavlyukovskiy

To share some experience, we've been using this setup for a couple of months on one of our open-source projects, and it has helped us quite a lot with detecting regressions early and then investigating the results further. We run it on every PR (around 50 PRs so far) and currently have 10 benchmarks in total, which take around 6 minutes in total to execute on both the master and PR branches (though you could tune that a bit to run fewer iterations or fewer benchmarks).
I think it is worth having the benchmarking setup for this project, even if you don't always run it on CI but run it locally instead.

To reduce the noise, the pipeline could also use labels / comments / manual dispatch as a trigger, so that you trigger it manually when a PR touches something that could affect performance. We used that setup on one of our projects (not open-source, but also on GitHub). If that would be relevant for you, I could help with the implementation.

I'm still unsure about running resource-consuming benchmarks, but GitHub is strong enough, maybe?

From what I could find, GitHub doesn't count this usage towards its GitHub Actions limits for open-source projects. Also, despite running on shared GitHub runners, the results were consistent (±10%) for us, though it's better to rerun the benchmarks on stable hardware when in doubt.
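As a rough illustration of the points above about running the benchmarks locally and tuning the number of iterations, JMH benchmarks can be launched programmatically via its Runner API; the include pattern and iteration counts below are assumptions for a quick local check, not values from this PR.

    import org.openjdk.jmh.runner.Runner;
    import org.openjdk.jmh.runner.options.Options;
    import org.openjdk.jmh.runner.options.OptionsBuilder;

    // Hypothetical local launcher; pattern and counts are illustrative.
    public class LocalBenchmarkRunner {
        public static void main(String[] args) throws Exception {
            Options options = new OptionsBuilder()
                    .include("BasicBlogBenchmark")   // run only a subset of benchmarks
                    .warmupIterations(2)             // fewer iterations for a quick local run
                    .measurementIterations(3)
                    .forks(1)
                    .build();
            new Runner(options).run();
        }
    }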

@epochcoder (Contributor Author) commented Mar 27, 2024

@harawata

And, in case we adopt this, I would also like to see memory usage which is equally important as execution speed (I assume it's not included in the current setup, but correct me if I am wrong).

That's correct ;-) I'll add it to this PR. After #101 merges, I can create a PR to drop these dummy benchmarks in favor of the ones I built in https://github.com/epochcoder/mybatis-3/commits/feature/101-jmh-performance-test/
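On the memory-usage point, JMH's built-in GC profiler can report allocation rates alongside the timing results; this is a minimal sketch of how it could be wired in, not part of the current setup.

    import org.openjdk.jmh.profile.GCProfiler;
    import org.openjdk.jmh.runner.Runner;
    import org.openjdk.jmh.runner.options.Options;
    import org.openjdk.jmh.runner.options.OptionsBuilder;

    // Sketch: attach the GC profiler so allocation metrics appear next to the timing scores.
    public class MemoryBenchmarkRunner {
        public static void main(String[] args) throws Exception {
            Options options = new OptionsBuilder()
                    .include("BasicBlogBenchmark")       // hypothetical benchmark class
                    .addProfiler(GCProfiler.class)       // adds gc.alloc.rate / gc.alloc.rate.norm columns
                    .build();
            new Runner(options).run();
        }
    }

The same profiler is also available on the JMH command line as -prof gc.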

Comment on lines 32 to 34
- uses: actions/checkout@v4
with:
path: benchmark-pull-request


FYI we also started using pull_request_target recently and one difference we found is that for this event, the ref has to be explicit:
https://github.com/Breus/json-masker/blob/master/.github/workflows/benchmark.yml#L34

@epochcoder (Contributor Author)

Thanks for this! I'll update :)

@epochcoder (Contributor Author)

Shamelessly added all of your updates in here ;-)
