Remove benchmark projects which served their purpose #16490
Conversation
✅ No release notes required
The TaskPerf benchmarks have very good examples of state machine usage, such as coroutines or async2. These will need to be preserved somewhere.
It is in the RFC for resumable code, which is a more apparent place to look for information on a specific feature than a benchmark. I have never attempted to write a coroutine myself, so if the RFC sample is not sufficient, someone should say so.
Not sure if they match 1:1; I would expect they don't. But having those somewhere as examples would be good.
I can extend the RFC with them (e.g. as standalone files next to the RFC, linked from it); that seems like an appropriate place for people who want to write their own resumable state machine to search.
I would like to keep at least something from CompiledCodeBenchmarks; the mechanism of comparing C# to F# perf is helpful and can serve as guidance, and there were cases when these were copy-pasted/extended, e.g. here. It doesn't have to be all of it (it can be just one project, or both can be reduced), although it all has quite a low maintenance cost. Otherwise I am fine with the changes.
To be honest, there is nothing special about it at all; this is not the place to act as documentation on how to use the BDN API. As written in the description, the real obstacles in compiler benchmarking are knowing how to set things up to test locally developed changes across the toolchain (which might involve msbuild, the compiler service for editor tooling, large projects with dependencies, etc.). These benchmarks do not show any of that, and their structure just mimics standard F# code benchmarks which can be found across GitHub just fine (e.g. material from Fast F#/Matthew Crews).
Well, these benchmarks are different from the other benchmarks we have, hence they're special 👀 I don't really know about similar material; I hadn't heard of / had forgotten about Fast F#, and I don't know where to get started there (I see the website, but I cannot find the code easily and the search is broken). On the other hand, if someone comes along to enhance or speed up FSharp.Core again, they could now just copy-paste and modify a file in these projects, without needing to figure out where to put the benchmarks and how to structure them. It's in our interest to make benchmarking as easy as possible. Of what we have there, MicroPerf seems more useful to me, and it is a smaller and much less messy thing than TaskPerf. I have recently learned a lot about FSharp.Core just by executing all these benchmarks locally. I am okay with being outvoted here; I am just saying that I have a newcomer's perspective, and from that perspective this was of some use.
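For readers who do want to structure a microbenchmark without the deleted projects as a template: below is a minimal sketch of a BenchmarkDotNet microbenchmark of the kind discussed here. The class and member names (`SumBenchmarks`, `ArraySum`, `SeqSum`) and the workload are illustrative assumptions, not code from the repository; it assumes the `BenchmarkDotNet` NuGet package is referenced.

```fsharp
// Hypothetical sketch of a BDN microbenchmark comparing two ways of
// summing the same data; names and workload are illustrative only.
open BenchmarkDotNet.Attributes
open BenchmarkDotNet.Running

[<MemoryDiagnoser>]                       // also report allocations
type SumBenchmarks() =
    let data = Array.init 10_000 id       // fixed input shared by both benchmarks

    [<Benchmark(Baseline = true)>]
    member _.ArraySum() = Array.sum data  // baseline implementation

    [<Benchmark>]
    member _.SeqSum() = Seq.sum data      // candidate compared against baseline

[<EntryPoint>]
let main _ =
    BenchmarkRunner.Run<SumBenchmarks>() |> ignore
    0
```

Marking one method `Baseline = true` makes BDN print the others as ratios against it, which is the C#-vs-F# comparison mechanism mentioned above in miniature.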
This is simply not true. I did several of those myself, and the existing ones helped me exactly zero times, even though I knew about their existence. The easy parts, which the internet already has information about:
The difficult parts, which the deleted benchmarks do not help with anyway:
This cleanup is to avoid the sunk cost of maintaining benchmark projects which have already served their purpose; they will remain available in the GH history if they are needed.
It is important to keep exemplary setup steps (e.g. how to benchmark with a locally built FSharp.Core, with or without running the build scripts, etc.), but that is not the case for the benchmarks being deleted in this PR.
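As a concrete illustration of the kind of setup step worth preserving: one common way to benchmark against a locally built FSharp.Core is to override the assembly reference in the benchmark project file. This is a hedged sketch; the `HintPath` below is a hypothetical local build output path, not a path taken from this repository's documentation.

```xml
<!-- Hypothetical fsproj fragment: point the benchmark project at a locally
     built FSharp.Core instead of the NuGet package. Adjust HintPath to
     wherever your local build actually places FSharp.Core.dll. -->
<ItemGroup>
  <Reference Include="FSharp.Core">
    <HintPath>..\..\artifacts\bin\FSharp.Core\Release\netstandard2.0\FSharp.Core.dll</HintPath>
  </Reference>
</ItemGroup>
```

The point is that this kind of wiring, not the benchmark bodies themselves, is the knowledge that is hard to rediscover.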
In some of them, the important setup steps have actually been deleted during earlier cleanup attempts.
The ones which do show specific setup steps and are educational from a setup perspective (rather than for the functions they test) were kept; the same goes for more overarching themes (e.g. comparing compiler performance across NuGet versions, also kept).
TLDR:
A living body of code (i.e. one needing maintenance) whose purpose is to show how BDN works on a single-purpose feature does not need to stay in the codebase forever.