
Remove benchmark projects which served their purpose #16490

Open · wants to merge 2 commits into main

Conversation

@T-Gro (Member) commented Jan 4, 2024

This cleanup avoids sunk costs maintaining benchmark projects that have already served their purpose; they remain available in GitHub history if they are ever needed.

It is important to keep exemplary setup steps (e.g. how to benchmark with a locally built FSharp.Core, with or without running build scripts, etc.), but that is not the case for the benchmarks deleted in this PR.
In some of them, the important setup steps were actually deleted during earlier cleanup attempts.

The ones which do show specific setup steps and are educational from a setup perspective (rather than from the functions they test) were kept, as were those covering more overarching themes (e.g. comparing compiler performance across NuGet versions, also kept).

TLDR:
A living body of code (= needing maintenance) whose only purpose is to show how BDN works on a single-purpose feature does not need to stay in the codebase forever.

@github-actions bot (Contributor) commented Jan 4, 2024

✅ No release notes required

@vzarytovskii (Member) commented:

Task perf benchmarks have very good examples of state machine usage, such as coroutines or async2. These will need to be preserved somewhere.

@T-Gro (Member, Author) commented Jan 4, 2024

> Task perf benchmarks have very good examples of state machine usage, such as coroutines or async2. These will need to be preserved somewhere.

It is in the RFC for resumable code, which is a more obvious place to look for information on a specific feature than a benchmark.

I have never attempted to write a coroutine, so if the RFC sample is not sufficient, someone should say so.

@vzarytovskii (Member) commented:

> > Task perf benchmarks have very good examples of state machine usage, such as coroutines or async2. These will need to be preserved somewhere.
>
> It is in the RFC for resumable code, which is a more obvious place to look for information on a specific feature than a benchmark.
>
> I have never attempted to write a coroutine, so if the RFC sample is not sufficient, someone should say so.

Not sure if they match 1:1; I would expect they don't. But having those somewhere as examples would be good.

@T-Gro T-Gro marked this pull request as ready for review January 4, 2024 12:32
@T-Gro T-Gro requested a review from a team as a code owner January 4, 2024 12:32
@T-Gro (Member, Author) commented Jan 4, 2024

> > Task perf benchmarks have very good examples of state machine usage, such as coroutines or async2. These will need to be preserved somewhere.
> >
> > It is in the RFC for resumable code, which is a more obvious place to look for information on a specific feature than a benchmark.
> >
> > I have never attempted to write a coroutine, so if the RFC sample is not sufficient, someone should say so.
>
> Not sure if they match 1:1; I would expect they don't. But having those somewhere as examples would be good.

I can extend the RFC with them (e.g. standalone files next to the RFC, linked from it); that seems like the appropriate place to search for people who want to write their own resumable state machine.

@psfinaki (Member) commented Jan 4, 2024

I would like to keep at least something from CompiledCodeBenchmarks; the mechanism of comparing C# to F# performance is helpful and can serve as guidance, and there were cases when they were copy-pasted/extended, e.g. here.

Doesn't have to be all of it (can be just one project, or both can be reduced), although the maintenance cost of all of it is quite low.

Otherwise I am fine with the changes.

@T-Gro (Member, Author) commented Jan 4, 2024

> I would like to keep at least something from CompiledCodeBenchmarks; the mechanism of comparing C# to F# performance is helpful and can serve as guidance, and there were cases when they were copy-pasted/extended, e.g. here.
>
> Doesn't have to be all of it (can be just one project, or both can be reduced), although the maintenance cost of all of it is quite low.
>
> Otherwise I am fine with the changes.

To be honest, there is nothing special about it at all; this is not the place to act as documentation on how to use the BDN API.

As written in the description, the real obstacle in compiler benchmarking is knowing how to set things up to test locally developed changes across the toolchain (which might involve MSBuild, the compiler service for editor tooling, large projects with dependencies, etc.).

These benchmarks do not show any of that, and their structure just mimics standard F# code benchmarks which can be found across GitHub just fine (e.g. material from Fast F#/Matthew Crews).
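For context, the "standard F# code benchmark" structure referred to here is quite small. A minimal sketch (hypothetical benchmark names and workload, not one of the deleted projects; assumes a console project referencing the BenchmarkDotNet NuGet package):

```fsharp
// Minimal BenchmarkDotNet sketch: compare two ways of summing a collection.
// The class and member names are illustrative, not from the repo.
open BenchmarkDotNet.Attributes
open BenchmarkDotNet.Running

[<MemoryDiagnoser>]
type SumBenchmarks() =
    let data = [| 1 .. 10_000 |]

    // Baseline: sum the array directly
    [<Benchmark(Baseline = true)>]
    member _.ArraySum() = Array.sum data

    // Alternative: convert to a list first, then sum
    [<Benchmark>]
    member _.ListSum() = data |> List.ofArray |> List.sum

[<EntryPoint>]
let main _ =
    BenchmarkRunner.Run<SumBenchmarks>() |> ignore
    0
```

Run with `dotnet run -c Release`; BDN itself handles warmup, iteration counts, and result reporting, which is why such projects carry little structural knowledge of their own.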

@psfinaki (Member) commented Jan 4, 2024

Well, these benchmarks are different from the other benchmarks we have, hence they're special 👀

I don't know about similar stuff, really; I hadn't heard of / had forgotten about Fast F#, and I don't know where to get started there (I see the website, but I cannot find the code easily and the search is broken).

On the other hand, if someone comes to enhance or speed up FSharp.Core again, they could now just copy-paste and modify a file in these projects, without needing to figure out where to put the benchmarks and how to structure them. It's in our interest to make benchmarking as easy as possible. Of what we have there, MicroPerf seems more useful to me, and it is a smaller and much less messy thing than TaskPerf.

I have recently learned a lot about FSharp.Core just by executing all these benchmarks locally. I am okay with being outvoted here; just saying that I have a newcomer's perspective, and from that perspective this was of some use.

@T-Gro (Member, Author) commented Jan 4, 2024

> On the other hand, if someone comes to enhance or speed up the F# core again, they would now just copypaste and modify

This is simply not true.
There were several PRs improving or building out FSharp.Core in the last year, and none of them did this, because a single-purpose benchmark belongs in the PR, not in the codebase.

I did several of those myself, and the existing ones helped me exactly zero times, even though I knew about their existence.

The easy parts, which internet already has information about:

  • How to use BDN
  • How to make calls in F#

The difficult parts, which the deleted benchmarks do not help with anyway:

  • How to set up things specific to the compiler, to make sure benchmarks use the locally built FSharp.Core and not the published one
  • How to compare the published FSharp.Core vs. a local build of it, in a single comparative benchmark
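The first of these difficult parts is the genuinely non-obvious step. As a hedged sketch (the HintPath is an assumption; the actual artifact path depends on the local build configuration), redirecting a benchmark project to a locally built FSharp.Core can look like this in the `.fsproj`:

```xml
<!-- Sketch: reference a locally built FSharp.Core instead of the published
     NuGet package. The HintPath below is hypothetical; adjust it to the
     local build output directory. -->
<PropertyGroup>
  <!-- Stop the SDK from adding the implicit FSharp.Core package reference -->
  <DisableImplicitFSharpCoreReference>true</DisableImplicitFSharpCoreReference>
</PropertyGroup>
<ItemGroup>
  <Reference Include="FSharp.Core">
    <HintPath>..\..\artifacts\bin\FSharp.Core\Release\netstandard2.0\FSharp.Core.dll</HintPath>
  </Reference>
</ItemGroup>
```

This is exactly the kind of setup knowledge worth keeping in exemplary form, as opposed to the single-purpose benchmark bodies being deleted.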


4 participants