[BUG] Upgrading v6.0.1 to v6.0.2 increases instrumentation time #1649

Open · erichiller opened this issue Apr 11, 2024 · 6 comments
Labels: needs repro (Needs repro to be investigated, cannot repro in local), untriaged (To be investigated)

Comments

@erichiller

After upgrading from v6.0.1 to v6.0.2, the maximum time it takes to instrument increases from 26 seconds in v6.0.1 to more than 1.5 minutes (note: I'm measuring from when the test command starts to when the first test begins).
This occurs when changing only the coverlet.collector package's version (the repo is otherwise identical).
Normally this would just be annoying, but since I'm using --blame-hang-timeout 90s when testing, the test host crashes before any tests begin. The problem is compounded because I'm using GitHub Actions, and for whatever reason the process crash isn't detected, so the test run keeps going for as long as the runner lets it.

Going through the differences from v6.0.1 to v6.0.2, it looks like the use of System.Text.Json was reverted. I didn't see any specific code that appeared to be the culprit, but I'm not familiar with coverlet's code.

Let me know if you'd like any other information.

Environment: .NET 8.0
Coverlet: v6.0.1 / v6.0.2
OS: Ubuntu 22.04
Arch: x64
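
A minimal sketch of the version comparison described above, assuming a test project at tests/MyTests.csproj that references coverlet.collector (the project path is a placeholder; the --blame-hang-timeout 90s flag and the version numbers come from this report):

```bash
# Sketch: pin coverlet.collector to each version and compare the time between
# "dotnet test" starting and the first test beginning (project path is a placeholder).
dotnet add tests/MyTests.csproj package coverlet.collector --version 6.0.1
dotnet test tests/MyTests.csproj --collect:"XPlat Code Coverage" --blame-hang-timeout 90s

dotnet add tests/MyTests.csproj package coverlet.collector --version 6.0.2
dotnet test tests/MyTests.csproj --collect:"XPlat Code Coverage" --blame-hang-timeout 90s
```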

github-actions bot added the untriaged label on Apr 11, 2024
@daveMueller
Collaborator

@erichiller thanks for reporting this. There is another performance issue reported in #1646 for which a PR already exists. Maybe it has the same cause. I'll let you know once it is merged so you can give it a try with the nightly.

@daveMueller
Collaborator

@erichiller The PR I mentioned is now merged and can be consumed via our nightly. Maybe you can give it a try. But I suspect this is a separate issue, and as you have already noticed, there weren't many changes between the two versions v6.0.1 and v6.0.2.
We would really appreciate it if somebody could provide a repro that shows these different instrumentation times; it would speed up the work on this issue.
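
A sketch of how the nightly package could be consumed via the dotnet CLI; `<COVERLET_NIGHTLY_FEED_URL>` is a placeholder (the actual feed URL is in coverlet's nightly-build documentation) and the project path is hypothetical:

```bash
# Sketch: add the nightly feed and pull the prerelease collector package.
# <COVERLET_NIGHTLY_FEED_URL> is a placeholder for coverlet's nightly NuGet feed.
dotnet nuget add source <COVERLET_NIGHTLY_FEED_URL> --name coverlet-nightly
dotnet add tests/MyTests.csproj package coverlet.collector --prerelease
dotnet test tests/MyTests.csproj --collect:"XPlat Code Coverage" --blame-hang-timeout 90s
```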

@WyrmUK

WyrmUK commented May 2, 2024

We've seen the same performance issue, so this should be fairly easy to replicate. It's significant for large solutions. I would hazard a guess that the change back to Newtonsoft.Json from System.Text.Json is the culprit (Newtonsoft is a lot slower for large and numerous objects). You may want to consider providing a .NET 8-specific version of the tool that uses the v8 System.Text.Json, and then have the build target determine which version to run depending on the installed framework.

@daveMueller
Collaborator

The issue mentions increased instrumentation time, but we don't use JSON serialization/deserialization for the instrumentation at all. In fact we only use it in one or two places, for generating reports and for reading the runtimeconfig file. Even if I don't want to rule it out yet, I suspect the problem is elsewhere.
I tried to create a repro today, but it doesn't seem to be easy. What do the affected solutions look like? @WyrmUK, what characterizes a large application: the amount of source code, the number of source files, the number of assemblies?

daveMueller added the needs repro label on May 18, 2024
@piccit

piccit commented May 19, 2024

> What do the affected solutions look like? What characterizes a large application: the amount of source code, the number of source files, the number of assemblies?

In my own experience, it's files/SLOC. Our pipeline runs tests per project, so I don't think the number of assemblies is a factor. I'll update with specific file and SLOC counts when I get back to my desk, but I was seeing about 3-5 minutes added to overall execution time (per project).
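
A rough sketch for getting the file and SLOC counts mentioned above, using standard shell tools (the src/ path is a placeholder):

```bash
# Rough file and line counts for the instrumented projects (paths are placeholders).
find src/ -name '*.cs' | wc -l                         # number of C# source files
find src/ -name '*.cs' -print0 | xargs -0 cat | wc -l  # total lines of C# source
```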

@WyrmUK

WyrmUK commented May 20, 2024

> What do the affected solutions look like? What characterizes a large application: the amount of source code, the number of source files, the number of assemblies?

In our case we have a solution containing 6 projects with 12,000 lines, of which 6,000 are coverable/covered, and 1,200 branches. Each project has an associated unit test project.
We found the time taken to run the tests increased from 1 min 15 s to 2 min 40 s.
We haven't looked any further into whether it is the instrumentation or the report generation/merge.
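
One way to narrow down whether the extra time is spent in instrumentation or in report generation (a sketch, not something confirmed in this thread): dotnet test's --diag option writes timestamped test-platform log files, including datacollector logs, which should show when the coverage data collector starts and finishes its work.

```bash
# Sketch: capture timestamped test-platform logs to see where the extra time goes.
# --diag is a standard "dotnet test" option; it writes host and datacollector log files
# next to the given path (assumption: the datacollector log covers coverlet's collector).
dotnet test tests/MyTests.csproj \
  --collect:"XPlat Code Coverage" \
  --diag:TestResults/logs/log.txt
```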
