
step "Mapping coverage data to source" takes 6 minutes per test on a small project under llvm engine #1385

Open
zacchiro opened this issue Sep 17, 2023 · 1 comment

@zacchiro

Howdy, I love tarpaulin! Thanks for maintaining it.

On the webgraph-rs project I'm hitting a slowness issue that I can't explain; it's reported in that project as vigna/webgraph-rs#50.

The project is ~6 kLOC, although it has larger dependencies.
Its test suite takes ~30 seconds to run on my laptop in debug mode under cargo t.
When running it under cargo tarpaulin, however, that jumps to almost 2 hours (!). Looking into why, it appears that each "Mapping coverage data to source" step, which tarpaulin runs after each test, takes 6 minutes (!), and those of course add up, making tarpaulin unusable on the project.

Steps to reproduce:

$ git clone https://github.com/vigna/webgraph-rs
$ cd webgraph-rs/
$ cargo tarpaulin --engine llvm

I've straced the process: it seems to be doing pure CPU work for 6 minutes, with no significant I/O.
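For reference, the trace was roughly along these lines (a minimal sketch, the exact flags may have differed slightly):

$ strace -f -c cargo tarpaulin --engine llvm

The -c syscall summary is what showed essentially no I/O during the 6-minute mapping phase.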

The problem seems to be specific to the llvm engine: it goes away without --engine llvm (but we use the llvm engine in general on other related projects, so it's a bit annoying to end up with non-comparable figures by not using it on webgraph-rs).
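In other words, the comparison that exposes it is just wall-clock timing of the same run with and without the flag:

$ time cargo tarpaulin                  # default engine: the slow mapping step is absent
$ time cargo tarpaulin --engine llvm    # llvm engine: ~6 minutes of mapping after each test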

Any idea what might be going on?

@xd009642
Owner

So there's a tracking issue in cargo for project-specific linker flags; basically, coverage instrumentation gets added for every single dependency in the project, and parsing all of that can become very slow. I did some benchmarks of my parsing and it was faster for a few examples, but I haven't looked too deeply into it or into parallelising it, so there may still be a lot of work to be done.
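To illustrate the mechanism (a sketch, assuming the usual RUSTFLAGS route for the instrumentation rather than tarpaulin's exact internals):

$ RUSTFLAGS="-C instrument-coverage" cargo build
# RUSTFLAGS applies to every crate cargo compiles, dependencies included, so the
# coverage mappings produced (and later parsed) cover the whole dependency graph,
# not just the ~6 kLOC of the project itself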

Maybe adding the --release flag could offer some further gains, just by optimising more aggressively and removing some of the counters as a result. I'll have a look at the project next week and see if I spot anything.
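i.e. something like:

$ cargo tarpaulin --engine llvm --release

That keeps the llvm engine (so the figures stay comparable with your other projects) while letting the optimiser remove some of the counters.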
