
simplecov inaccurate number of relevant lines when using parallel_tests #542

Closed

mattcsl opened this issue Dec 29, 2016 · 6 comments

mattcsl commented Dec 29, 2016

I am still getting very inaccurate RSpec coverage when using parallel_tests compared to single-process rspec: 89% coverage with parallel_tests vs. 67% with a single process.

Looking at the simplecov-html output, many lines are dismissed as irrelevant and skipped.

For example, with single-process rspec a file has 150 lines, 130 of them relevant and 50 missed, whereas with parallel_tests the same file has only 8 relevant lines and 0 missed, reported as 100% coverage, which is wrong.

Is there some trick to getting the results merged properly? I am using

config.before(:each) do
    SimpleCov.command_name "RSpec:#{Process.pid.to_s}#{ENV['TEST_ENV_NUMBER']}"
end

in my spec/spec_helper.rb

Here's the gems and version info

using parallel_tests (2.11.0)
rspec-core (3.5.4)
      rspec-support (~> 3.5.0)
    rspec-expectations (3.5.0)
      diff-lcs (>= 1.2.0, < 2.0)
      rspec-support (~> 3.5.0)
    rspec-mocks (3.5.0)
      diff-lcs (>= 1.2.0, < 2.0)
      rspec-support (~> 3.5.0)
    rspec-rails (3.5.2)
      actionpack (>= 3.0)
      activesupport (>= 3.0)
      railties (>= 3.0)
      rspec-core (~> 3.5.0)
      rspec-expectations (~> 3.5.0)
      rspec-mocks (~> 3.5.0)
      rspec-support (~> 3.5.0)
    rspec-support (3.5.0)

Generated by simplecov v0.12.0 and simplecov-html v0.10.0
using RSpec:23611, RSpec:236144, RSpec:236163, RSpec:236202

Thanks,
Matt

edit: edited for code readability! @PragTob


PragTob commented Jan 29, 2017

Hi there,

thanks for your issue report.

I'm not quite sure and it's difficult to say from a distance.

In general it seems weird that more lines would be skipped with parallel_tests - the behaviour there should be consistent. Personally I've only seen cases where single process rspec reported more coverage than multiprocess (with command_name not set etc.)

From your setup, what looks odd to me is setting the command_name in a before(:each) block: we set it once in the spec_helper, as it shouldn't change (I think we do so right in our SimpleCov setup).

Also, you shouldn't need the pid of the rspec process, as that should always be the same for one TEST_ENV_NUMBER. Otherwise things look good; it also seems to have used results from all 4 processes.

Are you sure that simplecov is started BEFORE everything else?
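For reference, a minimal sketch of the setup described above, assuming the standard simplecov and parallel_tests conventions (the "rails" profile and the require of config/environment are illustrative assumptions about the app's layout): SimpleCov loads and starts before any application code, and command_name is set once per process using only TEST_ENV_NUMBER.

```ruby
# spec/spec_helper.rb -- hypothetical sketch; SimpleCov must load and
# start before any application code so coverage tracking sees every file.
require "simplecov"

# One command name per parallel_tests process, set once (not in a
# before(:each) hook); TEST_ENV_NUMBER is "" for process 1, then "2", "3", ...
SimpleCov.command_name "RSpec#{ENV['TEST_ENV_NUMBER']}"
SimpleCov.start "rails"

# Application code loads only after SimpleCov.start.
require_relative "../config/environment"
```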

@aminariana

Likely related to #559, because if a semaphore is not used correctly, any result is possible.


PragTob commented Feb 17, 2017

I'd have to review the code, but I don't think there's a semaphore/mutex in there, since we run in separate processes, not in different threads. Might still be a race condition, though.


aminariana commented Feb 17, 2017

As I explained in #559, it looks like, in the archaeology of SimpleCov, a multi-threading (or multi-processing) lock pattern existed in a previous PR #185 (commit 4f3b4c7) that got lost to entropy. It probably needs to be brought back.

You can't have multiple processes merging results into the same result-set and not have some kind of a lock to protect them from corrupting the shared state.
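The lock pattern described above can be sketched with an advisory file lock (Ruby's File#flock): each process takes an exclusive lock on the shared results file before reading, merging, and writing back. This is an illustrative sketch only, not SimpleCov's actual merging code; merge_result is a hypothetical helper, and the hash merge stands in for SimpleCov's real per-line result merging.

```ruby
require "json"

# Hypothetical helper: merge new_result into the JSON hash stored at
# path, holding an exclusive advisory lock for the read-modify-write.
def merge_result(path, new_result)
  File.open(path, File::RDWR | File::CREAT, 0o644) do |f|
    f.flock(File::LOCK_EX)      # block until this process owns the lock
    data = f.read
    merged = data.empty? ? {} : JSON.parse(data)
    merged.merge!(new_result)   # naive merge; real merging is per-line
    f.rewind
    f.write(JSON.generate(merged))
    f.flush
    f.truncate(f.pos)           # drop any stale trailing bytes
  end                           # lock released when the file is closed
end
```

Without the flock, two processes can both read the same old contents and the second write silently discards the first one's results, which matches the corruption described here.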

@aminariana

I've tried every lock variation of SimpleCov (0.8.0 to 0.8.2); none of them work in a Mac environment.


PragTob commented Aug 12, 2017

That lock was brought back in a recent PR and will be released soon. It should be good once we release a new version; you can also try from master.

PragTob closed this as completed Aug 12, 2017