
Create fail_over Option #320

Open
krisdestruction opened this issue Aug 14, 2019 · 5 comments
Comments

@krisdestruction

We support the --cov-fail-under and fail_under options to fail the test run when coverage falls below some threshold. #141
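For context, the existing behaviour can be set either with pytest-cov's `--cov-fail-under` flag or in coverage.py's own configuration. A minimal `.coveragerc` sketch (the 90 is an illustrative threshold):

```ini
[report]
# Fail the report step if total coverage is below this percentage.
fail_under = 90
```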

In an effort to asymptotically approach 100%, some repos may want --cov-fail-over and fail_over options to ensure the fail_under config is kept up to date.

@blueyed
Contributor

blueyed commented Aug 14, 2019

> some repos may want --cov-fail-over and fail_over options to ensure the fail_under config is updated.

Really? So that you get the extra work of updating the lower limit in case you manage to improve coverage just a bit too much?

I'd rather suggest using services that a) ensure the diff is covered well enough, and b) ensure that overall coverage does not drop, but only increases.

Basically you should always use the current/previous percentage for fail_under. This could also be done by storing it somewhere as an artifact on CI, when not using a service like Codecov that manages it for you.

@krisdestruction
Author

krisdestruction commented Aug 16, 2019

Yes, really: working with legacy systems with low coverage means there's lots of room to improve. Can you give an example of such a service you're suggesting?

As for my feature request, I can't imagine a fail_over implementation would diverge much in code from fail_under.

@blueyed
Contributor

blueyed commented Aug 16, 2019

I suggest using https://codecov.io/ - it can provide a status check for the patch being covered, and another for the project's overall coverage not dropping.
It defaults to requiring the patch to be covered at the project's current coverage level (IIRC), but can be configured to require e.g. 100%.
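The two checks described map onto Codecov's status configuration. A minimal `codecov.yml` sketch (targets are illustrative; exact behaviour is defined by Codecov's status-check docs):

```yaml
coverage:
  status:
    project:
      default:
        target: auto    # overall coverage must not drop below the base commit
    patch:
      default:
        target: 100%    # lines touched by the diff must be fully covered
```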

@ionelmc
Member

ionelmc commented Aug 17, 2019 via email

@nedbat
Collaborator

nedbat commented Sep 22, 2019

I suggest there's no need to add this to pytest-cov at all. Let pytest run the tests. Let coverage report on coverage. See #337.
