
no example run should mark test as skipped #3328

Closed
asmodehn opened this issue May 2, 2022 · 1 comment · Fixed by #3386
Labels
legibility (make errors helpful and Hypothesis grokable)

Comments


asmodehn commented May 2, 2022

Currently (Hypothesis 6.45.1), when no example is run, the test is marked as a success.
One obvious example:

from hypothesis import example, given, settings
import hypothesis.strategies as st


# phases=[] disables every phase, including the explicit-examples phase,
# so neither the @example case nor any generated case is ever executed.
@given(my_int=st.integers())
@settings(phases=[])
@example(my_int=51)
def test_integers(my_int):
    assert my_int == 42  # never runs, yet pytest reports a pass
================================================= test session starts ==================================================
platform linux -- Python 3.9.7, pytest-7.1.2, pluggy-1.0.0
rootdir: /home/alexv/tmp/hypofix
plugins: hypothesis-6.45.1
collected 1 item

no_test.py .                                                                                                     [100%]

================================================== 1 passed in 0.11s ===================================================

Instead the test should be marked as skipped, as it was never run.

I am not sure how feasible it is, as there are various combinations of `failures_to_reproduce × explicit_examples × phases_configured` that can bring about this behavior.

However, when this kind of false positive happens it is easy to be caught out by it, especially when one is not deeply aware of the details of the test and its configuration...
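For instance (an illustrative variant, not taken from the issue), a reuse-only phase configuration combined with an empty example database should also run zero examples and still report a pass:

from hypothesis import Phase, given, settings
from hypothesis.database import InMemoryExampleDatabase
import hypothesis.strategies as st


# Only the reuse phase is enabled, but a fresh in-memory database has
# no failures to replay, so no examples run at all.
@given(my_int=st.integers())
@settings(phases=[Phase.reuse], database=InMemoryExampleDatabase())
def test_reuse_only(my_int):
    assert my_int == 42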

@Zac-HD added the legibility (make errors helpful and Hypothesis grokable) label on May 2, 2022
Zac-HD (Member) commented May 2, 2022

We can have Hypothesis increment an integer _n_valid_or_failing_examples (for example), and then check in the pytest plugin to skip if this is zero.
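A minimal sketch of that idea, with hypothetical names throughout (ExampleCounter, check_ran_any_examples, and the loop below are illustrative, not actual Hypothesis internals):

import pytest


class ExampleCounter:
    """Counts examples that were actually executed for one test."""

    def __init__(self):
        self.n_valid_or_failing_examples = 0

    def record_example(self):
        self.n_valid_or_failing_examples += 1


def run_test_with_counter(test_fn, counter, inputs):
    # Stand-in for the engine's example loop: every executed example
    # bumps the counter, whatever its outcome.
    for kwargs in inputs:
        counter.record_example()
        test_fn(**kwargs)


def check_ran_any_examples(counter):
    # The pytest-plugin side of the proposal: if nothing ran,
    # report the test as skipped rather than passed.
    if counter.n_valid_or_failing_examples == 0:
        pytest.skip("Hypothesis ran no examples for this test")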
