add simple pre-commit config #205

Merged
merged 4 commits into PyCQA:master on Nov 27, 2021

Conversation

finswimmer
Contributor

This PR adds a basic pre-commit config. More pre-commit checks and configuration should be discussed.

To run the checks on a PR, you have to give https://pre-commit.ci access to the repository. The checks will then run automatically on the next push.

Closes: #197
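
For reference, the committed config file isn't quoted in this thread. A minimal .pre-commit-config.yaml might look roughly like the sketch below; the exact hooks and rev pins are assumptions, the thread only confirms that black runs via pre-commit.

# Illustrative sketch of a basic .pre-commit-config.yaml — hooks and revs are assumptions
repos:
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v4.0.1
    hooks:
      - id: trailing-whitespace   # strip trailing whitespace
      - id: end-of-file-fixer     # ensure files end with a newline
  - repo: https://github.com/psf/black
    rev: 21.11b1
    hooks:
      - id: black                 # run black as a pre-commit hook instead of in CI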

Collaborator

@cooperlees left a comment


Looks like a good start, thanks! If you're planning to do more (I see this is a draft) I'll leave it open, or we can merge and work on it from there.

@finswimmer
Contributor Author

Hey @cooperlees,

I've updated DEVELOPMENT.md to show how to run the linter, removed black from the dependencies in setup.py, and also removed the black check from the GitHub workflow, because it now runs as part of the pre-commit checks.

I'm not sure why the tests on Python 3.9 fail; locally there is no problem. Any ideas?

fin swimmer
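
(The DEVELOPMENT.md changes aren't quoted in this thread; the usual local pre-commit workflow they would describe looks roughly like the sketch below. The commands are the standard pre-commit CLI; whether the dev extra pulls pre-commit in is an assumption.)

pip install pre-commit            # or via `pip install -e .[dev]` if the dev extra lists it (assumption)
pre-commit install                # register the git hook so the checks run on every commit
pre-commit run --all-files        # run all configured hooks once across the whole repository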

@finswimmer marked this pull request as ready for review on November 22, 2021, 18:46
@cooperlees
Collaborator

cooperlees commented Nov 22, 2021

Yeah, this is a weird fuzz test failure. I checked on main to see if I could reproduce it, but I was unable to:

python3 -V
Python 3.9.7
python3 -m venv --upgrade-deps /tmp/tf
/tmp/tf/bin/pip install -e .[dev]
[cooper@cooper-fedora-MJ0BWXDF flake8-bugbear]$ /tmp/tf/bin/coverage run tests/test_bugbear.py
.............................
----------------------------------------------------------------------
Ran 29 tests in 3.307s

OK
[cooper@cooper-fedora-MJ0BWXDF flake8-bugbear]$ /tmp/tf/bin/coverage report -m
Name         Stmts   Miss  Cover   Missing
------------------------------------------
bugbear.py     403     13    97%   65-66, 76, 100-110, 322, 349-352, 360
------------------------------------------
TOTAL          403     13    97%

The failure is from test_does_not_crash_on_any_valid_code:
SystemError: Negative size passed to PyUnicode_New

Applying it to main and trying to reproduce, I get an error because your code has changed things. I'll try with your branch if I get time today.

[cooper@cooper-fedora-MJ0BWXDF flake8-bugbear]$ git diff
diff --git a/tests/test_bugbear.py b/tests/test_bugbear.py
index 0573e11..3dd0273 100644
--- a/tests/test_bugbear.py
+++ b/tests/test_bugbear.py
@@ -6,7 +6,7 @@ import subprocess
 import sys
 import unittest
 
-from hypothesis import HealthCheck, given, settings
+from hypothesis import HealthCheck, given, settings, reproduce_failure
 from hypothesmith import from_grammar
 
 from bugbear import BugBearChecker, BugBearVisitor
@@ -328,6 +328,7 @@ class BugbearTestCase(unittest.TestCase):
 
 
 class TestFuzz(unittest.TestCase):
+    @reproduce_failure('6.27.0', b'AXicY2RkZGZgZGBgYGIBkXDAzAjmQsQAAhkAFQ==')
     @settings(suppress_health_check=[HealthCheck.too_slow])
     @given(from_grammar().map(ast.parse))
     def test_does_not_crash_on_any_valid_code(self, syntax_tree):
[cooper@cooper-fedora-MJ0BWXDF flake8-bugbear]$ /tmp/tf/bin/coverage run tests/test_bugbear.py
.........................E...
======================================================================
ERROR: test_does_not_crash_on_any_valid_code (__main__.TestFuzz)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/cooper/repos/flake8-bugbear/tests/test_bugbear.py", line 332, in test_does_not_crash_on_any_valid_code
    @settings(suppress_health_check=[HealthCheck.too_slow])
  File "/tmp/tf/lib64/python3.9/site-packages/hypothesis/core.py", line 1113, in wrapped_test
    raise DidNotReproduce(
hypothesis.errors.DidNotReproduce: The shape of the test data has changed in some way from where this blob was defined. Are you sure you're running the same test?

----------------------------------------------------------------------
Ran 29 tests in 1.127s

FAILED (errors=1)

@finswimmer
Contributor Author

Hey @cooperlees,

could you find out what's happening with this fuzz test? I'm lost :(

fin swimmer

@cooperlees
Collaborator

Yeah, me too. I think we can merge this though and try to work this out before releasing. This has been happening on other PRs too.

I'll try to get a local repro in the next few days and nudge @Zac-HD for pointers on how to fix this ...

@Zac-HD
Member

Zac-HD commented Nov 26, 2021

Ah, that's actually a CPython bug! https://bugs.python.org/issue45738

I just haven't shipped a workaround in hypothesmith yet 😅

@Zac-HD
Member

Zac-HD commented Nov 27, 2021

Scratch that, Hypothesmith 0.2.0 is now out and works around this upstream bug 👍
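
(With that, unblocking the failing job should just be a matter of picking up the new release in the test environment. A sketch, reusing the /tmp/tf venv from the repro above; how the dev extra pins hypothesmith is not shown in this thread.)

/tmp/tf/bin/pip install --upgrade "hypothesmith>=0.2.0"   # pull in the release with the workaround
/tmp/tf/bin/coverage run tests/test_bugbear.py            # re-run the suite that was hitting the fuzz failure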

@cooperlees
Collaborator

Thanks @Zac-HD !

@cooperlees merged commit 71091f9 into PyCQA:master on Nov 27, 2021
Linked issue: Add script / precommit to drive CI