Switch from skipping to xfailing some Python tests #1002
Conversation
Codecov Report
```
@@            Coverage Diff             @@
##              main    #1002      +/-   ##
==========================================
+ Coverage    78.56%   80.90%    +2.33%
==========================================
  Files           77       77
  Lines         4330     4330
  Branches       778      778
==========================================
+ Hits          3402     3503      +101
+ Misses         756      649      -107
- Partials       172      178        +6
```
```diff
@@ -903,7 +903,6 @@ def test_ml_experiment(c, client, training_df):

 # TODO - many ML tests fail on clusters without sklearn - can we avoid this?
 @xfail_if_external_scheduler
-@pytest.mark.skip(reason="Waiting on https://github.com/EpistasisLab/tpot/pull/1280")
```
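For context, `xfail_if_external_scheduler` can be thought of as a conditional `pytest.mark.xfail`. A minimal sketch of how such a marker could be built (the environment-variable detection here is an assumption for illustration, not the project's actual implementation):

```python
import os

import pytest

# Assumed detection: treat a set DASK_SCHEDULER_ADDRESS as "external scheduler".
# The real project may detect this differently.
_USING_EXTERNAL_SCHEDULER = os.environ.get("DASK_SCHEDULER_ADDRESS") is not None

# Unlike pytest.mark.skip, xfail still runs the test: expected failures are
# reported as XFAIL, and unexpected passes surface as XPASS.
xfail_if_external_scheduler = pytest.mark.xfail(
    condition=_USING_EXTERNAL_SCHEDULER,
    reason="many ML tests fail on clusters without sklearn",
    strict=False,  # an unexpected pass is reported, not treated as a failure
)


@xfail_if_external_scheduler
def test_ml_experiment_sketch():
    # Stand-in for a real ML test body.
    assert True
```

The practical difference is visibility: a skipped test never runs, so nobody notices when the upstream fix lands, whereas an xfailed test keeps running and shows up as XPASS once it starts passing.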
cc @sarahyurick since this is your PR; are you aware of any other upstream changes that could've resolved these failures?
AFAIK, the only other solution is to restrict the NumPy version to be < 1.24.0
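If that route were taken, the pin would look like this in a pip requirements file (a sketch; where the constraint belongs depends on how the project manages its dependencies):

```
numpy<1.24.0
```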
LGTM! Should be good to go once the conflicts are fixed and the tests pass.
Noticed some remaining `pytest.mark.skip`s that weren't caught in #867; interested to see if any of these are passing now.
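A quick way to audit for any remaining skip markers (the throwaway directory below is a stand-in so the snippet is self-contained; in practice you would point `grep` at the repo's actual test directory):

```shell
# Create a throwaway test tree so the command below has something to find.
mkdir -p /tmp/skip_audit/tests
printf '@pytest.mark.skip(reason="example")\ndef test_a():\n    pass\n' \
    > /tmp/skip_audit/tests/test_example.py

# List every file and line still using pytest.mark.skip.
grep -rn "pytest.mark.skip" /tmp/skip_audit/tests
```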