Only fail on warnings from Jupyter packages #877
base: main
Conversation
avoid failing tests on use of deprecations out of our control
@@ -99,7 +99,7 @@ timeout = 300
 # Restore this setting to debug failures
 # timeout_method = "thread"
 filterwarnings = [
-    "error",
+    "error:::jupyter.*",
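For context on what this entry does: pytest's filterwarnings values use Python's warning-filter syntax (action:message:category:module:lineno), so "error:::jupyter.*" escalates only warnings whose originating module matches the regex jupyter.*, leaving everything else as ordinary warnings. A minimal sketch of that matching behavior using only the stdlib warnings module (the module names below are illustrative, not an assertion about where these packages actually warn):

```python
import warnings

def outcome(module_name: str) -> str:
    """Simulate a DeprecationWarning raised from module_name and report
    whether a jupyter.* module filter escalates it to an error."""
    with warnings.catch_warnings():
        warnings.resetwarnings()
        # Equivalent of the "error:::jupyter.*" filterwarnings entry:
        # escalate warnings originating in modules matching jupyter.*
        warnings.filterwarnings("error", module=r"jupyter.*")
        try:
            # warn_explicit lets us attribute the warning to a chosen module
            warnings.warn_explicit(
                "this API is deprecated", DeprecationWarning,
                filename="example.py", lineno=1, module=module_name,
            )
            return "warning"  # non-matching modules fall through
        except DeprecationWarning:
            return "error"    # matching modules fail hard

print(outcome("jupyter_server.serverapp"))  # error
print(outcome("pytest_asyncio.plugin"))     # warning
```

Note that the module field is a regex matched against the start of the module name, which is why a bare jupyter.* also catches jupyter_server, jupyter_client, and so on.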
This is lazy, but it catches packages that start with jupyter, which most of ours do. We could focus specifically on this package with jupyter_server, or explicitly list one or more exact packages we want to be strict with.
I disagree with this approach, as it will mask actual incompatibilities and deprecation warnings. The allowlist is for known warnings that we cannot work around.
To be clear: it will mask actual incompatibilities that may well not be relevant to us, that we cannot control, and that should hopefully be addressed by those packages in a reasonable amount of time.
Surfacing this info is good (warnings already do this), but errors block all progress when nothing's actually broken.
For instance, I can't run the serverapp tests locally because I have pytest-asyncio installed to work with other packages, and it throws a deprecation warning when asyncio_mode is not set. But it's not set because serverapp doesn't even use it, though serverapp's warning policy treats this as a failure.
I think the right solution then is to update the contributing guide to show how to disable warnings (as we do with the minimum version test): pytest -vv -W default
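If it helps to see why that command-line override works: -W prepends a filter, and Python applies the first matching filter, so a default entry supplied on the command line effectively takes precedence over an error entry from config. A small illustration using the stdlib warnings module directly (the helper function is ours, written to mimic the two configurations):

```python
import warnings

def run(with_w_default: bool) -> str:
    """Mimic a strict 'error' warning config, optionally overridden
    the way passing -W default on the command line would."""
    with warnings.catch_warnings():
        warnings.resetwarnings()
        warnings.filterwarnings("error")    # strict project default
        if with_w_default:
            # -W default effectively prepends this filter;
            # the first matching filter wins
            warnings.filterwarnings("default")
        try:
            warnings.warn("deprecated API", DeprecationWarning)
            return "shown as warning"
        except DeprecationWarning:
            return "raised as error"

print(run(False))  # raised as error
print(run(True))   # shown as warning
```

So a contributor with a warning-heavy local environment can still run the suite, while the strict config stays in place for CI.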
Alternatively, maybe we use something like pytest -vv -W error in the regular CI tests, and document how to do that locally to reproduce.
I think running the prerelease tests with -Werror is a good idea, since it serves a similar purpose: peeking into the future for things that will fail, but haven't yet.
Would you like me to take over this PR and update the workflows/docs? I like the idea of preserving the errors for our own warnings, I think we should keep that change.
avoid failing tests on use of deprecations out of our control
Adding a strict-warnings test run may also make sense, but this change stops us from needing dependency updates before tests can run at all when there are no actual problems (#876)