Option to display markers in --collect-only output (in order to grep for all skipped tests) #1509
Comments
I'm pretty sure you could implement some pytest hook to check this. However, note that a test can also call `pytest.skip()` at runtime, which a collection-time listing cannot see.
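For the record, a minimal sketch of the hook-based idea (the output format and the `format_item` helper are inventions for illustration, not an existing pytest feature; `pytest_collection_modifyitems`, `item.iter_markers()`, and `config.option.collectonly` are real pytest APIs):

```python
# conftest.py -- sketch: print each collected test together with its
# marker names, so the --collect-only output can be grepped for a mark.
# The "<nodeid> [mark1, mark2]" format is made up for this example.

def format_item(item):
    """Return "<nodeid> [mark1, mark2]" for a collected test item."""
    marks = sorted({m.name for m in item.iter_markers()})
    return f"{item.nodeid} [{', '.join(marks)}]"

def pytest_collection_modifyitems(config, items):
    # Only decorate output when pytest was invoked with --collect-only.
    if getattr(config.option, "collectonly", False):
        for item in items:
            print(format_item(item))
```

This would still miss runtime skips, as noted above, but it covers the grep-for-marks use case without patching pytest itself.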
Marker deselection doesn't match logic-based deselection, thus reopening.
The question is about seeing which tests have which markers.
What more needs to work in order to close this issue? If it's a new or enhanced feature, it should probably have a new issue describing the current and desired functionality.
But this won't know whether a `skipif` actually triggers.
So it seems this issue can relate to two separate behaviors: showing marks during collection, and reporting skips at run time. @nitrocode could you chip in on which behavior you are interested in?
@nicoddemus Hi everyone. If, in addition to that, you can also pass in a specific mark, that would be a bonus.
Came here to file this feature request and spotted this ticket, so commenting instead. I hacked this up with the following 5-minute change:

```diff
--- a/src/_pytest/terminal.py
+++ b/src/_pytest/terminal.py
@@ -783,7 +783,8 @@ class TerminalReporter:
                     self._tw.line("%s: %d" % (name, count))
             else:
                 for item in items:
-                    self._tw.line(item.nodeid)
+                    marks = {m.name for m in item.iter_markers() if m.name != "parametrize"}
+                    self._tw.line(f"{item.nodeid} marks: {','.join(marks)}")
         return
     stack: List[Node] = []
     indent = ""
```

Would a polished-up version of that be something that might be considered for inclusion? (Note again that ^ is basically the naivest possible solution, but in general, would a change to this output be acceptable?) We quite heavily use markers, and it would be good to be able to run pytest once in collect-only mode and then grep from the output, as opposed to running multiple times with different `-m` expressions.
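For what it's worth, the intended workflow with that patched output would look something like the following. The nodeids and marks below are made up for illustration; in a real run the lines would come from `pytest --collect-only -q` rather than from the stub function:

```shell
# Simulated `pytest --collect-only -q` output in the patched
# "<nodeid> marks: <m1>,<m2>" format from the diff above
# (nodeids and marks here are invented for the example):
collect_output() {
  printf '%s\n' \
    'test_api.py::test_auth marks: skip,slow' \
    'test_api.py::test_ping marks: smoke'
}

# Collect once, then grep the listing for one mark at a time,
# instead of re-running pytest with a different -m expression.
collect_output | grep 'marks: .*skip'
```

The point is that a single collection pass produces a greppable inventory of marks, rather than one pytest invocation per `-m` selection.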
This looks like a thing that could work. We gotta figure out how to show some things, aka markers vs. own markers, and whether to enable this with verbosity. Also if/when to show/filter markers. Luckily those problems can be solved in progression.
I propose as a starting point we start with something that's opt-in with verbosity and fits your use case; then we can iterate.
Any progress with this? This could be quite a nice feature for test-discovery purposes too.

For example, we have a general framework for running various integration tests in AWS Lambda that are actually composed of multiple tests. The tests are provided as a zip package, and we use test discovery to give the user a UI view of the available test (sets). Some tests trigger asynchronous workloads, and we just need to wait for them to be ready; we don't want to wait inside Lambda, but rather pause the test and continue with the next one later. We utilize a check pattern and delay with an SQS queue to implement the functionality (I know there could be more elegant ways, but this is a way), and it would be nice to know what kind of waits a test requires.

We can now mark tests like `@pytest.mark.sleep600` for a 600 s wait, but we would like to have different wait patterns, and finding these with `pytest --collect-only -m sleep600 -q` (and searching for all possible waits, how many retries are supported, etc.) is not so nice. It would be nice to just have some parameter for `pytest --collect-only` that displays all markers of the tests.
We're skipping many of our tests, and I think this is due to some missing functionality. In order to see all the skipped tests, I have to run all the tests in verbose mode, store the output, and `grep -E 'xfail|xpass'` in order to see all of the specific tests that are skipped.

If there isn't already a better way to do this, I was thinking: what if in the `--collect-only` output we could also see markers like `skip`, or even custom markers for each test? That way I could easily collect the tests and grep for skipped ones without having to run all of them again.