Super-simple implementation of pytest contexts #283
Conversation
To try this out now, you'll need to get coverage.py from master.
I wonder if this shouldn't be always on. Why would anyone not want the context info?
It's slower, and the collected data is much larger.
Coverage.py 5.0a5 is now out. Can we talk about how to make this change available to people?
Yes! For reference, the changelog: https://github.com/nedbat/coveragepy/blob/f9cb7becf69b299f5f946944d855b7e964afec8d/CHANGES.rst#version-50a5-----2019-05-07 So from what I've read there is nothing for reporting yet, right? What do you have in mind? I have not looked at the details, but I assume the context(s) are stored for each line, so you could select all (test) items that get executed for a given line?
5.0a5 is now on RTD. Right, there is nothing for reporting yet, though there is one PR open to add some filtering. The contexts are stored per line (or per branch), so you can see what code was run in each context. Inserting new lines in the source will invalidate the data, which is unfortunate, but I don't see a way to avoid that at the moment. I'm confused by the docs for the pytest hooks. Can you help me find the right ones? I need to be called before the test runs. If we can't get the setup/teardown, that's OK, though it would be cool.
I think using those 3 hooks should be fine; they are called for setup/call/teardown. Called via https://github.com/pytest-dev/pytest/blob/2051e30b9b596e944524ccb787ed20f9f5be93e3/src/_pytest/runner.py#L190-L198 / https://github.com/pytest-dev/pytest/blob/2051e30b9b596e944524ccb787ed20f9f5be93e3/src/_pytest/runner.py#L82. It would be nice if there were a single hook covering all three phases, though.
I've updated the code to use the correct hooks. Now the context is set before the code runs, so it's recorded properly. I should have realized that if the test result was available in the hook, then it was too late!
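As a rough illustration of that wiring (a sketch, not the PR's actual code: the plugin class and hook bodies here are invented for demonstration; `switch_context` is the coverage.py 5.0a5 API and the `"{nodeid}|{phase}"` format follows this PR):

```python
# Illustrative sketch only: a minimal plugin object whose pytest hooks set
# the dynamic context before each phase runs.
class ContextPlugin:
    def __init__(self, cov):
        # `cov` is expected to behave like a coverage.Coverage instance.
        self.cov = cov

    def _switch(self, item, phase):
        self.cov.switch_context("{}|{}".format(item.nodeid, phase))

    # These three hooks fire before setup, call, and teardown respectively,
    # so the context is in place before any test code executes.
    def pytest_runtest_setup(self, item):
        self._switch(item, "setup")

    def pytest_runtest_call(self, item):
        self._switch(item, "call")

    def pytest_runtest_teardown(self, item):
        self._switch(item, "teardown")
```

Because the hooks run before their phase, the coverage data recorded during setup, the test body, and teardown each land under a distinct context.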
The context info would only show in the html report?
It doesn't show in any reports yet. I don't know how people will want to use this information. In a real test suite, the number of contexts will be overwhelming, too many to report on all at once. I want to get this into people's hands and see what kind of use they will make of it.
Would you imagine someone customizing the context? Like having the context be the test filename, or maybe a marker name.
Also, can you technically have multiple contexts at the same time? A test can have multiple markers.
There are a few ways to customize the context. Anything could use the coverage API like we do here, and you could also write a coverage plugin to do it. Filename, class name, and test function name are all interesting possibilities. I haven't got support for multiple contexts. I haven't used markers much in my own test suites, so I'm not sure how they would fit in. You could write a context which was the concatenation of the markers, though that might be hard to use later.
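A marker-based context along those lines might be built like this (a sketch: the `nodeid[marker,marker]` shape is an invented convention for illustration, not anything pytest-cov or coverage.py defines):

```python
# Sketch: a context string that concatenates a test's marker names, as
# floated above. Sorting makes the string stable regardless of marker
# declaration order; the bracket format is made up for this example.
def marker_context(nodeid, marker_names):
    if not marker_names:
        return nodeid
    return "{}[{}]".format(nodeid, ",".join(sorted(marker_names)))
```

As noted above, packing several markers into one string may be hard to use later, since reports would have to parse it back apart.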
As for this PR: it should fall back to doing nothing, or fail with a clear error, when coverage.py < 5 is used. As for coverage.py: it would be useful to have a helper function to get the contexts for a given filename and lineno. I've quickly come up with the following:

```sql
SELECT c.context
-- , *
FROM context c
INNER JOIN arc a ON a.context_id = c.id
INNER JOIN file ON file.id = a.file_id
WHERE path = "/path/to/file"
  AND (a.fromno != -1 AND a.tono > 0)
  AND a.fromno >= 368
  AND a.tono <= 369
  AND c.context != ""
ORDER BY a.fromno
```
Also (at least for me) it seems easier to use non-branch coverage for this, since it allows selecting by line numbers more easily, although it would still be useful there to see if all branches are covered. For this, some kind of view (if SQLite provides this) might be useful, to make this work for branch and non-branch coverage in the same way. My aim here is to get all tests that execute a particular line, to re-run only those, or merge/revisit them. I think it would be great to have this information available in the html reports: this allows for a) displaying the number of contexts (i.e. how often the line was executed), and b) showing the tests affecting/executing it. This could be displayed as a number next to the line number, with the list of contexts in a title "popup", possibly also linking to the source file(s), if a more advanced popup were used.
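One way to sketch that view idea (SQLite does support `CREATE VIEW`; the schema names are the 5.0 alpha ones, and a complete solution would also need to union in line-mode data, which is omitted here):

```python
import sqlite3

# Sketch: flatten branch (arc) data into per-line rows, so "which contexts
# touched line N" can be asked the same way for branch and non-branch data.
# The view name `line_context` is invented for this example.
LINE_VIEW = """
    CREATE VIEW IF NOT EXISTS line_context AS
        SELECT file_id, context_id, fromno AS lineno FROM arc WHERE fromno > 0
        UNION
        SELECT file_id, context_id, tono AS lineno FROM arc WHERE tono > 0
"""

def add_line_view(con):
    """Install the per-line view on an open coverage-data connection."""
    con.execute(LINE_VIEW)
    return con
```

With the view in place, selecting contexts for a line becomes a single `WHERE lineno = ?` filter instead of reasoning about arc endpoints.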
Anyway, seeing the many unknowns here, my opinion is that there shouldn't be any CLI option at all, since it's:
My proposal is to redo this and just have a hook for enabling it from 3rd-party pytest plugins that also deal with reporting. We should just have these 3 hooks available for plugin authors that want to do something with coverage contexts:
@blueyed the idea of a SQLite view is a good one, I will try that out. @ionelmc I don't understand all your concerns. Can we discuss them more?
I don't understand what isn't allowed. Can we extend this plugin to make them possible?
Coverage will gain new features. I don't think it's a reasonable strategy to freeze the UI for pytest-cov. You will want to support new features, won't you? We can decide on the most reasonable strategy for if a new feature is attempted on an old version of coverage. Plugins on top of plugins sounds messy and hard to maintain.
I don't understand what this means.
@nedbat what do you think about the hooks? Wouldn't those, plus the ability to add any cmdline option from a plugin, adequately solve any possible use case?
There's no need for hooks into pytest-cov. If I have to ship my own plugin, I can use coverage.py APIs and do what is needed without pytest-cov. The advantage of pytest-cov is that it already is being used by people, is known, maintained, etc. |
I would really like to add to this plugin without layering another plugin on top. Are you really opposed to that? |
Oook ... fine. Maybe it's too early to have configurable contexts. We can add hooks later. We can merge this; just the changelog/authors and tests need to be added.
OK, great. I've had to make other changes to fully support Coverage 5.0. I'll prepare a new pull request with the changes. |
#319 is a work-in-progress for coverage 5.0 |
```diff
@@ -282,16 +284,25 @@ def pytest_terminal_summary(self, terminalreporter):
         terminalreporter.write(message, **markup)

+    def pytest_runtest_setup(self, item):
+        if self.options.cov_context:
+            context = "{item.nodeid}|setup".format(item=item)
+            self.cov_controller.cov.switch_context(context)
```
I don't think the plugin should call cov directly. I think this is something that cov_controller should manage.
eg pass `options.cov_context` here:

pytest-cov/src/pytest_cov/plugin.py, lines 157 to 165 in 8a19ea5:

```python
self.cov_controller = controller_cls(
    self.options.cov_source,
    self.options.cov_report,
    self.options.cov_config,
    self.options.cov_append,
    self.options.cov_branch,
    config,
    nodeid
)
```
then run self.cov_controller.switch_context(item.nodeid, "setup")
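That delegation might look roughly like this (a sketch: `switch_context` on the controller is the method being proposed, not one pytest-cov has; `self.cov.switch_context` is the coverage.py 5.0a5 API; the constructor is trimmed to the relevant parts):

```python
# Sketch of the proposed delegation: the plugin never touches `cov`
# directly; the controller owns it, checks the option, and formats the
# context string in one place.
class CovController:
    def __init__(self, cov, cov_context=True):
        self.cov = cov                  # a coverage.Coverage instance in the real plugin
        self.cov_context = cov_context  # would come from options.cov_context

    def switch_context(self, nodeid, phase):
        """Set the dynamic context to "<nodeid>|<phase>" if enabled."""
        if not self.cov_context:
            return None
        context = "{}|{}".format(nodeid, phase)
        self.cov.switch_context(context)
        return context
```

This keeps the `--cov-context` check out of every hook and gives one seam to change the context format later.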
Thanks for the feedback. I should close this PR: it will eventually be folded into #319 (where I could use some help understanding why tests are failing, actually... :) )
Hey there! Coverage.py 5.0a5 will have a method `Coverage.switch_context` to set the dynamic context externally. This is a simple proof of concept of using that method to set the pytest test id, phase, and status as the context. I haven't done anything about limiting this to Coverage 5.0a5, or writing tests yet.
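One way to sketch the missing "limit this to Coverage 5.0a5" guard discussed earlier in the thread (an assumption: feature-detection via `hasattr` rather than version parsing; the function name is invented, and this is not what the PR does):

```python
# Sketch: detect whether the installed coverage.py supports dynamic
# contexts by checking for the switch_context method (added in 5.0a5),
# so the plugin can fall back to doing nothing on older versions.
def supports_contexts(coverage_cls):
    return callable(getattr(coverage_cls, "switch_context", None))
```

Feature-detecting the method avoids fragile comparisons against pre-release version strings like "5.0a5".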