
Execute 'async def' test functions #2175

Closed · youknowone wants to merge 1 commit into pytest-dev:master from youknowone:execute-async


Conversation

@youknowone (Contributor) commented Jan 6, 2017

pytest did not use the return value of test functions, but `async def` test functions return an awaitable object instead of running their body.

Because of that, `async def` tests were reported as passing even though their code never actually ran, and it was very hard for users to tell whether their tests had executed at all.

This patch checks the result of each test function and, if it is an awaitable object, actually runs it so that the test body is executed.
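
For illustration, a minimal sketch of the idea (not the exact diff in this PR), assuming asyncio drives the awaitable; `call_test_function` is a hypothetical helper standing in for pytest's internal call:

```python
import asyncio
import inspect


def call_test_function(testfunction, **kwargs):
    """Call a test function; if it returns an awaitable, actually run it."""
    result = testfunction(**kwargs)
    if inspect.isawaitable(result):
        # 'async def' tests return a coroutine object; drive it to
        # completion instead of silently discarding it.
        loop = asyncio.new_event_loop()
        try:
            return loop.run_until_complete(result)
        finally:
            loop.close()
    return result
```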


Thanks for submitting a PR, your contribution is really appreciated!

Here's a quick checklist that should be present in PRs:

  • Target: for bug or doc fixes, target master; for new features, target features;

Unless your change is a trivial documentation fix (e.g., a typo or rewording of a small section), please:

  • Make sure to include one or more tests for your change;
  • Add yourself to AUTHORS;
  • Add a new entry to CHANGELOG.rst
    • Choose any open position to avoid merge conflicts with other PRs.
    • Add a link to the issue you are fixing (if any) using RST syntax.
    • The pytest team likes to have contributors acknowledged in the CHANGELOG, so please add a thank-you note to yourself ("Thanks @user for the PR") and a link to your GitHub profile. It may sound odd to thank yourself, but otherwise a maintainer would have to do it manually before or after merging instead of just using GitHub's merge button. This makes it easier for the maintainers to merge PRs.

@RonnyPfannschmidt (Member)

This patch discriminates against all non-asyncio frameworks.

@coveralls (Coverage Status)

Coverage decreased (-0.06%) to 92.754% when pulling d3a8a95 on youknowone:execute-async into 64cb67b on pytest-dev:master.

@youknowone (Contributor, Author)

I am very new to pytest. Do you have a better idea for solving the fundamental problem?
I think the biggest problem with the current behavior is that `async def` tests are always marked as passed.
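
To make the problem concrete, this made-up test file passes under plain pytest even though the assertion can never succeed, because the coroutine object is created but never awaited:

```python
# test_silent_pass.py
async def test_never_actually_runs():
    # This body never executes under plain pytest: calling the function
    # only creates a coroutine object, which is then discarded.
    assert False
```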


@RonnyPfannschmidt (Member)

pytest core should either trigger a pytest warning or an xfail on awaitable results.

@hpk42 @nicoddemus please chime in, this is a fundamental decision.

@coveralls (Coverage Status)

Coverage decreased (-0.06%) to 92.754% when pulling d3aa85e on youknowone:execute-async into 64cb67b on pytest-dev:master.

@coveralls (Coverage Status)

Coverage decreased (-0.03%) to 92.786% when pulling 60c0099 on youknowone:execute-async into 64cb67b on pytest-dev:master.

@The-Compiler (Member)

IMHO it makes a lot of sense for pytest to support something that is in the Python stdlib, at least at such a low level.

@nicoddemus (Member) commented Jan 6, 2017

I agree that the current state is not acceptable. At the very least we should issue a warning, or even fail with an error (I'm leaning towards the latter).

Also, there's pytest-asyncio already. We might consider integrating it into the core, as @The-Compiler's point is a valid one.

There are also other questions to address, such as the ability to use custom event loops; I will probably use quamash in the near future for Qt applications.
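
One possible shape for custom-loop support, sketched as an overridable fixture (pytest-asyncio already provides an `event_loop` fixture roughly along these lines; the loop construction below is plain asyncio, and a project could return quamash's QEventLoop from an override in its conftest.py):

```python
import asyncio

import pytest


@pytest.fixture
def event_loop():
    """Provide the event loop used to run 'async def' tests.

    Projects could override this fixture in conftest.py to supply a
    custom loop, e.g. quamash's QEventLoop for Qt applications.
    """
    loop = asyncio.new_event_loop()
    yield loop
    loop.close()
```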

@nicoddemus (Member)

For the short term, perhaps we should issue an error stating that awaitable objects are not supported at the moment, and point users towards pytest-asyncio? That might even be an acceptable long-term solution.

@youknowone (Contributor, Author) commented Jan 7, 2017

I have a question about your discussion and a suggestion about pytest-asyncio.


The question: if pytest has to check the results of test functions in order to show errors, is there any reason not to just run them and get the actual result? I guess the cost for normal tests is almost the same either way, since showing an error and running the awaitable both require a check on the result and a branch.

pytest-asyncio still offers more features, but I think pytest itself could also offer the basic ones even without full integration. (Not sure; I don't know your policies well.)


(Edited after the comment below)
About pytest-asyncio:

For now, users must attach @pytest.mark.asyncio to every coroutine test (see the example after the list below). I don't think this is very user-friendly. Changing pytest to produce failures rather than passes would obviously be better, because it is noticeable, but it is still not enough.

I think the behavior of pytest and pytest-asyncio would be ideal if:

  1. Plain pytest issues an error about coroutines, hinting that pytest-asyncio should be installed (as @nicoddemus commented).
  2. With pytest-asyncio installed, coroutine tests work without @pytest.mark.asyncio.
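
For reference, this is roughly what the current pytest-asyncio usage looks like; every coroutine test needs the explicit mark (`fetch_value` is a made-up coroutine standing in for real async code under test):

```python
import asyncio

import pytest


async def fetch_value():
    # Made-up coroutine standing in for real async code under test.
    await asyncio.sleep(0)
    return 42


@pytest.mark.asyncio
async def test_fetch_value():
    assert await fetch_value() == 42
```

Running this requires pytest-asyncio to be installed; without the mark, the test would be collected but its body would never execute.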

@RonnyPfannschmidt (Member)

Well, as of now it is simply impossible to figure out whether an async function is meant for asyncio or not.

Simply enforcing asyncio onto async functions of a different framework breaks people's code.

For example, if a library exposes both curio and asyncio and tests both, what you propose would completely break it (see the sketch below).
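
A conceptual illustration (`curio.sleep` and `curio.run` are real curio APIs; the test names are made up): both tests use the same `async def` syntax, but only the first can be driven by an asyncio event loop, and forcing `run_until_complete` on the second fails at runtime because curio coroutines await curio-specific traps.

```python
import asyncio

import curio


async def test_with_asyncio():
    await asyncio.sleep(0)   # runs fine on an asyncio event loop


async def test_with_curio():
    await curio.sleep(0)     # only runs correctly under curio.run(); driving
                             # it with an asyncio loop fails at runtime
```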

@youknowone (Contributor, Author)

Thanks for the explanation. Now I see what you meant by "discriminates".

@coveralls (Coverage Status)

Coverage decreased (-0.03%) to 92.786% when pulling 22635b1 on youknowone:execute-async into 64cb67b on pytest-dev:master.

@coveralls (Coverage Status)

Coverage decreased (-0.05%) to 92.766% when pulling c57d535 on youknowone:execute-async into 64cb67b on pytest-dev:master.

@nicoddemus (Member)

Created #2224 for adding a warning when collecting awaitable objects, at least.

I was about to propose closing this PR because of the problem of supporting other async frameworks when it occurred to me: if we implement support for asyncio in the core, that does not mean other plugins like pytest-tornado cannot be used to test tornado functions. It seems all async plugins mark test functions explicitly, so each test can be tied to a specific framework. The core implementation could follow the same approach of explicitly marking test functions, which is something pytest-asyncio already does.

On the other hand, we might want to keep async support in external plugins, in which case a warning would make more sense (see the sketch below).
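
Something like this rough conftest.py sketch could produce that warning today; the eventual core change tracked in #2224 may look quite different, and the warning text and hook placement here are only an assumption:

```python
import inspect
import warnings

import pytest


@pytest.hookimpl(tryfirst=True)
def pytest_pyfunc_call(pyfuncitem):
    # Warn when the collected test is an 'async def' function, since plain
    # pytest will not actually run its body.
    if inspect.iscoroutinefunction(pyfuncitem.obj):
        warnings.warn(
            "'async def' test %r will not actually run; consider using a "
            "plugin such as pytest-asyncio or pytest-tornado" % pyfuncitem.name
        )
    return None  # fall through to pytest's default way of calling the test
```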

@nicoddemus (Member)

What do you think? I would vote for adding the warning, explicitly mentioning the more popular async plugins as options (pytest-asyncio and pytest-tornado?), and closing this PR.

@RonnyPfannschmidt (Member)

Closing this PR, as we determined the correct action for py.test is to warn or error out.

Still, thanks for the proposal; it sparked the discussion and pushed us toward a good idea of how the py.test core should handle this.

@youknowone (Contributor, Author)

Thanks, I will wait for the update. And thanks to everyone on this thread for considering my PR seriously and helping me see where I was wrong :)
