Run checks at the start of tests #414

Open · adamchainz wants to merge 5 commits into master from issue_337_checks_at_start_of_tests
Conversation

adamchainz (Member)

Fixes #337. Checks are run at the start of tests, like Django does in 1.11+ after django/django#6294 was merged.

adamchainz (Member Author) commented Nov 5, 2016

A few things need discussing/doing:

  1. Don't merge this until "Fixed #25415 -- Made DiscoverRunner run system checks" (django/django#6294) is merged 😄
  2. This currently breaks the output. pytest-django diverges from the Django test runner by implementing the database as a pytest fixture, so the database is only set up when a database-touching test requires it, rather than always. Running the checks at that point gives broken output:
tpkg/test_the_test.py::TestLiveServer::test_a System check identified no issues (0 silenced).
PASSED

The output is required to see the failing checks.

At YPlan, our implementation has code similar to this PR in a fixture like the following (a fuller sketch appears after this list):

@pytest.fixture(scope='session', autouse=True)
def django_check(request, django_db_setup, django_db_blocker):
    ...  # body elided in the original comment

pytest-django could make the db fixture autouse=True, and then the output from the checks would appear before the first test runs, but this would also slow down test runs that don't touch the database at all.

  3. I have yet to add a config option to disable it, a changelog note, or tests with xdist(?).
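
For illustration, here is a fuller sketch of what such a session-scoped fixture could look like; the check-running and failure handling below are assumptions for illustration, not the actual YPlan code:

import pytest
from django.core import checks


@pytest.fixture(scope='session', autouse=True)
def django_check(request, django_db_setup, django_db_blocker):
    # Run Django's system checks once per session, with DB access unblocked
    # in case any check needs the database.
    with django_db_blocker.unblock():
        messages = checks.run_checks()
    errors = [m for m in messages if m.is_serious()]
    if errors:
        pytest.fail(
            'System check identified some issues:\n'
            + '\n'.join(str(m) for m in errors)
        )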

pelme (Member) commented Nov 6, 2016

Very interesting feature! I often forget to run the Django checks myself...

pytest-django is designed to be usable in projects where not all of the test suite depends on Django, letting you run a subset of tests and construct Django dependencies (like the DB) only when needed. Making db autouse=True would not really be feasible without breaking that, so I think we should try to find another way around this to make it usable. :)

We could look at pytest-pep8 or pytest-flakes for inspiration. We could use a command line option like --django-checks that would run the checks, produce a Django check section in the output, and warn or fail the test run.
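
For reference, a minimal sketch of that idea in a conftest.py, assuming the option simply aborts the run before collection when serious check messages are found; the option name and behaviour are illustrative, not an existing pytest-django API:

import pytest


def pytest_addoption(parser):
    parser.addoption(
        '--django-checks',
        action='store_true',
        default=False,
        help='Run Django system checks before running tests.',
    )


def pytest_configure(config):
    if not config.getoption('--django-checks'):
        return
    # Assumes Django settings are already configured by this point
    # (e.g. by pytest-django), so django.setup() is safe to call.
    import django
    from django.core import checks

    django.setup()
    errors = [m for m in checks.run_checks() if m.is_serious()]
    if errors:
        # Abort before collection, with the check output as the error message.
        raise pytest.UsageError(
            'Django system checks failed:\n'
            + '\n'.join(str(m) for m in errors)
        )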

adamchainz (Member Author)

Updated now that I've updated my Django PR. I still haven't added a config option to disable it or a changelog note, but it's working the same way Django 1.11 will.

blueyed (Contributor) commented May 16, 2017

@adamchainz
There are CI failures... do you have an update?

adamchainz force-pushed the issue_337_checks_at_start_of_tests branch from c97a9a5 to a5bc562 on May 16, 2017
adamchainz (Member Author)

No, I rebased onto the latest master and pushed because Travis had lost the logs. I will look at this again, thanks for the nudge.

MRigal commented Nov 23, 2017

Hi @adamchainz, it would be great to have this in!

adamchainz force-pushed the issue_337_checks_at_start_of_tests branch from a5bc562 to 7aa1ea0 on May 27, 2018
adamchainz (Member Author)

I have (finally) had another look at this at the DjangoCon EU 2018 sprints. Here's the status:

  • It works(*) on plain pytest, but the output comes without colours, and because the failure happens during fixture setup, the first test still runs after the stop flag is set, so the output is a bit weird, ending with '1 passed':
============================= test session starts ==============================
platform darwin -- Python 3.6.4, pytest-3.5.1, py-1.5.3, pluggy-0.6.0
Django settings: tpkg.the_settings (from environment variable)
rootdir: /private/var/folders/6n/yj1qnxv52l7cl_zwh59_3nh00000gp/T/pytest-of-adam/pytest-18/test_failing_checks_fail_tests0, inifile:
plugins: xdist-1.15.0, django-3.2.2.dev6+g7aa1ea0.d20180527
collected 1 item

test_failing_checks_fail_tests.py .

SystemCheckError: System check identified some issues:

ERRORS:
?: (test.001) My failure message

System check identified 1 issue (0 silenced).
=========================== 1 passed in 0.07 seconds ===========================
  • It runs on xdist, but the only way I found to exit, sys.exit, causes pytest to print a massive traceback, so the output from the checks gets buried (a possible alternative is sketched at the end of this comment).

I think I might need some help from the pytest team.
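
One direction that might avoid the giant traceback, sketched here under the assumption that stopping the whole session is acceptable (I haven't verified how it behaves under xdist): pytest.exit() raises pytest's own Exit exception, which is reported as a short "!!! Exit: ... !!!" line instead of a full SystemExit traceback.

import pytest
from django.core import checks


def _abort_on_failing_checks():
    # Assumes Django is already configured when this is called.
    errors = [m for m in checks.run_checks() if m.is_serious()]
    if errors:
        pytest.exit(
            'System check identified some issues:\n'
            + '\n'.join(str(m) for m in errors)
        )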

blueyed (Contributor) commented Feb 26, 2019

It should likely use some more specific pytest hook, e.g. pytest_configure.

In general I think that running this separately in CI is better than with (possibly) every pytest invocation.

adamchainz (Member Author)

> In general I think that running this separately in CI is better than with (possibly) every pytest invocation.

Checks take milliseconds and can save a lot of painful debugging because they give clearer errors around misconfiguration. If they aren't run in tests, they can be missed in TDD workflows, since most users would only see them on runserver and might work on a change without actually running the server. I found this repeatedly in my job at the time, and that's why I pushed to make it official that they run at the start of tests in Django 1.11 (IIRC it happened accidentally in Django up until 1.9, but then a refactor dropped it in 1.10).

blueyed (Contributor) commented Feb 28, 2019

OK. I'm fine with adding it, but there should be an option for it, defaulting to off in my opinion - though I would be happy enough as long as I could disable it.

As you said already, it probably needs to integrate differently with pytest.

MRigal commented Apr 11, 2019

I still find it a very useful feature - defaulting to on, in my opinion.

@adamchainz have you tried making it a discoverable test? It could always run as the first test, add to the failures, and/or stop the run when used with -x or --maxfail, a bit like https://pytest-ordering.readthedocs.io/en/develop/ works.
However, I'm not sure how to have it also run when combining with -k or when only a selected file is run...

blueyed (Contributor) commented Apr 11, 2019

I think pytest_collection_modifyitems could always add an item, but I'm not sure that it should run when -k or args in general are used.
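
A rough, untested sketch of that idea, assuming a pytest version where Item.from_parent is available; the item name and failure formatting are illustrative only:

import pytest
from django.core import checks


class DjangoChecksItem(pytest.Item):
    """Synthetic test item that runs Django's system checks."""

    def runtest(self):
        errors = [m for m in checks.run_checks() if m.is_serious()]
        if errors:
            raise AssertionError(
                'System check identified some issues:\n'
                + '\n'.join(str(m) for m in errors)
            )


def pytest_collection_modifyitems(session, config, items):
    # Prepend the synthetic item so the checks run (and can fail) first;
    # whether to skip it under -k or explicit path arguments is left open.
    if items:
        items.insert(0, DjangoChecksItem.from_parent(session, name='django-system-checks'))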
