run and fix tox -e regen to prepare 5.4 #6901

Merged
6 changes: 6 additions & 0 deletions doc/en/assert.rst
@@ -47,6 +47,8 @@ you will see the return value of the function call:
E + where 3 = f()

test_assert1.py:6: AssertionError
========================= short test summary info ==========================
FAILED test_assert1.py::test_function - assert 3 == 4
============================ 1 failed in 0.12s =============================

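For context, the failing test behind this report is the docs' introductory example; a minimal sketch, reconstructed from the output above:

.. code-block:: python

    # content of test_assert1.py (sketch matching the report above)
    def f():
        return 3


    def test_function():
        # pytest rewrites the assert and reports the intermediate value of f()
        assert f() == 4
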
``pytest`` has support for showing the values of the most common subexpressions
@@ -208,6 +210,8 @@ if you run this module:
E Use -v to get the full diff

test_assert2.py:6: AssertionError
========================= short test summary info ==========================
FAILED test_assert2.py::test_set_comparison - AssertionError: assert {'0'...
============================ 1 failed in 0.12s =============================

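The set comparison producing this report is, roughly (a sketch inferred from the truncated ``assert {'0'...`` line above):

.. code-block:: python

    # content of test_assert2.py (sketch)
    def test_set_comparison():
        set1 = set("1308")
        set2 = set("8035")
        # pytest shows the extra items on each side of the comparison
        assert set1 == set2
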
Special comparisons are done for a number of cases:
@@ -279,6 +283,8 @@ the conftest file:
E vals: 1 != 2

test_foocompare.py:12: AssertionError
========================= short test summary info ==========================
FAILED test_foocompare.py::test_compare - assert Comparing Foo instances:
1 failed in 0.12s

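The "Comparing Foo instances" lines come from a ``pytest_assertrepr_compare`` hook in ``conftest.py``; a minimal sketch (the ``Foo``/``val`` names are inferred from the output above):

.. code-block:: python

    # content of conftest.py (sketch)
    from test_foocompare import Foo


    def pytest_assertrepr_compare(op, left, right):
        if isinstance(left, Foo) and isinstance(right, Foo) and op == "==":
            # return a list of strings: a summary line plus detail lines
            return [
                "Comparing Foo instances:",
                "   vals: {} != {}".format(left.val, right.val),
            ]
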
.. _assert-details:
2 changes: 2 additions & 0 deletions doc/en/builtin.rst
@@ -137,9 +137,11 @@ For information about fixtures, see :ref:`fixtures`. To see a complete list of a
tmpdir_factory [session scope]
Return a :class:`_pytest.tmpdir.TempdirFactory` instance for the test session.


tmp_path_factory [session scope]
Return a :class:`_pytest.tmpdir.TempPathFactory` instance for the test session.


tmpdir
Return a temporary directory path object
which is unique to each test function invocation,
17 changes: 15 additions & 2 deletions doc/en/cache.rst
@@ -75,6 +75,9 @@ If you run this for the first time you will see two failures:
E Failed: bad luck

test_50.py:7: Failed
========================= short test summary info ==========================
FAILED test_50.py::test_num[17] - Failed: bad luck
FAILED test_50.py::test_num[25] - Failed: bad luck
2 failed, 48 passed in 0.12s

If you then run it with ``--lf``:
@@ -86,7 +89,7 @@ If you then run it with ``--lf``:
platform linux -- Python 3.x.y, pytest-5.x.y, py-1.x.y, pluggy-0.x.y
cachedir: $PYTHON_PREFIX/.pytest_cache
rootdir: $REGENDOC_TMPDIR
- collected 50 items / 48 deselected / 2 selected
+ collected 2 items
run-last-failure: rerun previous 2 failures

test_50.py FF [100%]
@@ -114,7 +117,10 @@ If you then run it with ``--lf``:
E Failed: bad luck

test_50.py:7: Failed
- ===================== 2 failed, 48 deselected in 0.12s =====================
+ ========================= short test summary info ==========================
+ FAILED test_50.py::test_num[17] - Failed: bad luck
+ FAILED test_50.py::test_num[25] - Failed: bad luck
+ ============================ 2 failed in 0.12s =============================

You have run only the two failing tests from the last run, while the 48 passing
tests have not been run ("deselected").
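The module these runs exercise is a 50-way parametrized test that fails for two values; a sketch consistent with the output above:

.. code-block:: python

    # content of test_50.py (sketch)
    import pytest


    @pytest.mark.parametrize("i", range(50))
    def test_num(i):
        if i in (17, 25):
            pytest.fail("bad luck")
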
@@ -158,6 +164,9 @@ of ``FF`` and dots):
E Failed: bad luck

test_50.py:7: Failed
========================= short test summary info ==========================
FAILED test_50.py::test_num[17] - Failed: bad luck
FAILED test_50.py::test_num[25] - Failed: bad luck
======================= 2 failed, 48 passed in 0.12s =======================

.. _`config.cache`:
@@ -230,6 +239,8 @@ If you run this command for the first time, you can see the print statement:
test_caching.py:20: AssertionError
-------------------------- Captured stdout setup ---------------------------
running expensive computation...
========================= short test summary info ==========================
FAILED test_caching.py::test_function - assert 42 == 23
1 failed in 0.12s

If you run it a second time, the value will be retrieved from
@@ -249,6 +260,8 @@ the cache and nothing will be printed:
E assert 42 == 23

test_caching.py:20: AssertionError
========================= short test summary info ==========================
FAILED test_caching.py::test_function - assert 42 == 23
1 failed in 0.12s

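The pattern being demonstrated is a fixture that consults the cache before doing expensive work; a minimal sketch (the cache key name is illustrative):

.. code-block:: python

    # content of test_caching.py (sketch)
    import pytest


    @pytest.fixture
    def mydata(request):
        # a cache miss returns the default, here None
        val = request.config.cache.get("example/value", None)
        if val is None:
            print("running expensive computation...")
            val = 42
            # persisted in .pytest_cache across test runs
            request.config.cache.set("example/value", val)
        return val


    def test_function(mydata):
        assert mydata == 23
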
See the :fixture:`config.cache fixture <config.cache>` for more details.
2 changes: 2 additions & 0 deletions doc/en/capture.rst
@@ -100,6 +100,8 @@ of the failing function and hide the other one:
test_module.py:12: AssertionError
-------------------------- Captured stdout setup ---------------------------
setting up <function test_func2 at 0xdeadbeef>
========================= short test summary info ==========================
FAILED test_module.py::test_func2 - assert False
======================= 1 failed, 1 passed in 0.12s ========================

Accessing captured output from a test function
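That section covers the ``capsys`` and ``capfd`` fixtures; as a quick sketch of the pattern:

.. code-block:: python

    import sys


    def test_myoutput(capsys):
        print("hello")
        print("world", file=sys.stderr)
        # readouterr() snapshots and resets what has been captured so far
        captured = capsys.readouterr()
        assert captured.out == "hello\n"
        assert captured.err == "world\n"
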
7 changes: 7 additions & 0 deletions doc/en/example/markers.rst
@@ -715,6 +715,9 @@ We can now use the ``-m`` option to select one set:
test_module.py:8: in test_interface_complex
assert 0
E assert 0
========================= short test summary info ==========================
FAILED test_module.py::test_interface_simple - assert 0
FAILED test_module.py::test_interface_complex - assert 0
===================== 2 failed, 2 deselected in 0.12s ======================

or to select both "event" and "interface" tests:
@@ -743,4 +746,8 @@ or to select both "event" and "interface" tests:
test_module.py:12: in test_event_simple
assert 0
E assert 0
========================= short test summary info ==========================
FAILED test_module.py::test_interface_simple - assert 0
FAILED test_module.py::test_interface_complex - assert 0
FAILED test_module.py::test_event_simple - assert 0
===================== 3 failed, 1 deselected in 0.12s ======================
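For reference, a sketch of the kind of module behind these runs — in the docs the markers are added from a ``conftest.py`` hook based on test names, but plain decorators convey the idea (the fourth test's name is illustrative):

.. code-block:: python

    # content of test_module.py (sketch)
    import pytest


    @pytest.mark.interface
    def test_interface_simple():
        assert 0


    @pytest.mark.interface
    def test_interface_complex():
        assert 0


    @pytest.mark.event
    def test_event_simple():
        assert 0


    def test_something_else():
        assert 0

Running ``pytest -m interface -q`` then gives the first report above (2 failed, 2 deselected), and ``pytest -m "interface or event" -q`` the second (3 failed, 1 deselected).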
4 changes: 4 additions & 0 deletions doc/en/example/nonpython.rst
@@ -41,6 +41,8 @@ now execute the test specification:
usecase execution failed
spec failed: 'some': 'other'
no further details known at this point.
========================= short test summary info ==========================
FAILED test_simple.yaml::hello
======================= 1 failed, 1 passed in 0.12s ========================

.. regendoc:wipe
@@ -77,6 +79,8 @@ consulted when reporting in ``verbose`` mode:
usecase execution failed
spec failed: 'some': 'other'
no further details known at this point.
========================= short test summary info ==========================
FAILED test_simple.yaml::hello
======================= 1 failed, 1 passed in 0.12s ========================

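Both runs are driven by the docs' YAML-collection ``conftest.py``; an abridged sketch (``from_parent`` is the pytest 5.4 way to construct nodes; PyYAML is assumed):

.. code-block:: python

    # content of conftest.py (abridged sketch)
    import pytest


    def pytest_collect_file(parent, path):
        # collect test*.yaml files alongside regular python test files
        if path.ext == ".yaml" and path.basename.startswith("test"):
            return YamlFile.from_parent(parent, fspath=path)


    class YamlFile(pytest.File):
        def collect(self):
            import yaml  # third-party: PyYAML

            raw = yaml.safe_load(self.fspath.open())
            for name, spec in sorted(raw.items()):
                yield YamlItem.from_parent(self, name=name, spec=spec)


    class YamlItem(pytest.Item):
        def __init__(self, name, parent, spec):
            super().__init__(name, parent)
            self.spec = spec

        def runtest(self):
            for name, value in sorted(self.spec.items()):
                if name != value:
                    raise YamlException(self, name, value)

        def repr_failure(self, excinfo):
            # called when runtest() raises; produces the report shown above
            if isinstance(excinfo.value, YamlException):
                return "\n".join(
                    [
                        "usecase execution failed",
                        "   spec failed: {!r}: {!r}".format(*excinfo.value.args[1:3]),
                        "   no further details known at this point.",
                    ]
                )

        def reportinfo(self):
            # the third element is what verbose mode prints per test
            return self.fspath, 0, "usecase: {}".format(self.name)


    class YamlException(Exception):
        """Custom exception for error reporting."""
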
.. regendoc:wipe
17 changes: 10 additions & 7 deletions doc/en/example/parametrize.rst
@@ -73,6 +73,8 @@ let's run the full monty:
E assert 4 < 4

test_compute.py:4: AssertionError
========================= short test summary info ==========================
FAILED test_compute.py::test_compute[4] - assert 4 < 4
1 failed, 4 passed in 0.12s

As expected when running the full range of ``param1`` values
@@ -343,6 +345,8 @@ And then when we run the test:
E Failed: deliberately failing for demo purposes

test_backends.py:8: Failed
========================= short test summary info ==========================
FAILED test_backends.py::test_db_initialized[d2] - Failed: deliberately f...
1 failed, 1 passed in 0.12s

The first invocation with ``db == "DB1"`` passed while the second with ``db == "DB2"`` failed. Our ``db`` fixture function instantiated each of the DB values during the setup phase, while ``pytest_generate_tests`` generated two corresponding calls to ``test_db_initialized`` during the collection phase.
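A sketch of the pieces that paragraph describes (the ``d1``/``d2`` ids are taken from the output above; ``test_backends.py`` needs its own ``import pytest``):

.. code-block:: python

    # content of conftest.py (sketch)
    import pytest


    class DB1:
        "one database object"


    class DB2:
        "alternative database object"


    @pytest.fixture
    def db(request):
        # instantiated during the setup phase, once per parametrized id
        if request.param == "d1":
            return DB1()
        elif request.param == "d2":
            return DB2()
        else:
            raise ValueError("invalid internal test config")


    def pytest_generate_tests(metafunc):
        # runs during collection and generates the d1/d2 calls seen above
        if "db" in metafunc.fixturenames:
            metafunc.parametrize("db", ["d1", "d2"], indirect=True)


    # content of test_backends.py (sketch)
    def test_db_initialized(db):
        if db.__class__.__name__ == "DB2":
            pytest.fail("deliberately failing for demo purposes")
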
@@ -457,6 +461,8 @@ argument sets to use for each test function. Let's run it:
E assert 1 == 2

test_parametrize.py:21: AssertionError
========================= short test summary info ==========================
FAILED test_parametrize.py::TestClass::test_equals[1-2] - assert 1 == 2
1 failed, 2 passed in 0.12s

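The run above exercises per-class argument sets expanded by ``pytest_generate_tests``; roughly (a sketch consistent with the ``test_equals[1-2]`` id):

.. code-block:: python

    # content of test_parametrize.py (sketch)
    import pytest


    def pytest_generate_tests(metafunc):
        # called once per test function; look up its argument sets on the class
        funcarglist = metafunc.cls.params[metafunc.function.__name__]
        argnames = sorted(funcarglist[0])
        metafunc.parametrize(
            argnames,
            [[funcargs[name] for name in argnames] for funcargs in funcarglist],
        )


    class TestClass:
        # a map from test method name to its argument sets
        params = {
            "test_equals": [dict(a=1, b=2), dict(a=3, b=3)],
            "test_zerodivision": [dict(a=1, b=0)],
        }

        def test_equals(self, a, b):
            assert a == b

        def test_zerodivision(self, a, b):
            with pytest.raises(ZeroDivisionError):
                a / b
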
Indirect parametrization with multiple fixtures
@@ -478,11 +484,8 @@ Running it results in some skips if we don't have all the python interpreters in
.. code-block:: pytest

. $ pytest -rs -q multipython.py
- ssssssssssss...ssssssssssss [100%]
- ========================= short test summary info ==========================
- SKIPPED [12] $REGENDOC_TMPDIR/CWD/multipython.py:29: 'python3.5' not found
- SKIPPED [12] $REGENDOC_TMPDIR/CWD/multipython.py:29: 'python3.7' not found
- 3 passed, 24 skipped in 0.12s
+ ........................... [100%]
+ 27 passed in 0.12s

Indirect parametrization of optional implementations/imports
--------------------------------------------------------------------
@@ -607,13 +610,13 @@ Then run ``pytest`` with verbose mode and with only the ``basic`` marker:
platform linux -- Python 3.x.y, pytest-5.x.y, py-1.x.y, pluggy-0.x.y -- $PYTHON_PREFIX/bin/python
cachedir: $PYTHON_PREFIX/.pytest_cache
rootdir: $REGENDOC_TMPDIR
- collecting ... collected 17 items / 14 deselected / 3 selected
+ collecting ... collected 14 items / 11 deselected / 3 selected

test_pytest_param_example.py::test_eval[1+7-8] PASSED [ 33%]
test_pytest_param_example.py::test_eval[basic_2+4] PASSED [ 66%]
test_pytest_param_example.py::test_eval[basic_6*9] XFAIL [100%]

- =============== 2 passed, 14 deselected, 1 xfailed in 0.12s ================
+ =============== 2 passed, 11 deselected, 1 xfailed in 0.12s ================

As the result:

49 changes: 47 additions & 2 deletions doc/en/example/reportingdemo.rst
@@ -436,7 +436,7 @@ Here is a nice run of several failures and how ``pytest`` presents things:
items = [1, 2, 3]
print("items is {!r}".format(items))
> a, b = items.pop()
- E TypeError: 'int' object is not iterable
+ E TypeError: cannot unpack non-iterable int object

failure_demo.py:181: TypeError
--------------------------- Captured stdout call ---------------------------
@@ -516,7 +516,7 @@ Here is a nice run of several failures and how ``pytest`` presents things:
def test_z2_type_error(self):
items = 3
> a, b = items
- E TypeError: 'int' object is not iterable
+ E TypeError: cannot unpack non-iterable int object

failure_demo.py:222: TypeError
______________________ TestMoreErrors.test_startswith ______________________
@@ -650,4 +650,49 @@ Here is a nice run of several failures and how ``pytest`` presents things:
E + where 1 = This is JSON\n{\n 'foo': 'bar'\n}.a

failure_demo.py:282: AssertionError
========================= short test summary info ==========================
FAILED failure_demo.py::test_generative[3-6] - assert (3 * 2) < 6
FAILED failure_demo.py::TestFailing::test_simple - assert 42 == 43
FAILED failure_demo.py::TestFailing::test_simple_multiline - assert 42 == 54
FAILED failure_demo.py::TestFailing::test_not - assert not 42
FAILED failure_demo.py::TestSpecialisedExplanations::test_eq_text - Asser...
FAILED failure_demo.py::TestSpecialisedExplanations::test_eq_similar_text
FAILED failure_demo.py::TestSpecialisedExplanations::test_eq_multiline_text
FAILED failure_demo.py::TestSpecialisedExplanations::test_eq_long_text - ...
FAILED failure_demo.py::TestSpecialisedExplanations::test_eq_long_text_multiline
FAILED failure_demo.py::TestSpecialisedExplanations::test_eq_list - asser...
FAILED failure_demo.py::TestSpecialisedExplanations::test_eq_list_long - ...
FAILED failure_demo.py::TestSpecialisedExplanations::test_eq_dict - Asser...
FAILED failure_demo.py::TestSpecialisedExplanations::test_eq_set - Assert...
FAILED failure_demo.py::TestSpecialisedExplanations::test_eq_longer_list
FAILED failure_demo.py::TestSpecialisedExplanations::test_in_list - asser...
FAILED failure_demo.py::TestSpecialisedExplanations::test_not_in_text_multiline
FAILED failure_demo.py::TestSpecialisedExplanations::test_not_in_text_single
FAILED failure_demo.py::TestSpecialisedExplanations::test_not_in_text_single_long
FAILED failure_demo.py::TestSpecialisedExplanations::test_not_in_text_single_long_term
FAILED failure_demo.py::TestSpecialisedExplanations::test_eq_dataclass - ...
FAILED failure_demo.py::TestSpecialisedExplanations::test_eq_attrs - Asse...
FAILED failure_demo.py::test_attribute - assert 1 == 2
FAILED failure_demo.py::test_attribute_instance - AssertionError: assert ...
FAILED failure_demo.py::test_attribute_failure - Exception: Failed to get...
FAILED failure_demo.py::test_attribute_multiple - AssertionError: assert ...
FAILED failure_demo.py::TestRaises::test_raises - ValueError: invalid lit...
FAILED failure_demo.py::TestRaises::test_raises_doesnt - Failed: DID NOT ...
FAILED failure_demo.py::TestRaises::test_raise - ValueError: demo error
FAILED failure_demo.py::TestRaises::test_tupleerror - ValueError: not eno...
FAILED failure_demo.py::TestRaises::test_reinterpret_fails_with_print_for_the_fun_of_it
FAILED failure_demo.py::TestRaises::test_some_error - NameError: name 'na...
FAILED failure_demo.py::test_dynamic_compile_shows_nicely - AssertionError
FAILED failure_demo.py::TestMoreErrors::test_complex_error - assert 44 == 43
FAILED failure_demo.py::TestMoreErrors::test_z1_unpack_error - ValueError...
FAILED failure_demo.py::TestMoreErrors::test_z2_type_error - TypeError: c...
FAILED failure_demo.py::TestMoreErrors::test_startswith - AssertionError:...
FAILED failure_demo.py::TestMoreErrors::test_startswith_nested - Assertio...
FAILED failure_demo.py::TestMoreErrors::test_global_func - assert False
FAILED failure_demo.py::TestMoreErrors::test_instance - assert 42 != 42
FAILED failure_demo.py::TestMoreErrors::test_compare - assert 11 < 5
FAILED failure_demo.py::TestMoreErrors::test_try_finally - assert 1 == 0
FAILED failure_demo.py::TestCustomAssertMsg::test_single_line - Assertion...
FAILED failure_demo.py::TestCustomAssertMsg::test_multiline - AssertionEr...
FAILED failure_demo.py::TestCustomAssertMsg::test_custom_repr - Assertion...
============================ 44 failed in 0.12s ============================
23 changes: 22 additions & 1 deletion doc/en/example/simple.rst
@@ -65,6 +65,8 @@ Let's run this without supplying our new option:
test_sample.py:6: AssertionError
--------------------------- Captured stdout call ---------------------------
first
========================= short test summary info ==========================
FAILED test_sample.py::test_answer - assert 0
1 failed in 0.12s

And now with supplying a command line option:
@@ -89,6 +91,8 @@ And now with supplying a command line option:
test_sample.py:6: AssertionError
--------------------------- Captured stdout call ---------------------------
second
========================= short test summary info ==========================
FAILED test_sample.py::test_answer - assert 0
1 failed in 0.12s

You can see that the command line option arrived in our test. This
@@ -261,6 +265,8 @@ Let's run our little function:
E Failed: not configured: 42

test_checkconfig.py:11: Failed
========================= short test summary info ==========================
FAILED test_checkconfig.py::test_something - Failed: not configured: 42
1 failed in 0.12s

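The little function in question demonstrates ``__tracebackhide__``; a sketch matching the report above:

.. code-block:: python

    # content of test_checkconfig.py (sketch)
    import pytest


    def checkconfig(x):
        __tracebackhide__ = True  # hide this helper frame from tracebacks
        if not hasattr(x, "config"):
            pytest.fail("not configured: {}".format(x))


    def test_something():
        checkconfig(42)
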
If you only want to hide certain exceptions, you can set ``__tracebackhide__``
@@ -443,7 +449,7 @@ Now we can profile which test functions execute the slowest:
========================= slowest 3 test durations =========================
0.30s call test_some_are_slow.py::test_funcslow2
0.20s call test_some_are_slow.py::test_funcslow1
- 0.11s call test_some_are_slow.py::test_funcfast
+ 0.10s call test_some_are_slow.py::test_funcfast
============================ 3 passed in 0.12s =============================

incremental testing - test steps
@@ -461,6 +467,9 @@ an ``incremental`` marker which is to be used on classes:

# content of conftest.py

from typing import Dict, Tuple
import pytest

# store history of failures per test class name and per index in parametrize (if parametrize used)
_test_failed_incremental: Dict[str, Dict[Tuple[int, ...], str]] = {}

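The rest of that ``conftest.py`` is folded out of the diff; the two hooks that consume the dict look roughly like this (an abridged sketch — the docs' full version also keys on the indices of parametrized variants):

.. code-block:: python

    import pytest


    def pytest_runtest_makereport(item, call):
        if "incremental" in item.keywords and call.excinfo is not None:
            # remember the name of the first failing test method for this class
            _test_failed_incremental.setdefault(str(item.cls), {})[()] = item.name


    def pytest_runtest_setup(item):
        if "incremental" in item.keywords:
            test_name = _test_failed_incremental.get(str(item.cls), {}).get(())
            if test_name is not None:
                # a previous step in this class failed: xfail the remaining steps
                pytest.xfail("previous test failed ({})".format(test_name))
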
@@ -669,6 +678,11 @@ We can run this:
E assert 0

a/test_db2.py:2: AssertionError
========================= short test summary info ==========================
FAILED test_step.py::TestUserHandling::test_modification - assert 0
FAILED a/test_db.py::test_a1 - AssertionError: <conftest.DB object at 0x7...
FAILED a/test_db2.py::test_a2 - AssertionError: <conftest.DB object at 0x...
ERROR b/test_error.py::test_root
============= 3 failed, 2 passed, 1 xfailed, 1 error in 0.12s ==============

The two test modules in the ``a`` directory see the same ``db`` fixture instance
@@ -758,6 +772,9 @@ and run them:
E assert 0

test_module.py:6: AssertionError
========================= short test summary info ==========================
FAILED test_module.py::test_fail1 - assert 0
FAILED test_module.py::test_fail2 - assert 0
============================ 2 failed in 0.12s =============================

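The conftest machinery that writes that file is a ``pytest_runtest_makereport`` hookwrapper, roughly (a sketch; the docs' full version also records a failing test's ``tmpdir`` when it uses one):

.. code-block:: python

    # content of conftest.py (sketch)
    import os.path

    import pytest


    @pytest.hookimpl(tryfirst=True, hookwrapper=True)
    def pytest_runtest_makereport(item, call):
        # let the other hooks build the report object first
        outcome = yield
        rep = outcome.get_result()

        if rep.when == "call" and rep.failed:
            mode = "a" if os.path.exists("failures") else "w"
            with open("failures", mode) as f:
                f.write(rep.nodeid + "\n")
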
you will have a "failures" file which contains the failing test ids:
@@ -873,6 +890,10 @@ and run it:
E assert 0

test_module.py:19: AssertionError
========================= short test summary info ==========================
FAILED test_module.py::test_call_fails - assert 0
FAILED test_module.py::test_fail2 - assert 0
ERROR test_module.py::test_setup_fails - assert 0
======================== 2 failed, 1 error in 0.12s ========================

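The fixture side of that example reads the reports that a hookwrapper (not shown here) stashed on the test item; a sketch:

.. code-block:: python

    # content of conftest.py (sketch, fixture part only)
    import pytest


    @pytest.fixture
    def something(request):
        yield
        # rep_setup/rep_call are assumed to be attached to the item by a
        # pytest_runtest_makereport hookwrapper during the respective phases
        if request.node.rep_setup.failed:
            print("setting up a test failed!", request.node.nodeid)
        elif request.node.rep_setup.passed and request.node.rep_call.failed:
            print("executing test failed", request.node.nodeid)
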
You'll see that the fixture finalizers could use the precise reporting