New in version 2.0.
You can invoke testing through the Python interpreter from the command line:
python -m pytest [...]
This is almost equivalent to invoking the command line script pytest [...] directly, except that calling via python will also add the current directory to sys.path.
Running pytest can result in six different exit codes:
- Exit code 0: All tests were collected and passed successfully
- Exit code 1: Tests were collected and run but some of the tests failed
- Exit code 2: Test execution was interrupted by the user
- Exit code 3: Internal error happened while executing tests
- Exit code 4: pytest command line usage error
- Exit code 5: No tests were collected
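For instance, a wrapper script can branch on these codes. A minimal sketch, assuming a tests/ directory (the path is illustrative, not part of pytest):
import subprocess
import sys

# Run pytest in a child interpreter and inspect its exit code.
result = subprocess.run([sys.executable, "-m", "pytest", "tests/"])
if result.returncode == 5:
    print("no tests were collected")
elif result.returncode != 0:
    print("test run failed with exit code", result.returncode)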
pytest --version # shows where pytest was imported from
pytest --fixtures # show available builtin function arguments
pytest -h | --help # show help on command line and config file options
To stop the testing process after the first (N) failures:
pytest -x # stop after first failure
pytest --maxfail=2 # stop after two failures
Pytest supports several ways to run and select tests from the command-line.
Run tests in a module
pytest test_mod.py
Run tests in a directory
pytest testing/
Run tests by keyword expressions
pytest -k "MyClass and not method"
This will run tests which contain names that match the given string expression, which can include Python operators that use filenames, class names and function names as variables. The example above will run TestMyClass.test_something but not TestMyClass.test_method_simple.
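For illustration, a hypothetical test_mod.py whose names produce exactly that selection:
# content of test_mod.py (hypothetical example)
class TestMyClass(object):
    def test_something(self):
        # selected: "MyClass" matches the class name, "method" does not appear
        assert True

    def test_method_simple(self):
        # deselected: the name contains "method"
        assert True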
Run tests by node ids
Each collected test is assigned a unique nodeid which consists of the module filename followed by specifiers like class names, function names and parameters from parametrization, separated by :: characters.
To run a specific test within a module:
pytest test_mod.py::test_func
Another example specifying a test method in the command line:
pytest test_mod.py::TestClass::test_method
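Assuming a test_mod.py laid out as follows, the two invocations above select test_func and TestClass.test_method respectively:
# content of test_mod.py (hypothetical layout matching the node ids above)
def test_func():
    assert True

class TestClass(object):
    def test_method(self):
        assert True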
Run tests by marker expressions
pytest -m slow
Will run all tests which are decorated with the @pytest.mark.slow decorator.
For more information see marks.
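For example, given the hypothetical tests below, only test_big_computation is selected by pytest -m slow:
import pytest

@pytest.mark.slow
def test_big_computation():
    # selected by "pytest -m slow"
    assert sum(range(1000)) == 499500

def test_quick():
    # deselected: carries no "slow" marker
    assert True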
Run tests from packages
pytest --pyargs pkg.testing
This will import pkg.testing and use its filesystem location to find and run tests from.
Examples for modifying traceback printing:
pytest --showlocals # show local variables in tracebacks
pytest -l # show local variables (shortcut)
pytest --tb=auto # (default) 'long' tracebacks for the first and last
# entry, but 'short' style for the other entries
pytest --tb=long # exhaustive, informative traceback formatting
pytest --tb=short # shorter traceback format
pytest --tb=line # only one line per failure
pytest --tb=native # Python standard library formatting
pytest --tb=no # no traceback at all
The --full-trace option causes very long traces to be printed on error (longer than --tb=long). It also ensures that a stack trace is printed on KeyboardInterrupt (Ctrl+C). This is very useful if the tests are taking too long and you interrupt them with Ctrl+C to find out where the tests are hanging. By default no output will be shown (because KeyboardInterrupt is caught by pytest). By using this option you make sure a trace is shown.
New in version 2.9.
The -r flag can be used to display a test results summary at the end of the test session, making it easy in large test suites to get a clear picture of all failures, skips, xfails, etc.
Example:
$ pytest -ra
=========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
rootdir: $REGENDOC_TMPDIR, inifile:
collected 0 items
======================= no tests ran in 0.12 seconds =======================
The -r option accepts a number of characters after it, with a used above meaning "all except passes".
Here is the full list of available characters that can be used:
- f - failed
- E - error
- s - skipped
- x - xfailed
- X - xpassed
- p - passed
- P - passed with output
- a - all except pP
More than one character can be used, so for example to only see failed and skipped tests, you can execute:
$ pytest -rfs
=========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
rootdir: $REGENDOC_TMPDIR, inifile:
collected 0 items
======================= no tests ran in 0.12 seconds =======================
Dropping to PDB (Python Debugger) on failures
Python comes with a builtin Python debugger called PDB. pytest allows one to drop into the PDB prompt via a command line option:
pytest --pdb
This will invoke the Python debugger on every failure (or KeyboardInterrupt). Often you might only want to do this for the first failing test to understand a certain failure situation:
pytest -x --pdb # drop to PDB on first failure, then end test session
pytest --pdb --maxfail=3 # drop to PDB for first three failures
Note that on any failure the exception information is stored on sys.last_value, sys.last_type and sys.last_traceback. In interactive use, this allows one to drop into postmortem debugging with any debug tool. One can also manually access the exception information, for example:
>>> import sys
>>> sys.last_traceback.tb_lineno
42
>>> sys.last_value
AssertionError('assert result == "ok"',)
Dropping to PDB (Python Debugger) at the start of a test
pytest allows one to drop into the PDB prompt immediately at the start of each test via a command line option:
pytest --trace
This will invoke the Python debugger at the start of every test.
To set a breakpoint in your code, use the native Python import pdb;pdb.set_trace() call in your code and pytest automatically disables its output capture for that test:
- Output capture in other tests is not affected.
- Any prior test output that has already been captured will be processed as such.
- Any later output produced within the same test will not be captured and will instead get sent directly to sys.stdout. Note that this holds true even for test output occurring after you exit the interactive PDB tracing session and continue with the regular test run.
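A minimal sketch of such a test (the computation is illustrative only):
def test_debugging():
    numbers = [1, 2, 3]
    import pdb; pdb.set_trace()  # execution stops here; output capture is disabled
    assert sum(numbers) == 6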
Python 3.7 introduces a builtin breakpoint() function. Pytest supports the use of breakpoint() with the following behaviours:
- When breakpoint() is called and PYTHONBREAKPOINT is set to the default value, pytest will use the custom internal PDB trace UI instead of the system default Pdb.
- When tests are complete, the system will default back to the system Pdb trace UI.
- If --pdb is passed to pytest, the custom internal Pdb trace UI is used on both breakpoint() and failed tests/unhandled exceptions.
- If --pdbcls is used, the custom class debugger will be executed when a test fails (as expected within existing behaviour), but also when breakpoint() is called from within a test, the custom class debugger will be instantiated.
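A hypothetical test using the builtin (Python 3.7+ only):
def test_with_builtin_breakpoint():
    total = 1 + 2
    breakpoint()  # under pytest this opens pytest's PDB UI rather than the stock Pdb
    assert total == 3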
To get a list of the slowest 10 test durations:
pytest --durations=10
By default, pytest will not show test durations that are too small (<0.01s) unless -vv
is passed on the command-line.
To create result files which can be read by Jenkins or other Continuous Integration servers, use this invocation:
pytest --junitxml=path
to create an XML file at path.
New in version 3.1.
To set the name of the root test suite xml item, you can configure the junit_suite_name
option in your config file:
[pytest]
junit_suite_name = my_suite
New in version 2.8.
Changed in version 3.5: Fixture renamed from record_xml_property to record_property as user properties are now available to all reporters. record_xml_property is now deprecated.
If you want to log additional information for a test, you can use the record_property fixture:
def test_function(record_property):
    record_property("example_key", 1)
    assert True
This will add an extra property example_key="1" to the generated testcase tag:
<testcase classname="test_function" file="test_function.py" line="0" name="test_function" time="0.0009">
  <properties>
    <property name="example_key" value="1" />
  </properties>
</testcase>
Alternatively, you can integrate this functionality with custom markers:
# content of conftest.py
def pytest_collection_modifyitems(session, config, items):
    for item in items:
        for marker in item.iter_markers(name="test_id"):
            test_id = marker.args[0]
            item.user_properties.append(("test_id", test_id))
And in your tests:
# content of test_function.py
import pytest

@pytest.mark.test_id(1501)
def test_function():
    assert True
Will result in:
<testcase classname="test_function" file="test_function.py" line="0" name="test_function" time="0.0009">
  <properties>
    <property name="test_id" value="1501" />
  </properties>
</testcase>
Warning
record_property is an experimental feature and may change in the future.
Also please note that using this feature will break any schema verification. This might be a problem when used with some CI servers.
New in version 3.4.
To add an additional xml attribute to a testcase element, you can use the record_xml_attribute fixture. This can also be used to override existing values:
def test_function(record_xml_attribute):
    record_xml_attribute("assertions", "REQ-1234")
    record_xml_attribute("classname", "custom_classname")
    print("hello world")
    assert True
Unlike record_property, this will not add a new child element. Instead, this will add an attribute assertions="REQ-1234" inside the generated testcase tag and override the default classname with classname="custom_classname":
<testcase classname="custom_classname" file="test_function.py" line="0" name="test_function" time="0.003" assertions="REQ-1234">
  <system-out>
    hello world
  </system-out>
</testcase>
Warning
record_xml_attribute is an experimental feature, and its interface might be replaced by something more powerful and general in future versions. The functionality per se will be kept, however.
Using this over record_xml_property can help when using CI tools to parse the XML report. However, some parsers are quite strict about the elements and attributes that are allowed. Many tools use an XSD schema (like the example below) to validate incoming XML. Make sure you are using attribute names that are allowed by your parser.
Below is the schema used by Jenkins to validate the XML report:
<xs:element name="testcase">
  <xs:complexType>
    <xs:sequence>
      <xs:element ref="skipped" minOccurs="0" maxOccurs="1"/>
      <xs:element ref="error" minOccurs="0" maxOccurs="unbounded"/>
      <xs:element ref="failure" minOccurs="0" maxOccurs="unbounded"/>
      <xs:element ref="system-out" minOccurs="0" maxOccurs="unbounded"/>
      <xs:element ref="system-err" minOccurs="0" maxOccurs="unbounded"/>
    </xs:sequence>
    <xs:attribute name="name" type="xs:string" use="required"/>
    <xs:attribute name="assertions" type="xs:string" use="optional"/>
    <xs:attribute name="time" type="xs:string" use="optional"/>
    <xs:attribute name="classname" type="xs:string" use="optional"/>
    <xs:attribute name="status" type="xs:string" use="optional"/>
  </xs:complexType>
</xs:element>
New in version 3.0.
If you want to add a properties node at the testsuite level, which may contain properties that are relevant to all testcases, you can use LogXML.add_global_property:
import pytest

@pytest.fixture(scope="session")
def log_global_env_facts():
    # Only attach the properties when the junitxml plugin is active.
    if pytest.config.pluginmanager.hasplugin("junitxml"):
        my_junit = getattr(pytest.config, "_xml", None)
        if my_junit is not None:
            my_junit.add_global_property("ARCH", "PPC")
            my_junit.add_global_property("STORAGE_TYPE", "CEPH")

@pytest.mark.usefixtures(log_global_env_facts.__name__)
class TestMe(object):
    def test_foo(self):
        assert True
This will add a property node below the testsuite node to the generated xml:
<testsuite errors="0" failures="0" name="pytest" skips="0" tests="1" time="0.006">
  <properties>
    <property name="ARCH" value="PPC"/>
    <property name="STORAGE_TYPE" value="CEPH"/>
  </properties>
  <testcase classname="test_me.TestMe" file="test_me.py" line="16" name="test_foo" time="0.000243663787842"/>
</testsuite>
Warning
This is an experimental feature, and its interface might be replaced by something more powerful and general in future versions. The functionality per se will be kept.
Deprecated since version 3.0: This option is rarely used and is scheduled for removal in 4.0.
An alternative for users who still need similar functionality is to use the pytest-tap plugin which provides a stream of test data.
If you have any concerns, please don't hesitate to open an issue.
To create plain-text machine-readable result files you can issue:
pytest --resultlog=path
and look at the content at the path location. Such files are used e.g. by the PyPy-test web page to show test results over several revisions.
Creating a URL for each test failure:
pytest --pastebin=failed
This will submit test run information to a remote Paste service and provide a URL for each failure. You may select tests as usual or add for example -x if you only want to send one particular failure.
Creating a URL for a whole test session log:
pytest --pastebin=all
Currently only pasting to the http://bpaste.net service is implemented.
To disable loading specific plugins at invocation time, use the -p option together with the prefix no:.
Example: to disable loading the plugin doctest, which is responsible for executing doctest tests from text files, invoke pytest like this:
pytest -p no:doctest
New in version 2.0.
You can invoke pytest from Python code directly:
pytest.main()
this acts as if you would call "pytest" from the command line. It will not raise SystemExit but return the exit code instead. You can pass in options and arguments:
pytest.main(['-x', 'mytestdir'])
You can specify additional plugins to pytest.main:
# content of myinvoke.py
import pytest

class MyPlugin(object):
    def pytest_sessionfinish(self):
        print("*** test run reporting finishing")

pytest.main(["-qq"], plugins=[MyPlugin()])
Running it will show that MyPlugin was added and its hook was invoked:
$ python myinvoke.py
. [100%]*** test run reporting finishing
Note
Calling pytest.main() will result in importing your tests and any modules that they import. Due to the caching mechanism of python's import system, making subsequent calls to pytest.main() from the same process will not reflect changes to those files between the calls. For this reason, making multiple calls to pytest.main() from the same process (in order to re-run tests, for example) is not recommended.
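If you do need to re-run tests repeatedly, a common workaround is to start each run in a fresh interpreter. A minimal sketch, with the tests/ path assumed for illustration:
import subprocess
import sys

# Each child process imports the test modules from scratch,
# so edits made between runs are picked up.
for _ in range(2):
    subprocess.run([sys.executable, "-m", "pytest", "tests/"])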