
--fixtures-per-test: Exclude parametrizing pseudo fixtures? #11295

Open
The-Compiler opened this issue Aug 7, 2023 · 6 comments · May be fixed by #12129
Labels
topic: fixtures (anything involving fixtures directly or indirectly), topic: reporting (related to terminal output and user-facing messages and errors)

Comments

@The-Compiler
Member

With something like:

import pytest

def test_one(monkeypatch):
    pass

@pytest.mark.parametrize("x", [1, 2])
def test_two(x):
    pass

running pytest --fixtures-per-test results in:

-------------------------- fixtures used by test_one ---------------------------
----------------------------------- (x.py:4) -----------------------------------
monkeypatch -- .../_pytest/monkeypatch.py:30
    A convenient fixture for monkey-patching.

------------------------- fixtures used by test_two[1] -------------------------
----------------------------------- (x.py:7) -----------------------------------
x -- .../_pytest/fixtures.py:352
    no docstring available

------------------------- fixtures used by test_two[2] -------------------------
----------------------------------- (x.py:7) -----------------------------------
x -- .../_pytest/fixtures.py:352
    no docstring available

IMHO, the output for x should be excluded here - while it is technically a (pseudo) fixture internally, showing it like that is rather confusing for users. It doesn't make sense to show it as a fixture without a docstring which is "defined" here:

def get_direct_param_fixture_func(request: "FixtureRequest") -> Any:
    return request.param
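One way the report could tell such pseudo fixtures apart is an identity check against that internal helper. The sketch below models the idea with simplified stand-ins (`FixtureDef` here is not pytest's real class, and the actual fix may use a different mechanism):

```python
# Sketch of filtering parametrization pseudo fixtures out of a fixtures report.
# FixtureDef is a simplified stand-in for pytest's internal class, not the real
# API; the identity check against the helper function is the idea being shown.

def get_direct_param_fixture_func(request):
    # Internal helper that backs a directly parametrized argument.
    return request.param

class FixtureDef:
    def __init__(self, argname, func):
        self.argname = argname
        self.func = func

def monkeypatch_fixture():
    # Stand-in for a genuine fixture such as monkeypatch.
    return object()

fixture_defs = [
    FixtureDef("monkeypatch", monkeypatch_fixture),
    FixtureDef("x", get_direct_param_fixture_func),  # pseudo fixture from parametrize
]

# Exclude any entry whose implementation is the parametrization helper.
visible = [d.argname for d in fixture_defs if d.func is not get_direct_param_fixture_func]
print(visible)  # only the genuine fixture names remain
```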

@The-Compiler The-Compiler added topic: reporting related to terminal output and user-facing messages and errors topic: fixtures anything involving fixtures directly or indirectly labels Aug 7, 2023
@WarrenTheRabbit
Contributor

You think that

-------------------------- fixtures used by test_one ---------------------------
----------------------------------- (x.py:4) -----------------------------------
monkeypatch -- .../_pytest/monkeypatch.py:30
    A convenient fixture for monkey-patching.

is clearer to the user?

I am a new user (and a new developer) and I certainly find that clearer.

The expectation I have for a fixture is that

  1. its author has the option of including a docstring that explains its purpose
  2. fixtures written by the pytest devs always have docstrings
  3. fixtures are created using the pytest.fixture() decorator

If I see a fixture without a docstring, I'd assume I had created it somewhere. Which I would have. But not with the pytest.fixture() decorator.

For me, it would be clearer if either

  1. the output for x is removed from --fixtures-per-test's output; or
  2. the output for x is kept but a docstring tells me it has been generated by pytest to pass an element from pytest.mark.parametrize()'s parameter list to the test function
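Option 2 could be as simple as attaching a docstring to the internal helper, so the report has something better to show than "no docstring available". A minimal sketch (a simplified stand-in, not the actual pytest source):

```python
# Sketch of option 2: give the internal parametrization helper a docstring
# so --fixtures-per-test can display it instead of "no docstring available".
# This is a stand-in for illustration, not the real function in _pytest/fixtures.py.

def get_direct_param_fixture_func(request):
    """Pseudo fixture generated by pytest to pass a value from the
    ``@pytest.mark.parametrize`` argument list to the test function."""
    return request.param

# The report could print the first line of the docstring.
first_line = get_direct_param_fixture_func.__doc__.splitlines()[0]
print(first_line)
```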

@The-Compiler
Member Author

Yeah, IMHO those pseudo-fixtures just shouldn't show up there at all.

@WarrenTheRabbit
Contributor

I'm eager to start contributing. But am still a novice programmer.
The question of my capability aside, I'm reading through the contribution guide right now to understand what governance, discussion, and consensus processes need to run their course before a change of this kind can be officially proposed and worked on.

@The-Compiler
Member Author

The-Compiler commented Aug 20, 2023

For a smaller change like this, an issue like the one we have here is a good place for this discussion to take place. Given that there hasn't been any push-back, I'd suggest you go ahead with a pull request! I'm not too familiar with the fixture code myself, but based on gut feeling, this sounds like a good first issue to tackle. Happy to help if you get stuck somewhere!

@WarrenTheRabbit
Contributor

Amazing. =)
I have things on for the next few days but when I get the chance I'll go through the steps in the Long version of Preparing Pull Requests. Then, once I've implemented a change, I'll do step 11 (submit the pull request through GitHub) with something like:

head-fork: WarrenTheRabbit/pytest
compare: exclude-parameterising-fixtures-from-fixtures-per-test-output

base-fork: pytest-dev/pytest
base: main

@WarrenTheRabbit
Contributor

WarrenTheRabbit commented Aug 30, 2023

It has been over a week since I last commented on this issue, so I thought I would share an update, since I haven't opened a pull request yet either:

  • I fully intend to tackle this issue
  • I have started making 'case study' notes that will provide material for a documentation proposal

I first mentioned the idea of doing a case study in I want to encourage beginners to contribute by making refactoring pull requests:

I think outreach and documentation of all stripes could achieve this. For example, one documentation idea is Case Study documents. Case Study documents would give an example of how a beginner (me) learned how to contribute to pytest. An example of that could be: documenting the work I do on #11295.

WarrenTheRabbit added a commit to WarrenTheRabbit/pytest that referenced this issue Mar 17, 2024
Addresses issue pytest-dev#11295 by excluding from the --fixtures-per-test
output any 'pseudo fixture' that results from directly parametrizing
a test with ``@pytest.mark.parametrize``.

The justification for removing these fixtures from the report is that

a) They are unintuitive. Their appearance in the fixtures-per-test
report confuses new users because the fixtures created via
``@pytest.mark.parametrize`` do not conform to the expectations
established in the documentation; namely, that fixtures are
	- richly reusable
	- provide setup/teardown features
	- created via the ``@pytest.fixture`` decorator

b) They are an internal implementation detail. It is not the explicit
goal of the direct parametrization mark to create a fixture; instead,
pytest's internals leverage the fixture system to achieve the explicit
goal: a succinct batch execution syntax. Consequently, exposing the
fixtures that implement the batch execution behaviour reveals more
about pytest's internals than it does about the user's own design
choices and test dependencies.
WarrenTheRabbit added a commit to WarrenTheRabbit/pytest that referenced this issue Mar 17, 2024