
fixtures: register finalizers with all previous fixtures in the stack (take 3) #7511

Closed (wanted to merge 13 commits)

Conversation

SalmonMode
Contributor

See #6551 and #6438 for context.

Copied from the original PR:

Turns out the root cause was that fixtures were only considering the fixtures requested via their argnames when trying to figure out where to register their finalizers at execution time. They weren't considering autouse fixtures, so when this was combined with parametrization, fixtures were not torn down when a parametrized, autouse fixture was torn down to go from one param set to the next.

This also resulted in fixtures being torn down out of order in extreme cases. You can see an example of this in the test that I added (if you run it without this fix).

The solution ultimately required me to have fixtures register their finalizers with all fixtures that came before them in the stack. This list of fixtures came from using request.fixturenames in combination with the parent requests (when relevant) and the fixture's requested argnames to determine a good slicing point. Since this logic could be summed up on its own, I moved it into its own private method with a beefy docstring to explain it.
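The slicing idea described above can be sketched in plain Python. This is an illustrative toy, not pytest's actual implementation; the helper name and fixture names are made up:

```python
def earlier_fixtures(fixturenames, own_name):
    """Given the ordered list of fixture names active for a request,
    return every fixture that was set up before `own_name`.

    Under the fix described above, `own_name` registers its finalizer
    with each of these, so tearing any of them down forces `own_name`
    to be torn down first (stack-like behavior).
    """
    # The fixture's own position in the ordered list is the slicing point.
    return fixturenames[: fixturenames.index(own_name)]


# A class-scoped fixture registers with the session- and module-scoped
# fixtures that precede it in the stack:
prior = earlier_fixtures(["s_fix", "m_fix", "c_fix", "f_fix"], "c_fix")
```

In the real change, the list comes from `request.fixturenames` combined with parent requests and the requested argnames, as described above.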

@bluetech
Member

Hi @SalmonMode, this (re)fixes #6436, right?

Also, since following the context is a bit difficult, can you describe what happened to the first two takes, and what is different in this one?

@SalmonMode
Contributor Author

Correct

Sure! In the first take, I solved an old bug, but because I used sets for something (not realizing that order isn't retained in a set), I introduced another bug, so it had to be reverted. In take 2, @blueyed remade my branch, but we had it use plain lists instead, and we added a test to cover it. Unfortunately, @blueyed couldn't dedicate time to finishing this effort, so the take 2 PR was closed. I've now remade it again for a third try.

" SETUP F f_fix['f2']\n"
" test_auto.py::TestFixtures::test_b[p1-m1-f2] {0}\n"
" TEARDOWN F f_fix['f2']\n"
" TEARDOWN C c_fix\n"
Member

This doesn't seem right, why is c_fix (class-scoped, without dependencies, being requested by test_a and test_b directly), being torn down at this point?

@pytest.fixture(scope="class")
def c_fix():
    yield

class TestFixtures:

    def test_a(self, c_fix): pass
    def test_b(self, c_fix): pass

It should be torn down at the end of the session only, regardless of the involvement of other autouse fixtures, or am I missing something? 🤔

Contributor Author

@SalmonMode, Aug 1, 2020

The fix basically makes it so that the fixture order is a stack, and a fixture cannot be torn down unless all the fixtures that were setup after it have also been torn down. In this case, I believe m_fix is trying to tear down to go from m1 to m2, but to do that, both c_fix and another_c_fix need to be torn down first.
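To make the stack behavior concrete, here is a small standalone model (the fixture names are the hypothetical ones from this thread; this is not pytest code):

```python
class FixtureStack:
    """Toy model: teardown must happen in reverse setup order."""

    def __init__(self):
        self._stack = []
        self.events = []

    def setup(self, name):
        self._stack.append(name)
        self.events.append(f"SETUP {name}")

    def teardown(self, name):
        # Tearing down `name` first tears down everything set up after it.
        while self._stack:
            top = self._stack.pop()
            self.events.append(f"TEARDOWN {top}")
            if top == name:
                break


stack = FixtureStack()
for fix in ["m_fix[m1]", "c_fix", "another_c_fix"]:
    stack.setup(fix)
# Moving from param m1 to m2 requires tearing m_fix down, which forces
# c_fix and another_c_fix to be torn down first:
stack.teardown("m_fix[m1]")
```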

This way, there are no surprises related to side effects.

Without this (if c_fix would otherwise tear down only at the end of the session), if c_fix was unwittingly affected by something m_fix did when its param was m1, then when m2 comes around, if c_fix wasn't torn down, it would still be influenced by m1, while m_fix kept doing things that c_fix would no longer be affected by. That would mean that if we ran that class directly with pytest, test_auto.py::TestFixtures::test_a[p1-m2-f1] would potentially be doing something different than if we had run test_auto.py::TestFixtures::test_a[p1-m2-f1] directly.

For example, if m_fix was parametrized so that it was setting up different versioned installations of chrome and chromedriver, and c_fix was launching a chromedriver session (not that I recommend handling drivers this way haha), c_fix would be executed once and remain the first driver session made, running off the first driver/browser version installed. So both tests would be using the same driver session, while m_fix kept reinstalling different versions that aren't used by anything. And if test_auto.py::TestFixtures::test_a[p1-m2-f1] was run directly, c_fix would be executed while a different version of chrome and chromedriver were installed. I'm guessing running the class with pytest (rather than the function directly) would actually throw an error, since the driver executable would still be running when m_fix tried to uninstall it, but that's the general idea. This fix prevents that sort of thing from happening.

Member

This comment nailed the change for me. I agree with your argument, that the new behavior is the correct one. This is indeed a subtle but significant change.

Member

Thanks a lot for the explanation @SalmonMode, and sorry about the delay; it helped me understand the actual behavior behind the change!

Again, really appreciate the patience on this, given that this is taking months now to review, but is a very sensitive and complex part of the codebase.

The remark about it being now a stack helps a lot.

if c_fix was unwittingly affected by something m_fix did when its param was m1

I understand the picture you are painting here, the problem I see here is "unwittingly", which is a synonym for implicitly... if c_fix is affected by m_fix, then c_fix should explicitly depend on m_fix, I think.

When I see this code:

@pytest.fixture(scope="class")
def c_fix():
    yield

class TestFixtures:

    def test_a(self, c_fix): pass
    def test_b(self, c_fix): pass

In my understanding on how things should work, c_fix should only be torn down after all tests in TestFixtures have finished, regardless of what other autouse fixtures are in effect in this session, because c_fix doesn't depend on anything else.

If c_fix has an implicit dependency on m_fix, it should explicitly declare that dependency.

The same argument can also be made that this is a bug: if c_fix does something costly but completely unrelated to any other fixture, why is it being torn down just because m_fix needs to be recreated with a different argument? I understand the new behavior is safer, but I also believe someone might point out this is a regression for the use case above.

A stripped down example:

import pytest

@pytest.fixture(scope="session", params=[1, 2], autouse=True)
def s():
    pass


@pytest.fixture(scope="package", autouse=True)
def p():
    pass

def test():
    pass

Given that p doesn't depend on anything, I expect it to be set up/torn down exactly once in this session, and this is the behavior on master. If I change p() to p(s), then it is correctly set up and torn down twice, to account for s[1] and s[2].

The change here implicitly adds a dependency from lower-scoped fixtures to all their higher-scoped fixtures, which I'm not sure is correct/desirable.

What do you folks think?

Also I need to mention the sheer quality of the PR is remarkable! The documentation improvements are an amazing addition by themselves. 😍

Member

Hi folks,

Gentle ping here in case this got lost through the cracks. 👍

Member

I would also argue that fixtures are part of the tests, as they cannot exist without the tests.

That's a bit vague, but let me word that differently: autouse fixtures are automatically applied to functions, not to other fixtures. When I mentioned that, I was talking specifically about autouse fixtures, which I believe is the main point we are discussing.

it demonstrates the sort of coherency issues that I'm thinking about (no offense intended here. I mean this in the literal sense).

Sorry, I don't see any inconsistency. 😕

In that example, pytest needs to execute test(). In the same scope we have two autouse fixtures, s and p, so the test is considered to have been written as if the user explicitly wrote test(s, p). Given that s is parametrized, pytest executes two tests, test(s[1], p) and test(s[2], p). Because p doesn't depend on anything else, there's no reason for pytest to recreate p.

This example implies that p is dependent on s[1].

I disagree... p doesn't really depend on anything. Just because a fixture was created before it, doesn't imply a dependency IMHO.

Thanks for the examples, but all the output shown after them (with setup/teardown order of the fixtures involved) makes sense to me, so it is hard for me to understand what is the argument behind them, sorry.


I believe the main point of disagreement between us is that you believe that fixtures should explicitly depend on any autouse fixture acting on them, and I don't think they should.

To exemplify:

import pytest

@pytest.fixture(scope="session", params=[1, 2], autouse=True)
def s():
    pass

@pytest.fixture(scope="module")
def m():
    pass

def test(m):
    pass    

What happens today, is that pytest executes as if the code was written like this:

import pytest

@pytest.fixture(scope="session", params=[1, 2])
def s():
    pass

@pytest.fixture(scope="module")
def m():
    pass

def test(s, m):  # <--- s added because s is autouse
    pass    

What you argue is that pytest should be doing this:

import pytest

@pytest.fixture(scope="session", params=[1, 2])
def s():
    pass

@pytest.fixture(scope="module")
def m(s):  # <--- s added because s is autouse
    pass

def test(s, m):  # <--- s added because s is autouse
    pass    

Is my understanding correct? Please correct me if I'm wrong.

If my understanding is right, then I argue that the current behavior is correct, and it will definitely break existing suites that are working, and working not by accident, but by design (I say that from our codebase at work).

It is common for an autouse fixture to execute something which is system-wide, but not necessarily affects other fixtures in the system.

I will give an example derived from real code from work:

@pytest.fixture(scope="session", params=["br", "en"], autouse=True)
def set_system_locale(request):
    # sets current system locale to request.param
    ...


# in another module
@pytest.fixture(scope="package", autouse=True)
def initialize_petsc():
    # initializes petsc C++ runtime
    ...

In the current behavior, set_system_locale doesn't affect initialize_petsc (and it should not; petsc doesn't care about the locale at all). We run the tests with set_system_locale[br] and set_system_locale[en], and initialize_petsc is executed once. Under your proposal, initialize_petsc would execute twice, which is wrong and would probably crash the system.

I believe that if people aren't consciously intending to depend on this (and I would guess this the vast majority of those who are depending on it), then this change will be a good thing for them, as it will expose these issues in their test suites (like with what I saw in pytest-django) so they can be addressed.

I'm not familiar with the issue with pytest-django you mention... perhaps you can exemplify it to clarify the point better?

Again sorry for all the discussion; I'm not trying to be difficult, I'm really making an effort to make sure I'm not missing the point, but if I'm understanding the proposal correctly, I don't think it is correct.

Perhaps a new example of how things work now, and how you belive they should work, but with "real" examples, might help me understand what you believe the problem is (in case I'm still missing your point, as I tried to exemplify above)? It might be I'm missing something due to the fact that we are using abstract examples (like s, p and so on)? In that case the pytest-django case might help.

Again thanks for the patience!

Contributor Author

To clarify, I'm not saying that the autouse fixture has to be dependent on the higher-level fixture. I'm saying that if there's a need for (in the example) s[1] to happen before p, then p must be dependent on s[1]. But if it isn't, then it can happen after p, and if it can happen after p, then things can be restructured slightly so that s happens after p, and each param for s could apply to a separate version of that test without p having to be run again. This is where my second example output came in, i.e.:

SETUP    S p
    SETUP    M s[1]
        test-auto-fix.py::test[1] (fixtures used: p, s).
    TEARDOWN M s[1]
    SETUP    M s[2]
        test-auto-fix.py::test[2] (fixtures used: p, s).
    TEARDOWN M s[2]
TEARDOWN S p

The goal with my change is to get fixtures working like a stack so it's easier to manage/debug fixtures. I'm not saying a dependency is always implied, but if the setup/teardown plan is jumping around, that suggests to me that things are being tested in contexts that have more going on than is necessary, or that scopes are being leveraged in a certain way to control the ordering of behavior (and I believe that approach could be made more effective/convenient/easy to read).

In regards to your work example, things get tricky, because I don't know where that first fixture is defined. If it's in your top-level conftest, for example, then I would expect that second fixture to run twice, and if that's not desired, then I wouldn't have that first fixture in the top-level conftest.

That said, I've been thinking about this for a while, because a session-level fixture isn't really a session-level fixture depending on where it's defined, and I think we could solve a _lot_ of problems/confusion by leaning into that behavior, and having scopes work like relative scopes when they can't really apply to the full scope they claim to be. For example, if I define a fixture inside a class and give it a scope of session, it can't really be anything more than class-scoped, but I could see it being used that way to make sure it happens before class-scoped fixtures for tests in that class. In line with this, in the setup plan output, these could be indented according to the scope they're effectively limited to, in order to make a cleaner, easier-to-follow setup plan.

Regarding the pytest-django stuff, this is the PR I made to solve it in response to the merging of the original take of this PR:

pytest-dev/pytest-django#807

I'm on mobile, so hopefully the description there explains it well.

All that said, I find myself losing the forest for the trees a bit, so I'm definitely gonna have to sleep on this, and come back with some fresh eyes and re-read everything haha

Contributor Author

@SalmonMode, Nov 9, 2020

Ok, so I took a look back, and read through things a bit more carefully (although I still need a bit more caffeine haha). That said, I did spot an issue with how things currently work in regards to your work example, and this bit in the docs:

autouse fixtures obey the scope= keyword-argument: if an autouse fixture has scope='session' it will only be run once, no matter where it is defined. scope='class' means it will be run once per class, etc.

Given these fixtures:

@pytest.fixture(scope='session', autouse=True, params=[1, 2])
def s():
    pass

@pytest.fixture(scope='class', autouse=True)
def c():
    pass

One would expect c to only happen once and only once per class because it's autouse. The problem comes in when parameterization is involved (at least for a higher scope fixture).

Given these test classes, though:

class TestX:
    def test_it(self):
        pass

class TestY:
    def test_it_again(self):
        pass

Because pytest has to execute every test once for the first param set of s before it can do it again for the second param set, it has to set up c twice for each class, because pytest had to tear it down after each class for the first param set, and those tests are still dependent on c because of autouse. In total, c would be executed 4 times.

Here's the setup plan (from pytest 6.1.2)
SETUP    S s[1]
      SETUP    C c
        test2.py::TestX::test_it[1] (fixtures used: c, s)
      TEARDOWN C c
test2.py::TestY::test_it_again[1]
      SETUP    C c
        test2.py::TestY::test_it_again[1] (fixtures used: c, s)
      TEARDOWN C c
test2.py::TestX::test_it[2]
TEARDOWN S s[1]
SETUP    S s[2]
      SETUP    C c
        test2.py::TestX::test_it[2] (fixtures used: c, s)
      TEARDOWN C c
test2.py::TestY::test_it_again[2]
      SETUP    C c
        test2.py::TestY::test_it_again[2] (fixtures used: c, s)
      TEARDOWN C c
TEARDOWN S s[2]

This means that your package scoped fixture for work could easily be run more than once per package if more than one package depends on it.

Unfortunately, because of parameterization, there's no way to guarantee an autouse fixture only happens once per its defined scope.

NOW, I have an alternate proposal that I think might suit everyone's needs.

In @bluetech's example:

import pytest

@pytest.fixture(scope="module")
def m():
    print('M SETUP')
    yield
    print('M TEARDOWN')

class TestIt:
    @classmethod
    @pytest.fixture(scope="class", autouse=True)
    def c(cls):
        print('C SETUP')
        yield
        print('C TEARDOWN')

    def test_f1(self):
        print('F1 CALL')

    def test_f2(self, m):
        print('F2 CALL')
setup plan
      SETUP    C c
        test3.py::TestIt::test_f1 (fixtures used: c)
    SETUP    M m
        test3.py::TestIt::test_f2 (fixtures used: c, m)
      TEARDOWN C c
    TEARDOWN M m

the m fixture, to me, happens out of place, because it doesn't respect the ordering of scopes until teardown. This is also a problem for your scenario, where you definitely only want a fixture running once ever.

Ideally, I would like this to be the setup plan for that example:

    SETUP    M m
      SETUP    C c
        test3.py::TestIt::test_f1 (fixtures used: c)
        test3.py::TestIt::test_f2 (fixtures used: c, m)
      TEARDOWN C c
    TEARDOWN M m

But what if only test_f1 is to be run? Then I would want this to be the setup plan:

      SETUP    C c
        test3.py::TestIt::test_f1 (fixtures used: c)
      TEARDOWN C c

Since the m fixture wasn't needed by anything, it could be cut out.

To me, this is much easier to not just reason about, but also to orchestrate.

This has pretty large implications, and could be incredibly challenging to figure out how to do this not just effectively, but in a way that folks can leverage in a predictable way for more advanced usage. But, I have an idea, and I think you'll get quite a kick out of it.

Fixture Resolution Order (FRO)

It works almost exactly like MRO (so it's a familiar concept), except the "child" we're trying to figure out the FRO of is the master FRO. After the first pass, it can be sorted by scope, and this can be used to ensure stack-like behavior.
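The MRO analogy can be seen directly in Python: the interpreter computes a single deterministic linearization (via the C3 algorithm) for a class hierarchy, which is the kind of ordering an FRO would compute over fixture dependencies. The classes below are just an illustration of C3, not fixtures:

```python
class A: pass
class B(A): pass
class C(A): pass
class D(B, C): pass  # a diamond: D depends on B and C, both depend on A

# C3 linearization resolves the diamond deterministically:
# D first, then B before C (left-to-right), then the shared base A.
mro_names = [cls.__name__ for cls in D.__mro__]
```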

Keep in mind that entries in the FRO aren't required to be in the stack. It'd be more of a reference for knowing which fixtures should be executed after others, and how to tear down to appropriate common points. It can also look at the tests that would actually run (not just the ones that were collected) and consider their actual dependencies to find out the actual scopes that running certain fixtures would be necessary for, even if the first test in those scopes isn't actually dependent on them, so they can be preemptively run for those scopes while also not running for scopes that don't need them.

We can also apply some tweaks to make things operate more efficiently. For example, if autouse fixtures are shifted left in the FRO, then parametrized fixtures can be shifted as far right as possible (so as to affect as few other fixtures as possible), and if they're autouse parametrized fixtures, they can be shifted as far right as possible among the autouse fixtures of their scope.

For your use case, since you only want initialize_petsc to run once ever, you would just need to change its scope to session, and since set_system_locale is parameterized, it would happen after initialize_petsc.

Nothing would ever get executed that didn't need to be, parameterized fixtures would have as reduced a footprint as they reasonably could, stack-like behavior would be achieved, reasoning about and planning setup plans would be easier (IMO), and I'm not sure, but I think this could also offer some opportunities for optimization in pytest's code.

What do you think?

Member

To clarify, I'm not saying that the autouse fixture has to be dependent on the higher level fixture. I'm saying that if there's a need for (in the example), s[1] to happen before p, then p must be dependent on s[1], but if it isn't, then it can happen after p, and if it can happen after p, then it can be restructured slightly so that s happens after p and each param for s could apply to a separate version of that test without p having to be run again.

IIUC what you mean, then I agree, and it is what I've been trying to convey: if a certain fixture requires the work of another fixture to happen before it, then it should explicitly declare a dependency on it. If p needs s in some way (due to a side effect like DB initialization, or because it needs the actual value produced by s), then p should depend on s (IOW, declare s as a parameter).

I took a look at pytest-dev/pytest-django#807 (thanks for the link), and it seems what happened there was exactly that: django_db_blocker implicitly depended on the database (meaning: it needed the database to be set up at that point, but did not receive any other fixture as a parameter), which broke once a bug in pytest was fixed. The solution was to explicitly make django_db_blocker depend on django_db_setup, which is also what I advocate here is correct and how things should work.

After reading back a few messages, I think one interpretation (which I gather is also yours) is a bit incorrect and leads to some conclusions which are not true:

  • The scope of a fixture dictates when it is "active", or that it should execute during that scope. So a session-scoped fixture should be active during the entire session, and a class-scoped fixture will be active while the tests of a class execute.

When in fact what it means in pytest (currently implemented) is:

  • The fixture scope dictates when a fixture will be torn down. A fixture will be active only when requested, and it will be torn down based on its scope: a session-scoped fixture will be executed when first requested and torn down at the end of the session. A class-scoped fixture will be executed when first requested and torn down after the last test of the class finishes.

"Be requested" here also implies being requested by other fixtures, not only tests.
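That rule (lazy creation on first request, teardown keyed to scope) can be modeled in a few lines of plain Python. This is a hedged sketch of the described behavior, not pytest internals:

```python
cache = {}       # (scope, name) -> cached fixture value
torn_down = []   # teardown log

def request_fixture(scope, name, factory):
    """Create the fixture on first request only; serve from cache after."""
    key = (scope, name)
    if key not in cache:
        cache[key] = factory()
    return cache[key]

def finish_scope(scope):
    """Finishing a scope tears down only the fixtures cached at that scope."""
    for key in [k for k in cache if k[0] == scope]:
        torn_down.append(key[1])
        del cache[key]

request_fixture("session", "db", lambda: "db-conn")
request_fixture("session", "db", lambda: "db-conn")  # cache hit, no re-run
request_fixture("class", "tmp", lambda: "tmpdir")
finish_scope("class")  # only the class-scoped fixture is torn down
```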

One would expect c to only happen once and only once per class because it's autouse. The problem comes in when parameterization is involved (at least for a higher scope fixture).
Unfortunately, because of parameterization, there's no way to guarantee an autouse fixture only happens once per its defined scope.

You are right; once parametrization plays in, the weak guarantee that a fixture at a high-level scope is executed once per scope breaks.

the m fixture, to me, happens out of place, because it doesn't respect the ordering of scopes, until teardown.

Indeed, it follows the rule I described above... but given m and c don't depend on each other, I actually don't think it is a problem, because m apparently doesn't care about the order in which it is torn down relative to c. A bit weird, I agree, but I don't think m should be set up before test_f2, as it is only requested then.

Fixture Resolution Order (FRO)
Nothing would ever get executed that didn't need to be, parameterized fixtures would have as reduced a footprint as they reasonably could, stack-like behavior would be achieved, reasoning about and planning setup plans would be easier (IMO), and I'm not sure, but I think this could also offer some opportunities for optimization in pytest's code.

Yeah, I think you are onto something; indeed, a clearer view or different approach to how pytest deals with fixtures is something we have been wanting to tackle for a while: #4871.

Currently the fixture mechanism is implemented as:

  1. Lazy: fixtures are created only when first requested by a test or other fixture.
  2. Implicit finalizers: the finalization stack is implicit; we don't have an eagle's-eye view of which fixtures depend on which fixtures. It all happens by a series of finalizer callbacks being registered on each other to ensure things are torn down in the proper order.
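Point 2 (the implicit finalizer chain) can be sketched with a minimal stand-in for a fixture definition. The class below is hypothetical, mirroring the addfinalizer/finish pattern rather than pytest's real code:

```python
class FixtureDefModel:
    """Toy fixture definition: finalizers registered on it cascade."""

    def __init__(self, name, log):
        self.name = name
        self.log = log
        self._finalizers = []

    def addfinalizer(self, fin):
        self._finalizers.append(fin)

    def finish(self):
        # Finalizers run in reverse registration order (LIFO), so
        # fixtures set up later are torn down before this one.
        while self._finalizers:
            self._finalizers.pop()()
        self.log.append(f"TEARDOWN {self.name}")


log = []
s = FixtureDefModel("s", log)
m = FixtureDefModel("m", log)
s.addfinalizer(m.finish)  # m registered its finalizer with s
s.finish()                # cascades: m is torn down, then s
```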

There's definitely room for improvement there, however I do think we should keep 1) working as is.

Perhaps we should move the discussion there and close this PR, or do you think we should analyze this some more? Also, I haven't heard from others in a while here (understandable; keeping up with the discussion here takes a lot of time!).

Again, thanks for the rich discussion and patience here @SalmonMode!

Contributor Author

Haha yes, I agree. Getting here using the GitHub mobile app is also proving challenging (I wanted to give it a try, but it definitely needs some work).

And I agree. Given the nuances of this, the needs of pytest itself, its needs going forward, and backwards compatibility, I don't believe this PR is a sufficient fix. I've also started trying to implement the FRO approach to see how it could be done, and I can tell there's a lot of things that will need to be discussed. I'll move my proposal over to #4871, and this can be closed.

@nicoddemus
Member

(Sorry for the delay, this as you know is a sensitive PR so it requires me to reserve a good chunk of free time to evaluate it)

@RonnyPfannschmidt
Member

Just as FYI, for me it may take until mid October before I can commit the time for a deep review of this for myself, so feel free to get it done without me

@SalmonMode
Contributor Author

No worries!

Member

@bluetech left a comment

Nice fix and great comments @SalmonMode!

I still need to dig into the code more to understand the actual code changes better, but having read the diff, the comments and the added tests, and played with it a little, I think I understand the idea and I think this is a good change which we should get in.

Possibly this is a major breaking change which would need to wait for pytest 7, but that's not on the horizon so hopefully it will be fine in a minor release.

I left some minor code comments, but as I said it's still not a real review.

Review comments on changelog/6436.bugfix.rst and src/_pytest/fixtures.py (resolved).

In src/_pytest/fixtures.py:
another :class:`FixtureDef` that the latter :class:`FixtureDef` is
responsible for tearing down this :class:`FixtureDef`.
"""
for finalizer in getattr(fixturedef, "_finalizers", ()):
Member

Is this using getattr instead of direct attribute access because of PseudoFixtureDef? If so, it would be better to add _finalizers to PseudoFixtureDef.

Contributor Author

@SalmonMode, Nov 6, 2020

Yeah, that was the reason. But I checked, and it doesn't look like a PseudoFixtureDef could even get there, so now I just access the attribute directly.

Further review comments on src/_pytest/fixtures.py and testing/python/fixtures.py (resolved).

In testing/python/fixtures.py:
test_fixtures_used = (
"(fixtures used: another_c_fix, c_fix, f_fix, m_fix, p_fix, request, s_fix)"
)
expected = (
Member

I recommend using """ strings with textwrap.dedent here, then the \ns can be removed.

SalmonMode and others added 11 commits, November 6, 2020
Co-authored-by: Ran Benita <ran@unusedvar.com>
@SalmonMode
Copy link
Contributor Author

Given our discussion here, it's clear that, while there is an issue, solving it is going to take a more nuanced approach, so that it doesn't violate current expectations of how fixtures are intended to work. This change, as it stands, may solve the issue it addresses, but it would also potentially cause other issues that could break existing test suites that depend on those expectations.

As such, I will be closing this PR, and the discussion will be continued over at #4871.

@SalmonMode SalmonMode closed this Nov 14, 2020
@nicoddemus
Member

Great, thanks @SalmonMode!

@RonnyPfannschmidt
Member

@SalmonMode, I want to thank you not only for going to great lengths to understand the issues and working out potential solutions, but also for having the fortitude to put it down when the devil in the details strikes.
