Different tests were collected between gw0 and gw1 #432

Open
Alex-Chizhov opened this issue May 8, 2019 · 13 comments
Comments

@Alex-Chizhov
I have a simple test with parametrization:

import re

import allure
import pytest

@pytest.mark.parametrize('product', testdata, ids=[repr(i) for i in testdata])
def test_add_new_product(appf_admin, product):
    with allure.step(f"Add product {product} in admin panel"):
        appf_admin.admin_panel.add_new_product(product)
    with allure.step(f"Get count of products {product} from search in admin panel"):
        name = re.match(r'name=(\S+)', str(product))
        clean_name = name.group(1)
        appf_admin.admin_panel.find_product_in_catalog_by_name(clean_name)
        products = appf_admin.admin_panel.get_count_product_row_from_catalog()
    with allure.step(f"Check that searching for product {product} returns at least 1 row"):
        assert products > 0

And a simple test-data generator for the parametrization:

import random
import string

def random_string(maxlen):
    symbols = string.ascii_letters + string.digits
    return ''.join(random.choice(symbols) for _ in range(random.randrange(1, maxlen)))

def random_digits(maxlen):
    return ''.join(random.choice(string.digits) for _ in range(random.randrange(1, maxlen)))

testdata_raw = [
    Product(name=random_string(10), short_description=random_string(10),
            description=random_string(100), USD=random_digits(3))
    for _ in range(4)
]
testdata = sorted(testdata_raw, key=lambda obj: obj.name)
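One way to keep collections identical across workers (a minimal self-contained sketch, not the reporter's actual code; the generator is repeated here so the block runs on its own) is to seed the module-level random generator with a fixed value, so every xdist worker, importing the module independently, produces the same data and therefore the same test IDs:

```python
import random
import string

# Seed at import time so each xdist worker, importing this module on its own,
# generates identical "random" test data and identical parametrize IDs.
random.seed(1234)

def random_string(maxlen):
    symbols = string.ascii_letters + string.digits
    return ''.join(random.choice(symbols) for _ in range(random.randrange(1, maxlen)))

# Sorted, deterministic data: every worker collects the same IDs in the same order.
testdata = sorted(random_string(10) for _ in range(4))
```

The trade-off is that the "random" data is now the same on every run; varying the seed between runs (for example, reading it from an environment variable) keeps some fuzzing value while staying consistent across workers.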

When I launch the tests with the additional argument -n 2, I get this error message:

============================= test session starts =============================
platform win32 -- Python 3.7.1, pytest-4.4.1, py-1.7.0, pluggy-0.9.0
rootdir: C: ...\Tests
plugins: xdist-1.28.0, parallel-0.0.9, forked-1.0.2, allure-pytest-2.6.0
gw0 I / gw1 I
gw0 [4] / gw1 [4]

gw1:None (gw1)
Different tests were collected between gw0 and gw1. The difference is:
--- gw0

+++ gw1

@@ -1,4 +1,4 @@

-test_add_new_product_with_param.py::test_add_new_product[name=LgysR6Y | USD=31 | id=None]
-test_add_new_product_with_param.py::test_add_new_product[name=hIx | USD=7 | id=None]
-test_add_new_product_with_param.py::test_add_new_product[name=lbpoI | USD=56 | id=None]
-test_add_new_product_with_param.py::test_add_new_product[name=pE | USD=51 | id=None]
+test_add_new_product_with_param.py::test_add_new_product[name=0fUz | USD=39 | id=None]
+test_add_new_product_with_param.py::test_add_new_product[name=b0heCg | USD=16 | id=None]
+test_add_new_product_with_param.py::test_add_new_product[name=sD | USD=8 | id=None]
+test_add_new_product_with_param.py::test_add_new_product[name=uHSt | USD=58 | id=None]

=================================== ERRORS ====================================
____________________________ ERROR collecting gw1 _____________________________
Different tests were collected between gw0 and gw1. The difference is:
--- gw0

+++ gw1

@@ -1,4 +1,4 @@

-test_add_new_product_with_param.py::test_add_new_product[name=LgysR6Y | USD=31 | id=None]
-test_add_new_product_with_param.py::test_add_new_product[name=hIx | USD=7 | id=None]
-test_add_new_product_with_param.py::test_add_new_product[name=lbpoI | USD=56 | id=None]
-test_add_new_product_with_param.py::test_add_new_product[name=pE | USD=51 | id=None]
+test_add_new_product_with_param.py::test_add_new_product[name=0fUz | USD=39 | id=None]
+test_add_new_product_with_param.py::test_add_new_product[name=b0heCg | USD=16 | id=None]
+test_add_new_product_with_param.py::test_add_new_product[name=sD | USD=8 | id=None]
+test_add_new_product_with_param.py::test_add_new_product[name=uHSt | USD=58 | id=None]
=========================== 1 error in 1.09 seconds ===========================
Process finished with exit code 0

Please tell me what the problem could be.

@Alex-Chizhov (Author)

Alex-Chizhov commented May 8, 2019

As I understand it, xdist runs test collection twice, and the test-data generator also runs twice, producing different data each time; xdist doesn't accept that. So we have to use a static JSON file with test data, or generate the data in a fixture rather than at module level.
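A sketch of the static-file idea (the filename testdata.json is hypothetical): the data is written once before pytest starts, and every worker then parametrizes from the same file, so all collections agree:

```python
import json
import os
import random
import string

DATA_FILE = "testdata.json"  # hypothetical path; generate it before running pytest

def _random_name(maxlen=10):
    return ''.join(random.choice(string.ascii_letters)
                   for _ in range(random.randrange(1, maxlen)))

# Generation step: run this once in a pre-test script, NOT at collection time,
# so concurrently starting xdist workers never race to regenerate differing data.
if not os.path.exists(DATA_FILE):
    with open(DATA_FILE, "w") as f:
        json.dump(sorted(_random_name() for _ in range(4)), f)

# Collection step: every worker reads identical data, so test IDs match.
with open(DATA_FILE) as f:
    testdata = json.load(f)
```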

@Strilanc

Strilanc commented Jun 26, 2019

I am also running into this issue with tests that do fuzzing. It would be very useful if there were a way to disable the "generating tests twice must produce identical tests" check. For example, given this test code in a file example.py:

import random

import pytest

@pytest.mark.parametrize('x,y', [
    (random.random(), random.random())
])
def test_example(x, y):
    assert isinstance(x, float)
    assert isinstance(y, float)

This command passes:

pytest example.py

And this command fails:

pytest example.py -n 2

With an error similar to this one:

Different tests were collected between gw0 and gw1. The difference is:
--- gw0

+++ gw1

@@ -1 +1 @@

-example.py::test_example[0.16380351559829032-0.38206603085139057]
+example.py::test_example[0.2613173472636646-0.7205939052389861]

Oddly, with one parameter instead of two, the test passes even with -n 2.

@nicoddemus (Member)

Hi,

Sorry for the delay!

As I understand it, xdist runs test collection twice, and the test-data generator also runs twice, producing different data each time; xdist doesn't accept that. So we have to use a static JSON file with test data, or generate the data in a fixture rather than at module level.

Yes, each worker performs a standard collection, and sends the collected test ids (in order) back to the master node. The master node ensures every worker collected the same number of tests and in the same order, because the scheduler will from that point on send just the test indexes (not the entire node id) to each worker to tell them which test to execute. That's why the collection must be the same across all workers.
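The interaction described above can be sketched roughly like this (a conceptual illustration in plain Python, not xdist's actual implementation):

```python
# Each worker reports its collected node IDs; the master verifies they all
# match, because scheduling afterwards sends only integer indexes into this list.

def check_same_collection(collections):
    first = collections[0]
    for other in collections[1:]:
        if other != first:
            raise RuntimeError("Different tests were collected between workers")
    return first

gw0 = ["example.py::test_a", "example.py::test_b"]
gw1 = ["example.py::test_a", "example.py::test_b"]

node_ids = check_same_collection([gw0, gw1])
# From here on the master can tell a worker "run test #1" instead of
# transmitting the full node ID, which only works if every list lines up.
```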

"generating tests twice must produce identical tests" check

As explained above, unfortunately it is not a simple check but a central design point of how the scheduler and workers interact.

@josiahls

josiahls commented Aug 9, 2019

Hi, what is the solution to this? I also have:

import gym
import pytest

@pytest.mark.parametrize("env", Envs.get_all_latest_envs())
def test_envs_all(env):
    gym.make(env)

where I am trying to test an agent on a bunch of environments, and I get "Different tests were collected between gw0 and gw1". Envs.get_all_latest_envs() just returns a list of env names. This works in a normal pytest run, but some environments take forever to initialize, so I'd like to run others in parallel instead of waiting.

I'm getting a similar error:

plugins: xdist-1.29.0, forked-1.0.2, asyncio-0.10.0
gw0 I / gw1 I
gw0 [483] / gw1 [483]

gw1:None (gw1)
The difference is:
--- gw0

+++ gw1

@@ -1,483 +1,483 @@

+fast_rl/tests/test_Envs.py::test_envs_all[AmidarNoFrameskip-v40]
+fast_rl/tests/test_Envs.py::test_envs_all[AirRaidNoFrameskip-v40]
+fast_rl/tests/test_Envs.py::test_envs_all[maze-sample-5x5-v0]
+fast_rl/tests/test_Envs.py::test_envs_all[VideoPinballNoFrameskip-v40]
+fast_rl/tests/test_Envs.py::test_envs_all[AsteroidsNoFrameskip-v40]

@aganders3

@josiahls Try using sorted(Envs.get_all_latest_envs()). I had the same problem, and the issue was each worker generating the "same" list but in a different order.

@therefromhere

therefromhere commented Oct 2, 2019

edit, moved my comment to a new issue #472

@federicosacerdoti

Had this issue; fixed it by sorting the parameters:

@pytest.mark.parametrize("foo", sorted(foos))

treverhines added a commit to treverhines/scipy that referenced this issue Apr 5, 2021:
"DOC: fixed typo in comment, made abbreviations for n-dimensional consistent"
(commit message body: "…described here: pytest-dev/pytest-xdist#432")
@pp-mo

pp-mo commented Oct 26, 2022

I have also been having this issue with parametrized tests.
I found the discussions here, which sounded promising, but for some reason sorting the params does not fix it.

Possibly relevant :

  1. we are using multiple params, so 2-3 "dimensions" of parametrization
  2. we are using fixtures to encapsulate the parametrisations (as it's much neater to share between tests that way)

Relevant to this PR at present: SciTools/iris#4960, plus a sample failing Actions job.

Main package versions in use:

pytest                    7.1.3            py38h578d9bd_0    conda-forge
pytest-forked             1.4.0              pyhd8ed1ab_0    conda-forge
pytest-xdist              2.5.0              pyhd8ed1ab_0    conda-forge

Basically I'm stumped: I'm not clear why sorting didn't seem to solve it; possibly something else is going on.
We could really use a hand with this. It's a serious sticking point, since we need xdist for the reduced CI testing time.

@pp-mo

pp-mo commented Oct 26, 2022

for some reason sorting the params does not fix it.

UPDATE: in addition to sorting the parameters, I tried numbering the tests (which are in a common class) so that their alphabetical order matches the order of definition in the class. And that just worked 😮
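That numbering trick can be sketched with explicit parametrize IDs (the case names here are made up): prefixing each ID with a zero-padded index makes lexical order identical to definition order on every worker:

```python
import pytest

cases = ["gamma", "alpha", "beta"]  # hypothetical parameter values

# Zero-padded numeric prefixes make the IDs sort exactly in definition order,
# so every worker reports them in one unambiguous sequence.
case_ids = [f"{i:02d}_{name}" for i, name in enumerate(cases)]

@pytest.mark.parametrize("value", cases, ids=case_ids)
def test_case(value):
    assert isinstance(value, str)
```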

@wanghuibin0

I also stumbled over this issue. The problem is still there. Is there a workaround?

@wanghuibin0

I also stumbled over this issue. The problem is still there. Is there a workaround?

Ah, I have just installed pytest-randomly, and it works.

@ChillarAnand

I also stumbled over this issue. The problem is still there. Is there a workaround?

Ah, I have just installed pytest-randomly, and it works.

I stumbled on this issue too, and with pytest-randomly installed it works fine.

@RonnyPfannschmidt (Member)

@nicoddemus I believe the only action we can take is to document that if people use random objects as test IDs, it is their responsibility to ensure consistent IDs, for example via pytest-randomly.
