
Rebuild for PyPy3.8 and PyPy3.9 #263

Conversation

regro-cf-autotick-bot
Contributor

This PR has been triggered in an effort to update pypy38.

Notes and instructions for merging this PR:

  1. Please merge the PR only after the tests have passed.
  2. Feel free to push to the bot's branch to update this PR if needed.

Please note that if you close this PR we presume that the feedstock has been rebuilt, so if you are going to perform the rebuild yourself, don't close this PR until your rebuild has been merged.

This package has the following downstream children:

  • gnuradio
    and potentially more.

If this PR was opened in error or needs to be updated please add the bot-rerun label to this PR. The bot will close this PR and schedule another one. If you do not have permissions to add this label, you can use the phrase @conda-forge-admin, please rerun bot in a PR comment to have the conda-forge-admin add it for you.

This PR was created by the regro-cf-autotick-bot. The regro-cf-autotick-bot is a service to automatically track the dependency graph, migrate packages, and propose package version updates for conda-forge. Feel free to drop us a line if there are any issues! This PR was generated by https://github.com/regro/autotick-bot/actions/runs/2088171368, please use this URL for debugging.

@conda-forge-linter

Hi! This is the friendly automated conda-forge-linting service.

I just wanted to let you know that I linted all conda-recipes in your PR (recipe) and found it was in an excellent condition.

@conda-forge-linter

Hi! This is the friendly automated conda-forge-linting service.

I was trying to look for recipes to lint for you, but it appears we have a merge conflict.
Please try to merge or rebase with the base branch to resolve this conflict.

Please ping the 'conda-forge/core' team (using the @ notation in a comment) if you believe this is a bug.

@conda-forge-linter

Hi! This is the friendly automated conda-forge-linting service.

I just wanted to let you know that I linted all conda-recipes in your PR (recipe) and found it was in an excellent condition.

@isuruf
Member

isuruf commented Apr 4, 2022

@mattip, looks like there are 4 failures.

@mattip

mattip commented Apr 4, 2022

These seem to be the same failures as in numpy/numpy#21285. Apparently there is an edge case in generic_aliases that we missed in testing before the release of PyPy3.9. I should have turned on full tests in https://github.com/pypy/binary-testing/actions/workflows/numpy.yml. I am looking into it.

@h-vetinari h-vetinari mentioned this pull request Apr 4, 2022
@h-vetinari
Member

h-vetinari commented Apr 4, 2022

Just a note that backporting the migrator to numpy119 seems to be passing in #264

@mattip

mattip commented Apr 5, 2022

The PyPy issue has been fixed, I will update the PyPy feedstock. Once that percolates through we can restart CI here.

@h-vetinari
Member

There seem to be some additional issues concerning numpy/numpy#17582 (which @mattip was involved with as well 🙃)... I had been wondering why pypy seemed to launch into the test suite twice, but it sounds like that is fully intentional.

However, it makes the failures quite a bit harder to read, because now there are two separate test suite outputs (when failing).

Here's some excerpts:

+ python -c 'import numpy, sys; sys.exit(not numpy.test(verbose=1, label='\''full'\'', tests=None, extra_argv=['\''-k'\'', '\''not (_not_a_real_test or test_partial_iteration_cleanup)'\'', '\''-nauto'\'', '\''--timeout=600'\'', '\''--durations=50'\'']))'
NumPy version 1.22.3
NumPy relaxed strides checking option: True
NumPy CPU features:  SSE SSE2 SSE3 SSSE3* SSE41* POPCNT* SSE42* AVX* F16C* FMA3? AVX2? AVX512F? AVX512CD? AVX512_KNL? AVX512_SKX? AVX512_CLX? AVX512_CNL? AVX512_ICL?
bringing up nodes...
bringing up nodes...

........................................................................ [  0%]

[...]

uh-oh, unmatched shift_free(ptr, 1) but allocated 8
........................................................................ [ 91%]
uh-oh, unmatched shift_free(ptr, 1) but allocated 8
uh-oh, unmatched shift_free(ptr, 1) but allocated 32768
..........................................................s............. [ 91%]
uh-oh, unmatched shift_free(ptr, 1) but allocated 8

[...]

...........................................................sssssssssssss [ 99%]
sssss                                                                    [100%]
=================================== FAILURES ===================================

[...]

_______________________________ test_new_policy ________________________________
[gw0] darwin -- Python 3.9.10 $PREFIX/bin/python

[...]

----------------------------- Captured stdout call -----------------------------  # <- complete output from inner test suite
NumPy version 1.22.3
NumPy relaxed strides checking option: True
NumPy CPU features:  SSE SSE2 SSE3 SSSE3* SSE41* POPCNT* SSE42* AVX* F16C* FMA3? AVX2? AVX512F? AVX512CD? AVX512_KNL? AVX512_SKX? AVX512_CLX? AVX512_CNL? AVX512_ICL?
============================= test session starts ==============================
platform darwin -- Python 3.9.10[pypy-7.3.8-final], pytest-7.1.1, pluggy-1.0.0 -- $PREFIX/bin/python
cachedir: .pytest_cache
hypothesis profile 'np.test() profile' -> database=None, deadline=None, derandomize=True, suppress_health_check=[HealthCheck.data_too_large, HealthCheck.filter_too_much, HealthCheck.too_slow, HealthCheck.return_value, HealthCheck.large_base_example, HealthCheck.not_a_test_method, HealthCheck.function_scoped_fixture]
rootdir: $SRC_DIR
plugins: hypothesis-6.41.0, xdist-2.5.0, forked-1.4.0, timeout-2.1.0
collecting ... collected 7693 items

tests/test__exceptions.py::TestArrayMemoryError::test_pickling <- ../_test_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehol/lib/pypy3.9/site-packages/numpy/core/tests/test__exceptions.py PASSED [  0%]
tests/test__exceptions.py::TestArrayMemoryError::test_str <- ../_test_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehol/lib/pypy3.9/site-packages/numpy/core/tests/test__exceptions.py PASSED [  0%]

[...]

tests/test_unicode.py::TestByteorder_1009_UCS4::test_values_updowncast <- ../_test_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehol/lib/pypy3.9/site-packages/numpy/core/tests/test_unicode.py PASSED [100%]

=================================== FAILURES ===================================
______________________ test_may_share_memory_harder_fuzz _______________________

[...]

~~~~~~~~~~~~~~~~~~~~~ Stack of <unknown> (123145388929024) ~~~~~~~~~~~~~~~~~~~~~
  File "$PREFIX/lib/pypy3.9/site-packages/execnet/gateway_base.py", line 285, in _perform_spawn
    reply.run()
  File "$PREFIX/lib/pypy3.9/site-packages/execnet/gateway_base.py", line 220, in run
    self._result = func(*args, **kwargs)
  File "$PREFIX/lib/pypy3.9/site-packages/execnet/gateway_base.py", line 967, in _thread_receiver
    msg = Message.from_io(io)
  File "$PREFIX/lib/pypy3.9/site-packages/execnet/gateway_base.py", line 432, in from_io
    header = io.read(9)  # type 1, channel 4, payload 4
  File "$PREFIX/lib/pypy3.9/site-packages/execnet/gateway_base.py", line 400, in read
    data = self._read(numbytes - len(buf))
=========================== short test summary info ============================
FAILED tests/test_mem_overlap.py::test_may_share_memory_harder_fuzz - Failed:...
= 1 failed, 6731 passed, 934 skipped, 21 xfailed, 6 xpassed in 1252.28s (0:20:52) =
----------------------------- Captured stderr call -----------------------------  # <- back to stderr output for outer test suite
uh-oh, unmatched shift_free(ptr, 1) but allocated 8
uh-oh, unmatched shift_free(ptr, 1) but allocated 8
uh-oh, unmatched shift_free(ptr, 1) but allocated 8

[...]

uh-oh, unmatched shift_free(ptr, 1) but allocated 8
uh-oh, unmatched shift_free(ptr, 1) but allocated 8
=============================== warnings summary ===============================
../_test_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehol/lib/pypy3.9/importlib/__init__.py:127
../_test_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehol/lib/pypy3.9/importlib/__init__.py:127
../_test_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehol/lib/pypy3.9/importlib/__init__.py:127
  $PREFIX/lib/pypy3.9/importlib/__init__.py:127: UserWarning: The numpy.array_api submodule is still experimental. See NEP 47.
    return _bootstrap._gcd_import(name[level:], package, level)

-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
============================= slowest 50 durations =============================
1252.71s call     core/tests/test_mem_policy.py::test_new_policy
364.22s call     core/tests/test_mem_overlap.py::test_may_share_memory_easy_fuzz

[...]

=========================== short test summary info ============================
FAILED typing/tests/test_generic_alias.py::TestGenericAlias::test_pass[__dir__-<lambda>]
FAILED typing/tests/test_generic_alias.py::TestGenericAlias::test_raise[setattr-AttributeError-<lambda>0]
FAILED typing/tests/test_generic_alias.py::TestGenericAlias::test_raise[setattr-AttributeError-<lambda>1]
FAILED typing/tests/test_generic_alias.py::TestGenericAlias::test_pass[__repr__-<lambda>]
FAILED core/tests/test_mem_policy.py::test_new_policy - AssertionError: asser...
5 failed, 17974 passed, 1408 skipped, 30 xfailed, 36 xpassed, 3 warnings in 1453.58s (0:24:13)

@h-vetinari
Member

Since test_new_policy is therefore counted as one test, I guess the problem is that it runs into the timeout and gets killed eventually:

1252.71s call     core/tests/test_mem_policy.py::test_new_policy
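
(As an aside, pytest-timeout, which shows up in the plugin list above, also supports a per-test override via a marker, which would let a single long test exceed the global --timeout=600. A minimal sketch, purely illustrative and not something this PR changes; the 3000s figure is made up:)

import pytest

# hypothetical per-test override; the real test body lives in numpy's
# core/tests/test_mem_policy.py and is untouched here
@pytest.mark.timeout(3000)
def test_new_policy():
    ...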

@h-vetinari
Member

This was surfaced by switching to running the full test suite in #267; if we want, we can obviously also skip test_new_policy on pypy (or keep running just the label='fast' tests).
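
For concreteness, both options map onto the numpy.test() call the recipe already issues (see the command in the log above). A minimal sketch, purely illustrative; the -k expression mirrors the log, everything else is an assumption:

import numpy, platform, sys

# option A: go back to running only the 'fast' label
# ok = numpy.test(verbose=1, label='fast', extra_argv=['-nauto', '--timeout=600'])

# option B: keep the full suite, but additionally deselect test_new_policy on PyPy
deselect = 'not (_not_a_real_test or test_partial_iteration_cleanup)'
if platform.python_implementation() == 'PyPy':
    deselect += ' and not test_new_policy'
ok = numpy.test(verbose=1, label='full',
                extra_argv=['-k', deselect, '-nauto', '--timeout=600', '--durations=50'])
sys.exit(not ok)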

@mattip

mattip commented Apr 7, 2022

  1. The test failures will be fixed by the patch in conda-forge/pypy3.6-feedstock#79 (add patch fixing GenericAlias on 3.9).
  2. I agree, running the test_new_policy tests can lead to confusing output. I am not sure how to work around that. Maybe there should be a way to run the complete test suite without test_new_policy, and only if that passes then run the test_new_policy test. I don't know if pytest supports that.

@h-vetinari
Member

h-vetinari commented Apr 7, 2022

The test failures will be fixed by the patch in conda-forge/pypy3.6-feedstock#79

I'm aware :)

Maybe there should be a way to run the complete test suite without test_new_policy, and only if that passes then run the test_new_policy test. I don't know if pytest supports that.

We could pretty easily skip it using the existing tests_to_skip infrastructure, and then run that single test individually. Would result in two separate test suite invocations here. That said, with a high enough timeout, this is now passing. 🥳

I don't think pytest can do what you hypothesise, as it just collects the tests and runs them in order (i.e. not forming dependencies of "run A after B").
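
A purely illustrative sketch of the "skip it in the main run, then run it individually" idea as two numpy.test() invocations; the timeout value and overall structure are assumptions, not what the feedstock actually does:

import numpy, sys

# first invocation: the full suite with test_new_policy deselected
deselect = 'not (_not_a_real_test or test_partial_iteration_cleanup or test_new_policy)'
ok = numpy.test(verbose=1, label='full',
                extra_argv=['-k', deselect, '-nauto', '--timeout=600', '--durations=50'])

# second invocation: only test_new_policy, with a much larger (made-up) timeout,
# since it re-runs the whole suite in a subprocess and needs well over 600s
if ok:
    ok = numpy.test(verbose=1, label='full',
                    extra_argv=['-k', 'test_new_policy', '--timeout=3000'])

sys.exit(not ok)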

Member

@h-vetinari h-vetinari left a comment


Personally I think conda-forge/conda-forge-pinning-feedstock#2730 should be merged before this PR (but obviously I won't stand in the way if others disagree).

@mattip

mattip commented Apr 7, 2022

conda-forge/conda-forge-pinning-feedstock#2730 was merged, but the aarch64 and ppc packages need to be built manually before this can pass cleanly

@h-vetinari
Member

In addition to the aarch/ppc manual builds, this needs conda-forge/pypy-meta-feedstock#23

@mattip

mattip commented Apr 10, 2022

@conda-forge-admin, please restart ci

@h-vetinari
Member

Don't think the files are through the CDN yet (these days it seems to take ~2h most of the time; also #downloads>1 is a pretty reliable indicator).

@h-vetinari h-vetinari added the automerge Merge the PR when CI passes label Apr 10, 2022
@github-actions github-actions bot merged commit 541cf25 into conda-forge:main Apr 10, 2022
@github-actions
Contributor

Hi! This is the friendly conda-forge automerge bot!

I considered the following status checks when analyzing this PR:

  • linter: passed
  • azure: passed

Thus the PR was passing and merged! Have a great day!

@regro-cf-autotick-bot regro-cf-autotick-bot deleted the rebuild-pypy38-0-1_h361179 branch April 10, 2022 06:47