Call PyArray_Check only if NumPy is available #66433

Closed

Conversation

@malfet (Contributor) commented Oct 11, 2021

Fixes #66353
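
For context, #66353 reports a segfault when constructing a tensor from a list in an environment without numpy: the wheel is compiled with NumPy support, but the NumPy C-API table is only initialized after numpy is successfully imported, so an unguarded PyArray_Check dereferences a null pointer. Below is a minimal sketch of the kind of guard the PR title describes; the helper name numpy_available() and the exact placement are illustrative assumptions, not the literal diff.

```cpp
// Minimal sketch (not the exact PR diff): only call into the NumPy C-API
// when NumPy can actually be imported at runtime.
#include <Python.h>
#include <numpy/arrayobject.h>

// Lazily try to initialize the NumPy C-API table. _import_array() returns a
// negative value (and sets a Python error) if numpy cannot be imported.
static bool numpy_available() {
  static bool available = []() {
    if (_import_array() < 0) {
      PyErr_Clear();
      return false;
    }
    return true;
  }();
  return available;
}

// Without the numpy_available() check, PyArray_Check would dereference the
// uninitialized (null) NumPy API table when numpy is not installed, which is
// the segfault reported in #66353.
static bool is_numpy_array(PyObject* obj) {
  return numpy_available() && PyArray_Check(obj);
}
```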

@pytorch-probot

CI Flow Status

⚛️ CI Flow

Ruleset - Version: v1
Ruleset - File: https://github.com/malfet/pytorch/blob/d7145f4dbf4cdcfbb5c8e93fa82d5fe14ef49c93/.github/generated-ciflow-ruleset.json
PR ciflow labels: ciflow/default

Workflows Labels (bold enabled) Status
Triggered Workflows
linux-bionic-py3.6-clang9 ciflow/all, ciflow/cpu, ciflow/default, ciflow/linux, ciflow/noarch, ciflow/xla ✅ triggered
linux-vulkan-bionic-py3.6-clang9 ciflow/all, ciflow/cpu, ciflow/default, ciflow/linux, ciflow/vulkan ✅ triggered
linux-xenial-cuda11.3-py3.6-gcc7 ciflow/all, ciflow/cuda, ciflow/default, ciflow/linux ✅ triggered
linux-xenial-py3.6-clang7-asan ciflow/all, ciflow/cpu, ciflow/default, ciflow/linux, ciflow/sanitizers ✅ triggered
linux-xenial-py3.6-clang7-onnx ciflow/all, ciflow/cpu, ciflow/default, ciflow/linux, ciflow/onnx ✅ triggered
linux-xenial-py3.6-gcc5.4 ciflow/all, ciflow/cpu, ciflow/default, ciflow/linux ✅ triggered
linux-xenial-py3.6-gcc7-bazel-test ciflow/all, ciflow/bazel, ciflow/cpu, ciflow/default, ciflow/linux ✅ triggered
win-vs2019-cpu-py3 ciflow/all, ciflow/cpu, ciflow/default, ciflow/win ✅ triggered
win-vs2019-cuda11.3-py3 ciflow/all, ciflow/cuda, ciflow/default, ciflow/win ✅ triggered
Skipped Workflows
libtorch-linux-xenial-cuda10.2-py3.6-gcc7 ciflow/all, ciflow/cuda, ciflow/libtorch, ciflow/linux 🚫 skipped
libtorch-linux-xenial-cuda11.3-py3.6-gcc7 ciflow/all, ciflow/cuda, ciflow/libtorch, ciflow/linux 🚫 skipped
linux-bionic-cuda10.2-py3.9-gcc7 ciflow/all, ciflow/cuda, ciflow/linux, ciflow/slow 🚫 skipped
linux-xenial-cuda10.2-py3.6-gcc7 ciflow/all, ciflow/cuda, ciflow/linux, ciflow/slow 🚫 skipped
parallelnative-linux-xenial-py3.6-gcc5.4 ciflow/all, ciflow/cpu, ciflow/linux 🚫 skipped
periodic-libtorch-linux-xenial-cuda11.1-py3.6-gcc7 ciflow/all, ciflow/cuda, ciflow/libtorch, ciflow/linux, ciflow/scheduled 🚫 skipped
periodic-linux-xenial-cuda10.2-py3-gcc7-slow-gradcheck ciflow/all, ciflow/cuda, ciflow/linux, ciflow/scheduled, ciflow/slow, ciflow/slow-gradcheck 🚫 skipped
periodic-linux-xenial-cuda11.1-py3.6-gcc7 ciflow/all, ciflow/cuda, ciflow/linux, ciflow/scheduled 🚫 skipped
periodic-win-vs2019-cuda11.1-py3 ciflow/all, ciflow/cuda, ciflow/scheduled, ciflow/win 🚫 skipped
puretorch-linux-xenial-py3.6-gcc5.4 ciflow/all, ciflow/cpu, ciflow/linux 🚫 skipped

You can add a comment to the PR and tag @pytorchbot with the following commands:
# ciflow rerun, "ciflow/default" will always be added automatically
@pytorchbot ciflow rerun

# ciflow rerun with additional labels "-l <ciflow/label_name>", which is equivalent to adding these labels manually and triggering the rerun
@pytorchbot ciflow rerun -l ciflow/scheduled -l ciflow/slow

For more information, please take a look at the CI Flow Wiki.

@facebook-github-bot (Contributor) commented Oct 11, 2021

💊 CI failures summary and remediations

As of commit d7145f4 (more details on the Dr. CI page):


None of the CI failures appear to be your fault 💚



❄️ 1 failure tentatively classified as flaky, but reruns have not yet been triggered to confirm:

See GitHub Actions build linux-xenial-cuda11.3-py3.6-gcc7 / test (default, 2, 2, linux.8xlarge.nvidia.gpu) (1/1)

Step: "Install nvidia driver, nvidia-docker runtime, set GPU_FLAG" (full log | diagnosis details | 🔁 rerun) ❄️

2021-10-11T19:55:11.2404340Z WARNING: infoROM is corrupted at gpu 0000:00:1E.0
2021-10-11T19:55:11.2120230Z |                               |                      |                  N/A |
2021-10-11T19:55:11.2121426Z +-------------------------------+----------------------+----------------------+
2021-10-11T19:55:11.2122026Z                                                                                
2021-10-11T19:55:11.2122816Z +-----------------------------------------------------------------------------+
2021-10-11T19:55:11.2123475Z | Processes:                                                                  |
2021-10-11T19:55:11.2124082Z |  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
2021-10-11T19:55:11.2124700Z |        ID   ID                                                   Usage      |
2021-10-11T19:55:11.2125170Z |=============================================================================|
2021-10-11T19:55:11.2125687Z |  No running processes found                                                 |
2021-10-11T19:55:11.2126554Z +-----------------------------------------------------------------------------+
2021-10-11T19:55:11.2404340Z WARNING: infoROM is corrupted at gpu 0000:00:1E.0
2021-10-11T19:55:11.7252637Z ##[error]Process completed with exit code 1.
2021-10-11T19:55:11.7341372Z ##[group]Run # Ensure the working directory gets chowned back to the current user
2021-10-11T19:55:11.7342723Z # Ensure the working directory gets chowned back to the current user
2021-10-11T19:55:11.7344040Z docker run --rm -v "$(pwd)":/v -w /v "${ALPINE_IMAGE}" chown -R "$(id -u):$(id -g)" .
2021-10-11T19:55:11.7359575Z shell: /usr/bin/bash -e {0}
2021-10-11T19:55:11.7360089Z env:
2021-10-11T19:55:11.7360816Z   BUILD_ENVIRONMENT: linux-xenial-cuda11.3-py3.6-gcc7
2021-10-11T19:55:11.7362429Z   DOCKER_IMAGE_BASE: 308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/pytorch-linux-xenial-cuda11.3-cudnn8-py3-gcc7
2021-10-11T19:55:11.7364100Z   SCCACHE_BUCKET: ossci-compiler-cache-circleci-v2
2021-10-11T19:55:11.7365361Z   XLA_CLANG_CACHE_S3_BUCKET_NAME: ossci-compiler-clang-cache-circleci-xla

This comment was automatically generated by Dr. CI.

@facebook-github-bot (Contributor) commented:

@malfet has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

malfet requested review from ezyang and a team on October 11, 2021 at 18:23.
@ptrblck (Collaborator) commented Oct 11, 2021

I was looking into the same lines of code, but couldn't narrow it down to a missing numpy check, since a source build wasn't failing there. Do you know why the binaries were hitting the issue, while a source build (without numpy) was working fine?

@seemethere (Member) commented:

> I was looking into the same lines of code, but couldn't narrow it down to a missing numpy check, since a source build wasn't failing there. Do you know why the binaries were hitting the issue, while a source build (without numpy) was working fine?

I guess the expectation with binary builds is that they are always built with USE_NUMPY=1, while if you're building from source without NumPy the expectation is that you would set USE_NUMPY=0.
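
In other words, USE_NUMPY is a compile-time switch while a missing numpy installation is a runtime condition, and only the prebuilt wheels combine the two. A rough sketch of that difference, reusing the hypothetical numpy_available() helper from the sketch above (function name illustrative, not actual PyTorch source):

```cpp
// Illustration only: USE_NUMPY decides at build time whether the numpy
// branch exists in the binary at all.
bool object_is_numpy_array(PyObject* obj) {
#ifdef USE_NUMPY
  // Prebuilt wheels: compiled with USE_NUMPY, so this branch is present and
  // must be guarded against numpy being absent at runtime.
  return numpy_available() && PyArray_Check(obj);
#else
  // Source build with USE_NUMPY=0: the numpy path is never compiled, so a
  // numpy-less source build could not reproduce the segfault.
  (void)obj;
  return false;
#endif
}
```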

malfet added a commit to malfet/pytorch that referenced this pull request Oct 14, 2021
Summary:
Fixes pytorch#66353

Pull Request resolved: pytorch#66433

Reviewed By: seemethere, janeyx99

Differential Revision: D31548290

Pulled By: malfet

fbshipit-source-id: 3b094bc8195d0392338e0bdc6df2f39587b85bb3
malfet added a commit that referenced this pull request Oct 14, 2021 (same commit message as above)
wconstab pushed a commit that referenced this pull request Oct 20, 2021 (same commit message as above)
Labels: None yet
Projects: None yet
Development

Successfully merging this pull request may close these issues.

Segfault in nightly binaries while creating a tensor from a list if numpy is missing
5 participants