Convert generator attached to Sampler back to lazy construction #65926

Merged
merged 1 commit into pytorch:release/1.10 on Oct 8, 2021

Conversation

@ejguan ejguan (Contributor) commented Sep 30, 2021

Summary:
Pull Request resolved: pytorch#63646

Fixes pytorch#63609

Test Plan: Imported from OSS

Reviewed By: NivekT

Differential Revision: D30451774

Pulled By: ejguan

fbshipit-source-id: 550d77494326446d1a42b5da0559e0d384c47413
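The gist of the change: the torch.Generator attached to a sampler is constructed when iteration begins, not when the sampler object is created. A minimal sketch of that lazy pattern, assuming a RandomSampler-like sampler (the class and names below are illustrative, not the exact code touched by this PR):

```python
import torch
from torch.utils.data import Sampler


class LazyGeneratorSampler(Sampler):
    """Illustrative sketch only: the torch.Generator is created lazily in
    __iter__ instead of eagerly in __init__, so merely constructing the
    sampler does not build or seed a generator."""

    def __init__(self, data_source, generator=None):
        self.data_source = data_source
        # Only remember what the caller passed; do not build a generator here.
        self.generator = generator

    def __iter__(self):
        if self.generator is None:
            # Build the generator on first use, seeded from the default RNG.
            seed = int(torch.empty((), dtype=torch.int64).random_().item())
            generator = torch.Generator()
            generator.manual_seed(seed)
        else:
            generator = self.generator
        yield from torch.randperm(len(self.data_source), generator=generator).tolist()

    def __len__(self):
        return len(self.data_source)
```

With the eager variant, the generator (and its seed) is fixed at construction time; constructing it inside __iter__ defers that until the sampler is actually used.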
@facebook-github-bot facebook-github-bot (Contributor) commented Sep 30, 2021

💊 CI failures summary and remediations

As of commit 34ecc7f (more details on the Dr. CI page):



🕵️ 3 new failures recognized by patterns

The following CI failures do not appear to be due to upstream breakages:

See GitHub Actions build linux-bionic-py3.6-clang9 / test (noarch, 1, 1, linux.2xlarge) (1/3)

Step: "Test" (full log | diagnosis details | 🔁 rerun)

2021-09-30T18:24:00.5353118Z AssertionError: RuntimeError not raised
2021-09-30T18:24:00.5347855Z Traceback (most recent call last):
2021-09-30T18:24:00.5348477Z   File "/var/lib/jenkins/workspace/test/jit/test_tracer.py", line 244, in test_canonicalize_tensor_iterator
2021-09-30T18:24:00.5349463Z     self.assertTrue(str(traced.graph_for(x)).count(': int = prim::Constant') == 5)
2021-09-30T18:24:00.5350042Z AssertionError: False is not true
2021-09-30T18:24:00.5350394Z 		
2021-09-30T18:24:00.5350949Z ❌ Failure: jit.test_tracer.TestTracer.test_inplace_check
2021-09-30T18:24:00.5351349Z 
2021-09-30T18:24:00.5351683Z Traceback (most recent call last):
2021-09-30T18:24:00.5352239Z   File "/var/lib/jenkins/workspace/test/jit/test_tracer.py", line 342, in test_inplace_check
2021-09-30T18:24:00.5352726Z     ge(x)
2021-09-30T18:24:00.5353118Z AssertionError: RuntimeError not raised
2021-09-30T18:24:00.5353508Z 		
2021-09-30T18:24:00.5354234Z 🚨 ERROR: jit.test_freezing.TestMKLDNNReinplacing.test_always_alive_values
2021-09-30T18:24:00.5354787Z 
2021-09-30T18:24:00.5355127Z Traceback (most recent call last):
2021-09-30T18:24:00.5355708Z   File "/var/lib/jenkins/workspace/test/jit/test_freezing.py", line 2134, in test_always_alive_values
2021-09-30T18:24:00.5356336Z     self.checkResults(mod_eager, mod)
2021-09-30T18:24:00.5357040Z   File "/var/lib/jenkins/workspace/test/jit/test_freezing.py", line 2091, in checkResults
2021-09-30T18:24:00.5357641Z     self.assertEqual(mod1(inp), mod2(inp))
2021-09-30T18:24:00.5358424Z   File "/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
2021-09-30T18:24:00.5359019Z     return forward_call(*input, **kwargs)

See GitHub Actions build linux-bionic-py3.8-gcc9-coverage / test (distributed, 1, 1, linux.2xlarge) (2/3)

Step: "Test" (full log | diagnosis details | 🔁 rerun)

2021-09-30T18:55:19.6896188Z AssertionError: Fa...true : Scalars failed to compare as equal! -6 != 0
2021-09-30T18:55:19.6884971Z ❌ Failure: ProcessGroupGlooTest.test_allgather_coalesced_async
2021-09-30T18:55:19.6888805Z Traceback (most recent call last):
2021-09-30T18:55:19.6890383Z   File "/opt/conda/lib/python3.8/site-packages/torch/testing/_internal/common_distributed.py", line 418, in wrapper
2021-09-30T18:55:19.6891044Z     self._join_processes(fn)
2021-09-30T18:55:19.6891836Z   File "/opt/conda/lib/python3.8/site-packages/torch/testing/_internal/common_distributed.py", line 637, in _join_processes
2021-09-30T18:55:19.6892515Z     self._check_return_codes(elapsed_time)
2021-09-30T18:55:19.6893345Z   File "/opt/conda/lib/python3.8/site-packages/torch/testing/_internal/common_distributed.py", line 692, in _check_return_codes
2021-09-30T18:55:19.6893986Z     self.assertEqual(
2021-09-30T18:55:19.6894737Z   File "/opt/conda/lib/python3.8/site-packages/torch/testing/_internal/common_utils.py", line 1955, in assertEqual
2021-09-30T18:55:19.6895450Z     super().assertTrue(result, msg=self._get_assert_msg(msg, debug_msg=debug_msg))
2021-09-30T18:55:19.6896188Z AssertionError: False is not true : Scalars failed to compare as equal! -6 != 0
2021-09-30T18:55:19.6896892Z Expect process 3 exit code to match Process 0 exit code of 0, but got -6
2021-09-30T18:55:19.6897212Z 
2021-09-30T18:55:19.6897548Z ✅ 67 Passed
2021-09-30T18:55:19.6897925Z 💨 40 Skipped
2021-09-30T18:55:19.6898264Z 🚨 1 Failed
2021-09-30T18:55:19.7080245Z ##[group]Run # Remove any previous test reports if they exist
2021-09-30T18:55:19.7080868Z # Remove any previous test reports if they exist
2021-09-30T18:55:19.7081323Z rm -f test-reports-*.zip
2021-09-30T18:55:19.7081791Z zip -r "test-reports-${FILE_SUFFIX}.zip" test -i '*.xml'
2021-09-30T18:55:19.7092621Z shell: /usr/bin/bash -e {0}

See GitHub Actions build linux-xenial-py3.6-gcc5.4 / test (backwards_compat, 1, 1, linux.2xlarge) (3/3)

Step: "Test" (full log | diagnosis details | 🔁 rerun)

2021-09-30T18:20:56.7371491Z The PR is introduc...m to confirm whether this change is wanted or not.
2021-09-30T18:20:56.7358777Z processing existing schema:  alltoall_base(__torch__.torch.classes.dist_c10d.ProcessGroup _0, Tensor _1, Tensor _2, int[] _3, int[] _4) -> (__torch__.torch.classes.dist_c10d.Work _0)
2021-09-30T18:20:56.7360097Z processing existing schema:  alltoall(__torch__.torch.classes.dist_c10d.ProcessGroup _0, Tensor[] _1, Tensor[] _2) -> (__torch__.torch.classes.dist_c10d.Work _0)
2021-09-30T18:20:56.7361357Z processing existing schema:  send(__torch__.torch.classes.dist_c10d.ProcessGroup _0, Tensor[] _1, int _2, int _3) -> (__torch__.torch.classes.dist_c10d.Work _0)
2021-09-30T18:20:56.7362617Z processing existing schema:  recv(__torch__.torch.classes.dist_c10d.ProcessGroup _0, Tensor[] _1, int _2, int _3) -> (__torch__.torch.classes.dist_c10d.Work _0)
2021-09-30T18:20:56.7363895Z processing existing schema:  recv_anysource(__torch__.torch.classes.dist_c10d.ProcessGroup _0, Tensor[] _1, int _2) -> (__torch__.torch.classes.dist_c10d.Work _0)
2021-09-30T18:20:56.7365114Z processing existing schema:  barrier(__torch__.torch.classes.dist_c10d.ProcessGroup _0) -> (__torch__.torch.classes.dist_c10d.Work _0)
2021-09-30T18:20:56.7366161Z processing existing schema:  __init__(__torch__.torch.classes.dist_c10d.frontend _0) -> (NoneType _0)
2021-09-30T18:20:56.7367529Z processing existing schema:  new_process_group_helper(__torch__.torch.classes.dist_c10d.frontend _0, int _1, int _2, int[] _3, str _4, __torch__.torch.classes.dist_c10d.Store _5, str? _6, int _7) -> (__torch__.torch.classes.dist_c10d.ProcessGroup _0)
2021-09-30T18:20:56.7369023Z processing existing schema:  get_process_group_by_name(__torch__.torch.classes.dist_c10d.frontend _0, str _1) -> (__torch__.torch.classes.dist_c10d.ProcessGroup _0)
2021-09-30T18:20:56.7370366Z processing existing schema:  get_name_of_process_group(__torch__.torch.classes.dist_c10d.frontend _0, __torch__.torch.classes.dist_c10d.ProcessGroup _1) -> (str _0)
2021-09-30T18:20:56.7371491Z The PR is introducing backward incompatible changes to the operator library. Please contact PyTorch team to confirm whether this change is wanted or not. 
2021-09-30T18:20:56.7372102Z 
2021-09-30T18:20:56.7372348Z Broken ops: [
2021-09-30T18:20:56.7373694Z 	aten::_slow_conv2d_backward.grad_input(Tensor grad_output, Tensor self, Tensor weight, int[2] kernel_size, int[2] stride, int[2] padding, Tensor finput, *, Tensor(a!) grad_input, Tensor(b!) grad_weight, Tensor(c!) grad_bias) -> (Tensor(a!), Tensor(b!), Tensor(c!))
2021-09-30T18:20:56.7375377Z 	aten::_slow_conv2d_backward.output_mask(Tensor grad_output, Tensor self, Tensor weight, int[2] kernel_size, int[2] stride, int[2] padding, Tensor finput, bool[3] output_mask) -> (Tensor grad_input, Tensor grad_weight, Tensor grad_bias)
2021-09-30T18:20:56.7376747Z 	aten::_slow_conv2d_forward(Tensor self, Tensor weight, int[2] kernel_size, Tensor? bias, int[2] stride, int[2] padding) -> (Tensor output, Tensor finput)
2021-09-30T18:20:56.7378010Z 	aten::_slow_conv2d_forward.output(Tensor self, Tensor weight, int[2] kernel_size, Tensor? bias, int[2] stride, int[2] padding, *, Tensor(a!) output, Tensor(b!) finput) -> (Tensor(a!), Tensor(b!))
2021-09-30T18:20:56.7379140Z 	aten::_log_softmax_backward_data(Tensor grad_output, Tensor output, int dim, int input_dtype) -> (Tensor)
2021-09-30T18:20:56.7380137Z 	aten::_log_softmax_backward_data.out(Tensor grad_output, Tensor output, int dim, int input_dtype, *, Tensor(a!) out) -> (Tensor(a!))
2021-09-30T18:20:56.7380923Z 	prim::CudaFusionSizeEq(...) -> (bool)
2021-09-30T18:20:56.7381595Z 	prim::add_optional(Tensor(a) input, Tensor? bias) -> (Tensor(a))

❄️ 2 failures tentatively classified as flaky, but reruns have not yet been triggered to confirm:

See GitHub Actions build win-vs2019-cuda11.3-py3 / build (1/2)

Step: "Build" (full log | diagnosis details | 🔁 rerun) ❄️

2021-09-30T18:10:47.7768170Z caused by: An exis...rcibly closed by the remote host. (os error 10054)
2021-09-30T18:10:47.7700231Z error: failed to execute compile
2021-09-30T18:10:47.7700684Z caused by: Failed to send data to or receive data from server
2021-09-30T18:10:47.7701126Z caused by: Failed to read response header
2021-09-30T18:10:47.7701656Z caused by: An existing connection was forcibly closed by the remote host. (os error 10054)
2021-09-30T18:10:47.7722262Z [5391/6292] C:\actions-runner\_work\pytorch\pytorch\pytorch-1291900145\build\win_tmp\bin\sccache-cl.exe   /TP -DBUILD_SPLIT_CUDA -DIDEEP_USE_MKL -DMAGMA_V2 -DMINIZ_DISABLE_ZIP_READER_CRC32_CHECKS -DONNXIFI_ENABLE_EXT=1 -DONNX_ML=1 -DONNX_NAMESPACE=onnx_torch -DTH_BLAS_MKL -DTORCH_CUDA_CPP_BUILD_MAIN_LIB -DUSE_C10D_GLOO -DUSE_CUDA -DUSE_DISTRIBUTED -DUSE_EXTERNAL_MZCRC -DWIN32_LEAN_AND_MEAN -D_CRT_SECURE_NO_DEPRECATE=1 -D_OPENMP_NOFORCE_MANIFEST -Dtorch_cuda_cpp_EXPORTS -DNVRTC_SHORTHASH=c7f1618d -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1291900145\build\aten\src -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1291900145\aten\src -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1291900145\build -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1291900145 -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1291900145\cmake\..\third_party\benchmark\include -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1291900145\cmake\..\third_party\cudnn_frontend\include -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1291900145\build\caffe2\contrib\aten -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1291900145\third_party\onnx -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1291900145\build\third_party\onnx -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1291900145\third_party\foxi -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1291900145\build\third_party\foxi -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1291900145\torch\csrc\distributed -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1291900145\build\caffe2\aten\src\TH -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1291900145\aten\src\TH -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1291900145\build\caffe2\aten\src\THC -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1291900145\aten\src\THC -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1291900145\aten\src\ATen\cuda -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1291900145\build\caffe2\aten\src -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1291900145\aten\..\third_party\catch\single_include -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1291900145\aten\src\ATen\.. -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1291900145\build\caffe2\aten\src\ATen -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1291900145\c10\cuda\..\.. -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1291900145\c10\.. 
-IC:\actions-runner\_work\pytorch\pytorch\pytorch-1291900145\torch\csrc\api -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1291900145\torch\csrc\api\include -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1291900145\build\third_party\ideep\mkl-dnn\include -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1291900145\third_party\ideep\mkl-dnn\src\..\include -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1291900145\build\third_party\gloo -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1291900145\cmake\..\third_party\gloo -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1291900145\cmake\..\third_party\googletest\googlemock\include -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1291900145\cmake\..\third_party\googletest\googletest\include -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1291900145\third_party\protobuf\src -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1291900145\build\win_tmp\mkl\include -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1291900145\third_party\XNNPACK\include -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1291900145\third_party -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1291900145\cmake\..\third_party\eigen -IC:\Jenkins\Miniconda3\include -IC:\Jenkins\Miniconda3\lib\site-packages\numpy\core\include -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1291900145\cmake\..\third_party\pybind11\include -I"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.3\include" -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1291900145\build\win_tmp\magma\include -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1291900145\third_party\ideep\mkl-dnn\include -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1291900145\third_party\ideep\include -I"C:\Program Files\NVIDIA Corporation\NvToolsExt\include" /DWIN32 /D_WINDOWS /GR /EHsc /w /bigobj -DUSE_PTHREADPOOL -openmp:experimental -IC:/actions-runner/_work/pytorch/pytorch/pytorch-1291900145/build/win_tmp/mkl/include -DNDEBUG -DUSE_KINETO -DLIBKINETO_NOCUPTI -DUSE_FBGEMM -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -DEDGE_PROFILER_USE_KINETO -DHAVE_AVX512_CPU_DEFINITION -DHAVE_AVX2_CPU_DEFINITION /MD /O2 /Ob2 /DNDEBUG /w /bigobj -DNDEBUG -DCAFFE2_USE_GLOO -DCUDA_HAS_FP16=1 -DUSE_GCC_GET_CPUID -DUSE_AVX -DUSE_AVX2 -DTH_HAVE_THREAD /EHsc /DNOMINMAX /wd4267 /wd4251 /wd4522 /wd4838 /wd4305 /wd4244 /wd4190 /wd4101 /wd4996 /wd4275 /bigobj -O2 -DTORCH_CUDA_CPP_BUILD_MAIN_LIB -std:c++14 /showIncludes /Focaffe2\CMakeFiles\torch_cuda_cpp.dir\__\aten\src\ATen\cuda\detail\LazyNVRTC.cpp.obj /Fdcaffe2\CMakeFiles\torch_cuda_cpp.dir\ /FS -c C:\actions-runner\_work\pytorch\pytorch\
2021-09-30T18:10:47.7737175Z FAILED: caffe2/CMakeFiles/torch_cuda_cpp.dir/__/aten/src/ATen/cuda/detail/LazyNVRTC.cpp.obj 
2021-09-30T18:10:47.7751895Z C:\actions-runner\_work\pytorch\pytorch\pytorch-1291900145\build\win_tmp\bin\sccache-cl.exe   /TP -DBUILD_SPLIT_CUDA -DIDEEP_USE_MKL -DMAGMA_V2 -DMINIZ_DISABLE_ZIP_READER_CRC32_CHECKS -DONNXIFI_ENABLE_EXT=1 -DONNX_ML=1 -DONNX_NAMESPACE=onnx_torch -DTH_BLAS_MKL -DTORCH_CUDA_CPP_BUILD_MAIN_LIB -DUSE_C10D_GLOO -DUSE_CUDA -DUSE_DISTRIBUTED -DUSE_EXTERNAL_MZCRC -DWIN32_LEAN_AND_MEAN -D_CRT_SECURE_NO_DEPRECATE=1 -D_OPENMP_NOFORCE_MANIFEST -Dtorch_cuda_cpp_EXPORTS -DNVRTC_SHORTHASH=c7f1618d -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1291900145\build\aten\src -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1291900145\aten\src -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1291900145\build -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1291900145 -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1291900145\cmake\..\third_party\benchmark\include -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1291900145\cmake\..\third_party\cudnn_frontend\include -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1291900145\build\caffe2\contrib\aten -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1291900145\third_party\onnx -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1291900145\build\third_party\onnx -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1291900145\third_party\foxi -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1291900145\build\third_party\foxi -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1291900145\torch\csrc\distributed -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1291900145\build\caffe2\aten\src\TH -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1291900145\aten\src\TH -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1291900145\build\caffe2\aten\src\THC -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1291900145\aten\src\THC -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1291900145\aten\src\ATen\cuda -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1291900145\build\caffe2\aten\src -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1291900145\aten\..\third_party\catch\single_include -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1291900145\aten\src\ATen\.. -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1291900145\build\caffe2\aten\src\ATen -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1291900145\c10\cuda\..\.. -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1291900145\c10\.. 
-IC:\actions-runner\_work\pytorch\pytorch\pytorch-1291900145\torch\csrc\api -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1291900145\torch\csrc\api\include -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1291900145\build\third_party\ideep\mkl-dnn\include -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1291900145\third_party\ideep\mkl-dnn\src\..\include -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1291900145\build\third_party\gloo -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1291900145\cmake\..\third_party\gloo -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1291900145\cmake\..\third_party\googletest\googlemock\include -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1291900145\cmake\..\third_party\googletest\googletest\include -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1291900145\third_party\protobuf\src -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1291900145\build\win_tmp\mkl\include -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1291900145\third_party\XNNPACK\include -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1291900145\third_party -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1291900145\cmake\..\third_party\eigen -IC:\Jenkins\Miniconda3\include -IC:\Jenkins\Miniconda3\lib\site-packages\numpy\core\include -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1291900145\cmake\..\third_party\pybind11\include -I"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.3\include" -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1291900145\build\win_tmp\magma\include -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1291900145\third_party\ideep\mkl-dnn\include -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1291900145\third_party\ideep\include -I"C:\Program Files\NVIDIA Corporation\NvToolsExt\include" /DWIN32 /D_WINDOWS /GR /EHsc /w /bigobj -DUSE_PTHREADPOOL -openmp:experimental -IC:/actions-runner/_work/pytorch/pytorch/pytorch-1291900145/build/win_tmp/mkl/include -DNDEBUG -DUSE_KINETO -DLIBKINETO_NOCUPTI -DUSE_FBGEMM -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -DEDGE_PROFILER_USE_KINETO -DHAVE_AVX512_CPU_DEFINITION -DHAVE_AVX2_CPU_DEFINITION /MD /O2 /Ob2 /DNDEBUG /w /bigobj -DNDEBUG -DCAFFE2_USE_GLOO -DCUDA_HAS_FP16=1 -DUSE_GCC_GET_CPUID -DUSE_AVX -DUSE_AVX2 -DTH_HAVE_THREAD /EHsc /DNOMINMAX /wd4267 /wd4251 /wd4522 /wd4838 /wd4305 /wd4244 /wd4190 /wd4101 /wd4996 /wd4275 /bigobj -O2 -DTORCH_CUDA_CPP_BUILD_MAIN_LIB -std:c++14 /showIncludes /Focaffe2\CMakeFiles\torch_cuda_cpp.dir\__\aten\src\ATen\cuda\detail\LazyNVRTC.cpp.obj /Fdcaffe2\CMakeFiles\torch_cuda_cpp.dir\ /FS -c C:\actions-runner\_work\pytorch\pytorch\pytorch-1291
2021-09-30T18:10:47.7766707Z error: failed to execute compile
2021-09-30T18:10:47.7767170Z caused by: Failed to send data to or receive data from server
2021-09-30T18:10:47.7767639Z caused by: Failed to read response header
2021-09-30T18:10:47.7768170Z caused by: An existing connection was forcibly closed by the remote host. (os error 10054)
2021-09-30T18:10:47.7768712Z ninja: build stopped: subcommand failed.
2021-09-30T18:10:48.1970631Z -- Building version 1.10.0a0+gite0690f5
2021-09-30T18:10:48.1974178Z cmake -GNinja -DBUILD_ENVIRONMENT=win-vs2019-cuda11.3-py3 -DBUILD_PYTHON=True -DBUILD_SPLIT_CUDA=ON -DBUILD_TEST=True -DBUILD_TYPE=release -DBUILD_WHEEL=1 -DCMAKE_BUILD_TYPE=Release -DCMAKE_GENERATOR=Ninja -DCMAKE_INCLUDE_PATH=C:\actions-runner\_work\pytorch\pytorch\pytorch-1291900145\build\win_tmp\mkl\include -DCMAKE_INSTALL_PREFIX=C:\actions-runner\_work\pytorch\pytorch\pytorch-1291900145\torch -DCMAKE_PREFIX_PATH=C:\Jenkins\Miniconda3\Lib\site-packages -DCMAKE_VERBOSE_MAKEFILE=1 -DCUDA_NVCC_EXECUTABLE=C:\actions-runner\_work\pytorch\pytorch\pytorch-1291900145\build\win_tmp\bin\randomtemp.exe -DCUDNN_LIBRARY=C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.3\lib\x64 -DNUMPY_INCLUDE_DIR=C:\Jenkins\Miniconda3\lib\site-packages\numpy\core\include -DPYTHON_EXECUTABLE=C:\Jenkins\Miniconda3\python.exe -DPYTHON_INCLUDE_DIR=C:\Jenkins\Miniconda3\Include -DPYTHON_LIBRARY=C:\Jenkins\Miniconda3/libs/python38.lib -DTORCH_BUILD_VERSION=1.10.0a0+gite0690f5 -DUSE_CUDA=1 -DUSE_NUMPY=True C:\actions-runner\_work\pytorch\pytorch\pytorch-1291900145
2021-09-30T18:10:48.1977363Z cmake --build . --target install --config Release
2021-09-30T18:10:49.0051587Z + cleanup
2021-09-30T18:10:49.0089382Z + retcode=1
2021-09-30T18:10:49.0089661Z + set +x
2021-09-30T18:10:49.0838103Z ##[error]Process completed with exit code 1.
2021-09-30T18:10:49.2708386Z ##[group]Run actions/upload-artifact@v2
2021-09-30T18:10:49.2708879Z with:

See CircleCI build pytorch_linux_xenial_py3_6_gcc5_4_test (2/2)

Step: "Check for no AVX instruction by default" (full log | diagnosis details | 🔁 rerun) ❄️

E: Failed to fetch https://deb.nodesource.com/n...: /etc/ssl/certs/ca-certificates.crt CRLfile: none
Ign:10 https://deb.nodesource.com/node_12.x xenial/main Sources
Ign:14 https://deb.nodesource.com/node_12.x xenial/main amd64 Packages
Ign:12 https://deb.nodesource.com/node_12.x xenial/main all Packages
Err:10 https://deb.nodesource.com/node_12.x xenial/main Sources
  server certificate verification failed. CAfile: /etc/ssl/certs/ca-certificates.crt CRLfile: none
Ign:14 https://deb.nodesource.com/node_12.x xenial/main amd64 Packages
Ign:12 https://deb.nodesource.com/node_12.x xenial/main all Packages
Fetched 4466 kB in 4s (1079 kB/s)
Reading package lists...
W: The repository 'https://deb.nodesource.com/node_12.x xenial Release' does not have a Release file.
E: Failed to fetch https://deb.nodesource.com/node_12.x/dists/xenial/main/source/Sources  server certificate verification failed. CAfile: /etc/ssl/certs/ca-certificates.crt CRLfile: none
E: Some index files failed to download. They have been ignored, or old ones used instead.


Exited with code exit status 100


This comment was automatically generated by Dr. CI.

@ejguan ejguan changed the title from "Convert Sampler back to lazy construction (#63646)" to "Convert generator attached to Sampler back to lazy construction" on Sep 30, 2021
@malfet malfet merged commit a27906c into pytorch:release/1.10 Oct 8, 2021