
DISABLED test_buffer_mutation_3_abi_compatible_cuda (__main__.AOTInductorTestABICompatibleCuda) #123251

Open
atalman opened this issue Apr 3, 2024 · 6 comments
Labels
module: aotinductor (aot inductor)
module: rocm (AMD GPU support for PyTorch)
oncall: pt2
skipped (Denotes a (flaky) test currently skipped in CI.)
triaged (This issue has been looked at by a team member, and triaged and prioritized into an appropriate module)

Comments

@atalman
Contributor

atalman commented Apr 3, 2024

Platforms: rocm

This test was disabled because it is failing on main branch (recent examples).

cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @ezyang @msaroufim @bdhirsh @anijain2305 @chauhang @desertfire @chenyang78

@pytorch-bot pytorch-bot bot added the skipped Denotes a (flaky) test currently skipped in CI. label Apr 3, 2024

pytorch-bot bot commented Apr 3, 2024

Hello there! From the DISABLED prefix in this issue title, it looks like you are attempting to disable a test in PyTorch CI. The information I have parsed is below:
  • Test name: test_buffer_mutation_3_abi_compatible_cuda (__main__.AOTInductorTestABICompatibleCuda)
  • Platforms for which to skip the test: rocm
  • Disabled by atalman

Within ~15 minutes, test_buffer_mutation_3_abi_compatible_cuda (__main__.AOTInductorTestABICompatibleCuda) will be disabled in PyTorch CI for these platforms: rocm. Please verify that your test name looks correct, e.g., test_cuda_assert_async (__main__.TestCuda).

To modify the platforms list, include a line like the one below in the issue body. If no platforms list is specified, the test will be disabled for all platforms by default.

Platforms: case-insensitive, list, of, platforms

We currently support the following platforms: asan, dynamo, inductor, linux, mac, macos, rocm, slow, win, windows.

@atalman
Contributor Author

atalman commented Apr 3, 2024

Caused by: #123164
Re-enable once this is resolved

@pytorch-bot pytorch-bot bot added the module: rocm AMD GPU support for Pytorch label Apr 3, 2024
@jithunnair-amd
Collaborator

@atalman This might not be the only unit test introduced by that PR that is failing for ROCm. I just cc'ed you on a comment I posted on that PR: #123164 (comment) to see how we can mitigate this in the future, since the AOTI feature seems to be under heavy development.

@shunting314 shunting314 added the triaged (This issue has been looked at by a team member, and triaged and prioritized into an appropriate module) label Apr 4, 2024
@shunting314
Contributor

cc @desertfire


pytorch-bot bot commented Apr 26, 2024

Resolving this issue because the test is no longer flaky after 700 reruns without any failures, and the issue hasn't been updated in 14 days. Please reopen the issue to re-disable the test if you think this is a false positive.

@pytorch-bot pytorch-bot bot closed this as completed Apr 26, 2024
@anijain2305
Contributor

Test is flaky, reopening.

@anijain2305 anijain2305 reopened this Apr 28, 2024
5 participants