
[JIT] Nested fix #79480

Closed
eellison wants to merge 1 commit into master from nested_fix

Conversation

eellison
Contributor

Fixes #ISSUE_NUMBER

@facebook-github-bot
Contributor

facebook-github-bot commented Jun 14, 2022


✅ No Failures (0 Pending)

As of commit c534537 (more details on the Dr. CI page):


💚 💚 Looks good so far! There are no failures yet. 💚 💚


This comment was automatically generated by Dr. CI.

Please report bugs/suggestions to the (internal) Dr. CI Users group.


@facebook-github-bot added the oncall: jit label (add this issue/PR to the JIT oncall triage queue) Jun 14, 2022
@eellison
Contributor Author

@pytorchbot rebase

@pytorchmergebot
Collaborator

@pytorchbot successfully started a rebase job. Check the current status here

@pytorchmergebot
Collaborator

Successfully rebased nested_fix onto refs/remotes/origin/master, please pull locally before adding more changes (for example, via git checkout nested_fix && git pull --rebase)

@erichan1
Member

@pytorchbot merge

@pytorchmergebot
Collaborator

@pytorchbot successfully started a merge job. Check the current status here

@github-actions

Hey @eellison.
You've committed this PR, but it does not have both a 'release notes: ...' and 'topics: ...' label. Please add one of each to the PR. The 'release notes: ...' label should represent the part of PyTorch that this PR changes (fx, autograd, distributed, etc) and the 'topics: ...' label should represent the kind of PR it is (not user facing, new feature, bug fix, perf improvement, etc). The list of valid labels can be found here for the 'release notes: ...' and here for the 'topics: ...'.
For changes that are 'topic: not user facing' there is no need for a release notes label.

@erichan1 added the release notes: jit (release notes category) and topic: bug fixes (topic category) labels Jun 17, 2022
erichan1 added a commit to erichan1/pytorch that referenced this pull request Jun 17, 2022
Summary:
Add a test that checks whether TransformerEncoder crashes when enumerating over the parameter combinations [with_no_grad, use_torchscript, training].

The motivation is that the TransformerEncoder fast path (i.e., with_no_grad=True) combined with use_torchscript=True would crash because NestedTensor doesn't have a size. The fast path automatically generates a NestedTensor as a performance optimization, and TorchScript attempts to query intermediate tensor sizes while it optimizes; since NestedTensor has not implemented a size method, this fails.

This test goes together with the fix in pytorch#79480.
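
A minimal sketch of the kind of test described above (hypothetical; the module sizes, shapes, and loop structure are illustrative assumptions, not the actual test added by the PR):
```
import contextlib
import itertools

import torch
import torch.nn as nn

# Enumerate all combinations of [with_no_grad, use_torchscript, training]
# and check that a (possibly scripted) TransformerEncoder runs without crashing.
layer = nn.TransformerEncoderLayer(d_model=8, nhead=2, batch_first=True)
src = torch.rand(2, 4, 8)

for with_no_grad, use_torchscript, training in itertools.product([True, False], repeat=3):
    model = nn.TransformerEncoder(layer, num_layers=2)
    model.train(training)
    if use_torchscript:
        model = torch.jit.script(model)
    ctx = torch.no_grad() if with_no_grad else contextlib.nullcontext()
    with ctx:
        model(src)  # should not raise, even on the NestedTensor fast path
```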

Test Plan:
```
buck build --show-output mode/opt -c fbcode.enable_gpu_sections=true -c fbcode.nvcc_arch=a100 mode/inplace  //caffe2/test:transformers

./fbcode/buck-out/gen/caffe2/test/transformers#binary.par
```
The test runs and passes together with the changes from the PR above (I made another diff on top of this with those changes); it does not pass without the fix.

Reviewed By: mikekgfb

Differential Revision: D37222923

fbshipit-source-id: 670c58a8570b7bf459c6aeb1f11800de0dba6584
@erichan1
Member

erichan1 commented Jun 17, 2022

Noting that this is a fix for TorchScript + NestedTensor. The associated test is #79796 (this mainly applies to TorchScript + NestedTensor + TransformerEncoder, which was broken before).
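
For context, a hedged sketch of the failure mode this PR fixes (module sizes, shapes, and the padding mask are illustrative assumptions):
```
import torch
import torch.nn as nn

layer = nn.TransformerEncoderLayer(d_model=8, nhead=2, batch_first=True)
model = nn.TransformerEncoder(layer, num_layers=2).eval()
scripted = torch.jit.script(model)

src = torch.rand(2, 4, 8)
padding_mask = torch.zeros(2, 4, dtype=torch.bool)

with torch.no_grad():
    # Before the fix: the fast path converts src to a NestedTensor internally,
    # and TorchScript's optimization passes query intermediate tensor sizes,
    # which NestedTensor did not implement -- so this call would crash.
    scripted(src, src_key_padding_mask=padding_mask)
```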

pytorchmergebot pushed a commit that referenced this pull request Jun 17, 2022
Add test for torchscripting nn.TransformerEncoder, including fast path (#79796)

Summary:
Add a test that checks whether TransformerEncoder crashes when enumerating over the parameter combinations [with_no_grad, use_torchscript, training].

The motivation is that the TransformerEncoder fast path (i.e., with_no_grad=True) combined with use_torchscript=True would crash because NestedTensor doesn't have a size. The fast path automatically generates a NestedTensor as a performance optimization, and TorchScript attempts to query intermediate tensor sizes while it optimizes; since NestedTensor has not implemented a size method, this fails.

This test goes together with the fix in #79480.

Test Plan:
```
buck build --show-output mode/opt -c fbcode.enable_gpu_sections=true -c fbcode.nvcc_arch=a100 mode/inplace  //caffe2/test:transformers

./fbcode/buck-out/gen/caffe2/test/transformers#binary.par
```
The test runs and passes together with the changes from the PR above (I made another diff on top of this with those changes); it does not pass without the fix.

Reviewed By: mikekgfb

Differential Revision: D37222923

Pull Request resolved: #79796
Approved by: https://github.com/zrphercule
erichan1 pushed a commit that referenced this pull request Jun 17, 2022
Fixes #ISSUE_NUMBER

Pull Request resolved: #79480
Approved by: https://github.com/davidberard98
malfet pushed a commit that referenced this pull request Jun 20, 2022
Fixes #ISSUE_NUMBER

Pull Request resolved: #79480
Approved by: https://github.com/davidberard98

Co-authored-by: Elias Ellison <eellison@fb.com>
facebook-github-bot pushed a commit that referenced this pull request Jun 20, 2022
Summary:
Fixes #ISSUE_NUMBER

Pull Request resolved: #79480
Approved by: https://github.com/davidberard98

Test Plan: contbuild & OSS CI, see https://hud.pytorch.org/commit/pytorch/pytorch/43aff84c84929e6d1c75a750ab5a6b9eda6a2b9a

Reviewed By: malfet

Differential Revision: D37242164

fbshipit-source-id: a6721af5e10dd36a9b962a1ed4f3235053444c46
facebook-github-bot pushed a commit that referenced this pull request Jun 20, 2022
Add test for torchscripting nn.TransformerEncoder, including fast path (#79796) (#79796)

Summary:
Add a test that checks whether TransformerEncoder crashes when enumerating over the parameter combinations [with_no_grad, use_torchscript, training].

The motivation is that the TransformerEncoder fast path (i.e., with_no_grad=True) combined with use_torchscript=True would crash because NestedTensor doesn't have a size. The fast path automatically generates a NestedTensor as a performance optimization, and TorchScript attempts to query intermediate tensor sizes while it optimizes; since NestedTensor has not implemented a size method, this fails.

This test goes together with the fix in #79480.

Pull Request resolved: #79796
Approved by: https://github.com/zrphercule

Test Plan:
contbuild & OSS CI, see https://hud.pytorch.org/commit/pytorch/pytorch/06274d7a487bf7995da77b9df9b5c1f7dc13f35b

Test plan from GitHub:
```
buck build --show-output mode/opt -c fbcode.enable_gpu_sections=true -c fbcode.nvcc_arch=a100 mode/inplace  //caffe2/test:transformers

./fbcode/buck-out/gen/caffe2/test/transformers#binary.par
```
The test runs and passes together with the changes from the PR above (I made another diff on top of this with those changes); it does not pass without the fix.

Reviewed By: mikekgfb

Differential Revision: D37222923

Pulled By: erichan1

fbshipit-source-id: 5a16e7d240cb51c0a613d16a79931d41122aba8b
erichan1 added a commit that referenced this pull request Jul 21, 2022
Add test for torchscripting nn.TransformerEncoder, including fast path (#79796) (#79796)

Summary:
Add a test that checks whether TransformerEncoder crashes when enumerating over the parameter combinations [with_no_grad, use_torchscript, training].

The motivation is that the TransformerEncoder fast path (i.e., with_no_grad=True) combined with use_torchscript=True would crash because NestedTensor doesn't have a size. The fast path automatically generates a NestedTensor as a performance optimization, and TorchScript attempts to query intermediate tensor sizes while it optimizes; since NestedTensor has not implemented a size method, this fails.

This test goes together with the fix in #79480.

Pull Request resolved: #79796
Approved by: https://github.com/zrphercule

Test Plan:
contbuild & OSS CI, see https://hud.pytorch.org/commit/pytorch/pytorch/06274d7a487bf7995da77b9df9b5c1f7dc13f35b

Test plan from GitHub:
```
buck build --show-output mode/opt -c fbcode.enable_gpu_sections=true -c fbcode.nvcc_arch=a100 mode/inplace  //caffe2/test:transformers

./fbcode/buck-out/gen/caffe2/test/transformers#binary.par
```
The test runs and passes together with the changes from the PR above (I made another diff on top of this with those changes); it does not pass without the fix.

Reviewed By: mikekgfb

Differential Revision: D37222923

Pulled By: erichan1

fbshipit-source-id: 5a16e7d240cb51c0a613d16a79931d41122aba8b
atalman pushed a commit that referenced this pull request Jul 25, 2022
* Add test for torchscripting nn.TransformerEncoder, including fast path (#79796) (#79796)

Summary:
Add a test that checks whether TransformerEncoder crashes when enumerating over the parameter combinations [with_no_grad, use_torchscript, training].

The motivation is that the TransformerEncoder fast path (i.e., with_no_grad=True) combined with use_torchscript=True would crash because NestedTensor doesn't have a size. The fast path automatically generates a NestedTensor as a performance optimization, and TorchScript attempts to query intermediate tensor sizes while it optimizes; since NestedTensor has not implemented a size method, this fails.

This test goes together with the fix in #79480.

Pull Request resolved: #79796
Approved by: https://github.com/zrphercule

Test Plan:
contbuild & OSS CI, see https://hud.pytorch.org/commit/pytorch/pytorch/06274d7a487bf7995da77b9df9b5c1f7dc13f35b

Test plan from GitHub:
```
buck build --show-output mode/opt -c fbcode.enable_gpu_sections=true -c fbcode.nvcc_arch=a100 mode/inplace  //caffe2/test:transformers

./fbcode/buck-out/gen/caffe2/test/transformers#binary.par
```
The test runs and passes together with the changes from the PR above (I made another diff on top of this with those changes); it does not pass without the fix.

Reviewed By: mikekgfb

Differential Revision: D37222923

Pulled By: erichan1

fbshipit-source-id: 5a16e7d240cb51c0a613d16a79931d41122aba8b

* disable src mask for transformer and multiheadattention fastpath (#81277) (#81277)

Summary:
Disable the fast path if a src_mask is passed to TransformerEncoderLayer or MultiheadAttention (see the sketch below).
- Refactored test_transformerencoder from test_nn.py into test_transformers.py and added a src_mask test there.
- Added a specific src_mask test in test_transformers.py

Fixes #81129

Pull Request resolved: #81277
Approved by: https://github.com/zrphercule

Test Plan: contbuild & OSS CI, see https://hud.pytorch.org/commit/pytorch/pytorch/23088fcfdf77632d4e6db4d35ce62735ca6622d2

Reviewed By: DanilBaibak

Differential Revision: D37919513

Pulled By: erichan1

fbshipit-source-id: 0697d789634775136897fdb6a310356a6a45030d

* remove decoder tests for a feature not in 1.12

* remove unnecessary changes from #77903 to keep the change minimal
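
As referenced above, a hedged sketch of the src_mask fallback behavior from #81277 (the module sizes and mask contents are illustrative assumptions, not code from the PR):
```
import torch
import torch.nn as nn

layer = nn.TransformerEncoderLayer(d_model=8, nhead=2, batch_first=True).eval()
src = torch.rand(2, 4, 8)
# A causal-style attention mask; with this change, supplying src_mask
# routes execution off the fast path instead of crashing or misbehaving.
src_mask = torch.triu(torch.ones(4, 4, dtype=torch.bool), diagonal=1)

with torch.no_grad():
    out = layer(src, src_mask=src_mask)  # falls back to the regular path
```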