Add numerical test comparing BetterDecoder and fairseq decoder #79576
Conversation
Dr. CI: ✅ No failures (0 pending) as of commit b4d1fbc. 💚 Looks good so far! There are no failures yet. (This comment was automatically generated by Dr. CI.)
This pull request was exported from Phabricator. Differential Revision: D37157391
Corresponding test for #79438
Force-pushed from 122fa2a to 0032648
LGTM! Thanks!
@zrphercule Can you also approve #79438? It's the PR this one is stacked on top of.
Force-pushed from 0032648 to 07563bf
@pytorchbot merge -g
@pytorchbot successfully started a merge job. Check the current status here.
Merge failed: this PR has internal changes and must be landed via Phabricator.
Add numerical test comparing BetterDecoder and fairseq decoder (pytorch#79576)

Summary:
Pull Request resolved: pytorch#79576

Includes both forced and incremental decoding. Adds a new file, test_transformers.py, to hold transformer tests and move away from the huge monolithic test_nn.py. A to-do item is to move the existing transformer tests from test_nn.py to test_transformers.py.

Adds a numerical test comparing torch.nn._transformer_decoder_layer_fwd and the fairseq decoder. Both decoders use the weights of a common nn.TransformerEncoder.

Test Plan:
```
buck build --show-output mode/opt -c fbcode.enable_gpu_sections=true -c fbcode.nvcc_arch=a100 mode/inplace //caffe2/test:transformers
./fbcode/buck-out/gen/caffe2/test/transformers#binary.par
```
The test runs and passes.

Reviewed By: mikekgfb
Differential Revision: D37157391
fbshipit-source-id: 18ff84ba22e8b92208d4af97f266df0752c80d54
Force-pushed from 07563bf to b4d1fbc
@pytorchbot merge
@pytorchbot successfully started a merge job. Check the current status here.
Merge failed: this PR has internal changes and must be landed via Phabricator.
Closing because the code in this PR was already committed in #79796.
Summary:
Adds a new file, test_transformers.py, to hold transformer tests and move away from the huge monolithic test_nn.py. A to-do item is to move the existing transformer tests from test_nn.py to test_transformers.py.
Adds a numerical test comparing torch.nn._transformer_decoder_layer_fwd and the fairseq decoder. Both decoders use the weights of a common nn.TransformerEncoder. Covers both forced decoding and incremental decoding.
Stacked on top of #79438
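The forced-vs-incremental consistency check described above can be sketched with a hypothetical toy decoder in pure Python. This is only an illustration of the testing pattern, not the actual test: the real test compares torch.nn._transformer_decoder_layer_fwd against the fairseq decoder using shared nn.TransformerEncoder weights, and the functions below are invented for the example.

```python
# Toy illustration of the forced-vs-incremental comparison pattern.
# "forced" decoding processes the whole sequence at once; "incremental"
# decoding processes one token at a time with a cache, as at inference.
# A correct decoder must produce numerically matching outputs either way.

def forced_decode(tokens, weight=0.5):
    """Process the whole sequence at once: each output depends on
    (here: averages over) all positions up to and including its own."""
    outputs = []
    for i in range(len(tokens)):
        prefix = tokens[: i + 1]
        outputs.append(weight * sum(prefix) / len(prefix))
    return outputs

def incremental_decode(tokens, weight=0.5):
    """Process one token at a time, carrying a cache of past state."""
    cache_sum, cache_len = 0.0, 0
    outputs = []
    for t in tokens:
        cache_sum += t
        cache_len += 1
        outputs.append(weight * cache_sum / cache_len)
    return outputs

tokens = [1.0, 2.0, 3.0, 4.0]
forced = forced_decode(tokens)
incremental = incremental_decode(tokens)

# The two decoding modes must agree numerically (up to float tolerance).
assert all(abs(a - b) < 1e-6 for a, b in zip(forced, incremental))
print(forced)  # → [0.5, 0.75, 1.0, 1.25]
```

In the real test, the assertion would instead be a tolerance-based tensor comparison (e.g. torch.testing.assert_close) between the BetterDecoder and fairseq outputs.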
Test Plan:
The test runs and passes.
Differential Revision: D37157391