
Bugfix for SimplifiedLayerNormalization #12975

Merged
merged 3 commits into main from weicwang/sim_layer_norm on Sep 27, 2022

Conversation

er3x3
Contributor

@er3x3 er3x3 commented Sep 15, 2022

This PR is to fix #12930 and #12579.

In detail:

  • For CPU EP, the current implementation of SimplifiedLayerNormalization doesn't support the input and scale having different data types, so the sub-graph will not be fused if it contains a Cast op. This guarantees that the input and output data types are the same.
  • For CUDA EP, add (fp16, float) support to the (T, V) type constraints so that all combinations of fp16 and float are supported in the implementation.

With the fix, the original model can be run with SimplifiedLayerNormalization, which also helps to improve the perf.
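
As a minimal sketch (not part of this PR, with hypothetical model and file names), one way to check whether the fusion fires on a given fp16 model is to dump the optimized graph and look for the fused op; the CUDA provider requires an onnxruntime-gpu build:

import onnx
import onnxruntime as ort

so = ort.SessionOptions()
so.graph_optimization_level = ort.GraphOptimizationLevel.ORT_ENABLE_ALL
so.optimized_model_filepath = "model_fp16_optimized.onnx"  # dump the graph after optimization

# CUDA EP can use the new (fp16, float) combinations of the (T, V) type constraints;
# CPU EP simply skips the fusion when a Cast sits inside the matched sub-graph.
ort.InferenceSession("model_fp16.onnx", so,
                     providers=["CUDAExecutionProvider", "CPUExecutionProvider"])

optimized = onnx.load("model_fp16_optimized.onnx")
print(sorted({n.op_type for n in optimized.graph.node}))  # look for SimplifiedLayerNormalization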

@er3x3 er3x3 added the core runtime label Sep 15, 2022
// If it's not a GPU EP: the CPU implementation of SimplifiedLayerNormalization doesn't support the input and
// scale having different types for now, and it may also conflict with InsertCastTransformer,
// so the sub-graph will not be fused if it contains a Cast op.
bool is_gpu_ep = pow_node.GetExecutionProviderType() == kCudaExecutionProvider ||
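
A pseudocode restatement of that gate (written in Python purely for illustration; this is not the ORT source, and the helper name is made up):

# Hypothetical sketch of the effective fusion decision described in the comment above.
def should_fuse(is_gpu_ep: bool, has_cast_between_x_and_y: bool) -> bool:
    # GPU EPs handle mixed fp16/float (T, V) combinations, so all cases fuse there;
    # the CPU kernel needs matching types, so a Cast between the Pow...Div chain
    # and the final Mul blocks the fusion on CPU.
    return is_gpu_ep or not has_cast_between_x_and_y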
Contributor

Can you explain this change a bit more? Based on the comment I would have expected the code to look for a Cast and exit if the CPU EP was involved.

I don't quite understand why we change both the first branch, which seems to be about setting has_leading_cast, and a second location which has nothing to do with has_leading_cast.

Contributor Author

@er3x3 er3x3 Sep 26, 2022

There are only 4 possible cases (x=Pow->ReduceMean->Add->Sqrt->Div, and y=Mul):
(1) cast(to:float)->x->cast(to:fp16)->y : SimplifiedLayerNorm(T:fp16,V:fp16)
(2) cast(to:float)->x->y : SimplifiedLayerNorm(T:fp16,V:float)
(3) x->cast(to:fp16)->y : SimplifiedLayerNorm(T:float,V:fp16)
(4) x->y : SimplifiedLayerNorm(T:float,V:float)

They all work for CUDA EP.

For CPU EP, we have only SimplifiedLayerNorm(T:float,V:float), so only (4) works. But for (1) and (2), if we just treat the leading Cast as a normal node, meaning has_leading_cast is always false, then for (2) we can still fuse the rest into "cast(to:float)->SimplifiedLayerNorm(T:float,V:float)" (just like applying (4) to the x->y after the Cast). So the condition for CPU EP is: always set has_leading_cast to false and check whether there is a Cast between x and y; a Cast in between means we cannot fuse.
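
As a minimal sketch (not from this PR) of case (2) built with the onnx Python helpers — the tensor names, shapes, epsilon value, and file name are made up, and whether the fusion actually fires depends on the installed onnxruntime version and its exact pattern matching:

import numpy as np
import onnx
from onnx import TensorProto, helper

hidden = 4
nodes = [
    helper.make_node("Cast", ["X"], ["x_f"], to=TensorProto.FLOAT),  # leading cast(to:float)
    helper.make_node("Pow", ["x_f", "two"], ["pow_out"]),            # x = Pow->ReduceMean->Add->Sqrt->Div
    helper.make_node("ReduceMean", ["pow_out"], ["mean_out"], axes=[-1], keepdims=1),
    helper.make_node("Add", ["mean_out", "eps"], ["add_out"]),
    helper.make_node("Sqrt", ["add_out"], ["sqrt_out"]),
    helper.make_node("Div", ["x_f", "sqrt_out"], ["div_out"]),
    helper.make_node("Mul", ["div_out", "scale"], ["Y"]),            # y = Mul
]
inits = [
    helper.make_tensor("two", TensorProto.FLOAT, [], [2.0]),
    helper.make_tensor("eps", TensorProto.FLOAT, [], [1e-6]),
    helper.make_tensor("scale", TensorProto.FLOAT, [hidden], np.ones(hidden, np.float32).tolist()),
]
graph = helper.make_graph(
    nodes, "case2",
    [helper.make_tensor_value_info("X", TensorProto.FLOAT16, [1, hidden])],
    [helper.make_tensor_value_info("Y", TensorProto.FLOAT, [1, hidden])],
    inits)
model = helper.make_model(graph, opset_imports=[helper.make_opsetid("", 13)])
onnx.checker.check_model(model)

import onnxruntime as ort
so = ort.SessionOptions()
so.optimized_model_filepath = "case2_optimized.onnx"  # dump the graph after optimization
sess = ort.InferenceSession(model.SerializeToString(), so, providers=["CPUExecutionProvider"])
sess.run(None, {"X": np.random.rand(1, hidden).astype(np.float16)})
# Expected on CPU EP after this fix: the leading Cast stays and the rest fuses into
# SimplifiedLayerNormalization(T:float, V:float); on CUDA EP the mixed-type
# SimplifiedLayerNormalization(T:fp16, V:float) form is also possible.
print([n.op_type for n in onnx.load("case2_optimized.onnx").graph.node])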

Contributor

Would be great to put this excellent explanation in the comment so it's captured for the next person who works on the code.

ytaous
ytaous previously approved these changes Sep 26, 2022
skottmckay
skottmckay previously approved these changes Sep 26, 2022
@er3x3 er3x3 dismissed stale reviews from skottmckay and ytaous via 5aa19cb September 27, 2022 03:30
@er3x3 er3x3 merged commit 94e34ac into main Sep 27, 2022
@er3x3 er3x3 deleted the weicwang/sim_layer_norm branch September 27, 2022 06:24
linnealovespie pushed a commit that referenced this pull request Sep 30, 2022
Successfully merging this pull request may close these issues.

Optimizer adds incompatible node