type promote clamp #77035
Conversation
✅ No Failures (16 Pending) as of commit e93afa7 — looks good so far; there are no failures yet. (This comment was automatically generated by Dr. CI.)
@@ -12,6 +12,7 @@ struct ResultTypeState {
};

TORCH_API ResultTypeState update_result_type_state(const Tensor& tensor, const ResultTypeState& in_state);
+ TORCH_API ResultTypeState update_result_type_state(const Scalar& scalar, const ResultTypeState& in_state);
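To make the diff's intent concrete, here is a hedged pure-Python sketch of the result-type state machine the new overload extends. The C++ code overloads `update_result_type_state` for `Tensor` and `Scalar`; the Python names below (`update_with_tensor`, `update_with_scalar`) and the three-dtype lattice are simplified assumptions for illustration, not PyTorch's actual promotion rules.

```python
from dataclasses import dataclass

# Simplified dtype lattice (assumption): higher rank wins in promotion.
RANK = {"bool": 0, "int64": 1, "float32": 2}

def higher(a, b):
    return a if RANK[a] >= RANK[b] else b

@dataclass
class ResultTypeState:
    dim_result: str = "bool"      # strongest dtype seen among tensor operands
    wrapped_result: str = "bool"  # strongest dtype seen among scalar operands

def update_with_tensor(dtype, state):
    # analogue of update_result_type_state(const Tensor&, ...)
    state.dim_result = higher(state.dim_result, dtype)
    return state

def update_with_scalar(value, state):
    # analogue of the new update_result_type_state(const Scalar&, ...):
    # plain Python numbers stand in for at::Scalar here
    if isinstance(value, bool):
        dtype = "bool"
    elif isinstance(value, int):
        dtype = "int64"
    else:
        dtype = "float32"
    state.wrapped_result = higher(state.wrapped_result, dtype)
    return state

def result_type(state):
    # With only three ranks, the result is simply the higher of the two;
    # a float scalar can therefore promote an integer tensor.
    return higher(state.dim_result, state.wrapped_result)

state = ResultTypeState()
state = update_with_tensor("int64", state)   # e.g. the clamp input tensor
state = update_with_scalar(2.5, state)       # e.g. a float min bound
print(result_type(state))                    # float32
```

The point of the `Scalar` overload is exactly this last step: scalar `min`/`max` bounds can now feed into the same promotion state that tensor operands use.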
Nice addition
Cool! Stamped!
@pytorchbot merge this
Hey @ngimel.
Should this be marked as BC-breaking?
@albanD good point
Summary: Fixes #76630. When clamp(Tensor, Tensor) is structured, big parts of this PR won't be needed, but for now let's fix type promotion to make behavior more regular.
Pull Request resolved: #77035
Approved by: https://github.com/mruberry
Test Plan: contbuild & OSS CI, see https://hud.pytorch.org/commit/pytorch/pytorch/362525724bdba375defd6405cfe1b46a6ea222d3
Reviewed By: seemethere, osalpekar
Differential Revision: D36250567
Pulled By: malfet
fbshipit-source-id: 43571838f8e2ddb788b6a3230ec5271718450500
Fixes #76630
When clamp(Tensor, Tensor) is structured, big parts of this PR won't be needed, but for now let's fix type promotion to make behavior more regular.
Some special-casing is done in hardtanh to preserve legacy behavior of not promoting to boundaries.
Also, relu is explicitly disabled on bool inputs to avoid accidental promotion to integer type.
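The three behaviors described above can be sketched in plain Python. This is an illustrative model, not PyTorch's implementation: the helper names (`clamp_result_dtype`, `relu_check`, `hardtanh_result_dtype`) are hypothetical, and the promotion lattice is a simplified assumption.

```python
# Simplified promotion lattice (assumption).
RANK = {"bool": 0, "int64": 1, "float32": 2, "float64": 3}

def promote(*dtypes):
    """Highest-ranked dtype among the operands (simplified result_type)."""
    return max(dtypes, key=RANK.__getitem__)

def clamp_result_dtype(input_dtype, min_dtype=None, max_dtype=None):
    # After this PR: min/max participate in promotion, matching minimum/maximum.
    bounds = [d for d in (min_dtype, max_dtype) if d is not None]
    return promote(input_dtype, *bounds)

def relu_check(input_dtype):
    # relu is explicitly disabled on bool inputs.
    if input_dtype == "bool":
        raise TypeError("relu is not supported on bool tensors")
    return input_dtype

def hardtanh_result_dtype(input_dtype, min_dtype, max_dtype):
    # Legacy behavior preserved: boundaries do not promote the result.
    return input_dtype

print(clamp_result_dtype("int64", "float32", "float32"))     # float32
print(hardtanh_result_dtype("int64", "float32", "float32"))  # int64
```

Note the contrast in the last two lines: with the same operand dtypes, clamp now promotes while hardtanh deliberately does not.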
BC-breaking note:
Previously, `min` and `max` arguments in clamp did not participate in type promotion, which made it inconsistent with `minimum` and `maximum` operations. This PR (and PRs porting clamp_min/clamp_max to structured) makes `min` and `max` arguments participate in type promotion.
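A minimal before/after model of the BC break in plain Python (the dtype names and rank table are simplified assumptions; real code would compare dtypes with torch.result_type):

```python
def clamp_dtype_before(input_dtype, *bound_dtypes):
    # Pre-#77035: min/max were ignored for promotion; output kept the
    # input's dtype regardless of the bounds' dtypes.
    return input_dtype

def clamp_dtype_after(input_dtype, *bound_dtypes):
    # Post-#77035: output dtype is the promoted common dtype (simplified rule).
    rank = {"bool": 0, "int64": 1, "float32": 2, "float64": 3}
    return max((input_dtype, *bound_dtypes), key=rank.__getitem__)

# The BC break: the same call now yields a different result dtype.
print(clamp_dtype_before("int64", "float32"))  # int64
print(clamp_dtype_after("int64", "float32"))   # float32

# To keep the old dtype under the new rule, cast the bounds to the input's
# dtype up front (with real tensors: convert min/max, or cast the result).
print(clamp_dtype_after("int64", "int64"))     # int64
```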