Tensor Dialect canonicalizer `FoldTensorCastProducerOp` can produce invalid IR #91265
…ith `bufferization.materialize_in_destination`

Attempts to address a bug pointed out in llvm/llvm-project#91265 by relaxing the requirement that source/dest shapes match in the `bufferization.materialize_in_destination` operation. The relaxation allows differences in static vs. dynamic dims but still rejects cases where the shapes are statically known to be different.
Attempts to address a bug pointed out in llvm#91265 by moving the `FoldTensorCastProducerOp` canonicalizer definition upward into the MLIRDialectUtils library. Since MLIRDialectUtils can't depend on any dialect, the canonicalizer had to change slightly, and a templated version is introduced. Then we need to add this canonicalization routine where it was used before, except for places where it is incorrect, as pointed out in the bug. Based on a cursory inspection of the TableGen definitions, only `bufferization.materialize_in_destination` should *not* have the canonicalizer, but existing tests passed when the canonicalizer was only added for `tensor.pack|unpack|extract_slice` and the LinalgOp interface. I went ahead and added tests where they were missing.
…ze_in_destination` op

This commit relaxes the verifier of `bufferization.materialize_in_destination` such that mixed static/dynamic dimensions are allowed for the source and destination operands. E.g., `tensor<5xf32>` and `tensor<?xf32>` are now compatible, but it is assumed that the dynamic dimension is `5` at runtime. This commit fixes #91265.
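As an illustration of the relaxed verifier (the value names here are made up; the types are the ones named in the commit message), IR of this shape now verifies:

```mlir
// Mixed static/dynamic shapes are accepted; the dynamic dimension of
// %dest is assumed to be 5 at runtime.
%r = bufferization.materialize_in_destination %src in %dest
    : (tensor<5xf32>, tensor<?xf32>) -> tensor<?xf32>
```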
#92681 fixes the issue.
This looks more like a workaround than a fix to me: a canonicalizer shouldn't change the type of an SSA value without more information.
I agree, it's likely still broken for other ops. I tried to make this pattern opt-in for each op that supports it; i.e., only ops that support the pattern should add it in their … This requires a few more cleanups first, so I didn't get to it yet. E.g., …
@matthias-springer @joker-eph I opened PR #91382 two weeks ago to do just this. That PR makes the pattern opt-in.
Do you mean #91382? I think there's a simpler solution (without that many templates): add an additional … Then have a public function that populates the pattern for a given op name or interface type ID. The "populate function" can then be called from the …
If you have time, I'd recommend two PRs: one that makes the canonicalizer of …
Oops, yeah thanks.
Ok, should be able to work on it on Friday.
Reproducer:

`FoldTensorCastProducerOp`, defined in `lib/Dialect/Tensor/IR/TensorOps.cpp`, assumes that any operand of an op that implements `DestinationStyleOpInterface` can absorb a `tensor.cast` operation that is erasing information from the type. However, that assumption is not encoded anywhere in the spec of `DestinationStyleOpInterface`. For example, `bufferization.materialize_in_destination` requires that the shapes of its DPS input and DPS output operands match, and therefore the canonicalizer will produce IR that fails verification:
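The original reproducer IR isn't preserved in this capture; a minimal sketch of the failure mode (value names and exact types are illustrative, not taken from the issue) might look like:

```mlir
// %src : tensor<5xf32>, %dest : tensor<?xf32>
// The cast erases the static size, so the op's source and dest
// operand types match and the IR verifies:
%cast = tensor.cast %src : tensor<5xf32> to tensor<?xf32>
%r = bufferization.materialize_in_destination %cast in %dest
    : (tensor<?xf32>, tensor<?xf32>) -> tensor<?xf32>

// FoldTensorCastProducerOp folds the cast into the consumer, leaving
// mismatched source/dest shapes that the (pre-#92681) verifier
// rejected:
%r = bufferization.materialize_in_destination %src in %dest
    : (tensor<5xf32>, tensor<?xf32>) -> tensor<?xf32>
```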