How can you generate a Convolution Operator with UINT8/INT8 output tensor? #4293
Unanswered
markcrowley1 asked this question in Q&A
Replies: 0 comments
Hi
I need to generate an ONNX model containing a single convolution operation with INT8 precision for the input, weight, and output tensors.
I can't do this simply by using the ONNX Conv or ConvInteger operators: Conv only supports float-precision tensors, and ConvInteger produces an INT32 output tensor.
Is there a simple way of achieving this without having to quantize the model? If quantizing the model is the only option, please let me know the simplest way of doing this.
Thanks