Replies: 7 comments
-
There is a solution to this problem that keeps ONNX free to make opset changes without placing a burden on all external converters and runtimes: good coverage in the ONNX version converter. This is my issue about the problem: #2414
-
@onnx/sig-operators: for comments. @mathisdon: Can you explain what you mean by this?
-
In many cases the new version of the operator is not really changing the logic of the operator much (or at all). In those cases, there doesn't have to be much additional work in ONNX tooling. In other cases we are indeed changing the specification to extend its capabilities. We try not to make unnecessary changes, but changes are unavoidable in some cases.
-
I meant that, for authors of backends and converters, the different versions of an operator cannot just be treated as "updates" that replace the old version. Instead, all old versions must be supported, because there may be many models out in the wild that include them. So, when authoring a backend or converter, the developer must implement, for example, all 5 versions of the Clip operator, all 6 versions of Dropout, all 6 versions of Gemm, and so on. The rate of new opsets appears to be accelerating. I hope onnx developers are not excitedly creating an all-new opset 14 at this time. If onnx is intended to be a common language that serves as a standard IR for converting between deep learning frameworks, rapidly proliferating new versions of the "standard" operators severely undermines that mission.
-
I totally agree with you that ONNX should NOT update the "standard" operators that frequently. Ideally, "standard" ops MUST be stable. However, new requirements (user voices) should also be heard :). Any help in controlling the bar of this standard is appreciated. Looking back, most version bumps for an existing standard operator came from either adding support for more types or moving a static attribute to a dynamic input. Such changes will become less and less common, as most of them should be covered already. Looking ahead, new "operator" requirements may only be considered if they meet the criteria the Operator SIG has defined in detailed documents for adding or updating an operator in ONNX.
-
#2925 is a change that helps reduce opset bumps in at least one case. Please review.
-
I find it really tricky to set the opset value. The following simple code (run in Jupyter) works for opset=7 but not for higher values. Does anybody know why?

```python
import tensorflow as tf
import onnx_graphsurgeon as gs
import onnx
import numpy as np
from tensorflow.keras.layers import Input, Conv2D, Dense, Flatten
from tensorflow.keras import Model

saved_model_path = '/usr/tmp/model1'

# Build a small Keras model and export it as a SavedModel.
input_shape = (30, 30, 3)
inputs = Input(shape=input_shape, name='input')
x = Conv2D(2, 3, activation='relu')(inputs)
out = Dense(400, activation='relu', name='output')(x)
model = Model(inputs=inputs, outputs=[out])
tf.saved_model.save(model, saved_model_path)  # was `model1`, which is undefined

!python3 -m tf2onnx.convert --saved-model {saved_model_path} --opset 7 --output {saved_model_path+'.onnx'}
!/usr/src/tensorrt/bin/trtexec --onnx={saved_model_path+'.onnx'} --saveEngine=model.trt
```

For higher opset values, it throws an error:

```
ERROR: ModelImporter.cpp:296 In function importModel:
```

It is very hard to understand what this error message means.
-
My opinion:
There is a cost to continually introducing new versions of existing operators: people writing runtimes and converters must then support all versions of your operators, not just the new ones. The entries linked from your operators page (https://github.com/onnx/onnx/blob/master/docs/Operators.md) are not really versions of the same operator; they are additional operators that must all be supported. By continually creating more and more versions of operators, you are making it harder and harder to support ONNX, and you risk losing users and partners.