Hi! I'm pretty new to ONNX and need help converting a YOLOv7 model to ONNX and running it on Android. I have an app that runs ONNX models successfully, but I'm struggling with YOLOv7.

At this point I have a YOLOv7 model with custom weights, and I can confirm in Python that it works after conversion to ONNX, and even after quantization. I'm using the model from https://github.com/WongKinYiu/yolov7 and exporting with opset=12.
The non-quantized model yields:

```
2022-09-30 16:20:20.191 19393-19393/? E/ModelBuilder: Invalid Operation: NN_RET_CHECK failed (packages/modules/NeuralNetworks/common/types/operations/src/Activation.cpp:46): getNumberOfDimensions(input) <= 4u (getNumberOfDimensions(input) = 5, 4u = 4)
2022-09-30 16:20:20.200 19393-19393/? E/libc++abi: terminating with uncaught exception of type Ort::Exception: Not satisfied: ret == ANEURALNETWORKS_NO_ERROR
model_builder.cc:499 AddOperationResultCode: ANEURALNETWORKS_BAD_DATA, op = 14
2022-09-30 16:20:20.200 19393-19393/? A/libc: Fatal signal 6 (SIGABRT), code -1 (SI_QUEUE) in tid 19393 (lay.onnxruntime), pid 19393 (lay.onnxruntime)
```
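For what it's worth, the NNAPI check that fails here points at YOLOv7's Detect head: it reshapes each prediction map into a rank-5 tensor of shape (batch, anchors, grid_h, grid_w, outputs) before applying sigmoid, while NNAPI activation ops only accept tensors of rank ≤ 4. A minimal sketch of that shape arithmetic (the concrete numbers below are illustrative assumptions, not read from any particular model):

```python
# Illustrative only: why YOLOv7's Detect head produces a rank-5 tensor,
# which NNAPI activations (sigmoid etc.) reject as having > 4 dimensions.

def detect_head_shape(batch, num_anchors, grid_h, grid_w, num_classes):
    """Shape of a YOLOv7 per-level prediction tensor after the
    view/permute in the Detect module: (bs, na, ny, nx, no)."""
    num_outputs = num_classes + 5  # x, y, w, h, objectness + class scores
    return (batch, num_anchors, grid_h, grid_w, num_outputs)

NNAPI_MAX_RANK = 4  # from the log: getNumberOfDimensions(input) <= 4u

shape = detect_head_shape(batch=1, num_anchors=3, grid_h=80, grid_w=80,
                          num_classes=80)
print(shape, "rank:", len(shape))                         # rank 5
print("NNAPI-compatible:", len(shape) <= NNAPI_MAX_RANK)  # False
```

A common workaround (an assumption on my part, not verified against your setup) is to export with the grid decoding folded in (e.g. the repo's `--grid` / `--end2end` export options) or to let onnxruntime partition the Detect subgraph onto the CPU execution provider instead of NNAPI.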
The quantized model yields:

```
2022-09-30 16:26:17.052 20402-20402/? E/libc++abi: terminating with uncaught exception of type Ort::Exception: Load model from /data/local/tmp/best_qdq.onnx failed:/Users/rohithkvsp/Desktop/MS_HACKATON/onnxruntime/onnxruntime/core/graph/model_load_utils.h:57 void onnxruntime::model_load_utils::ValidateOpsetForDomain(const std::unordered_map<std::string, int> &, const logging::Logger &, bool, const std::string &, int) ONNX Runtime only *guarantees* support for models stamped with official released onnx opset versions. Opset 3 is under development and support for this is limited. The operator schemas and or other functionality may change before next ONNX release and in this case ONNX Runtime will not guarantee backward compatibility. Current official support for domain ai.onnx.ml is till opset 2.
2022-09-30 16:26:17.052 20402-20402/? A/libc: Fatal signal 6 (SIGABRT), code -1 (SI_QUEUE) in tid 20402 (lay.onnxruntime), pid 20402 (lay.onnxruntime)
```
The latter is especially interesting because I'm exporting with opset=12, not opset=3. When I change the opset to 13, the error is the same except it says that 13 is not supported while 12 is.
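One reading of that second error (an interpretation, worth verifying against your model): the "Opset 3" is not the default `ai.onnx` opset you exported with, but the opset of the `ai.onnx.ml` domain, which the quantization tooling can stamp into the model's opset imports. ONNX Runtime validates each domain independently, roughly like this simplified sketch (the domain names and caps mirror the error message; the check is a loose model of `ValidateOpsetForDomain`, not its real code):

```python
# Simplified model of ONNX Runtime's per-domain opset validation.
# An ONNX model carries one opset import per domain; each is checked
# separately, so a valid ai.onnx opset can coexist with a rejected
# ai.onnx.ml opset.

SUPPORTED_OPSETS = {
    "ai.onnx": 12,      # default domain: matches the export target
    "ai.onnx.ml": 2,    # per the error: support "is till opset 2"
}

def validate_opset_imports(opset_imports):
    """Raise if any (domain, version) pair exceeds runtime support."""
    for domain, version in opset_imports.items():
        supported = SUPPORTED_OPSETS.get(domain, 0)
        if version > supported:
            raise ValueError(
                f"Opset {version} for domain {domain} exceeds current "
                f"official support (opset {supported})."
            )

validate_opset_imports({"ai.onnx": 12})  # passes: default domain is fine
try:
    # Hypothetical: a quantized model also declaring ai.onnx.ml opset 3
    validate_opset_imports({"ai.onnx": 12, "ai.onnx.ml": 3})
except ValueError as e:
    print(e)
```

If that matches your case, printing `onnx.load(path).opset_import` (assuming the `onnx` package) should reveal a stray `ai.onnx.ml` entry in the quantized model even though the default-domain opset is the 12 you asked for.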