Building an ONNX model on an external inference engine #5436
pratkpranav asked this question in Q&A:

Hi everyone! I was going through the documentation, which suggests that we can load a binary external machine-learning engine that ONNX does not support and then use an ONNX model with it for inference, as with ONNX Runtime. Is that the case, or does the model need to come from a framework that ONNX supports (such as PyTorch or TensorFlow)? If so, do the inputs and outputs still need to be tensors in this case, and could you point me to an example where such a wrapper is used to extend a machine-learning library for inference in ONNX?
Replies: 1 comment

onnx implements a Python runtime, ReferenceEvaluator, and onnxruntime implements a C++ runtime. Both can be extended with implementations of custom operators. onnxruntime-extensions leverages the onnxruntime custom-ops API to extend onnxruntime with many text operators: https://github.com/microsoft/onnxruntime-extensions. You can also find more examples here: https://github.com/xadupre/onnx-extended/tree/main/onnx_extended/ortops/tutorial.
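For the Python side, a custom operator can be passed to ReferenceEvaluator through its new_ops argument. Below is a minimal sketch of that pattern, following the onnx reference-evaluator documentation; the operator name MulByTwo and the domain custom.domain are hypothetical, chosen only for illustration:

```python
import numpy as np
from onnx import TensorProto, helper
from onnx.reference import ReferenceEvaluator
from onnx.reference.op_run import OpRun


class MulByTwo(OpRun):
    # The class name doubles as the operator type; op_domain selects
    # the custom domain that the model's node must reference.
    op_domain = "custom.domain"  # hypothetical domain for illustration

    def _run(self, x):
        # Inputs arrive as numpy arrays; outputs are returned as a tuple.
        return (x * 2,)


# One-node model that calls the custom operator.
node = helper.make_node("MulByTwo", ["X"], ["Y"], domain="custom.domain")
graph = helper.make_graph(
    [node], "g",
    [helper.make_tensor_value_info("X", TensorProto.FLOAT, [None])],
    [helper.make_tensor_value_info("Y", TensorProto.FLOAT, [None])],
)
model = helper.make_model(
    graph,
    opset_imports=[helper.make_opsetid("", 18),
                   helper.make_opsetid("custom.domain", 1)],
)

# Register the custom implementation when creating the evaluator.
sess = ReferenceEvaluator(model, new_ops=[MulByTwo])
print(sess.run(None, {"X": np.array([1.0, 2.0], dtype=np.float32)}))
```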
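For the onnxruntime side, onnxruntime-extensions also exposes a Python hook (onnx_op) for prototyping a custom operator before committing to a C++ kernel. This is a hedged sketch assuming onnxruntime and onnxruntime-extensions are installed; the NegPos operator and the model built around it are illustrative, not from this thread:

```python
import numpy as np
import onnxruntime as ort
from onnx import TensorProto, helper
from onnxruntime_extensions import onnx_op, PyCustomOpDef, get_library_path


# Register a Python-implemented operator under the extensions domain.
@onnx_op(op_type="NegPos",  # hypothetical op name for illustration
         inputs=[PyCustomOpDef.dt_float],
         outputs=[PyCustomOpDef.dt_float, PyCustomOpDef.dt_float])
def neg_pos(x):
    # Split a tensor into its negative and positive parts.
    return np.minimum(x, 0.0), np.maximum(x, 0.0)


# One-node model referencing the custom op in the ai.onnx.contrib domain,
# which is the domain onnxruntime-extensions registers its operators under.
node = helper.make_node("NegPos", ["X"], ["N", "P"], domain="ai.onnx.contrib")
graph = helper.make_graph(
    [node], "g",
    [helper.make_tensor_value_info("X", TensorProto.FLOAT, [None])],
    [helper.make_tensor_value_info("N", TensorProto.FLOAT, [None]),
     helper.make_tensor_value_info("P", TensorProto.FLOAT, [None])],
)
model = helper.make_model(
    graph,
    opset_imports=[helper.make_opsetid("", 18),
                   helper.make_opsetid("ai.onnx.contrib", 1)],
)

# Load the extensions shared library so onnxruntime can resolve the op.
so = ort.SessionOptions()
so.register_custom_ops_library(get_library_path())
sess = ort.InferenceSession(model.SerializeToString(), so,
                            providers=["CPUExecutionProvider"])
print(sess.run(None, {"X": np.array([-1.0, 2.0], dtype=np.float32)}))
```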