Replies: 1 comment
-
Your model concatenates both inputs along the first dimension because each input is 1-D and you set axis=0, so the result is read as several samples of one feature. If you need two features, you should either change the input shapes or reshape every input to (-1, 1) and then concatenate them on the second axis (axis=1).
-
I have an XGBoost model converted to a TreeEnsembleRegressor.
The model accepts two input features (f0, f1), which are produced by a preprocessing step.
Since the TreeEnsembleRegressor node only accepts one input tensor, I tried using Concat to combine the two features into a single tensor before passing it to the TreeEnsembleRegressor node, but it resulted in an error message:
"RuntimeException: [ONNXRuntimeError] : 6 : RUNTIME_EXCEPTION : Non-zero status code returned while running TreeEnsembleRegressor node. Name:'TreeEnsembleRegressor' Status Message: /onnxruntime_src/onnxruntime/core/providers/cpu/ml/tree_ensemble_common.h:418 void onnxruntime::ml::detail::TreeEnsembleCommon<InputType, ThresholdType, OutputType>::ComputeAgg(onnxruntime::concurrency::ThreadPool*, const onnxruntime::Tensor*, onnxruntime::Tensor*, onnxruntime::Tensor*, const AGG&) const [with AGG = onnxruntime::ml::detail::TreeAggregatorSum<float, float, float>; InputType = float; ThresholdType = float; OutputType = float] One path in the graph requests feature 1 but input tensor has 1 features."
I also tried reshaping the TreeEnsembleRegressor inputs with the Expand or Reshape ops, but got the same result.
Inference succeeds if I run the Concat model on its own and manually pass its output to the TreeEnsembleRegressor model.
How can I overcome this issue?
code:
import numpy as np
import onnx
import onnxruntime as ort

# Input & output tensors
f0 = onnx.helper.make_tensor_value_info('f0', onnx.TensorProto.FLOAT, [1])
f1 = onnx.helper.make_tensor_value_info('f1', onnx.TensorProto.FLOAT, [1])
concat_out = onnx.helper.make_tensor_value_info('concat_out', onnx.TensorProto.FLOAT, [2])
concat_node = onnx.helper.make_node('Concat', inputs=['f0', 'f1'], outputs=['concat_out'], axis=0, name='concat_node')

# Concat model
concat_graph = onnx.helper.make_graph(nodes=[concat_node], name='concat_m', inputs=[f0, f1], outputs=[concat_out])
concat_model = onnx.helper.make_model(concat_graph, opset_imports=[onnx.helper.make_opsetid('', 18)])
onnx.save(concat_model, 'concat_model.onnx')
ses = ort.InferenceSession('/content/concat_model.onnx')
res = ses.run(None, {'f0': np.array([111], dtype=np.float32), 'f1': np.array([12], dtype=np.float32)})
# res[0] returns [array([111., 12.], dtype=float32)] as expected

# Score model (XGBoost)
score_model = onnx.load('/content/diamonds_onnx_model.onnx')
concat_model.ir_version = 8

# Merged model
comp_model = onnx.compose.merge_models(concat_model, score_model, io_map=[('concat_out', 'feature_input')])
inf_comp_model = onnx.shape_inference.infer_shapes(comp_model)
onnx.save(inf_comp_model, 'inf_comp_model.onnx')
comp_ses = ort.InferenceSession('/content/inf_comp_model.onnx')
res = comp_ses.run(None, {'f0': np.array([111], dtype=np.float32), 'f1': np.array([12], dtype=np.float32)})
full error:
RuntimeException Traceback (most recent call last)
in <cell line: 18>()
16
17 comp_ses = ort.InferenceSession('/content/inf_comp_model.onnx')
---> 18 res = comp_ses.run(None, {'f0': np.array([111], dtype=np.float32), 'f1': np.array([12], dtype=np.float32)})
/usr/local/lib/python3.10/dist-packages/onnxruntime/capi/onnxruntime_inference_collection.py in run(self, output_names, input_feed, run_options)
215 output_names = [output.name for output in self._outputs_meta]
216 try:
--> 217 return self._sess.run(output_names, input_feed, run_options)
218 except C.EPFail as err:
219 if self._enable_fallback:
RuntimeException: [ONNXRuntimeError] : 6 : RUNTIME_EXCEPTION : Non-zero status code returned while running TreeEnsembleRegressor node. Name:'TreeEnsembleRegressor' Status Message: /onnxruntime_src/onnxruntime/core/providers/cpu/ml/tree_ensemble_common.h:418 void onnxruntime::ml::detail::TreeEnsembleCommon<InputType, ThresholdType, OutputType>::ComputeAgg(onnxruntime::concurrency::ThreadPool*, const onnxruntime::Tensor*, onnxruntime::Tensor*, onnxruntime::Tensor*, const AGG&) const [with AGG = onnxruntime::ml::detail::TreeAggregatorSum<float, float, float>; InputType = float; ThresholdType = float; OutputType = float] One path in the graph requests feature 1 but input tensor has 1 features.
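The shape mismatch behind this error can be reproduced with plain NumPy (a sketch, independent of the actual model): concatenating two length-1 vectors on axis=0 yields a 1-D tensor of shape (2,), which the tree ensemble reads as two rows of one feature each, whereas reshaping each input to a column first and concatenating on axis=1 yields the (1, 2) row of two features the regressor expects.

```python
import numpy as np

f0 = np.array([111.], dtype=np.float32)
f1 = np.array([12.], dtype=np.float32)

# What the merged model builds: a 1-D tensor of shape (2,).
# The tree ensemble interprets this as 2 samples with 1 feature each.
wrong = np.concatenate([f0, f1], axis=0)

# What the regressor needs: a 2-D tensor of shape (1, 2),
# i.e. 1 sample with 2 features.
right = np.concatenate([f0.reshape(-1, 1), f1.reshape(-1, 1)], axis=1)

print(wrong.shape)  # (2,)
print(right.shape)  # (1, 2)
```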