Model signature:

```python
model(input_ids_1, input_ids_2, valid_length_1, valid_length_2, x=None, y=False, z=None, dict=None)
```

Export code:

```python
optional_input = {"dict": dict_values}
dummy_input = (input_ids_1, input_ids_2, valid_length_1, valid_length_2, optional_input)
torch.onnx.export(
    model,
    dummy_input,
    model_onnx_path,
    opset_version=14,
    input_names=["input_ids_1", "input_ids_2", "valid_length_1", "valid_length_2", "dict"],
    output_names=["output"],
)
```
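For context, `torch.onnx.export` treats a dict in the final position of the `args` tuple as keyword arguments to the model's `forward`, not as a positional input. The following is a pure-Python sketch of that unpacking rule (no torch involved; `fake_export` and the toy `model` are illustrative stand-ins, not library code):

```python
# Sketch of how torch.onnx.export interprets `args`: when the last
# element is a dict, it is unpacked as **kwargs for forward(), so
# {"dict": dict_values} binds dict_values to the `dict` parameter.
def fake_export(model, args):
    if args and isinstance(args[-1], dict):
        return model(*args[:-1], **args[-1])  # last dict -> keyword args
    return model(*args)

def model(a, b, dict=None):  # mirrors the `dict=None` parameter above
    return (a, b, dict)

dict_values = {"xx": 1, "yy": 2, "zz": 3}
out = fake_export(model, (10, 20, {"dict": dict_values}))
# out == (10, 20, {"xx": 1, "yy": 2, "zz": 3})
```

So the export call itself does bind `dict_values` to the `dict` parameter as intended; the problem described below happens one level deeper, when the dict *value* is traced.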
Issue: when running inference on the exported ONNX model, it expects more than 5 inputs; every key-value pair in the dictionary is interpreted as a separate extra input.
For example, if

```python
dict = {
    "xx": ***,
    "yy": ***,
    "zz": ***,
}
```

the exported ONNX model expects the inputs `("input_ids_1", "input_ids_2", "valid_length_1", "valid_length_2", "xx", "yy", "zz")` instead of `("input_ids_1", "input_ids_2", "valid_length_1", "valid_length_2", "dict")`.
I also compared results between PyTorch and ONNX and noticed something odd: if I pass the default dictionary values (the ones I used to export the ONNX model), the PyTorch and ONNX inference results match. If I pass dictionary values different from the export-time defaults, the results differ. Is this caused by a limitation in the PyTorch-to-ONNX conversion, or did I do something wrong during export? Thanks in advance for the help.
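The symptom (results only match with the export-time dictionary values) is consistent with tracing flattening dict-typed inputs and baking their values into the graph as constants. A commonly suggested workaround is to fix the key order and expose each dict value as its own positional input, rebuilding the dict inside `forward`. Below is a pure-Python sketch of that wrapper pattern; `DictWrapper`, `DICT_KEYS`, and the toy `model` are my illustrative names, and in practice the wrapper would subclass `torch.nn.Module` and be the object passed to `torch.onnx.export`:

```python
# Workaround sketch: give the exporter only tensor-like positional
# inputs, and reconstruct the dict inside the wrapper's forward.
DICT_KEYS = ("xx", "yy", "zz")  # key order fixed at export time (assumption)

class DictWrapper:  # would be a torch.nn.Module in real code
    def __init__(self, model):
        self.model = model

    def __call__(self, input_ids_1, input_ids_2, valid_length_1,
                 valid_length_2, *dict_vals):
        # Rebuild the dict from the flattened positional values.
        rebuilt = dict(zip(DICT_KEYS, dict_vals))
        return self.model(input_ids_1, input_ids_2,
                          valid_length_1, valid_length_2, dict=rebuilt)

def model(i1, i2, vl1, vl2, x=None, y=False, z=None, dict=None):
    return dict  # echo the dict to show it was rebuilt correctly

wrapped = DictWrapper(model)
print(wrapped(1, 2, 3, 4, 11, 22, 33))  # {'xx': 11, 'yy': 22, 'zz': 33}
```

With this approach the exported graph's inputs (`"xx"`, `"yy"`, `"zz"`, etc.) are real runtime inputs rather than baked-in constants, so you feed them explicitly at inference time instead of passing a dict.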
Versions: 14