Is there a way to swap a set of parameters inside of an .onnx / .ort graph with an identically shaped set of parameters? #6090
Comments
If you use the external-data format, you can replace the data file holding the external tensors with new values as you wish. Alternatively, you can make the weights input parameters of the model and then vary them for each invocation. However, this will incur a performance penalty (potentially a huge one) if ort has to do things like move the weights to GPU or transpose them on every call, work that is otherwise done once at session creation when the weights are not inputs.
@gramalingam After reading the docs and tinkering with some of those functions, I am still not sure I quite understand the purpose of the external-data format, or whether it is compatible with the onnxruntime API (as opposed to onnx). What is the purpose of the format, and could you provide pseudocode to show how to load a subset of params with onnxruntime?
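A rough sketch of the second option, i.e. turning a baked-in weight into a graph input so it can be fed with fresh values on every call. The initializer name "fc.weight", the file names, and the feed dictionary are all hypothetical, and this assumes the onnx and onnxruntime Python packages:

```python
import numpy as np
import onnx
import onnxruntime as ort
from onnx import helper

model = onnx.load("model.onnx")
graph = model.graph

name = "fc.weight"  # hypothetical: the weight to be swapped at run time
init = next(t for t in graph.initializer if t.name == name)
shape = list(init.dims)

# Drop the baked-in weight and expose a graph input with the same name, type, and shape.
# Assumes the initializer is not already listed under graph.input.
graph.initializer.remove(init)
graph.input.append(helper.make_tensor_value_info(name, init.data_type, shape))
onnx.save(model, "model_weight_as_input.onnx")

# At inference time the weight is just another feed, so it can differ on every call.
sess = ort.InferenceSession("model_weight_as_input.onnx")
new_weight = np.random.rand(*shape).astype(np.float32)
other_feeds = {}  # the model's ordinary inputs would go here
outputs = sess.run(None, {name: new_weight, **other_feeds})
```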
Yes, onnxruntime also supports the external-data format, which is part of the ONNX standard. The external-data format serves a couple of purposes. First, the protobuf format has a 2GB limit on the size of a protobuf object (in terms of the size of the serialized representation). Models that exceed this size can use the external-data format to get around this limitation. Second, even if the model size is less than 2GB, weights end up dominating the size of the model representation. Hence, it is convenient and efficient to load these weights only if required. It helps analysis/optimization tools that care about the graph, and not so much about the weights.
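For illustration, a hedged sketch of writing a model in the external-data format with the onnx Python API (the file names are hypothetical). Once the weights live in a separate file, a replacement file with identically shaped and typed tensors can be swapped in without touching the graph:

```python
import onnx

model = onnx.load("model.onnx")
onnx.save_model(
    model,
    "model_external.onnx",
    save_as_external_data=True,
    all_tensors_to_one_file=True,  # keep every tensor in a single side file
    location="weights.bin",        # stored next to model_external.onnx
    size_threshold=0,              # externalize all tensors, not just large ones
)
# onnxruntime resolves "weights.bin" relative to the model path at session creation,
# so pointing it at a different, identically laid out file swaps the parameters.
```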
Is there a way to specify which parameters in the graph to load weights into? Or does this capability not exist yet?
In my application I am adding an initializer with the
Because I am not supplying a model path (I'm initializing from an array), does this imply the two methods are not compatible?
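For concreteness, a hedged sketch of my reading of this setup: the session is created from in-memory model bytes rather than a path, and a named initializer is overridden with a user-supplied array via SessionOptions.add_initializer. The initializer name and shape below are hypothetical:

```python
import numpy as np
import onnxruntime as ort

with open("model.onnx", "rb") as f:
    model_bytes = f.read()

# Replacement weight; it must match the original initializer's shape and dtype.
new_weight = np.random.rand(128, 128).astype(np.float32)
ort_weight = ort.OrtValue.ortvalue_from_numpy(new_weight)

so = ort.SessionOptions()
so.add_initializer("fc.weight", ort_weight)  # hypothetical initializer name

# Keep ort_weight (and new_weight) referenced for the session's lifetime, since the
# underlying buffer may be used directly rather than copied.
sess = ort.InferenceSession(model_bytes, sess_options=so)
```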
Ask a Question
Question
I want to be able to swap params at inference time to facilitate a LoRA deployment.
E.g. in torch, I could do something like the sketch below.
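A minimal sketch of the kind of in-place parameter swap this refers to in PyTorch; the model class, checkpoint file, and parameter names are hypothetical:

```python
import torch
import torch.nn as nn

class MyModel(nn.Module):  # hypothetical stand-in for the real network
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(128, 128)

    def forward(self, x):
        return self.fc(x)

model = MyModel()
lora_weights = torch.load("lora.pt")  # dict of identically shaped replacement tensors

with torch.no_grad():
    for name, param in model.named_parameters():
        if name in lora_weights:
            param.copy_(lora_weights[name])  # overwrite the parameter in place
```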
Notes
I am using an ORT file for inference, if that matters.