❓ [Question] How to solve this warning: Detected this engine is being instantiated in a multi-GPU system with multi-device safe mode disabled. #2813
Comments
Moreover, when the model runs inference through the forward function, these warnings appear:
Hi @demuxin - thanks for the report - we likely need to add a C++ accessor for this setting (TensorRT/core/runtime/runtime.cpp, Line 10 in 07c5b07), similar to the following functions: TensorRT/core/runtime/register_jit_hooks.cpp, Lines 121 to 122 in 07c5b07.
We are aware of the default stream warning and are working on this. From what I've seen, it should not have a substantial effect on inference.
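For the default-stream warning specifically, a common workaround in libtorch is to run inference on a non-default CUDA stream. Below is a minimal sketch, assuming the warning is triggered by executing on the legacy default stream; the stream-pool and guard calls are standard libtorch APIs, but whether this silences the Torch-TensorRT warning in your setup is an assumption:

```cpp
#include <torch/script.h>
#include <c10/cuda/CUDAStream.h>
#include <c10/cuda/CUDAGuard.h>

// Sketch (untested against Torch-TensorRT): run a TorchScript module's
// forward on a side stream instead of the default stream.
torch::jit::IValue forward_on_side_stream(torch::jit::Module& module,
                                          std::vector<torch::jit::IValue> inputs) {
  // Grab a stream from libtorch's per-device stream pool.
  c10::cuda::CUDAStream stream = c10::cuda::getStreamFromPool();
  // Make it the current stream for this thread for the duration of the call.
  c10::cuda::CUDAStreamGuard guard(stream);
  torch::jit::IValue output = module.forward(inputs);
  // Wait for the side stream to finish before handing results back.
  stream.synchronize();
  return output;
}
```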
These may actually already be accessible in C++. Prior to inference, could you try adding the line:
Hi @gs-olive, this does not work; there is a compile error:
Does …
This is a similar compilation error.
I searched the libtorch and Torch-TensorRT header files, and there are no functions related to multi_device_safe_mode.
❓ Question
I used Torch-TensorRT to compile a TorchScript model in C++. When compiling or loading the Torch-TensorRT model, it prints many warnings.
What you have already tried
I found this link useful, but it only covers the Python API.
I checked the source code, but I still haven't figured out how to set MULTI_DEVICE_SAFE_MODE in C++.
What can I do to address this warning?
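Until a public C++ setter is available, one possible workaround is to invoke the TorchScript operator that the Python API appears to use, via the ATen dispatcher. This is a hedged sketch: the operator name `tensorrt::set_multi_device_safe_mode` and its `(bool) -> ()` schema are assumptions inferred from the Python runtime API, not a documented C++ interface, and `findSchemaOrThrow` will throw if the Torch-TensorRT runtime library is not loaded or registers a different name:

```cpp
#include <ATen/core/dispatch/Dispatcher.h>

// Sketch: toggle multi-device safe mode from C++ by calling the custom
// TorchScript op registered by the Torch-TensorRT runtime at load time.
// The op name and schema below are assumptions, not a confirmed public API.
void set_multi_device_safe_mode(bool enabled) {
  auto op = c10::Dispatcher::singleton().findSchemaOrThrow(
      "tensorrt::set_multi_device_safe_mode", /*overload_name=*/"");
  op.typed<void(bool)>().call(enabled);
}
```

If the lookup throws because the schema is not found, check that the Torch-TensorRT runtime shared library (e.g. libtorchtrt_runtime) is linked and loaded before this call runs.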
Environment
How you installed PyTorch (conda, pip, libtorch, source): libtorch