I've been following the process in the excellent "github.com/dusty-nv/jetson-inference/blob/master/docs/pytorch-ssd.md" on my Jetson Orin Nano devkit. I ran the training for detecting an object and it worked well, producing 30 epochs of checkpoints. But when I run "python3 onnx_export.py --model-dir=models", it finds the trained network with the best loss and then fails with "AssertionError: Torch not compiled with CUDA enabled", raised on line 293 of /.local/lib/python3.10/site-packages/torch/cuda/__init__.py. I have torch 2.2.0 installed, which is supposed to support CUDA. Why would I get that error, and how do I fix it? I've searched the internet and can't find any advice that works for fixing it.
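That assertion usually means the installed torch wheel itself is a CPU-only build, regardless of the version number: the generic pip wheel for aarch64 does not include CUDA, and Jetson devices need the NVIDIA-built PyTorch wheel that matches the installed JetPack release. A quick way to check which kind of build you have is to inspect `torch.version.cuda`, which is `None` on CPU-only builds. A minimal diagnostic sketch (the helper function `is_cuda_build` is just for illustration):

```python
def is_cuda_build(cuda_version):
    """Return True if torch.version.cuda indicates a CUDA-enabled build.

    CPU-only wheels report torch.version.cuda as None, which is what
    triggers "Torch not compiled with CUDA enabled" at runtime.
    """
    return cuda_version is not None


try:
    import torch
    print("torch version:", torch.__version__)
    print("CUDA build:   ", is_cuda_build(torch.version.cuda))
    print("CUDA runtime: ", torch.version.cuda)       # None on CPU-only wheels
    print("CUDA usable:  ", torch.cuda.is_available())
except ImportError:
    print("torch is not installed in this environment")
```

If "CUDA build" comes back False, reinstalling from NVIDIA's Jetson PyTorch wheels (rather than plain `pip install torch`) is the usual fix; also check that the wheel in `~/.local` isn't shadowing a CUDA-enabled system install.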