
onnx_export.py says torch not compiled with CUDA, but it should be #1848

Open
whutchi opened this issue May 18, 2024 · 0 comments


whutchi commented May 18, 2024

I've been following the process in the excellent guide at "github.com/dusty-nv/jetson-inference/blob/master/docs/pytorch-ssd.md" on my Jetson Orin Nano devkit. I ran the training for detecting an object and it worked well, producing 30 epochs of checkpoints. When I run "python3 onnx_export.py --model-dir=models", it finds the trained network with the best loss, but then I get "AssertionError: Torch not compiled with CUDA enabled", triggered on line 293 of /.local/lib/python3.10/site-packages/torch/cuda/__init__.py. I have torch 2.2.0 installed, which is supposed to support CUDA. Why do I get that error, and how do I fix it? I've searched the internet and can't find any advice that works.
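As a quick diagnostic (a minimal sketch, not from the original report): the PyTorch version number alone doesn't tell you whether the wheel was built with CUDA. A CPU-only build of torch 2.2.0 reports `torch.version.cuda` as `None` and raises exactly this "Torch not compiled with CUDA enabled" assertion when CUDA is requested. Something like the following can confirm which kind of build is installed:

```python
# Diagnostic sketch: check whether the installed torch wheel includes CUDA support.
# On a CPU-only wheel, torch.version.cuda is None and torch.cuda.is_available()
# returns False -- which is what triggers the AssertionError in torch/cuda/__init__.py.
try:
    import torch
    has_cuda_build = torch.version.cuda is not None
    message = (
        f"torch {torch.__version__}, CUDA build: {torch.version.cuda}, "
        f"CUDA available: {torch.cuda.is_available()}"
    )
except ImportError:
    has_cuda_build = False
    message = "torch is not installed in this environment"

print(message)
```

If `torch.version.cuda` comes back as `None`, the wheel is a CPU-only build (the generic PyPI wheel is CPU-only on Jetson's aarch64 platform), and the fix is to install a CUDA-enabled PyTorch build for the Jetson rather than to change the export script.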
