Support my channel: Buy Me A Coffee

This repository facilitates the creation of Python wheel files (.whl) from the tiny-cuda-nn project to streamline the installation process on Google Colab and Kaggle. It circumvents the roughly 20-minute build that tiny-cuda-nn requires when compiled from source on Google Colab and Kaggle, reducing installation to a few seconds!

(All relevant credits and licenses are attributed to Nvidia. The materials and software licenses from the original tiny-cuda-nn repository are not included in this repository. Please refer to the original project for licensing details.)

The current format for the wheel names includes a release postfix (+arch) that signifies the compute capability of the relevant graphics card (e.g. a compute capability of 8.6 becomes +arch86). For simplicity, you can use the code below on Google Colab for the relevant GPU model, but if you want to run it locally on your own machine you can always identify the compute capability of your graphics card through this page: https://developer.nvidia.com/cuda-gpus

The wheel names also include release postfixes (+torch) and (+cuda) that signify the PyTorch and CUDA versions the wheel was built against.
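If you are unsure which +arch postfix applies to your runtime, a minimal sketch like the following (assuming PyTorch is already available, as it is on Colab and Kaggle) can detect the GPU's compute capability and assemble the matching wheel URL following the naming scheme above. Note that this is only an illustration of the naming convention; a wheel exists only for the architectures actually listed in the release assets.

import torch

# Detect the compute capability of the active GPU, e.g. (7, 5) for a T4.
major, minor = torch.cuda.get_device_capability(0)
arch = f"{major}{minor}"

# Assemble the wheel name following the +arch/+torch/+cuda scheme described
# above (release 1.7.0, torch 2.2.1, CUDA 12.1, Python 3.10).
wheel = f"tinycudann-1.7+arch{arch}+torch221+cuda121-cp310-cp310-linux_x86_64.whl"
url = f"https://github.com/OutofAi/tiny-cuda-nn-wheels/releases/download/1.7.0/{wheel}"
print(url)  # pass this URL to curl/pip as in the examples below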

Google Colab Usage:

For T4 GPU

!curl -L "https://github.com/OutofAi/tiny-cuda-nn-wheels/releases/download/1.7.0/tinycudann-1.7+arch75+torch221+cuda121-cp310-cp310-linux_x86_64.whl" -o tinycudann-1.7+arch75+torch221+cuda121-cp310-cp310-linux_x86_64.whl
!pip install tinycudann-1.7+arch75+torch221+cuda121-cp310-cp310-linux_x86_64.whl --force-reinstall
import tinycudann as tcnn

For V100 GPU

!curl -L "https://github.com/OutofAi/tiny-cuda-nn-wheels/releases/download/1.7.0/tinycudann-1.7+arch70+torch221+cuda121-cp310-cp310-linux_x86_64.whl" -o tinycudann-1.7+arch70+torch221+cuda121-cp310-cp310-linux_x86_64.whl
!pip install tinycudann-1.7+arch70+torch221+cuda121-cp310-cp310-linux_x86_64.whl --force-reinstall
import tinycudann as tcnn

For A100 GPU and L4 GPU

!curl -L "https://github.com/OutofAi/tiny-cuda-nn-wheels/releases/download/1.7.0/tinycudann-1.7+arch89+torch221+cuda121-cp310-cp310-linux_x86_64.whl" -o tinycudann-1.7+arch89+torch221+cuda121-cp310-cp310-linux_x86_64.whl
!pip install tinycudann-1.7+arch89+torch221+cuda121-cp310-cp310-linux_x86_64.whl --force-reinstall
import tinycudann as tcnn

Kaggle Notebook Usage:

For T4 GPU

!curl -L "https://github.com/OutofAi/tiny-cuda-nn-wheels/releases/download/Kaggle-T4/tinycudann-1.7-cp310-cp310-linux_x86_64.whl" -o tinycudann-1.7-cp310-cp310-linux_x86_64.whl
!python -m pip install tinycudann-1.7-cp310-cp310-linux_x86_64.whl --force-reinstall --no-cache-dir
import tinycudann as tcnn

For P100 GPU

!curl -L "https://github.com/OutofAi/tiny-cuda-nn-wheels/releases/download/Kaggle-P100/tinycudann-1.7-cp310-cp310-linux_x86_64.whl" -o tinycudann-1.7-cp310-cp310-linux_x86_64.whl
!python -m pip install tinycudann-1.7-cp310-cp310-linux_x86_64.whl --force-reinstall --no-cache-dir
import tinycudann as tcnn
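
After installing, a quick way to confirm the wheel works on the assigned GPU is to instantiate a small model and run a forward pass. The snippet below is a minimal sketch based on the upstream tiny-cuda-nn PyTorch extension examples; the configuration keys follow that project's documented JSON schema.

import torch
import tinycudann as tcnn

# Minimal hash-grid encoding + fully fused MLP, following the upstream
# tiny-cuda-nn PyTorch extension examples.
model = tcnn.NetworkWithInputEncoding(
    n_input_dims=3,
    n_output_dims=1,
    encoding_config={
        "otype": "HashGrid",
        "n_levels": 16,
        "n_features_per_level": 2,
        "log2_hashmap_size": 19,
        "base_resolution": 16,
        "per_level_scale": 2.0,
    },
    network_config={
        "otype": "FullyFusedMLP",
        "activation": "ReLU",
        "output_activation": "None",
        "n_neurons": 64,
        "n_hidden_layers": 2,
    },
)

x = torch.rand(128, 3, device="cuda")
print(model(x).shape)  # expected: torch.Size([128, 1])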
