Is your feature request related to a problem? Please describe.
I would like to ask for RMM support of CUDA compressed memory, a feature available on the A100 and H100 for both DRAM and the L2 cache that increases effective bandwidth by compressing data.
Describe the solution you'd like
Unfortunately, there is no way to access compressed memory from the standard CUDA Runtime API; users must use the CUDA Driver API to allocate it. See page 30 of the PDF below.
https://developer.download.nvidia.com/video/gputechconf/gtc/2020/presentations/s21819-optimizing-applications-for-nvidia-ampere-gpu-architecture.pdf
Describe alternatives you've considered
CUDA Python exposes the CUDA Driver API, but I cannot figure out how to use it to allocate CUDA Runtime objects such as PyTorch tensors.
Additional context
Because large language model inference is memory-bandwidth bound, compressing data as it flows through DRAM/HBM and the L2 cache is the simplest way to improve throughput. Also, with filtering techniques such as byte shuffling and bit shuffling, the compression efficiency of even simple compression algorithms can be improved substantially.
Given the widespread interest in deploying LLMs in production, I believe this feature would be very helpful.
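The byte-shuffling point is easy to demonstrate on the CPU. The sketch below (plain C++, no compression library; `byte_shuffle`, `byte_unshuffle`, and `count_runs` are illustrative helper names, and run-length counts are used only as a rough proxy for compressibility) splits each float32 into its four byte planes, which groups the slowly varying sign/exponent bytes together:

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>
#include <vector>

// Split N float32 values into 4 contiguous byte planes:
// all byte-0s first, then all byte-1s, byte-2s, byte-3s.
std::vector<uint8_t> byte_shuffle(const std::vector<float>& v) {
    std::vector<uint8_t> raw(v.size() * 4);
    std::memcpy(raw.data(), v.data(), raw.size());
    std::vector<uint8_t> out(raw.size());
    for (size_t i = 0; i < v.size(); ++i)
        for (size_t b = 0; b < 4; ++b)
            out[b * v.size() + i] = raw[i * 4 + b];
    return out;
}

// Inverse of byte_shuffle: reinterleave the planes into float32 values.
std::vector<float> byte_unshuffle(const std::vector<uint8_t>& s, size_t n) {
    std::vector<uint8_t> raw(n * 4);
    for (size_t i = 0; i < n; ++i)
        for (size_t b = 0; b < 4; ++b)
            raw[i * 4 + b] = s[b * n + i];
    std::vector<float> v(n);
    std::memcpy(v.data(), raw.data(), raw.size());
    return v;
}

// Number of runs of identical bytes: fewer runs => more RLE-compressible.
size_t count_runs(const std::vector<uint8_t>& s) {
    size_t runs = s.empty() ? 0 : 1;
    for (size_t i = 1; i < s.size(); ++i)
        if (s[i] != s[i - 1]) ++runs;
    return runs;
}
```

On smoothly varying data (e.g. 0.0f, 1.0f, 2.0f, ...), the shuffled stream has far fewer byte runs than the interleaved original, and the transform round-trips exactly.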
Compressed memory is currently only available by using cuMemCreate directly. RMM doesn't have any allocator implementations that use cuMemCreate, and implementing one would be a significant amount of work.
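For reference, the driver-API path mentioned above looks roughly like the following. This is a hedged sketch (the `CHECK` macro is a local helper, not a CUDA API; it requires a GPU with generic-compression support, e.g. A100/H100, so it is not runnable on arbitrary machines): a physical allocation is created with `compressionType` set, then mapped into a reserved virtual address range.

```cpp
#include <cuda.h>
#include <cstdio>

// Local error-checking helper (not part of CUDA).
#define CHECK(call) do { CUresult r = (call); if (r != CUDA_SUCCESS) { \
    const char* s = nullptr; cuGetErrorString(r, &s); \
    std::printf("CUDA error: %s\n", s ? s : "?"); return 1; } } while (0)

int main() {
    CHECK(cuInit(0));
    CUdevice dev;
    CHECK(cuDeviceGet(&dev, 0));
    CUcontext ctx;
    CHECK(cuCtxCreate(&ctx, 0, dev));

    // Compression is only available on supporting hardware (A100/H100).
    int supported = 0;
    CHECK(cuDeviceGetAttribute(
        &supported, CU_DEVICE_ATTRIBUTE_GENERIC_COMPRESSION_SUPPORTED, dev));
    if (!supported) { std::printf("compression not supported\n"); return 0; }

    // Request a generically compressible physical allocation.
    CUmemAllocationProp prop = {};
    prop.type = CU_MEM_ALLOCATION_TYPE_PINNED;
    prop.location.type = CU_MEM_LOCATION_TYPE_DEVICE;
    prop.location.id = dev;
    prop.allocFlags.compressionType = CU_MEM_ALLOCATION_COMP_GENERIC;

    // Sizes must be a multiple of the allocation granularity.
    size_t granularity = 0;
    CHECK(cuMemGetAllocationGranularity(&granularity, &prop,
                                        CU_MEM_ALLOC_GRANULARITY_MINIMUM));
    size_t size = granularity;

    // Create, reserve a VA range, map, and enable read/write access.
    CUmemGenericAllocationHandle handle;
    CHECK(cuMemCreate(&handle, size, &prop, 0));
    CUdeviceptr ptr;
    CHECK(cuMemAddressReserve(&ptr, size, 0, 0, 0));
    CHECK(cuMemMap(ptr, size, 0, handle, 0));
    CUmemAccessDesc access = {};
    access.location = prop.location;
    access.flags = CU_MEM_ACCESS_FLAGS_PROT_READWRITE;
    CHECK(cuMemSetAccess(ptr, size, &access, 1));

    // ... ptr is now usable as an ordinary device pointer ...

    CHECK(cuMemUnmap(ptr, size));
    CHECK(cuMemAddressFree(ptr, size));
    CHECK(cuMemRelease(handle));
    return 0;
}
```

An RMM allocator built on this path would also need its own virtual-address and handle bookkeeping, which is part of why it is a significant amount of work.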
The cudaMemPoolProps struct passed to cudaMemPoolCreate does not currently have an option to enable compression, but it may be possible to add one. I will explore this option internally.
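To make the gap concrete, this is how an explicit stream-ordered pool is created with today's runtime API (a hedged sketch requiring a CUDA-capable GPU, so not runnable everywhere; error handling omitted for brevity). Note that cudaMemPoolProps currently carries only allocation type, handle types, and location, with no compression field:

```cpp
#include <cuda_runtime.h>

int main() {
    // Describe the pool: pinned device memory on device 0.
    // There is currently no field here to request compression.
    cudaMemPoolProps props = {};
    props.allocType = cudaMemAllocationTypePinned;
    props.location.type = cudaMemLocationTypeDevice;
    props.location.id = 0;

    cudaMemPool_t pool;
    cudaMemPoolCreate(&pool, &props);

    // Stream-ordered allocation and free from the explicit pool.
    cudaStream_t stream;
    cudaStreamCreate(&stream);
    void* p = nullptr;
    cudaMallocFromPoolAsync(&p, 1 << 20, pool, stream);
    cudaFreeAsync(p, stream);
    cudaStreamSynchronize(stream);

    cudaStreamDestroy(stream);
    cudaMemPoolDestroy(pool);
    return 0;
}
```

If a compression option were added to cudaMemPoolProps, RMM's existing pool-based resources could likely adopt it with far less effort than a cuMemCreate-based allocator.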