An equivalent of torch.cuda.max_memory_allocated for pooled resource #1466

Answered by harrism
masahi asked this question in Q&A

I've converted this to a discussion. We do support a feature equivalent to torch.cuda.max_memory_allocated: statistics_resource_adaptor. You would create an instance of your resource, e.g. an appropriate rmm::mr::pool_memory_resource, and then construct a statistics_resource_adaptor with the pool resource as upstream.

The statistics_mr_tests show some examples of construction and usage in C++. Let me know if you have more questions about the adaptor.
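For reference, here is a minimal sketch of the construction described above. It is not taken from the linked tests; the initial pool size and allocation sizes are illustrative, and pool_memory_resource's constructor arguments vary slightly across RMM versions, so check your version's headers.

```cpp
#include <rmm/mr/device/cuda_memory_resource.hpp>
#include <rmm/mr/device/pool_memory_resource.hpp>
#include <rmm/mr/device/statistics_resource_adaptor.hpp>

#include <cstddef>
#include <iostream>

int main()
{
  // Upstream device resource that backs the pool.
  rmm::mr::cuda_memory_resource cuda_mr;

  // Pool resource with an illustrative 256 MiB initial size.
  rmm::mr::pool_memory_resource<rmm::mr::cuda_memory_resource> pool_mr{
      &cuda_mr, std::size_t{1} << 28};

  // Statistics adaptor constructed with the pool resource as upstream.
  rmm::mr::statistics_resource_adaptor<decltype(pool_mr)> stats_mr{&pool_mr};

  // Allocate and free through the adaptor so it can track byte counts.
  void* p1 = stats_mr.allocate(std::size_t{1} << 20);  // 1 MiB
  void* p2 = stats_mr.allocate(std::size_t{1} << 21);  // 2 MiB
  stats_mr.deallocate(p1, std::size_t{1} << 20);
  stats_mr.deallocate(p2, std::size_t{1} << 21);

  // The peak field is the analogue of torch.cuda.max_memory_allocated().
  auto bytes = stats_mr.get_bytes_counter();
  std::cout << "peak bytes allocated: " << bytes.peak << "\n";
}
```

The returned counter also carries value (currently outstanding bytes) and total (cumulative bytes allocated), so the same adaptor can answer torch.cuda.memory_allocated-style queries as well.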

As @jrhemstad pointed out, though, this functionality (and the PyTorch equivalent you linked) is not the same as the amount of available VRAM: it only reports how much memory the "warm-up" step successfully allocated.

This discussion was converted from issue #1465 on February 10, 2024 00:14.