PT potential CUDA mem leak? #1478

Open
albertz opened this issue Dec 16, 2023 · 2 comments
Comments


albertz commented Dec 16, 2023

From the log (/work/asr4/zeyer/setups-data/combined/2021-05-31/work/i6_core/returnn/training/ReturnnTrainingJob.XPpeLPG9camH/log.run.1), I filtered the CUDA memory usage reports:

Memory usage (cuda): alloc cur 427.8MB alloc peak 427.8MB reserved cur 446.0MB reserved peak 446.0MB
Memory usage (cuda): alloc cur 1.9GB alloc peak 10.7GB reserved cur 11.3GB reserved peak 11.3GB
Memory usage (cuda): alloc cur 1.9GB alloc peak 1.9GB reserved cur 11.3GB reserved peak 11.3GB
Memory usage (cuda): alloc cur 2.0GB alloc peak 2.7GB reserved cur 13.0GB reserved peak 13.0GB
Memory usage (cuda): alloc cur 2.0GB alloc peak 2.0GB reserved cur 13.0GB reserved peak 13.0GB
Memory usage (cuda): alloc cur 1.9GB alloc peak 10.6GB reserved cur 13.0GB reserved peak 13.0GB
Memory usage (cuda): alloc cur 1.9GB alloc peak 1.9GB reserved cur 13.0GB reserved peak 13.0GB
Memory usage (cuda): alloc cur 2.0GB alloc peak 2.7GB reserved cur 13.0GB reserved peak 13.0GB
Memory usage (cuda): alloc cur 2.0GB alloc peak 2.0GB reserved cur 13.0GB reserved peak 13.0GB
Memory usage (cuda): alloc cur 1.8GB alloc peak 10.9GB reserved cur 13.0GB reserved peak 13.0GB
Memory usage (cuda): alloc cur 1.8GB alloc peak 1.8GB reserved cur 13.0GB reserved peak 13.0GB
Memory usage (cuda): alloc cur 2.2GB alloc peak 2.8GB reserved cur 13.0GB reserved peak 13.0GB
Memory usage (cuda): alloc cur 2.2GB alloc peak 2.2GB reserved cur 13.0GB reserved peak 13.0GB
Memory usage (cuda): alloc cur 1.8GB alloc peak 10.7GB reserved cur 13.0GB reserved peak 13.0GB
Memory usage (cuda): alloc cur 1.8GB alloc peak 1.8GB reserved cur 13.0GB reserved peak 13.0GB
Memory usage (cuda): alloc cur 2.2GB alloc peak 2.8GB reserved cur 13.0GB reserved peak 13.0GB
Memory usage (cuda): alloc cur 2.2GB alloc peak 2.2GB reserved cur 13.0GB reserved peak 13.0GB
Memory usage (cuda): alloc cur 1.8GB alloc peak 11.3GB reserved cur 13.0GB reserved peak 13.0GB
Memory usage (cuda): alloc cur 1.8GB alloc peak 1.8GB reserved cur 13.0GB reserved peak 13.0GB
Memory usage (cuda): alloc cur 2.0GB alloc peak 2.9GB reserved cur 13.0GB reserved peak 13.0GB
Memory usage (cuda): alloc cur 2.0GB alloc peak 2.0GB reserved cur 13.0GB reserved peak 13.0GB
Memory usage (cuda): alloc cur 2.0GB alloc peak 11.0GB reserved cur 13.0GB reserved peak 13.0GB
Memory usage (cuda): alloc cur 2.0GB alloc peak 2.0GB reserved cur 13.0GB reserved peak 13.0GB
Memory usage (cuda): alloc cur 2.1GB alloc peak 3.0GB reserved cur 13.0GB reserved peak 13.0GB
Memory usage (cuda): alloc cur 2.1GB alloc peak 2.1GB reserved cur 13.0GB reserved peak 13.0GB
Memory usage (cuda): alloc cur 2.0GB alloc peak 11.1GB reserved cur 13.0GB reserved peak 13.0GB
Memory usage (cuda): alloc cur 2.0GB alloc peak 2.0GB reserved cur 13.0GB reserved peak 13.0GB
Memory usage (cuda): alloc cur 1.9GB alloc peak 3.1GB reserved cur 13.0GB reserved peak 13.0GB
Memory usage (cuda): alloc cur 1.9GB alloc peak 1.9GB reserved cur 13.0GB reserved peak 13.0GB
Memory usage (cuda): alloc cur 2.6GB alloc peak 11.5GB reserved cur 13.0GB reserved peak 13.0GB
Memory usage (cuda): alloc cur 2.6GB alloc peak 2.6GB reserved cur 13.0GB reserved peak 13.0GB
Memory usage (cuda): alloc cur 2.5GB alloc peak 3.2GB reserved cur 13.0GB reserved peak 13.0GB
Memory usage (cuda): alloc cur 2.5GB alloc peak 2.5GB reserved cur 13.0GB reserved peak 13.0GB
Memory usage (cuda): alloc cur 2.3GB alloc peak 11.4GB reserved cur 13.0GB reserved peak 13.0GB
Memory usage (cuda): alloc cur 2.3GB alloc peak 2.3GB reserved cur 13.0GB reserved peak 13.0GB
Memory usage (cuda): alloc cur 1.9GB alloc peak 3.4GB reserved cur 13.0GB reserved peak 13.0GB
Memory usage (cuda): alloc cur 1.9GB alloc peak 1.9GB reserved cur 13.0GB reserved peak 13.0GB
Memory usage (cuda): alloc cur 2.2GB alloc peak 11.5GB reserved cur 13.0GB reserved peak 13.0GB
Memory usage (cuda): alloc cur 2.2GB alloc peak 2.2GB reserved cur 13.0GB reserved peak 13.0GB
Memory usage (cuda): alloc cur 3.1GB alloc peak 3.7GB reserved cur 13.0GB reserved peak 13.0GB
Memory usage (cuda): alloc cur 3.1GB alloc peak 3.1GB reserved cur 13.0GB reserved peak 13.0GB
...
Memory usage (cuda): alloc cur 4.7GB alloc peak 4.7GB reserved cur 15.8GB reserved peak 15.8GB
Memory usage (cuda): alloc cur 5.4GB alloc peak 14.8GB reserved cur 16.1GB reserved peak 16.1GB
Memory usage (cuda): alloc cur 5.4GB alloc peak 5.4GB reserved cur 16.1GB reserved peak 16.1GB
Memory usage (cuda): alloc cur 1.8GB alloc peak 6.6GB reserved cur 16.1GB reserved peak 16.1GB
Memory usage (cuda): alloc cur 1.8GB alloc peak 1.8GB reserved cur 16.1GB reserved peak 16.1GB
Memory usage (cuda): alloc cur 2.4GB alloc peak 14.6GB reserved cur 16.1GB reserved peak 16.1GB
Memory usage (cuda): alloc cur 2.4GB alloc peak 2.4GB reserved cur 16.1GB reserved peak 16.1GB
Memory usage (cuda): alloc cur 3.3GB alloc peak 3.8GB reserved cur 16.1GB reserved peak 16.1GB
Memory usage (cuda): alloc cur 3.3GB alloc peak 3.3GB reserved cur 16.1GB reserved peak 16.1GB
Memory usage (cuda): alloc cur 3.6GB alloc peak 14.8GB reserved cur 16.1GB reserved peak 16.1GB
Memory usage (cuda): alloc cur 3.6GB alloc peak 3.6GB reserved cur 16.1GB reserved peak 16.1GB
Memory usage (cuda): alloc cur 4.5GB alloc peak 5.0GB reserved cur 16.1GB reserved peak 16.1GB
Memory usage (cuda): alloc cur 4.5GB alloc peak 4.5GB reserved cur 16.1GB reserved peak 16.1GB
Memory usage (cuda): alloc cur 4.6GB alloc peak 15.0GB reserved cur 16.1GB reserved peak 16.1GB
Memory usage (cuda): alloc cur 4.6GB alloc peak 4.6GB reserved cur 16.1GB reserved peak 16.1GB
Memory usage (cuda): alloc cur 5.5GB alloc peak 6.0GB reserved cur 16.1GB reserved peak 16.1GB
Memory usage (cuda): alloc cur 5.5GB alloc peak 5.5GB reserved cur 16.1GB reserved peak 16.1GB
...
Memory usage (cuda): alloc cur 8.3GB alloc peak 8.9GB reserved cur 19.1GB reserved peak 19.1GB
Memory usage (cuda): alloc cur 8.3GB alloc peak 8.3GB reserved cur 19.1GB reserved peak 19.1GB
Memory usage (cuda): alloc cur 5.6GB alloc peak 18.7GB reserved cur 20.1GB reserved peak 20.1GB
Memory usage (cuda): alloc cur 5.6GB alloc peak 5.6GB reserved cur 20.1GB reserved peak 20.1GB
Memory usage (cuda): alloc cur 6.5GB alloc peak 7.0GB reserved cur 20.1GB reserved peak 20.1GB
Memory usage (cuda): alloc cur 6.5GB alloc peak 6.5GB reserved cur 20.1GB reserved peak 20.1GB
Memory usage (cuda): alloc cur 3.8GB alloc peak 17.9GB reserved cur 20.1GB reserved peak 20.1GB
Memory usage (cuda): alloc cur 3.8GB alloc peak 3.8GB reserved cur 20.1GB reserved peak 20.1GB
Memory usage (cuda): alloc cur 4.7GB alloc peak 5.3GB reserved cur 20.1GB reserved peak 20.1GB
Memory usage (cuda): alloc cur 4.7GB alloc peak 4.7GB reserved cur 20.1GB reserved peak 20.1GB
Memory usage (cuda): alloc cur 1.9GB alloc peak 17.9GB reserved cur 20.1GB reserved peak 20.1GB
Memory usage (cuda): alloc cur 1.9GB alloc peak 1.9GB reserved cur 20.1GB reserved peak 20.1GB
Memory usage (cuda): alloc cur 2.8GB alloc peak 3.4GB reserved cur 20.1GB reserved peak 20.1GB
Memory usage (cuda): alloc cur 2.8GB alloc peak 2.8GB reserved cur 20.1GB reserved peak 20.1GB
Memory usage (cuda): alloc cur 7.8GB alloc peak 16.5GB reserved cur 20.1GB reserved peak 20.1GB
Memory usage (cuda): alloc cur 7.8GB alloc peak 7.8GB reserved cur 20.1GB reserved peak 20.1GB
Memory usage (cuda): alloc cur 8.6GB alloc peak 9.2GB reserved cur 20.1GB reserved peak 20.1GB
Memory usage (cuda): alloc cur 8.6GB alloc peak 8.6GB reserved cur 20.1GB reserved peak 20.1GB
Memory usage (cuda): alloc cur 5.1GB alloc peak 18.8GB reserved cur 20.4GB reserved peak 20.4GB
Memory usage (cuda): alloc cur 5.1GB alloc peak 5.1GB reserved cur 20.4GB reserved peak 20.4GB
Memory usage (cuda): alloc cur 5.9GB alloc peak 6.5GB reserved cur 20.4GB reserved peak 20.4GB
Memory usage (cuda): alloc cur 5.9GB alloc peak 5.9GB reserved cur 20.4GB reserved peak 20.4GB
Memory usage (cuda): alloc cur 2.7GB alloc peak 18.3GB reserved cur 20.4GB reserved peak 20.4GB
Memory usage (cuda): alloc cur 2.7GB alloc peak 2.7GB reserved cur 20.4GB reserved peak 20.4GB
Memory usage (cuda): alloc cur 3.6GB alloc peak 4.2GB reserved cur 20.4GB reserved peak 20.4GB
Memory usage (cuda): alloc cur 3.6GB alloc peak 3.6GB reserved cur 20.4GB reserved peak 20.4GB
Memory usage (cuda): alloc cur 8.5GB alloc peak 17.1GB reserved cur 20.4GB reserved peak 20.4GB
Memory usage (cuda): alloc cur 8.5GB alloc peak 8.5GB reserved cur 20.4GB reserved peak 20.4GB
Memory usage (cuda): alloc cur 9.4GB alloc peak 9.9GB reserved cur 20.4GB reserved peak 20.4GB
Memory usage (cuda): alloc cur 9.4GB alloc peak 9.4GB reserved cur 20.4GB reserved peak 20.4GB
Memory usage (cuda): alloc cur 6.0GB alloc peak 18.8GB reserved cur 20.4GB reserved peak 20.4GB
Memory usage (cuda): alloc cur 6.0GB alloc peak 6.0GB reserved cur 20.4GB reserved peak 20.4GB
Memory usage (cuda): alloc cur 6.8GB alloc peak 7.4GB reserved cur 20.4GB reserved peak 20.4GB
Memory usage (cuda): alloc cur 6.8GB alloc peak 6.8GB reserved cur 20.4GB reserved peak 20.4GB
Memory usage (cuda): alloc cur 3.6GB alloc peak 18.3GB reserved cur 20.4GB reserved peak 20.4GB
Memory usage (cuda): alloc cur 3.6GB alloc peak 3.6GB reserved cur 20.4GB reserved peak 20.4GB
Memory usage (cuda): alloc cur 4.5GB alloc peak 5.0GB reserved cur 20.4GB reserved peak 20.4GB
Memory usage (cuda): alloc cur 4.5GB alloc peak 4.5GB reserved cur 20.4GB reserved peak 20.4GB
Memory usage (cuda): alloc cur 9.4GB alloc peak 18.4GB reserved cur 20.4GB reserved peak 20.4GB
Memory usage (cuda): alloc cur 9.4GB alloc peak 9.4GB reserved cur 20.4GB reserved peak 20.4GB
Memory usage (cuda): alloc cur 10.2GB alloc peak 10.8GB reserved cur 20.4GB reserved peak 20.4GB
Memory usage (cuda): alloc cur 10.2GB alloc peak 10.2GB reserved cur 20.4GB reserved peak 20.4GB
Memory usage (cuda): alloc cur 6.3GB alloc peak 19.2GB reserved cur 20.7GB reserved peak 20.7GB
Memory usage (cuda): alloc cur 6.3GB alloc peak 6.3GB reserved cur 20.7GB reserved peak 20.7GB
Memory usage (cuda): alloc cur 7.1GB alloc peak 7.7GB reserved cur 20.7GB reserved peak 20.7GB
Memory usage (cuda): alloc cur 7.1GB alloc peak 7.1GB reserved cur 20.7GB reserved peak 20.7GB
Memory usage (cuda): alloc cur 3.2GB alloc peak 19.0GB reserved cur 20.7GB reserved peak 20.7GB
Memory usage (cuda): alloc cur 3.2GB alloc peak 3.2GB reserved cur 20.7GB reserved peak 20.7GB
Memory usage (cuda): alloc cur 4.0GB alloc peak 4.6GB reserved cur 20.7GB reserved peak 20.7GB
Memory usage (cuda): alloc cur 4.0GB alloc peak 4.0GB reserved cur 20.7GB reserved peak 20.7GB
Memory usage (cuda): alloc cur 9.0GB alloc peak 17.8GB reserved cur 20.7GB reserved peak 20.7GB
Memory usage (cuda): alloc cur 9.0GB alloc peak 9.0GB reserved cur 20.7GB reserved peak 20.7GB
Memory usage (cuda): alloc cur 9.9GB alloc peak 10.5GB reserved cur 20.7GB reserved peak 20.7GB
Memory usage (cuda): alloc cur 9.9GB alloc peak 9.9GB reserved cur 20.7GB reserved peak 20.7GB
Memory usage (cuda): alloc cur 5.7GB alloc peak 19.4GB reserved cur 20.5GB reserved peak 20.7GB
Memory usage (cuda): alloc cur 5.7GB alloc peak 5.7GB reserved cur 20.5GB reserved peak 20.5GB
Memory usage (cuda): alloc cur 6.6GB alloc peak 7.2GB reserved cur 20.5GB reserved peak 20.5GB
Memory usage (cuda): alloc cur 6.6GB alloc peak 6.6GB reserved cur 20.5GB reserved peak 20.5GB
Memory usage (cuda): alloc cur 2.5GB alloc peak 19.3GB reserved cur 20.6GB reserved peak 20.6GB
Memory usage (cuda): alloc cur 2.5GB alloc peak 2.5GB reserved cur 20.6GB reserved peak 20.6GB
Memory usage (cuda): alloc cur 3.4GB alloc peak 3.9GB reserved cur 20.6GB reserved peak 20.6GB
Memory usage (cuda): alloc cur 3.4GB alloc peak 3.4GB reserved cur 20.6GB reserved peak 20.6GB
Memory usage (cuda): alloc cur 8.2GB alloc peak 16.7GB reserved cur 20.6GB reserved peak 20.6GB
Memory usage (cuda): alloc cur 8.2GB alloc peak 8.2GB reserved cur 20.6GB reserved peak 20.6GB
Memory usage (cuda): alloc cur 9.1GB alloc peak 9.7GB reserved cur 20.6GB reserved peak 20.6GB
Memory usage (cuda): alloc cur 9.1GB alloc peak 9.1GB reserved cur 20.6GB reserved peak 20.6GB

And then OOM on the A10 (24GB, effectively 22GB):

  File "/u/zeyer/setups/combined/2021-05-31/recipe/i6_experiments/users/zeyer/experiments/exp2023_04_25_rf/chunked_aed_import.py", line 695, in from_scratch_training.<locals>._body
    line: aux_enc: Tensor = collected_outputs[str(layer_idx - 1)]
    locals:
      aux_enc = <not found>
      Tensor = <global> <class 'returnn.tensor.tensor.Tensor'>
      collected_outputs = <not found>
      str = <builtin> <class 'str'>
      layer_idx = <not found>
  File "/u/zeyer/setups/combined/2021-05-31/recipe/i6_experiments/users/zeyer/experiments/exp2023_04_25_rf/chunked_aed_import.py", line 568, in Model.loop_step
    line: "att",
    locals:
       no locals
  File "/u/zeyer/setups/combined/2021-05-31/tools/returnn/returnn/frontend/array_.py", line 545, in gather
    line: return source._raw_backend.gather(source, indices=indices, axis=axis, clip_to_valid=clip_to_valid)
    locals:
      source = <local> Tensor{'add', [B?,'⌈((-199+time)+-200)/19200⌉'[B?],'sliced-chunk-size'(20),F|'enc_key_total_dim'(1024)]}
      source._raw_backend = <local> <class 'returnn.torch.frontend._backend.TorchBackend'>
      source._raw_backend.gather = <local> <function TorchBackend.gather at 0x7fcdac783100>
      indices = <local> Tensor{'where', [B?], dtype='int32', sparse_dim=Dim{'⌈((-199+time)+-200)/19200⌉'[B?]}}
      axis = <local> Dim{'⌈((-199+time)+-200)/19200⌉'[B?]}
      clip_to_valid = <local> True
  File "/u/zeyer/setups/combined/2021-05-31/tools/returnn/returnn/torch/frontend/_backend.py", line 914, in TorchBackend.gather
    line: out_raw = torch.gather(source.raw_tensor, dim=axis_int, index=indices.raw_tensor.type(torch.int64))
    locals:
      out_raw = <not found>
      torch = <global> <module 'torch' from '/work/tools/users/zeyer/py-envs/py3.11-torch2.1/lib/python3.11/site-packages/torch/__init__.py'>
      torch.gather = <global> <built-in method gather of type object at 0x7fce1b1aeaa0>
      source = <local> Tensor{'add', [B?,'⌈((-199+time)+-200)/19200⌉'[B?],'sliced-chunk-size'(20),F|'enc_key_total_dim'(1024)]}
      source.raw_tensor = <local> tensor[11, 12, 20, 1024] n=2703360 (10Mb) x∈[-6.181, 6.561] μ=-0.030 σ=1.284 grad AddBackward0 cuda:0
      dim = <local> Dim{'⌈((-199+time)+-200)/19200⌉'[B?]}
      axis_int = <local> 1
      index = <not found> 
      indices = <local> Tensor{'where', [B?,'dummy'(1),'sliced-chunk-size'(20),'enc_key_total_dim'(1024)], dtype='int32', sparse_dim=Dim{'⌈((-199+time)+-200)/19200⌉'[B?]}} 
      indices.raw_tensor = <local> !OutOfMemoryError: CUDA out of memory. Tried to allocate 2.00 MiB. GPU 0 has a total capacty of 22.03 GiB of which 896.00 KiB is free. Including non-PyTorch memory, this process has 22.03 GiB memory in use. Of the allocated memory 19.05 GiB is allocated by PyTorch, and 1.61 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting max_split_size_mb to avoid fragmentation.  See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
      indices.raw_tensor.type = <local> <built-in method type of Tensor object at 0x7fc5f30dbef0>
      torch.int64 = <global> torch.int64 
OutOfMemoryError: CUDA out of memory. Tried to allocate 2.00 MiB. GPU 0 has a total capacty of 22.03 GiB of which 896.00 KiB is free. Including non-PyTorch memory, this process has 22.03 GiB memory in use. Of the allocated memory 19.05 GiB is allocated by PyTorch, and 1.61 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting max_split_size_mb to avoid fragmentation.  See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF

From the very first output, you see that the model params take 427.8MB ("alloc cur"). It then stays roughly constant at around 2GB, maybe due to some caches for convolution etc. (maybe see #1450).
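As a quick plausibility check (assuming float32 parameters; the actual parameter count is not in the log), the 427.8MB figure matches a model of roughly 112M parameters:

```python
def param_memory_mb(num_params: int, bytes_per_param: int = 4) -> float:
    """Memory taken by the raw parameters, in MB (float32 -> 4 bytes each)."""
    return num_params * bytes_per_param / 1024 ** 2

# 427.8MB "alloc cur" right after startup would correspond to roughly
# 112M float32 parameters:
print(param_memory_mb(112_000_000))  # -> 427.24609375
```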

The "alloc cur" then increases and fluctuates, but sometimes still drops back to the old low value of around 2GB. This suggests that maybe the Python GC has not yet freed everything, and maybe a gc.collect() would free this (but not sure, not tested).
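The GC suspicion can be illustrated without CUDA: an object that is part of a reference cycle (e.g. a closure or module still holding activations) is not freed by reference counting alone, only by the cycle collector. The Holder class below is a made-up stand-in for such an object:

```python
import gc
import weakref

class Holder:
    # Stands in for an object (e.g. a closure) that keeps a big CUDA tensor
    # alive while also being part of a reference cycle.
    def __init__(self, payload):
        self.payload = payload
        self.cycle = self  # reference cycle: refcounting alone cannot free this

h = Holder(payload=bytearray(10 ** 6))  # bytearray as stand-in for a large tensor
probe = weakref.ref(h)
del h
assert probe() is not None  # still alive: only the cycle collector can free it
gc.collect()
assert probe() is None      # now the payload's memory is actually released
```

Until gc.collect() runs, any tensor held by such a cycle still counts towards "alloc cur".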

But it's strange that the magnitude of the fluctuations keeps increasing overall. That also causes the reserved memory to grow. Maybe it's just bad memory fragmentation then?
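For reference, report lines like the ones above can be produced from PyTorch's allocator statistics. This is only a sketch: the helper names and the exact size formatting are assumptions, not RETURNN's actual reporting code.

```python
def human_size(num_bytes: float) -> str:
    # Render byte counts like the log lines above, e.g. "427.8MB", "13.0GB".
    for unit in ("B", "KB", "MB", "GB"):
        if abs(num_bytes) < 1024.0:
            return f"{num_bytes:.1f}{unit}"
        num_bytes /= 1024.0
    return f"{num_bytes:.1f}TB"

def report_cuda_mem(device: int = 0) -> str:
    # Hypothetical counterpart of the "Memory usage (cuda)" lines above.
    import torch  # local import so the formatter works without torch installed

    parts = [
        ("alloc cur", torch.cuda.memory_allocated(device)),
        ("alloc peak", torch.cuda.max_memory_allocated(device)),
        ("reserved cur", torch.cuda.memory_reserved(device)),
        ("reserved peak", torch.cuda.max_memory_reserved(device)),
    ]
    # Resetting the peak counters after each report would explain why
    # "alloc peak" drops back to "alloc cur" in the very next log line.
    torch.cuda.reset_peak_memory_stats(device)
    return "Memory usage (cuda): " + " ".join(f"{k} {human_size(v)}" for k, v in parts)
```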


albertz commented Dec 16, 2023

Note: per the PYTORCH_CUDA_ALLOC_CONF doc, the expandable_segments option might help us:

If set to True, this setting instructs the allocator to create CUDA allocations that can later be expanded to better handle cases where a job changing allocation sizes frequently, such as having a changing batch size.
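For anyone who wants to try it: allocator options go through the PYTORCH_CUDA_ALLOC_CONF environment variable and must be set before the first CUDA allocation, in practice before anything touches torch.cuda. A minimal sketch (the helper name is made up; the env var itself is the real PyTorch interface):

```python
import os

def configure_cuda_allocator(opts: str = "expandable_segments:True") -> None:
    # Must run before the first CUDA allocation -- in practice, before any
    # torch.cuda call -- otherwise the allocator config is already fixed.
    os.environ["PYTORCH_CUDA_ALLOC_CONF"] = opts

configure_cuda_allocator()
# only now: import torch, build the model, start training, ...
```

Multiple options are comma-separated, so this could presumably be combined with the max_split_size_mb hint from the OOM message, e.g. "expandable_segments:True,max_split_size_mb:512".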


albertz commented Dec 17, 2023

I introduced the option reset_dev_memory_caches, which calls gc.collect() and then torch.cuda.empty_cache().
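In pseudocode, what such an option does between reports might look like this (a sketch based on the description above; the actual RETURNN implementation may differ in details):

```python
import gc

def reset_dev_memory_caches() -> None:
    # Sketch of the option described above.
    gc.collect()  # free Python reference cycles that may still pin CUDA tensors
    try:
        import torch
    except ImportError:
        return  # torch not installed: nothing device-side to release
    if torch.cuda.is_available():
        # Return reserved-but-unallocated blocks to the driver. This shrinks
        # "reserved cur", but segments regrown afterwards may be laid out
        # differently, which can change fragmentation behavior.
        torch.cuda.empty_cache()
```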

Before (via /work/asr4/zeyer/setups-data/combined/2021-05-31/work/i6_core/returnn/training/ReturnnTrainingJob.yr9RPZ4KpDXG/log.run.1, alias/exp2023_04_25_rf/chunked_aed_import/chunk-C20-R15-H2-bs22k/train):

Memory usage (cuda): alloc cur 427.8MB alloc peak 427.8MB reserved cur 446.0MB reserved peak 446.0MB
Memory usage (cuda): alloc cur 1.9GB alloc peak 15.1GB reserved cur 17.4GB reserved peak 17.4GB
Memory usage (cuda): alloc cur 1.9GB alloc peak 1.9GB reserved cur 17.4GB reserved peak 17.4GB
Memory usage (cuda): alloc cur 2.0GB alloc peak 2.9GB reserved cur 18.4GB reserved peak 18.4GB
Memory usage (cuda): alloc cur 2.0GB alloc peak 2.0GB reserved cur 18.4GB reserved peak 18.4GB
Memory usage (cuda): alloc cur 2.0GB alloc peak 15.1GB reserved cur 18.4GB reserved peak 18.4GB
Memory usage (cuda): alloc cur 2.0GB alloc peak 2.0GB reserved cur 18.4GB reserved peak 18.4GB
...
Memory usage (cuda): alloc cur 5.0GB alloc peak 5.7GB reserved cur 20.3GB reserved peak 20.3GB
Memory usage (cuda): alloc cur 5.0GB alloc peak 5.0GB reserved cur 20.3GB reserved peak 20.3GB
Memory usage (cuda): alloc cur 4.9GB alloc peak 19.4GB reserved cur 20.6GB reserved peak 20.6GB
Memory usage (cuda): alloc cur 4.9GB alloc peak 4.9GB reserved cur 20.6GB reserved peak 20.6GB
Memory usage (cuda): alloc cur 5.7GB alloc peak 6.5GB reserved cur 20.6GB reserved peak 20.6GB
Memory usage (cuda): alloc cur 5.7GB alloc peak 5.7GB reserved cur 20.6GB reserved peak 20.6GB
Memory usage (cuda): alloc cur 5.5GB alloc peak 18.9GB reserved cur 20.6GB reserved peak 20.6GB
Memory usage (cuda): alloc cur 5.5GB alloc peak 5.5GB reserved cur 20.6GB reserved peak 20.6GB
Memory usage (cuda): alloc cur 6.4GB alloc peak 7.2GB reserved cur 20.6GB reserved peak 20.6GB
Memory usage (cuda): alloc cur 6.4GB alloc peak 6.4GB reserved cur 20.6GB reserved peak 20.6GB

And then:
OutOfMemoryError: CUDA out of memory. Tried to allocate 178.00 MiB. GPU 0 has a total capacty of 22.03 GiB of which 102.88 MiB is free. Including non-PyTorch memory, this process has 21.93 GiB memory in use. Of the allocated memory 19.09 GiB is allocated by PyTorch, and 1.48 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF in epoch 46.

After:

Memory usage (cuda): alloc cur 427.8MB alloc peak 427.8MB reserved cur 446.0MB reserved peak 446.0MB
Memory usage (cuda): alloc cur 1.8GB alloc peak 15.2GB reserved cur 17.6GB reserved peak 17.6GB
Memory usage (cuda): alloc cur 1.7GB alloc peak 1.7GB reserved cur 2.7GB reserved peak 2.7GB
Memory usage (cuda): alloc cur 1.9GB alloc peak 2.9GB reserved cur 9.3GB reserved peak 9.3GB
Memory usage (cuda): alloc cur 1.7GB alloc peak 1.7GB reserved cur 2.7GB reserved peak 2.7GB
Memory usage (cuda): alloc cur 1.9GB alloc peak 14.9GB reserved cur 17.3GB reserved peak 17.3GB
Memory usage (cuda): alloc cur 1.7GB alloc peak 1.7GB reserved cur 4.9GB reserved peak 4.9GB
Memory usage (cuda): alloc cur 2.1GB alloc peak 3.0GB reserved cur 11.2GB reserved peak 11.2GB
...
Memory usage (cuda): alloc cur 1.7GB alloc peak 1.7GB reserved cur 2.7GB reserved peak 2.7GB
Memory usage (cuda): alloc cur 2.6GB alloc peak 3.3GB reserved cur 9.4GB reserved peak 9.4GB
Memory usage (cuda): alloc cur 1.7GB alloc peak 1.7GB reserved cur 2.7GB reserved peak 2.7GB
Memory usage (cuda): alloc cur 2.5GB alloc peak 18.3GB reserved cur 16.4GB reserved peak 20.5GB
Memory usage (cuda): alloc cur 1.7GB alloc peak 1.7GB reserved cur 3.6GB reserved peak 3.6GB
Memory usage (cuda): alloc cur 2.6GB alloc peak 3.4GB reserved cur 10.2GB reserved peak 10.2GB
Memory usage (cuda): alloc cur 1.7GB alloc peak 1.7GB reserved cur 3.6GB reserved peak 3.6GB
Memory usage (cuda): alloc cur 2.3GB alloc peak 18.6GB reserved cur 20.4GB reserved peak 20.6GB
Memory usage (cuda): alloc cur 1.7GB alloc peak 1.7GB reserved cur 3.9GB reserved peak 3.9GB
Memory usage (cuda): alloc cur 2.6GB alloc peak 3.3GB reserved cur 10.2GB reserved peak 10.2GB
Memory usage (cuda): alloc cur 1.7GB alloc peak 1.7GB reserved cur 3.9GB reserved peak 3.9GB

And then:
OutOfMemoryError: CUDA out of memory. Tried to allocate 178.00 MiB. GPU 0 has a total capacty of 22.03 GiB of which 120.88 MiB is free. Including non-PyTorch memory, this process has 21.91 GiB memory in use. Of the allocated memory 17.87 GiB is allocated by PyTorch, and 2.68 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF in epoch 40 (earlier?).

So the option reset_dev_memory_caches seems to do something: the alloc cur at the start of each epoch looks correct now. But perhaps (unluckily) it causes even more fragmentation?
