[v1.12.0] Fix non-reentrant hooks-based checkpointing #79490
Merged
Link to landed master PR: #78752
Original commit description:
Fixes the non-reentrant hooks-based checkpointing so that it actually saves memory. The issue was that `storage` was a list of autograd saved tensors, and we weren't clearing this list out as tensors were accessed, so all activations remained in memory for the duration of the backward pass. Now activations are discarded by the end of the layer's backward pass, as expected. Unit tests are added to verify this.
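For illustration, here is a minimal sketch of the pack/pop pattern this fix describes, built on the public `torch.autograd.graph.saved_tensors_hooks` API. The `storage`, `pack`, and `unpack` names are hypothetical stand-ins, not the actual identifiers inside `torch.utils.checkpoint`:

```python
import itertools
import torch
from torch.autograd.graph import saved_tensors_hooks

# Hypothetical storage: maps an integer handle to a saved activation.
storage = {}
handles = itertools.count()

def pack(tensor):
    # Forward pass: stash the activation and hand autograd a cheap handle.
    handle = next(handles)
    storage[handle] = tensor
    return handle

def unpack(handle):
    # Backward pass: pop (not just read) the activation, so it is freed
    # as soon as it has been consumed. Before the fix, entries lingered
    # for the whole backward pass, keeping every activation resident.
    # Assumes a single backward pass (no retain_graph=True).
    return storage.pop(handle)

x = torch.randn(8, 8, requires_grad=True)
with saved_tensors_hooks(pack, unpack):
    y = x.sin().pow(2).sum()
y.backward()
assert not storage  # every saved activation was released during backward
```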
Also, this means we can enable non-reentrant based checkpointing in `CheckpointWrapper`; unit tests will be added for that as well.
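As a usage sketch, opting into the non-reentrant implementation through the private `checkpoint_wrapper` API might look like the following. The module path and the `CheckpointImpl.NO_REENTRANT` / `checkpoint_impl` names are assumed from around v1.12 and, being private, may change:

```python
import torch
import torch.nn as nn
from torch.distributed.algorithms._checkpoint.checkpoint_wrapper import (
    CheckpointImpl,
    checkpoint_wrapper,
)

layer = nn.Sequential(nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 64))

# Wrap the layer so its intermediate activations are recomputed during
# backward rather than stored, using the non-reentrant implementation.
wrapped = checkpoint_wrapper(layer, checkpoint_impl=CheckpointImpl.NO_REENTRANT)

out = wrapped(torch.randn(4, 64, requires_grad=True))
out.sum().backward()
```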