fix: lr_finder epoch loop restarting flag (Lightning-AI#19818)
azzhipa committed Apr 25, 2024
1 parent b9680a3 commit 89cd929
Showing 3 changed files with 4 additions and 0 deletions.
2 changes: 2 additions & 0 deletions src/lightning/pytorch/CHANGELOG.md
@@ -16,6 +16,8 @@ The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).

- Added `on_exception` hook to `LightningDataModule` ([#19601](https://github.com/Lightning-AI/pytorch-lightning/pull/19601))

+ - Set `epoch_loop.restarting` to `False` to avoid a full validation run after `LearningRateFinder` ([#19818](https://github.com/Lightning-AI/pytorch-lightning/issues/19818))
+
-

### Changed
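
For context, a hedged reproduction sketch of the setup this changelog entry describes, assumed from the report in Lightning-AI#19818 rather than taken from this commit (`TinyModel` and the random data are hypothetical placeholders):

import torch
import lightning.pytorch as pl
from torch.utils.data import DataLoader, TensorDataset
from lightning.pytorch.callbacks import LearningRateFinder

class TinyModel(pl.LightningModule):  # hypothetical minimal module
    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(8, 1)
        self.lr = 1e-3  # attribute the finder updates when update_attr=True

    def training_step(self, batch, _):
        x, y = batch
        return torch.nn.functional.mse_loss(self.layer(x), y)

    def validation_step(self, batch, _):
        x, y = batch
        self.log("val_loss", torch.nn.functional.mse_loss(self.layer(x), y))

    def configure_optimizers(self):
        return torch.optim.SGD(self.parameters(), lr=self.lr)

data = DataLoader(TensorDataset(torch.randn(64, 8), torch.randn(64, 1)), batch_size=8)
trainer = pl.Trainer(max_epochs=2, callbacks=[LearningRateFinder()])
trainer.fit(TinyModel(), train_dataloaders=data, val_dataloaders=data)
# Before this commit, the checkpoint restore inside the finder left
# trainer.fit_loop.epoch_loop.restarting set to True, which could trigger
# the unintended full validation run described above; the fix resets it.
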
1 change: 1 addition & 0 deletions src/lightning/pytorch/tuner/lr_finder.py
@@ -302,6 +302,7 @@ def _lr_find(
trainer._checkpoint_connector.restore(ckpt_path)
trainer.strategy.remove_checkpoint(ckpt_path)
trainer.fit_loop.restarting = False # reset restarting flag as checkpoint restoring sets it to True
+ trainer.fit_loop.epoch_loop.restarting = False # reset restarting flag as checkpoint restoring sets it to True
trainer.fit_loop.epoch_loop.val_loop._combined_loader = None

return lr_finder
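
A minimal sketch of the public path into `_lr_find`, via the `Tuner` API (reusing the hypothetical `TinyModel` and `data` from the sketch above; the asserts restate what the two reset lines guarantee, not an official API contract):

from lightning.pytorch import Trainer
from lightning.pytorch.tuner import Tuner

trainer = Trainer()
tuner = Tuner(trainer)
lr_finder = tuner.lr_find(TinyModel(), train_dataloaders=data, val_dataloaders=data)
print(lr_finder.suggestion())  # suggested initial learning rate

# With this commit, both flags are cleared after the finder restores its
# pre-search checkpoint, so a subsequent trainer.fit() does not behave as
# if it were resuming a partially finished run:
assert trainer.fit_loop.restarting is False
assert trainer.fit_loop.epoch_loop.restarting is False
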
1 change: 1 addition & 0 deletions tests/tests_pytorch/tuner/test_lr_finder.py
@@ -434,6 +434,7 @@ def lr_find(self, trainer, pl_module) -> None:
super().lr_find(trainer, pl_module)
pl_module._expected_max_steps = None
assert not trainer.fit_loop.restarting
+ assert not trainer.fit_loop.epoch_loop.restarting

def on_train_epoch_start(self, trainer, pl_module):
if trainer.current_epoch in self.milestones or trainer.current_epoch == 0:
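
The test's callback follows the milestone-driven `FineTuneLearningRateFinder` pattern from the Lightning docs; a sketch of that pattern for reference, under the assumption that it matches the test's elided setup:

from lightning.pytorch.callbacks import LearningRateFinder

class FineTuneLearningRateFinder(LearningRateFinder):
    def __init__(self, milestones, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.milestones = milestones

    def on_fit_start(self, *args, **kwargs):
        return  # skip the default search at fit start; run at milestones instead

    def on_train_epoch_start(self, trainer, pl_module):
        # Re-run the LR search at epoch 0 and at every milestone epoch.
        # Each run restores a checkpoint, which is exactly what used to
        # leave fit_loop.restarting and epoch_loop.restarting stuck at True.
        if trainer.current_epoch in self.milestones or trainer.current_epoch == 0:
            self.lr_find(trainer, pl_module)
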
