How to manually adjust learning rate -- module.optimizers vs trainer.optimizers? #19551
Unanswered
daniel-layton-wright
asked this question in Lightning Trainer API: Trainer, LightningModule, LightningDataModule
Replies: 1 comment
-
I figured out that the trainer's optimizers are reloaded after the module's on_fit_start is called, so any learning-rate change made in on_fit_start gets overwritten when resuming from a checkpoint.
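A workaround that follows from this finding (a sketch, assuming the goal is a zero-learning-rate "dry run" first epoch, and assuming on_train_start runs after the training state has been restored; worth verifying against your Lightning version):

```python
import lightning.pytorch as pl  # or `import pytorch_lightning as pl`, depending on your install


class MyModule(pl.LightningModule):
    def on_train_start(self):
        # This hook runs inside the fit loop, i.e. after the checkpoint's
        # optimizer state has been restored, so a change made here should
        # not be overwritten the way a change in on_fit_start is.
        if self.current_epoch == 0:  # only during the dry-run first epoch
            for optimizer in self.trainer.optimizers:
                for group in optimizer.param_groups:
                    group["lr"] = 0.0
```

(Restoring the original learning rate after the dry-run epoch, e.g. in on_train_epoch_end, is left out of the sketch.)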
-
I'm having trouble manually adjusting the learning rate in my module when restoring from a checkpoint. The context: I use the first epoch as a sort of "dry run" to make sure everything is working, e.g. that .backward() runs on the given hardware, but I don't want this dummy data to actually update the model weights. I tried adding this in my on_fit_start:
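(The snippet referred to above did not survive; presumably it looked roughly like the following hypothetical reconstruction, which zeroes the learning rate on the module's optimizers in on_fit_start:)

```python
def on_fit_start(self):
    # Hypothetical reconstruction: zero the learning rate so the dry-run
    # epoch does not update the model weights.
    optimizers = self.optimizers(use_pl_optimizer=False)
    if not isinstance(optimizers, (list, tuple)):
        optimizers = [optimizers]
    for optimizer in optimizers:
        for group in optimizer.param_groups:
            group["lr"] = 0.0
```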
But it doesn't work: no error is thrown, yet I can see the loss consistently improving, so the weights are still being updated. It does work when starting fresh, just not when resuming from a checkpoint.
I figured out that trainer.optimizers and the optimizers returned by the module's self.optimizers() do not match when loading from a checkpoint. What is the reason for that?
However, even when I change the lr in trainer.optimizers, it still does not work.
Where does the learning rate that is actually used in the optimization steps live?
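For what it's worth, the value an optimizer actually applies in optimizer.step() is param_groups[i]["lr"] on the optimizer instances the trainer steps (and an LR scheduler, if configured, may rewrite it every step or epoch). A quick way to check which value is in effect (a sketch, assuming access via self.trainer.optimizers inside a batch hook):

```python
def on_train_batch_start(self, batch, batch_idx):
    # Print the learning rate that the upcoming optimizer.step() will use.
    for i, optimizer in enumerate(self.trainer.optimizers):
        for j, group in enumerate(optimizer.param_groups):
            print(f"optimizer {i}, param_group {j}: lr={group['lr']}")
```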