How to continue training with a different learning rate #6494

I originally opened this as an issue, but (as the discussion there shows) there was no problem in my case: the .load_from_checkpoint() method works as expected. I had probably made a different mistake that caused my loss to blow up immediately after resuming training, and I misread that as the weights being overwritten with a fresh initialization, the issue you described. I shouldn't have jumped to that conclusion, since I never actually verified that the weights differed. I tried it again and it works fine now.

In your case, it looks like you're using the wrong syntax, which I hadn't spotted but another user did - pl…
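For reference, the pattern under discussion can be sketched without any framework: restore the saved weights from a checkpoint unchanged, but override the learning-rate hyperparameter when resuming. The dict-based checkpoint and the helper names below are illustrative, not Lightning's API; the rough PyTorch Lightning analogue is `model = MyModel.load_from_checkpoint(ckpt_path, learning_rate=new_lr)` (keyword overrides apply when the module calls `self.save_hyperparameters()`), and note that `load_from_checkpoint` is a classmethod returning a new instance, not a method that mutates an existing model in place.

```python
# Illustrative, framework-free sketch: a "checkpoint" here is just a
# dict holding weights plus the hyperparameters used to produce them.

def save_checkpoint(state_dict, **hparams):
    # Bundle weights and hyperparameters, like writing a .ckpt file.
    return {"state_dict": dict(state_dict), "hparams": dict(hparams)}

def load_checkpoint(ckpt, **overrides):
    # Restore the saved weights untouched, but let the caller override
    # hyperparameters (e.g. a new learning rate for continued training).
    hparams = {**ckpt["hparams"], **overrides}
    return dict(ckpt["state_dict"]), hparams

# First run finished with lr=1e-3; resume training with lr=1e-4.
ckpt = save_checkpoint({"w": 0.5}, learning_rate=1e-3)
weights, hparams = load_checkpoint(ckpt, learning_rate=1e-4)
print(weights["w"], hparams["learning_rate"])
```

The key point, matching the discussion above: loading the checkpoint leaves the weights exactly as saved, while the learning rate used to build the new optimizer is whatever you pass in at resume time.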

Replies: 3 comments 1 reply

Answer selected by awaelchli