When resuming finetuning, I see that the CycleIterator is fast-forwarded to the position in the dataset where iteration should continue:
litgpt/litgpt/finetune/full.py, lines 208–219 (commit f334378)
However, for pretraining, this does not exist, and training seems to resume from the beginning:
litgpt/litgpt/pretrain.py, lines 217–271 (commit f334378)
Can I check: in this case, it looks like resumed pretraining will start from the first dataset rather than being fast-forwarded?
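For context, the finetuning resume behavior amounts to something like the sketch below. This is not a verbatim copy of full.py; `initial_iter` (the step count restored from the checkpoint), `train_dataloader`, and `resume` are assumed names based on the description above.

```python
# Hedged sketch of the fast-forward-on-resume pattern used in finetuning.
# CycleIterator (from litgpt.utils) wraps a dataloader and restarts it
# when exhausted, so training can run for more steps than one epoch.
from litgpt.utils import CycleIterator

train_iterator = CycleIterator(train_dataloader)
if resume:
    # Replay the batches consumed before the interruption so the iterator
    # lands at the same position in the data stream. `initial_iter` is
    # assumed to come from the loaded checkpoint.
    for _ in range(initial_iter):
        next(train_iterator)
```

This replays every already-seen batch, so resuming late in a long run can take a while, but it keeps the data order consistent with the original run.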
The pretraining code uses a stateful dataloader from LitData:
litgpt/litgpt/pretrain.py, lines 192–198 (commit f334378)
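In other words, the pretraining path does not need to replay batches, because the LitData loader can checkpoint and restore its own position. A minimal sketch of that mechanism (the dataset path is a placeholder, and batch sizes are illustrative):

```python
from litdata import StreamingDataLoader, StreamingDataset

# "data/optimized" is a hypothetical litdata-optimized dataset directory.
dataset = StreamingDataset("data/optimized")
dataloader = StreamingDataLoader(dataset, batch_size=8)

for step, batch in enumerate(dataloader):
    ...  # training step
    if step == 100:
        break

# The loader tracks how far it has iterated, so its position can be
# checkpointed alongside the model weights...
state = dataloader.state_dict()

# ...and restored later into a fresh loader, which then resumes
# mid-stream instead of from the beginning.
new_dataloader = StreamingDataLoader(StreamingDataset("data/optimized"), batch_size=8)
new_dataloader.load_state_dict(state)
```

Because the state restore is O(1), this avoids the batch-replay cost that the finetuning fast-forward approach pays on resume.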
I see, thank you so much!
Sorry, @awaelchli, quick question: in that case, is there a reason why this is not implemented for finetuning?