Configuring OneCycleLR from yaml file lightning CLI #19689
Is there a way to configure OneCycleLR using YAML config files and the Lightning CLI?

The problem is that OneCycleLR takes the total number of steps as an argument at initialization, which I usually set from `self.trainer.estimated_stepping_batches` inside `configure_optimizers` in the LightningModule. I don't see how this could be done using the CLI and config files. For reference, I implemented the CLI as described here.
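For context, this is the usual pattern without the CLI, sketched minimally; the optimizer choice and learning-rate values here are illustrative assumptions, not part of the original question:

```python
import torch
from lightning.pytorch import LightningModule


class MyModel(LightningModule):
    def configure_optimizers(self):
        optimizer = torch.optim.Adam(self.parameters(), lr=1e-3)
        # total_steps is only known once the trainer can estimate it,
        # which is why it cannot be written into a static YAML config.
        scheduler = torch.optim.lr_scheduler.OneCycleLR(
            optimizer,
            max_lr=1e-2,
            total_steps=self.trainer.estimated_stepping_batches,
        )
        return {
            "optimizer": optimizer,
            "lr_scheduler": {"scheduler": scheduler, "interval": "step"},
        }
```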
Replies: 1 comment

It would be nice if this were possible by doing:

```python
class MyCLI(LightningCLI):
    def add_arguments_to_parser(self, parser):
        parser.link_arguments(
            "trainer.estimated_stepping_batches",
            "model.scheduler.init_args.total_steps",
            apply_on="instantiate",
        )
```

Unfortunately this is not possible, because the trainer is instantiated by the CLI itself, so `trainer.estimated_stepping_batches` cannot be used as a link source. Though, I think there could be a non-optimal workaround. Something like:

```python
import torch
from lightning.pytorch import LightningModule
from lightning.pytorch.cli import (
    LightningCLI,
    LRSchedulerCallable,
    OptimizerCallable,
)


class MyModel(LightningModule):
    def __init__(
        self,
        optimizer: OptimizerCallable = torch.optim.Adam,
        scheduler: LRSchedulerCallable = torch.optim.lr_scheduler.ConstantLR,
    ):
        super().__init__()
        self.optimizer = optimizer
        self.scheduler = scheduler

    def configure_optimizers(self):
        optimizer = self.optimizer(self.parameters())
        scheduler = self.scheduler(optimizer)
        if isinstance(scheduler, torch.optim.lr_scheduler.OneCycleLR):
            # Replace whatever was configured with the step count the
            # trainer expects to run.
            scheduler.total_steps = self.trainer.estimated_stepping_batches
        return {"optimizer": optimizer, "lr_scheduler": scheduler}


if __name__ == "__main__":
    cli = LightningCLI(MyModel, auto_configure_optimizers=False)
```

Note that I haven't tested it; it is just to illustrate the idea. Not optimal because, when wanting to use `OneCycleLR`, the config would still have to provide a placeholder `total_steps` just so the scheduler can be instantiated before the value is overwritten.
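For illustration, a config driving this workaround could look like the sketch below. The class paths are real, but the file name and hyperparameter values are made-up assumptions, and `total_steps` gets a placeholder because `OneCycleLR` cannot be constructed without it (or without `epochs` and `steps_per_epoch`):

```yaml
# config.yaml (hypothetical)
model:
  optimizer:
    class_path: torch.optim.Adam
    init_args:
      lr: 0.001
  scheduler:
    class_path: torch.optim.lr_scheduler.OneCycleLR
    init_args:
      max_lr: 0.01
      total_steps: 1  # placeholder; overwritten in configure_optimizers
```

One more caveat with this sketch: recent PyTorch versions precompute the phase boundaries of `OneCycleLR` in `__init__` from `total_steps`, so assigning to `scheduler.total_steps` afterwards may not fully update the schedule; constructing the scheduler with the final step count would be safer.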