Schedulers¶
Factories¶
- eztorch.schedulers.scheduler_factory(optimizer, name, params={}, interval='epoch', num_steps_per_epoch=None, scaler=None, batch_size=None, multiply_lr=1.0)[source]¶
- Scheduler factory.
- Parameters:
- optimizer (Optimizer) – Optimizer to wrap the scheduler around.
- name (str) – Name of the scheduler, used to retrieve the scheduler constructor from the _SCHEDULERS dict.
- params (DictConfig, optional) – Scheduler parameters passed to the scheduler constructor. Default: {}
- interval (str, optional) – Interval at which to call step(); if 'epoch', step() is called at each epoch. Default: 'epoch'
- num_steps_per_epoch (Optional[int], optional) – Number of steps per epoch. Useful for some schedulers. Default: None
- scaler (Optional[str], optional) – Scaling rule for the initial learning rate. Default: None
- batch_size (Optional[int], optional) – Batch size of the model input. Default: None
- multiply_lr (float, optional) – Factor by which to multiply the learning rate. Also applied to the warmup and minimum learning rates. Default: 1.0
- Return type:
- Dict[str, Any]
- Returns:
- Scheduler configuration for PyTorch Lightning.
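For illustration, a minimal sketch of calling the factory from a training setup. The scheduler name "linear_warmup_cosine_annealing" and the keys inside params are assumptions here; the valid names are the keys of the _SCHEDULERS dict.
>>> from omegaconf import OmegaConf
>>> from torch import nn
>>> from torch.optim import SGD
>>> from eztorch.schedulers import scheduler_factory
>>> model = nn.Linear(10, 1)
>>> optimizer = SGD(model.parameters(), lr=0.1)
>>> scheduler_config = scheduler_factory(
...     optimizer,
...     name="linear_warmup_cosine_annealing",  # assumed key in _SCHEDULERS
...     params=OmegaConf.create({"warmup_epochs": 10, "max_epochs": 100}),
...     interval="epoch",
... )
>>> # The returned dict follows the PyTorch Lightning lr_scheduler config format
>>> # (e.g. {"scheduler": <scheduler>, "interval": "epoch", ...}) and can be
>>> # returned from configure_optimizers alongside the optimizer.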
Custom Schedulers¶
Cosine Annealing scheduler¶
- class eztorch.schedulers.LinearWarmupCosineAnnealingLR(optimizer, warmup_epochs, max_epochs, warmup_start_lr=0.0, eta_min=0.0, last_epoch=-1)[source]¶
- Sets the learning rate of each parameter group to follow a linear warmup schedule between warmup_start_lr and base_lr, followed by a cosine annealing schedule between base_lr and eta_min (see the sketch after the warnings below).
- Parameters:
- optimizer (Optimizer) – Wrapped optimizer.
- warmup_epochs (int) – Maximum number of iterations for linear warmup.
- max_epochs (int) – Maximum number of iterations.
- warmup_start_lr (float, optional) – Learning rate at the start of the linear warmup. Default: 0.0
- eta_min (float, optional) – Minimum learning rate. Default: 0.0
- last_epoch (int, optional) – The index of the last epoch. Default: -1
- Warning
- It is recommended to call step() for LinearWarmupCosineAnnealingLR after each iteration, as calling it after each epoch will keep the starting lr at warmup_start_lr for the first epoch, which is 0 in most cases.
- Warning
- Passing an epoch to step() is deprecated and comes with an EPOCH_DEPRECATION_WARNING. It calls the _get_closed_form_lr() method for this scheduler instead of get_lr(). Though this does not change the behavior of the scheduler, when passing the epoch param to step(), the user should call step() before calling the train and validation methods.
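For intuition, a rough sketch of the per-epoch learning rate this schedule describes; this is an approximation of the documented behaviour, not the class implementation, and the exact boundary handling (e.g. the warmup denominator) may differ.
>>> import math
>>> def sketch_lr(epoch, base_lr, warmup_start_lr=0.0, eta_min=0.0, warmup_epochs=10, max_epochs=40):
...     if epoch < warmup_epochs:
...         # linear warmup from warmup_start_lr to base_lr
...         return warmup_start_lr + epoch * (base_lr - warmup_start_lr) / max(1, warmup_epochs)
...     # cosine annealing from base_lr down to eta_min
...     progress = (epoch - warmup_epochs) / max(1, max_epochs - warmup_epochs)
...     return eta_min + 0.5 * (base_lr - eta_min) * (1 + math.cos(math.pi * progress))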
- Example::
>>> layer = nn.Linear(10, 1)
>>> optimizer = Adam(layer.parameters(), lr=0.02)
>>> scheduler = LinearWarmupCosineAnnealingLR(optimizer, warmup_epochs=10, max_epochs=40)
>>> #
>>> # the default case
>>> for epoch in range(40):
...     # train(...)
...     # validate(...)
...     scheduler.step()
>>> #
>>> # passing epoch param case
>>> for epoch in range(40):
...     scheduler.step(epoch)
...     # train(...)
...     # validate(...)
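When following the first warning and calling step() after every iteration, warmup_epochs and max_epochs are counted in scheduler steps rather than epochs, since the scheduler only tracks how many times step() has been called. A minimal sketch, assuming a train_dataloader iterable that is not part of the example above:
>>> scheduler = LinearWarmupCosineAnnealingLR(
...     optimizer,
...     warmup_epochs=10 * len(train_dataloader),
...     max_epochs=40 * len(train_dataloader),
... )
>>> for epoch in range(40):
...     for batch in train_dataloader:
...         # train(batch)
...         scheduler.step()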