Schedulers#

Factories#

eztorch.schedulers.scheduler_factory(optimizer, name, params={}, interval='epoch', num_steps_per_epoch=None, scaler=None, batch_size=None, multiply_lr=1.0)[source]#

Scheduler factory.

Parameters:
  • optimizer – Optimizer to wrap around.

  • name – Name of the scheduler, used to retrieve the scheduler constructor from the _SCHEDULERS dict.

  • params (optional) – Scheduler parameters for the scheduler constructor.
    Default: {}

  • interval (optional) – Interval to call step(); if 'epoch', step() is called at each epoch.
    Default: 'epoch'

  • num_steps_per_epoch (optional) – Number of steps per epoch. Useful for some schedulers.
    Default: None

  • scaler (optional) – Scaling rule for the initial learning rate.
    Default: None

  • batch_size (optional) – Batch size for the input of the model.
    Default: None

  • multiply_lr (optional) – Factor by which to multiply the learning rate. Also applied to the warmup and minimum learning rates.
    Default: 1.0

Returns:

Scheduler configuration for PyTorch Lightning.
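
For illustration, a minimal sketch of how the factory might be used; the registered scheduler name, the params keys, and the Lightning hook mentioned in the comment are assumptions for this sketch, not guaranteed entries of _SCHEDULERS or of the returned configuration:

    # Hypothetical sketch; the scheduler name and params keys are assumptions.
    from torch import nn
    from torch.optim import SGD

    from eztorch.schedulers import scheduler_factory

    model = nn.Linear(10, 1)
    optimizer = SGD(model.parameters(), lr=0.1)

    scheduler_config = scheduler_factory(
        optimizer,
        name="linear_warmup_cosine_annealing_lr",  # assumed key in _SCHEDULERS
        params={"warmup_epochs": 10, "max_epochs": 40},
        interval="step",             # call step() at each iteration instead of each epoch
        num_steps_per_epoch=100,     # useful for some schedulers when stepping per iteration
    )

    # The returned configuration is meant for PyTorch Lightning, e.g. returned
    # from LightningModule.configure_optimizers() alongside the optimizer:
    #     return [optimizer], [scheduler_config]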

Custom Schedulers#

Cosine Annealing scheduler#

class eztorch.schedulers.LinearWarmupCosineAnnealingLR(optimizer, warmup_epochs, max_epochs, warmup_start_lr=0.0, eta_min=0.0, last_epoch=-1)[source]#

Sets the learning rate of each parameter group to follow a linear warmup schedule between warmup_start_lr and base_lr followed by a cosine annealing schedule between base_lr and eta_min.

Parameters:
  • optimizer – Wrapped optimizer.

  • warmup_epochs – Maximum number of iterations for linear warmup.

  • max_epochs – Maximum number of iterations.

  • warmup_start_lr (optional) – Learning rate to start the linear warmup.
    Default: 0.0

  • eta_min (optional) – Minimum learning rate.
    Default: 0.0

  • last_epoch (optional) – The index of the last epoch.
    Default: -1
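
For intuition, a rough sketch of the schedule shape as a function of the epoch; this is an approximation for illustration only, not the scheduler's exact closed-form computation:

    import math

    def sketch_lr(epoch, warmup_epochs, max_epochs, base_lr,
                  warmup_start_lr=0.0, eta_min=0.0):
        # Approximate shape only; not the library's exact formula.
        if epoch < warmup_epochs:
            # Linear warmup from warmup_start_lr to base_lr.
            return warmup_start_lr + (base_lr - warmup_start_lr) * epoch / max(1, warmup_epochs)
        # Cosine annealing from base_lr down to eta_min.
        progress = (epoch - warmup_epochs) / max(1, max_epochs - warmup_epochs)
        return eta_min + 0.5 * (base_lr - eta_min) * (1 + math.cos(math.pi * progress))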

Warning

It is recommended to call step() for LinearWarmupCosineAnnealingLR after each iteration: calling it after each epoch keeps the learning rate at warmup_start_lr (0 in most cases) for the whole first epoch.

Warning

passing epoch to step() is being deprecated and comes with an EPOCH_DEPRECATION_WARNING. It calls the _get_closed_form_lr() method for this scheduler instead of get_lr(). Though this does not change the behavior of the scheduler, when passing epoch param to step(), the user should call the step() function before calling train and validation methods.

Example::
>>> import torch.nn as nn
>>> from torch.optim import Adam
>>> from eztorch.schedulers import LinearWarmupCosineAnnealingLR
>>> layer = nn.Linear(10, 1)
>>> optimizer = Adam(layer.parameters(), lr=0.02)
>>> scheduler = LinearWarmupCosineAnnealingLR(optimizer, warmup_epochs=10, max_epochs=40)
>>> #
>>> # the default case
>>> for epoch in range(40):
...     # train(...)
...     # validate(...)
...     scheduler.step()
>>> #
>>> # passing the epoch parameter
>>> for epoch in range(40):
...     scheduler.step(epoch)
...     # train(...)
...     # validate(...)
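
Following the first warning above, a sketch of stepping the scheduler after every iteration rather than every epoch; num_steps_per_epoch and the conversion of the epoch counts to iteration counts are illustrative assumptions:

    # Step every iteration, as recommended; epoch counts converted to iterations.
    num_steps_per_epoch = 100   # assumed value for illustration
    scheduler = LinearWarmupCosineAnnealingLR(
        optimizer,
        warmup_epochs=10 * num_steps_per_epoch,   # warmup length in iterations
        max_epochs=40 * num_steps_per_epoch,      # total schedule length in iterations
    )
    for epoch in range(40):
        for step in range(num_steps_per_epoch):
            # train_step(...)
            scheduler.step()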