pytorch_lightning_spells.lr_schedulers module

Classes:

BaseLRScheduler(optimizer[, last_epoch, verbose])

CosineAnnealingScheduler(optimizer, T_max[, ...])

ExponentialLR(optimizer, min_lr_ratio, ...)

Exponentially increases the learning rate between two boundaries over a number of iterations.

LinearLR(optimizer, min_lr_ratio, total_epochs)

Linearly increases or decreases the learning rate between two boundaries over a number of iterations.

MultiStageScheduler(schedulers, start_at_epochs)

class pytorch_lightning_spells.lr_schedulers.BaseLRScheduler(optimizer, last_epoch=-1, verbose=False)[source]

Bases: torch.optim.lr_scheduler._LRScheduler

clear_optimizer()[source]
switch_optimizer(optimizer)[source]
class pytorch_lightning_spells.lr_schedulers.CosineAnnealingScheduler(optimizer, T_max, eta_min=0, last_epoch=-1, verbose=False)[source]

Bases: torch.optim.lr_scheduler.CosineAnnealingLR, pytorch_lightning_spells.lr_schedulers.BaseLRScheduler

class pytorch_lightning_spells.lr_schedulers.ExponentialLR(optimizer, min_lr_ratio, total_epochs, last_epoch=-1)[source]

Bases: pytorch_lightning_spells.lr_schedulers.BaseLRScheduler

Exponentially increases the learning rate between two boundaries over a number of iterations.

Mainly used by LR finders.

__init__(optimizer, min_lr_ratio, total_epochs, last_epoch=-1)[source]

Initialize a scheduler.

Parameters
  • optimizer (Union[torch.optim.Optimizer, apex.fp16_utils.fp16_optimizer.FP16_Optimizer]) –

  • min_lr_ratio (float) – min_lr_ratio * base_lr will be the starting learning rate.

  • total_epochs (int) – the total number of “steps” in this run.

  • last_epoch (int, optional) – the index of the last epoch, by default -1.

get_lr()[source]
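For intuition, the exponential sweep an LR finder needs can be sketched in plain Python. This is a hedged reconstruction of the schedule implied by the parameters above (start at min_lr_ratio * base_lr, grow exponentially to base_lr over total_epochs steps), not the library's actual implementation:

```python
def exponential_lr(base_lr, min_lr_ratio, total_epochs, step):
    """Exponentially interpolate from min_lr_ratio * base_lr up to base_lr.

    At step 0 the rate is min_lr_ratio * base_lr; at step total_epochs
    it reaches base_lr, multiplying by a constant factor each step.
    """
    progress = step / total_epochs  # fraction of the sweep completed
    return base_lr * min_lr_ratio ** (1.0 - progress)

# Sweep from 1e-7 (= 1e-3 * 1e-4) up to 1e-3 over 100 steps.
lrs = [exponential_lr(1e-3, 1e-4, 100, s) for s in range(101)]
```

Plotting loss against these exponentially spaced rates is the usual way to pick a learning rate from a finder run.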
class pytorch_lightning_spells.lr_schedulers.LinearLR(optimizer, min_lr_ratio, total_epochs, upward=True, last_epoch=-1)[source]

Bases: pytorch_lightning_spells.lr_schedulers.BaseLRScheduler

Linearly increases or decreases the learning rate between two boundaries over a number of iterations.

Parameters
  • optimizer (torch.optim.optimizer.Optimizer) –

  • min_lr_ratio (float) –

  • total_epochs (float) –

  • upward (bool) –

  • last_epoch (int) –

__init__(optimizer, min_lr_ratio, total_epochs, upward=True, last_epoch=-1)[source]

Initialize a scheduler.

Parameters
  • optimizer (Union[torch.optim.Optimizer, apex.fp16_utils.fp16_optimizer.FP16_Optimizer]) –

  • min_lr_ratio (float) – min_lr_ratio * base_lr will be the starting learning rate.

  • total_epochs (float) – the total number of “steps” in this run.

  • upward (bool) – whether the learning rate goes up or down. Defaults to True.

  • last_epoch (int) – the index of the last epoch. Defaults to -1.

get_lr()[source]
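The linear interpolation behind this scheduler can be sketched in plain Python. This is a hedged sketch of the formula the parameters suggest (min_lr_ratio * base_lr at one end, base_lr at the other, with upward choosing the direction), not the library's actual get_lr code:

```python
def linear_lr(base_lr, min_lr_ratio, total_epochs, step, upward=True):
    """Linearly interpolate between min_lr_ratio * base_lr and base_lr."""
    progress = min(step / total_epochs, 1.0)  # clamp after the run ends
    if not upward:
        progress = 1.0 - progress  # decay instead of warmup
    return base_lr * (min_lr_ratio + (1.0 - min_lr_ratio) * progress)

# Warmup: from 10% of base_lr to the full base_lr over 1000 steps.
warmup = [linear_lr(1e-3, 0.1, 1000, s) for s in (0, 500, 1000)]
```

With upward=True this gives the familiar linear warmup; upward=False gives a linear decay over the same boundaries.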
class pytorch_lightning_spells.lr_schedulers.MultiStageScheduler(schedulers, start_at_epochs, last_epoch=-1)[source]

Bases: torch.optim.lr_scheduler._LRScheduler

Parameters
  • schedulers (Sequence) –

  • start_at_epochs (Sequence[int]) –

  • last_epoch (int) –

__init__(schedulers, start_at_epochs, last_epoch=-1)[source]
Parameters
  • schedulers (Sequence) –

  • start_at_epochs (Sequence[int]) –

  • last_epoch (int) –

clear_optimizer()[source]
load_state_dict(state_dict)[source]

Loads the scheduler's state.

Parameters

state_dict (dict) – scheduler state. Should be an object returned from a call to state_dict().

state_dict()[source]

Returns the state of the scheduler as a dict.

It contains an entry for every variable in self.__dict__ which is not the optimizer.

step(epoch=None)[source]
switch_optimizer(optimizer)[source]
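The stage-selection logic can be sketched in plain Python: at each step, the active scheduler is the one whose start epoch is the largest value in start_at_epochs not exceeding the current step. This is a hedged sketch of that dispatch rule, assuming start_at_epochs is sorted ascending, not the class's actual step implementation:

```python
def active_stage(start_at_epochs, epoch):
    """Return the index of the stage whose start epoch is the largest
    one that is <= epoch. Assumes start_at_epochs is sorted ascending
    and begins at 0 (e.g. warmup first, then a decay stage)."""
    idx = 0
    for i, start in enumerate(start_at_epochs):
        if epoch >= start:
            idx = i
    return idx

# Two stages: e.g. a LinearLR warmup for steps 0-9, then a
# CosineAnnealingScheduler from step 10 onward.
starts = [0, 10]
```

A typical configuration pairs `schedulers=[LinearLR(...), CosineAnnealingScheduler(...)]` with `start_at_epochs=[0, warmup_steps]` so warmup hands off to cosine annealing.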