pytorch_lightning_spells.losses module

Classes:

LabelSmoothCrossEntropy(eps)

Cross Entropy with Label Smoothing

MixupSoftmaxLoss([class_weights, reduction, ...])

A softmax loss that supports MixUp augmentation.

Poly1CrossEntropyLoss([epsilon, reduction, ...])

Poly-1 Cross-Entropy Loss

Poly1FocalLoss(num_classes[, epsilon, ...])

Poly-1 Focal Loss

class pytorch_lightning_spells.losses.LabelSmoothCrossEntropy(eps)[source]

Bases: torch.nn.modules.module.Module

Cross Entropy with Label Smoothing

Reference: wangleiofficial/lable-smoothing-pytorch

The ground truth label will have a value of 1-eps in the target vector.

Parameters

eps (float) – the smoothing factor.

forward(preds, targets, weight=None)[source]
training: bool
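A minimal sketch of the smoothing scheme, assuming the fast.ai-style formulation (the helper name label_smooth_ce is hypothetical, not the library API): the negative log-likelihood of the true class is weighted by 1 - eps, and the remaining eps mass is spread uniformly over all classes.

```python
import torch
import torch.nn.functional as F

def label_smooth_ce(preds, targets, eps):
    """Cross entropy where the true class keeps roughly 1 - eps of the target mass."""
    log_probs = F.log_softmax(preds, dim=-1)
    # Standard NLL term for the ground-truth class.
    nll = F.nll_loss(log_probs, targets, reduction="mean")
    # Uniform term: average negative log-probability over all classes.
    smooth = -log_probs.mean(dim=-1).mean()
    return (1 - eps) * nll + eps * smooth

preds = torch.tensor([[2.0, 0.5, -1.0]])
targets = torch.tensor([0])
loss = label_smooth_ce(preds, targets, eps=0.1)
```

With eps=0 this reduces to the ordinary cross-entropy.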
class pytorch_lightning_spells.losses.MixupSoftmaxLoss(class_weights=None, reduction='mean', label_smooth_eps=0, poly1_eps=0)[source]

Bases: torch.nn.modules.module.Module

A softmax loss that supports MixUp augmentation.

It requires the input batch to be manipulated into a certain format. Works best with MixUpCallback, CutMixCallback, and SnapMixCallback.

Reference: Fast.ai’s implementation

Parameters
  • class_weights (torch.Tensor, optional) – The weight of each class. Defaults to the same weight.

  • reduction (str, optional) – Loss reduction method. Defaults to ‘mean’.

  • label_smooth_eps (float, optional) – If larger than zero, use LabelSmoothCrossEntropy instead of plain cross-entropy. Defaults to 0.

  • poly1_eps (float, optional) – If larger than zero, add the Poly-1 correction term to the loss. Defaults to 0.

forward(output, target)[source]

The forward pass.

The target tensor should have three columns:

  1. the first class.

  2. the second class.

  3. the lambda value to mix the above two classes.

Parameters
  • output (torch.Tensor) – the model output.

  • target (torch.Tensor) – Shaped (batch_size, 3).

Returns

the resulting loss

Return type

torch.Tensor

training: bool
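A sketch of the mixed loss this class computes, under the assumption (consistent with the fast.ai implementation it references) that the result is a lambda-weighted blend of the cross-entropies against the two classes; mixup_softmax_loss is a hypothetical stand-in for the class itself.

```python
import torch
import torch.nn.functional as F

def mixup_softmax_loss(output, target, reduction="mean"):
    # target columns: [first class, second class, mixing lambda]
    y_a = target[:, 0].long()
    y_b = target[:, 1].long()
    lam = target[:, 2]
    loss_a = F.cross_entropy(output, y_a, reduction="none")
    loss_b = F.cross_entropy(output, y_b, reduction="none")
    # Blend the two per-sample losses by lambda.
    loss = lam * loss_a + (1 - lam) * loss_b
    return loss.mean() if reduction == "mean" else loss

output = torch.tensor([[1.0, 2.0], [0.5, -0.5]])
target = torch.tensor([[0.0, 1.0, 0.7],   # 70% class 0, 30% class 1
                       [1.0, 0.0, 0.4]])  # 40% class 1, 60% class 0
loss = mixup_softmax_loss(output, target)
```

When every lambda is 1, the loss collapses to plain cross-entropy against the first class column.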
class pytorch_lightning_spells.losses.Poly1CrossEntropyLoss(epsilon=1.0, reduction='none', weight=None)[source]

Bases: torch.nn.modules.module.Module

Poly-1 Cross-Entropy Loss

Adapted from abhuse/polyloss-pytorch.

Reference: PolyLoss: A Polynomial Expansion Perspective of Classification Loss Functions.

Parameters
  • epsilon (float) –

  • reduction (str) –

  • weight (Optional[torch.Tensor]) –

forward(logits, labels, **kwargs)[source]

Parameters
  • logits (torch.Tensor) – tensor of shape [N, num_classes].

  • labels (torch.Tensor) – tensor of shape [N].

Returns

the Poly-1 cross-entropy loss

training: bool
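Following the PolyLoss paper, the Poly-1 variant adds epsilon * (1 - pt) to the cross-entropy, where pt is the predicted probability of the true class. The helper poly1_ce below is an illustrative sketch, not the library API.

```python
import torch
import torch.nn.functional as F

def poly1_ce(logits, labels, epsilon=1.0):
    # pt: softmax probability assigned to the ground-truth class.
    probs = F.softmax(logits, dim=-1)
    pt = probs.gather(1, labels.unsqueeze(1)).squeeze(1)
    ce = F.cross_entropy(logits, labels, reduction="none")
    # Poly-1: keep only the first polynomial correction term.
    return ce + epsilon * (1 - pt)

logits = torch.tensor([[0.2, 1.5, -0.3]])
labels = torch.tensor([1])
loss = poly1_ce(logits, labels, epsilon=1.0)
```

Setting epsilon=0 recovers ordinary cross-entropy.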
class pytorch_lightning_spells.losses.Poly1FocalLoss(num_classes, epsilon=1.0, alpha=0.25, gamma=2.0, reduction='none', weight=None, label_is_onehot=False)[source]

Bases: torch.nn.modules.module.Module

Poly-1 Focal Loss

Adapted from abhuse/polyloss-pytorch.

Reference: PolyLoss: A Polynomial Expansion Perspective of Classification Loss Functions.

Parameters
  • num_classes (int) –

  • epsilon (float) –

  • alpha (float) –

  • gamma (float) –

  • reduction (str) –

  • weight (torch.Tensor) –

  • label_is_onehot (bool) –

forward(logits, labels)[source]

Parameters
  • logits (torch.Tensor) – output of the neural network, of shape [N, num_classes] or [N, num_classes, …].

  • labels (torch.Tensor) – ground truth tensor of shape [N] or [N, …] with class ids if label_is_onehot was set to False, otherwise a one-hot encoded tensor of the same shape as logits.

Returns

the Poly-1 focal loss

training: bool
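A sketch of the focal variant, assuming the sigmoid-based focal loss used in abhuse/polyloss-pytorch plus the epsilon * (1 - pt)^(gamma + 1) Poly-1 term; the helper poly1_focal and its exact reduction behavior are illustrative assumptions, not the library's signature.

```python
import torch
import torch.nn.functional as F

def poly1_focal(logits, labels_onehot, epsilon=1.0, alpha=0.25, gamma=2.0):
    p = torch.sigmoid(logits)
    ce = F.binary_cross_entropy_with_logits(logits, labels_onehot, reduction="none")
    # pt: probability assigned to the correct binary decision for each class.
    pt = labels_onehot * p + (1 - labels_onehot) * (1 - p)
    focal = ce * (1 - pt) ** gamma
    if alpha >= 0:
        # Class-balancing factor: alpha for positives, 1 - alpha for negatives.
        alpha_t = alpha * labels_onehot + (1 - alpha) * (1 - labels_onehot)
        focal = alpha_t * focal
    # Poly-1 correction term for the focal loss.
    return focal + epsilon * (1 - pt) ** (gamma + 1)

logits = torch.tensor([[2.0, -1.0]])
onehot = torch.tensor([[1.0, 0.0]])
loss = poly1_focal(logits, onehot)  # element-wise loss, one value per class
```

With reduction='none' (the default of the class above), the loss is returned element-wise rather than averaged.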