19 Types of Loss Functions in PyTorch

Source: Algorithm Advancement



This article is about 1,800 words; suggested reading time: 8 minutes.
This article introduces you to different types of loss functions.


Source: CSDN-mingo_敏

Address:

https://blog.csdn.net/shanglianlm/article/details/85019768

Basic Usage

criterion = LossCriterion() # the constructor takes its own parameters
loss = criterion(x, y)      # the call itself also takes parameters
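For example, a minimal concrete run with MSELoss (the shapes here are arbitrary, for illustration only):

import torch
import torch.nn as nn

criterion = nn.MSELoss()                    # constructor arguments, e.g. reduction
x = torch.randn(3, 5, requires_grad=True)   # model output
y = torch.randn(3, 5)                       # target
loss = criterion(x, y)
loss.backward()                             # gradients flow back through x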
19 Types of Loss Functions

1 L1 Loss L1Loss

Calculates the mean absolute error (MAE) between each element of the output and the target.

torch.nn.L1Loss(reduction='mean')

Parameters:

reduction – Three values: none: no reduction; mean: returns the average of the loss; sum: returns the sum of the loss. Default: mean.
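A quick sketch of how the three reduction modes differ (values are random, for illustration only):

import torch
import torch.nn as nn

x = torch.randn(2, 3)
y = torch.randn(2, 3)
print(nn.L1Loss(reduction='none')(x, y))  # per-element losses, shape (2, 3)
print(nn.L1Loss(reduction='mean')(x, y))  # scalar: average of all elements
print(nn.L1Loss(reduction='sum')(x, y))   # scalar: sum of all elements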

2 Mean Squared Error Loss MSELoss

Calculates the mean squared error between output and target.

torch.nn.MSELoss(reduction='mean')

Parameters:

reduction – Three values: none: no reduction; mean: returns the average of the loss; sum: returns the sum of the loss. Default: mean.
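As a sanity check, the default reduction matches the elementwise definition (illustrative sketch):

import torch
import torch.nn as nn

mse = nn.MSELoss()
x = torch.randn(3, 5)
y = torch.randn(3, 5)
loss = mse(x, y)
assert torch.allclose(loss, ((x - y) ** 2).mean())  # mean of squared differences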

3 Cross Entropy Loss CrossEntropyLoss

Useful for training classification problems with C classes. The optional argument weight must be a 1D Tensor that assigns a weight to each class; this is especially helpful for imbalanced training sets.

In multi-class tasks, the softmax activation is usually paired with the cross-entropy loss: cross-entropy measures the difference between two probability distributions, but a network's raw output is just a vector, not a probability distribution, so softmax is applied first to "normalize" the vector into one, and the cross-entropy loss is then computed on the result. Note that PyTorch's CrossEntropyLoss applies LogSoftmax internally, so it should be fed raw logits.

The loss for a sample with scores x and true class class is:

loss(x, class) = -log( exp(x[class]) / sum_j exp(x[j]) ) = -x[class] + log( sum_j exp(x[j]) )

torch.nn.CrossEntropyLoss(weight=None, ignore_index=-100, reduction='mean')

Parameters:

weight (Tensor, optional) – Custom weight for each class. Must be a Tensor of length C.

ignore_index (int, optional) – Set a target value that will be ignored, thus not affecting the input gradient.

reduction – Three values: none: no reduction; mean: returns the average of the loss; sum: returns the sum of the loss. Default: mean.
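A minimal sketch: the input is raw logits (no softmax beforehand) and the target holds class indices, not one-hot vectors:

import torch
import torch.nn as nn

ce = nn.CrossEntropyLoss()
logits = torch.randn(4, 10, requires_grad=True)  # 4 samples, 10 classes, raw scores
targets = torch.tensor([1, 0, 9, 3])             # class indices in [0, C)
loss = ce(logits, targets)
loss.backward()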

4 KL Divergence Loss KLDivLoss

Calculates the KL divergence between input and target. KL divergence is useful for measuring the distance between distributions, and is effective when performing direct regression over the space of a (discretely sampled) continuous output distribution. Note that the input is expected to contain log-probabilities.

torch.nn.KLDivLoss(reduction='mean')

Parameters:

reduction – Three values: none: no reduction; mean: returns the average of the loss; sum: returns the sum of the loss. Default: mean.
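A minimal sketch; the input must be log-probabilities while the target holds probabilities, and reduction='batchmean' is the variant that matches the mathematical definition of KL divergence:

import torch
import torch.nn as nn
import torch.nn.functional as F

kl = nn.KLDivLoss(reduction='batchmean')
input = F.log_softmax(torch.randn(4, 10), dim=1)  # log-probabilities
target = F.softmax(torch.randn(4, 10), dim=1)     # probabilities
loss = kl(input, target)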

5 Binary Cross Entropy Loss BCELoss

Cross-entropy loss for binary classification tasks. Often used to measure reconstruction error, for example in autoencoders. Note that the target values t[i] must lie in the range [0, 1].

torch.nn.BCELoss(weight=None, reduction='mean')

Parameters:

weight (Tensor, optional) – Custom weight for each batch element’s loss. Must be a Tensor of length “nbatch”.
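A minimal sketch; the predictions must already be probabilities in (0, 1), e.g. produced by a sigmoid:

import torch
import torch.nn as nn

bce = nn.BCELoss()
probs = torch.sigmoid(torch.randn(8))   # predictions in (0, 1)
targets = torch.empty(8).random_(2)     # binary targets, 0. or 1.
loss = bce(probs, targets)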

6 BCEWithLogitsLoss

BCEWithLogitsLoss folds the Sigmoid layer into the BCELoss class. This combined version is more numerically stable than a separate Sigmoid layer followed by BCELoss, because merging the two operations into one layer allows the log-sum-exp trick to be used for numerical stability.

torch.nn.BCEWithLogitsLoss(weight=None, reduction='mean', pos_weight=None)

Parameters:

weight (Tensor, optional) – Custom weight for each batch element’s loss. Must be a Tensor of length “nbatch”.
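A minimal sketch; unlike BCELoss, the input here is raw logits. The pos_weight value of 3.0 is an arbitrary choice for illustration (e.g. to upweight a rare positive class):

import torch
import torch.nn as nn

loss_fn = nn.BCEWithLogitsLoss(pos_weight=torch.tensor(3.0))
logits = torch.randn(8)                 # raw scores, no sigmoid applied
targets = torch.empty(8).random_(2)     # binary targets, 0. or 1.
loss = loss_fn(logits, targets)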

7 Margin Ranking Loss

torch.nn.MarginRankingLoss(margin=0.0, reduction='mean')

The loss for each instance in the mini-batch is:

loss(x1, x2, y) = max(0, -y * (x1 - x2) + margin)

Parameters:

margin: default value 0
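A minimal sketch; y = 1 means x1 should be ranked higher than x2, and y = -1 the opposite (the margin of 0.5 is arbitrary):

import torch
import torch.nn as nn

rank_loss = nn.MarginRankingLoss(margin=0.5)
x1 = torch.randn(4)
x2 = torch.randn(4)
y = torch.tensor([1., 1., -1., -1.])
loss = rank_loss(x1, x2, y)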

8 Hinge Embedding Loss

torch.nn.HingeEmbeddingLoss(margin=1.0, reduction='mean')

The loss for the n-th instance in the mini-batch is:

l_n = x_n                    if y_n = 1
l_n = max(0, margin - x_n)   if y_n = -1

Parameters:

margin: default value 1
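A minimal sketch; the inputs are typically pairwise distances, with y = 1 for similar pairs and y = -1 for dissimilar ones:

import torch
import torch.nn as nn

hinge = nn.HingeEmbeddingLoss(margin=1.0)
distances = torch.randn(6).abs()                 # e.g. L1 distances between pairs
y = torch.tensor([1., -1., 1., -1., 1., -1.])
loss = hinge(distances, y)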

9 Multi-label Classification Loss MultiLabelMarginLoss

torch.nn.MultiLabelMarginLoss(reduction='mean')

The loss for each sample in the mini-batch is:

loss(x, y) = sum_{i,j} max(0, 1 - (x[y[j]] - x[i])) / x.size(0)

where i ranges over all class indices with i != y[j], and j ranges over the valid (non-negative) entries of y.
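A minimal sketch; the target holds class indices padded with -1 (only indices before the first -1 count as true labels):

import torch
import torch.nn as nn

loss_fn = nn.MultiLabelMarginLoss()
x = torch.tensor([[0.1, 0.2, 0.4, 0.8]])   # scores for 4 classes
y = torch.tensor([[3, 0, -1, -1]])         # true labels are classes 3 and 0
loss = loss_fn(x, y)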

10 Smooth L1 Loss SmoothL1Loss

Also known as the Huber loss function. It behaves like MSE for small errors and like L1 for large ones, which makes it less sensitive to outliers.

torch.nn.SmoothL1Loss(reduction='mean')
loss(x, y) = (1/n) * sum_i z_i

where

z_i = 0.5 * (x_i - y_i)^2    if |x_i - y_i| < 1
z_i = |x_i - y_i| - 0.5      otherwise
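A minimal sketch:

import torch
import torch.nn as nn

smooth_l1 = nn.SmoothL1Loss()
pred = torch.randn(5)
target = torch.randn(5)
loss = smooth_l1(pred, target)   # quadratic near zero, linear for large errors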

11 Logistic Loss for Binary Classification SoftMarginLoss

torch.nn.SoftMarginLoss(reduction='mean')

loss(x, y) = sum_i log(1 + exp(-y[i] * x[i])) / x.nelement()
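A minimal sketch; targets must be +1 or -1:

import torch
import torch.nn as nn

soft_margin = nn.SoftMarginLoss()
x = torch.randn(5)
y = torch.tensor([1., -1., 1., -1., 1.])
loss = soft_margin(x, y)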

12 Multi-label One-Versus-All Loss MultiLabelSoftMarginLoss

torch.nn.MultiLabelSoftMarginLoss(weight=None, reduction='mean')

loss(x, y) = -(1/C) * sum_i [ y[i] * log(sigmoid(x[i])) + (1 - y[i]) * log(1 - sigmoid(x[i])) ]
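A minimal sketch; the target is a multi-hot matrix with one 0/1 entry per label:

import torch
import torch.nn as nn

loss_fn = nn.MultiLabelSoftMarginLoss()
logits = torch.randn(3, 5)                # 3 samples, 5 labels
targets = torch.empty(3, 5).random_(2)    # multi-hot 0/1 targets
loss = loss_fn(logits, targets)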

13 Cosine Loss CosineEmbeddingLoss

torch.nn.CosineEmbeddingLoss(margin=0.0, reduction='mean')

loss(x1, x2, y) = 1 - cos(x1, x2)               if y = 1
loss(x1, x2, y) = max(0, cos(x1, x2) - margin)   if y = -1

Parameters:

margin: default value 0
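A minimal sketch; y = 1 pulls an embedding pair together, y = -1 pushes it apart (the 128-dim embeddings and margin are arbitrary):

import torch
import torch.nn as nn

cos_loss = nn.CosineEmbeddingLoss(margin=0.2)
a = torch.randn(4, 128)
b = torch.randn(4, 128)
y = torch.tensor([1., -1., 1., -1.])
loss = cos_loss(a, b, y)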

14 Multi-class Hinge Loss MultiMarginLoss

torch.nn.MultiMarginLoss(p=1, margin=1.0, weight=None, reduction='mean')

loss(x, y) = sum_{i != y} max(0, margin - x[y] + x[i])^p / x.size(0)

Parameters:

p: 1 or 2, default value 1

margin: default value 1
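A minimal sketch:

import torch
import torch.nn as nn

mm_loss = nn.MultiMarginLoss(p=1, margin=1.0)
x = torch.randn(3, 5)            # scores for 3 samples, 5 classes
y = torch.tensor([1, 0, 4])      # true class indices
loss = mm_loss(x, y)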

15 Triplet Loss TripletMarginLoss

Related to Siamese networks: given an anchor A and two samples B and C, the loss learns which of B and C is more similar to A.


torch.nn.TripletMarginLoss(margin=1.0, p=2.0, eps=1e-06, swap=False, reduction='mean')
The loss for each sample in the mini-batch is:

L(a, p, n) = max(d(a_i, p_i) - d(a_i, n_i) + margin, 0)

where

d(x_i, y_i) = || x_i - y_i ||_p
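A minimal sketch, with random embeddings standing in for anchor/positive/negative:

import torch
import torch.nn as nn

triplet = nn.TripletMarginLoss(margin=1.0, p=2)
anchor = torch.randn(16, 128, requires_grad=True)
positive = torch.randn(16, 128, requires_grad=True)   # same identity as anchor
negative = torch.randn(16, 128, requires_grad=True)   # different identity
loss = triplet(anchor, positive, negative)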

16 Connectionist Temporal Classification Loss CTCLoss

The Connectionist Temporal Classification loss automatically aligns unaligned data. It is mainly used for training on sequence data that has no prior alignment, such as speech recognition and OCR.

torch.nn.CTCLoss(blank=0, reduction='mean')

Parameters:

reduction – Three values: none: no reduction; mean: returns the average of the loss; sum: returns the sum of the loss. Default: mean.
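A minimal sketch following the shapes CTC expects: log-probabilities of shape (T, N, C), plus per-sample input and target lengths (all sizes here are arbitrary):

import torch
import torch.nn as nn

T, N, C, S = 50, 16, 20, 30      # input length, batch, classes (incl. blank), max target length
ctc = nn.CTCLoss(blank=0)
log_probs = torch.randn(T, N, C).log_softmax(2)
targets = torch.randint(1, C, (N, S), dtype=torch.long)   # labels exclude the blank index
input_lengths = torch.full((N,), T, dtype=torch.long)
target_lengths = torch.randint(10, S, (N,), dtype=torch.long)
loss = ctc(log_probs, targets, input_lengths, target_lengths)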

17 Negative Log Likelihood Loss NLLLoss

Negative log likelihood loss. Used for training classification problems with C classes.

torch.nn.NLLLoss(weight=None, ignore_index=-100, reduction='mean')

Parameters:

weight (Tensor, optional) – Custom weight for each class. Must be a Tensor of length C.

ignore_index (int, optional) – Set a target value that will be ignored, thus not affecting the input gradient.
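A minimal sketch; the input must already be log-probabilities. log_softmax followed by NLLLoss is exactly what CrossEntropyLoss fuses into a single step:

import torch
import torch.nn as nn
import torch.nn.functional as F

nll = nn.NLLLoss()
log_probs = F.log_softmax(torch.randn(4, 10), dim=1)
targets = torch.tensor([1, 0, 9, 3])
loss = nll(log_probs, targets)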

18 NLLLoss2d

Negative log likelihood loss for image inputs; it computes the negative log likelihood loss per pixel. (In recent PyTorch versions this class is deprecated: nn.NLLLoss itself accepts (N, C, H, W) inputs.)

torch.nn.NLLLoss2d(weight=None, ignore_index=-100, reduction='mean')

Parameters:

weight (Tensor, optional) – Custom weight for each class. Must be a Tensor of length C.

reduction – Three values: none: no reduction; mean: returns the average of the loss; sum: returns the sum of the loss. Default: mean.
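A per-pixel sketch using nn.NLLLoss directly, which covers dense prediction tasks such as segmentation in recent PyTorch versions:

import torch
import torch.nn as nn
import torch.nn.functional as F

log_probs = F.log_softmax(torch.randn(2, 5, 8, 8), dim=1)  # 2 images, 5 classes, 8x8
targets = torch.randint(0, 5, (2, 8, 8))                   # one class index per pixel
loss = nn.NLLLoss()(log_probs, targets)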

19 Poisson Negative Log Likelihood Loss PoissonNLLLoss

Negative log likelihood loss with target values from a Poisson distribution:

torch.nn.PoissonNLLLoss(log_input=True, full=False, eps=1e-08, reduction='mean')

Parameters:

log_input (bool, optional) – If set to True, the loss is computed as exp(input) - target * input; if set to False, as input - target * log(input + eps).

full (bool, optional) – Whether to compute the full loss, i.e., including the Stirling approximation term target * log(target) - target + 0.5 * log(2 * pi * target).

eps (float, optional) – Default value: 1e-8
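A minimal sketch with log_input=True, so the model output is interpreted as the log of the Poisson rate:

import torch
import torch.nn as nn

poisson_nll = nn.PoissonNLLLoss(log_input=True)
log_rate = torch.randn(10)                    # predicted log-rates
counts = torch.poisson(torch.rand(10) * 5)    # non-negative count targets
loss = poisson_nll(log_rate, counts)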

References:
PyTorch Loss Function Summary: http://www.voidcn.com/article/p-rtzqgqkz-bpg.html

Editor: Huang Jiyan
