L1Loss and MSELoss in PyTorch

Super Kai (Kazuya Ito)

Posted on August 17, 2024

L1Loss() computes L1 loss (mean absolute error, MAE) and returns a 0D or higher-dimensional float tensor from 0D or higher-dimensional input and target tensors of zero or more elements, as shown below:

*Memos:

  • There is a reduction argument for initialization (Optional-Default:'mean'-Type:str). *'none', 'mean' or 'sum' can be selected.
  • There are size_average and reduce arguments for initialization, but they are deprecated.
  • The 1st argument is input (Required-Type:tensor of float or complex).
  • The 2nd argument is target (Required-Type:tensor of float or complex).
  • input and target should be the same size; otherwise, a warning is raised.
  • Even complex input and target tensors return a float tensor (see the sketch after this list).
  • An empty 1D or higher-D input and target tensor with reduction='mean' returns nan.
  • An empty 1D or higher-D input and target tensor with reduction='sum' returns 0..
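The complex case in the memo above can be checked with a small sketch of my own (the tensor values are made up for illustration), assuming the loss for complex tensors reduces to the mean of the complex modulus of the element-wise difference:

import torch
from torch import nn

c1 = torch.tensor([3.+4.j, 0.+0.j])  # hypothetical complex input
c2 = torch.tensor([0.+0.j, 0.+3.j])  # hypothetical complex target
                 # |3.+4.j| = 5., |-3.j| = 3. -> (5.+3.)/2 = 4.
l1loss = nn.L1Loss()
l1loss(input=c1, target=c2)  # returns a float tensor even though c1 and c2 are complex
# tensor(4.)

torch.abs(c1 - c2).mean()  # manual check: complex modulus, then mean
# tensor(4.)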
import torch
from torch import nn

tensor1 = torch.tensor([ 8., -3., 0.,  1.,  5., -2., -1., 4.])
tensor2 = torch.tensor([-3.,  7., 4., -2., -9.,  6., -8., 5.])
                      # |x-y|
                      # |8.-(-3.)| = 11.
                      # ↓↓
                      # 11. + 10. + 4. + 3. + 14. + 8. + 7. + 1. = 58.
                      # 58. / 8 = 7.25
l1loss = nn.L1Loss()
l1loss(input=tensor1, target=tensor2)
# tensor(7.2500)

l1loss
# L1Loss()

l1loss.reduction
# 'mean'

l1loss = nn.L1Loss(reduction='mean')
l1loss(input=tensor1, target=tensor2)
# tensor(7.2500)

l1loss = nn.L1Loss(reduction='sum')
l1loss(input=tensor1, target=tensor2)
# tensor(58.)

l1loss = nn.L1Loss(reduction='none')
l1loss(input=tensor1, target=tensor2)
# tensor([11., 10., 4., 3., 14., 8., 7., 1.])

tensor1 = torch.tensor([[8., -3., 0., 1.], [5., -2., -1., 4.]])
tensor2 = torch.tensor([[-3., 7., 4., -2.], [-9., 6., -8., 5.]])

l1loss = nn.L1Loss()
l1loss(input=tensor1, target=tensor2)
# tensor(7.2500)

tensor1 = torch.tensor([[[8., -3.], [0., 1.]], [[5., -2.], [-1., 4.]]])
tensor2 = torch.tensor([[[-3., 7.], [4., -2.]], [[-9., 6.], [-8., 5.]]])

l1loss = nn.L1Loss()
l1loss(input=tensor1, target=tensor2)
# tensor(7.2500)

tensor1 = torch.tensor([[[8.+0.j, -3.+0.j], [0.+0.j, 1.+0.j]],
                        [[5.+0.j, -2.+0.j], [-1.+0.j, 4.+0.j]]])
tensor2 = torch.tensor([[[-3.+0.j, 7.+0.j], [4.+0.j, -2.+0.j]],
                        [[-9.+0.j, 6.+0.j], [-8.+0.j, 5.+0.j]]])
l1loss = nn.L1Loss()
l1loss(input=tensor1, target=tensor2)
# tensor(7.2500)

tensor1 = torch.tensor([])
tensor2 = torch.tensor([])

l1loss = nn.L1Loss(reduction='mean')
l1loss(input=tensor1, target=tensor2)
# tensor(nan)

l1loss = nn.L1Loss(reduction='sum')
l1loss(input=tensor1, target=tensor2)
# tensor(0.)
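For reference, here is a minimal sketch of my own (not part of the L1Loss API) that rebuilds each reduction by hand with torch.abs, using the same tensors as the first example:

import torch

tensor1 = torch.tensor([ 8., -3., 0.,  1.,  5., -2., -1., 4.])
tensor2 = torch.tensor([-3.,  7., 4., -2., -9.,  6., -8., 5.])

diff = torch.abs(tensor1 - tensor2)  # element-wise |x-y|, same as reduction='none'
diff
# tensor([11., 10., 4., 3., 14., 8., 7., 1.])

diff.mean()  # same as reduction='mean'
# tensor(7.2500)

diff.sum()  # same as reduction='sum'
# tensor(58.)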

MSELoss() computes L2 loss (mean squared error, MSE) and returns a 0D or higher-dimensional float tensor from 0D or higher-dimensional input and target tensors of zero or more elements, as shown below:

*Memos:

  • There is a reduction argument for initialization (Optional-Default:'mean'-Type:str). *'none', 'mean' or 'sum' can be selected.
  • There are size_average and reduce arguments for initialization, but they are deprecated.
  • The 1st argument is input (Required-Type:tensor of float).
  • The 2nd argument is target (Required-Type:tensor of float).
  • input and target should be the same size; otherwise, a warning is raised (see the sketch after this list).
  • An empty 1D or higher-D input and target tensor with reduction='mean' returns nan.
  • An empty 1D or higher-D input and target tensor with reduction='sum' returns 0..
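The size-mismatch warning in the memo above can be reproduced with a small sketch of my own; the shapes and values are made up for illustration, and the exact warning text may vary by PyTorch version:

import torch
from torch import nn

tensor1 = torch.tensor([2., 4., 6.])    # shape (3,)
tensor2 = torch.tensor([[1., 1., 1.]])  # shape (1, 3), a different size

mseloss = nn.MSELoss()
mseloss(input=tensor1, target=tensor2)
# UserWarning: the target size differs from the input size, so broadcasting
# may lead to incorrect results.
# tensor(11.6667)
#                  # (1. + 9. + 25.) / 3 = 11.6667 after broadcasting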
import torch
from torch import nn

tensor1 = torch.tensor([ 8., -3., 0.,  1.,  5., -2., -1., 4.])
tensor2 = torch.tensor([-3.,  7., 4., -2., -9.,  6., -8., 5.])
                     # (x-y)^2
                     # (8.-(-3.))^2 = 121.
                     # ↓↓↓
                     # 121. + 100. + 16. + 9. + 196. + 64. + 49. + 1. = 556.
                     # 556. / 8 = 69.5
mseloss = nn.MSELoss()
mseloss(input=tensor1, target=tensor2)
# tensor(69.5000)

mseloss
# MSELoss()

mseloss.reduction
# 'mean'

mseloss = nn.MSELoss(reduction='mean')
mseloss(input=tensor1, target=tensor2)
# tensor(69.5000)

mseloss = nn.MSELoss(reduction='sum')
mseloss(input=tensor1, target=tensor2)
# tensor(556.)

mseloss = nn.MSELoss(reduction='none')
mseloss(input=tensor1, target=tensor2)
# tensor([121., 100., 16., 9., 196., 64., 49., 1.])

tensor1 = torch.tensor([[8., -3., 0., 1.], [5., -2., -1., 4.]])
tensor2 = torch.tensor([[-3., 7., 4., -2.], [-9., 6., -8., 5.]])

mseloss = nn.MSELoss()
mseloss(input=tensor1, target=tensor2)
# tensor(69.5000)

tensor1 = torch.tensor([[[8., -3.], [0., 1.]], [[5., -2.], [-1., 4.]]])
tensor2 = torch.tensor([[[-3., 7.], [4., -2.]], [[-9., 6.], [-8., 5.]]])

mseloss = nn.MSELoss()
mseloss(input=tensor1, target=tensor2)
# tensor(69.5000)

tensor1 = torch.tensor([])
tensor2 = torch.tensor([])

mseloss = nn.MSELoss(reduction='mean')
mseloss(input=tensor1, target=tensor2)
# tensor(nan)

mseloss = nn.MSELoss(reduction='sum')
mseloss(input=tensor1, target=tensor2)
# tensor(0.)
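As with L1Loss, the reductions can be rebuilt by hand, and both losses have functional counterparts in torch.nn.functional. The sketch below is a minimal check of my own using the same tensors as the first MSELoss example:

import torch
import torch.nn.functional as F

tensor1 = torch.tensor([ 8., -3., 0.,  1.,  5., -2., -1., 4.])
tensor2 = torch.tensor([-3.,  7., 4., -2., -9.,  6., -8., 5.])

sq_diff = (tensor1 - tensor2) ** 2  # element-wise (x-y)^2, same as reduction='none'
sq_diff.mean()  # same as reduction='mean'
# tensor(69.5000)

sq_diff.sum()  # same as reduction='sum'
# tensor(556.)

F.mse_loss(input=tensor1, target=tensor2)  # functional counterpart of nn.MSELoss()
# tensor(69.5000)

F.l1_loss(input=tensor1, target=tensor2)  # functional counterpart of nn.L1Loss()
# tensor(7.2500)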