Super Kai (Kazuya Ito)
Posted on August 17, 2024
*Memos:
- My post explains L1 Loss (MAE) and L2 Loss (MSE).
- My post explains HuberLoss().
L1Loss() computes L1 Loss (MAE) from a 0D or more D tensor of zero or more elements, returning a 0D or more D tensor of zero or more `float` values, as shown below:
*Memos:
- There is a `reduction` argument for initialization (Optional-Default:`'mean'`-Type:`str`). *`'none'`, `'mean'` or `'sum'` can be selected.
- There are `size_average` and `reduce` arguments for initialization but they are deprecated.
- The 1st argument is `input` (Required-Type:`tensor` of `float` or `complex`).
- The 2nd argument is `target` (Required-Type:`tensor` of `float` or `complex`).
- `input` and `target` should be the same size, otherwise there is a warning.
- Even `complex` `input` and `target` tensors return a `float` tensor.
- An empty 1D or more D `input` and `target` tensor with `reduction='mean'` returns `nan`.
- An empty 1D or more D `input` and `target` tensor with `reduction='sum'` returns `0.`.
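Before looking at L1Loss() itself, the formula can be checked by hand. This is a quick sketch (not from the original post) using the same tensors as the examples below; the manual `abs().mean()` and `abs().sum()` should match the module's `'mean'` and `'sum'` reductions:

```python
import torch

tensor1 = torch.tensor([8., -3., 0., 1., 5., -2., -1., 4.])
tensor2 = torch.tensor([-3., 7., 4., -2., -9., 6., -8., 5.])

manual_mean = (tensor1 - tensor2).abs().mean()  # mean of |x - y|
manual_sum = (tensor1 - tensor2).abs().sum()    # sum of |x - y|

print(manual_mean)  # tensor(7.2500)
print(manual_sum)   # tensor(58.)
```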
import torch
from torch import nn
tensor1 = torch.tensor([ 8., -3., 0., 1., 5., -2., -1., 4.])
tensor2 = torch.tensor([-3., 7., 4., -2., -9., 6., -8., 5.])
# |x-y|
# |8.-(-3.)| = 11., |-3.-7.| = 10., etc.
# 11. + 10. + 4. + 3. + 14. + 8. + 7. + 1. = 58.
# 58. / 8 = 7.25
l1loss = nn.L1Loss()
l1loss(input=tensor1, target=tensor2)
# tensor(7.2500)
l1loss
# L1Loss()
l1loss.reduction
# 'mean'
l1loss = nn.L1Loss(reduction='mean')
l1loss(input=tensor1, target=tensor2)
# tensor(7.2500)
l1loss = nn.L1Loss(reduction='sum')
l1loss(input=tensor1, target=tensor2)
# tensor(58.)
l1loss = nn.L1Loss(reduction='none')
l1loss(input=tensor1, target=tensor2)
# tensor([11., 10., 4., 3., 14., 8., 7., 1.])
tensor1 = torch.tensor([[8., -3., 0., 1.], [5., -2., -1., 4.]])
tensor2 = torch.tensor([[-3., 7., 4., -2.], [-9., 6., -8., 5.]])
l1loss = nn.L1Loss()
l1loss(input=tensor1, target=tensor2)
# tensor(7.2500)
tensor1 = torch.tensor([[[8., -3.], [0., 1.]], [[5., -2.], [-1., 4.]]])
tensor2 = torch.tensor([[[-3., 7.], [4., -2.]], [[-9., 6.], [-8., 5.]]])
l1loss = nn.L1Loss()
l1loss(input=tensor1, target=tensor2)
# tensor(7.2500)
tensor1 = torch.tensor([[[8.+0.j, -3.+0.j], [0.+0.j, 1.+0.j]],
[[5.+0.j, -2.+0.j], [-1.+0.j, 4.+0.j]]])
tensor2 = torch.tensor([[[-3.+0.j, 7.+0.j], [4.+0.j, -2.+0.j]],
[[-9.+0.j, 6.+0.j], [-8.+0.j, 5.+0.j]]])
l1loss = nn.L1Loss()
l1loss(input=tensor1, target=tensor2)
# tensor(7.2500)
tensor1 = torch.tensor([])
tensor2 = torch.tensor([])
l1loss = nn.L1Loss(reduction='mean')
l1loss(input=tensor1, target=tensor2)
# tensor(nan)
l1loss = nn.L1Loss(reduction='sum')
l1loss(input=tensor1, target=tensor2)
# tensor(0.)
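The size-mismatch warning mentioned in the memos can be observed like this (a sketch, not from the original post; the exact warning text may vary by PyTorch version):

```python
import warnings
import torch
from torch import nn

t1 = torch.tensor([1., 2., 3., 4.])    # shape (4,)
t2 = torch.tensor([[1., 2., 3., 4.]])  # shape (1, 4) -> different size

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    nn.L1Loss()(input=t1, target=t2)   # broadcasts, but warns

print(len(caught) > 0)  # True: a UserWarning about mismatched sizes
```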
MSELoss() computes L2 Loss (MSE) from a 0D or more D tensor of zero or more elements, returning a 0D or more D tensor of zero or more `float` values, as shown below:
*Memos:
- There is a `reduction` argument for initialization (Optional-Default:`'mean'`-Type:`str`). *`'none'`, `'mean'` or `'sum'` can be selected.
- There are `size_average` and `reduce` arguments for initialization but they are deprecated.
- The 1st argument is `input` (Required-Type:`tensor` of `float`).
- The 2nd argument is `target` (Required-Type:`tensor` of `float`).
- `input` and `target` should be the same size, otherwise there is a warning.
- An empty 1D or more D `input` and `target` tensor with `reduction='mean'` returns `nan`.
- An empty 1D or more D `input` and `target` tensor with `reduction='sum'` returns `0.`.
import torch
from torch import nn
tensor1 = torch.tensor([ 8., -3., 0., 1., 5., -2., -1., 4.])
tensor2 = torch.tensor([-3., 7., 4., -2., -9., 6., -8., 5.])
# (x-y)^2
# (8.-(-3.))^2 = 121., (-3.-7.)^2 = 100., etc.
# 121. + 100. + 16. + 9. + 196. + 64. + 49. + 1. = 556.
# 556. / 8 = 69.5
mseloss = nn.MSELoss()
mseloss(input=tensor1, target=tensor2)
# tensor(69.5000)
mseloss
# MSELoss()
mseloss.reduction
# 'mean'
mseloss = nn.MSELoss(reduction='mean')
mseloss(input=tensor1, target=tensor2)
# tensor(69.5000)
mseloss = nn.MSELoss(reduction='sum')
mseloss(input=tensor1, target=tensor2)
# tensor(556.)
mseloss = nn.MSELoss(reduction='none')
mseloss(input=tensor1, target=tensor2)
# tensor([121., 100., 16., 9., 196., 64., 49., 1.])
tensor1 = torch.tensor([[8., -3., 0., 1.], [5., -2., -1., 4.]])
tensor2 = torch.tensor([[-3., 7., 4., -2.], [-9., 6., -8., 5.]])
mseloss = nn.MSELoss()
mseloss(input=tensor1, target=tensor2)
# tensor(69.5000)
tensor1 = torch.tensor([[[8., -3.], [0., 1.]], [[5., -2.], [-1., 4.]]])
tensor2 = torch.tensor([[[-3., 7.], [4., -2.]], [[-9., 6.], [-8., 5.]]])
mseloss = nn.MSELoss()
mseloss(input=tensor1, target=tensor2)
# tensor(69.5000)
tensor1 = torch.tensor([])
tensor2 = torch.tensor([])
mseloss = nn.MSELoss(reduction='mean')
mseloss(input=tensor1, target=tensor2)
# tensor(nan)
mseloss = nn.MSELoss(reduction='sum')
mseloss(input=tensor1, target=tensor2)
# tensor(0.)
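As with L1Loss(), the MSE formula can be verified by hand (a quick sketch, not from the original post); the manual squared-difference mean should match MSELoss() with the default `'mean'` reduction:

```python
import torch
from torch import nn

tensor1 = torch.tensor([8., -3., 0., 1., 5., -2., -1., 4.])
tensor2 = torch.tensor([-3., 7., 4., -2., -9., 6., -8., 5.])

manual_mean = ((tensor1 - tensor2) ** 2).mean()  # mean of (x - y)^2
builtin = nn.MSELoss()(input=tensor1, target=tensor2)

print(torch.allclose(manual_mean, builtin))  # True
print(manual_mean)                           # tensor(69.5000)
```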