Super Kai (Kazuya Ito)
Posted on July 10, 2024
*Memos:
- [Warning] normal() is really tricky.
- You can use manual_seed() with normal(). *My post explains manual_seed().
- My post explains rand() and rand_like().
- My post explains randn() and randn_like().
- My post explains randint() and randperm().
normal() can create the 0D or more D tensor of zero or more random floating-point numbers or complex numbers from the normal distribution as shown below:
*Memos:
- normal() can be used with torch but not with a tensor.
- The 1st argument with torch is mean (Required-Type: float, complex or tensor of float or complex): *Memos:
  - Setting mean without std and size: mean must be a tensor of float or complex.
  - Setting mean and std without size: mean can be a float or a tensor of float or complex.
  - Setting mean, std and size: mean must be a float. *The 0D tensor of float also works.
- The 2nd argument with torch is std (Optional-Type: float or tensor of float): *Memos:
  - It is the standard deviation.
  - It must be greater than or equal to 0.
  - Setting std without size: std can be a float or a tensor of float.
  - Setting std with size: std must be a float. *The 0D tensor of float also works.
- The 3rd argument with torch is size (Optional-Type: tuple of int, list of int or size()): *Memos:
  - It must be used with std.
  - It must not be negative.
- There is a dtype argument with torch (Optional-Default: None - Type: dtype): *Memos:
  - If it's None, it's inferred from mean or std, then for floating-point numbers, get_default_dtype() is used. *My post explains get_default_dtype() and set_default_dtype().
  - dtype= must be used.
  - My post explains the dtype argument.
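A minimal sketch of the dtype argument, assuming the overload that takes a float mean and std together with size (where dtype= is accepted as a keyword):

```python
import torch

# dtype= must be passed as a keyword; here it forces float64
# instead of the default floating-point dtype.
t = torch.normal(mean=1., std=4., size=(3,), dtype=torch.float64)
print(t.dtype)  # torch.float64
```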
- There is a device argument with torch (Optional-Default: None - Type: str, int or device()): *Memos:
  - If it's None, get_default_device() is used. *My post explains get_default_device() and set_default_device().
  - device= must be used.
  - My post explains the device argument.
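A minimal sketch of the device argument, again assuming the float mean and std plus size overload; 'cpu' is used so the example runs without a GPU:

```python
import torch

# device= must be passed as a keyword; 'cpu' keeps the sketch
# runnable on any machine.
t = torch.normal(mean=1., std=4., size=(2, 2), device='cpu')
print(t.device)  # cpu
```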
- There is a requires_grad argument with torch (Optional-Default: False - Type: bool): *Memos:
  - requires_grad= must be used.
  - My post explains the requires_grad argument.
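A minimal sketch of the requires_grad argument, assuming the float mean and std plus size overload; the resulting tensor then participates in autograd:

```python
import torch

# requires_grad= must be passed as a keyword.
t = torch.normal(mean=1., std=4., size=(3,), requires_grad=True)
loss = t.sum()
loss.backward()
print(t.grad)  # tensor([1., 1., 1.]) since d(sum)/dt = 1 elementwise
```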
- There is an out argument with torch (Optional-Default: None - Type: tensor): *Memos:
  - out= must be used.
  - My post explains the out argument.
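A minimal sketch of the out argument, writing the samples into a preallocated tensor instead of allocating a new one:

```python
import torch

# out= must be passed as a keyword; the result is written into buf.
buf = torch.empty(3)
torch.normal(mean=torch.tensor([1., 2., 3.]),
             std=torch.tensor([4., 5., 6.]), out=buf)
print(buf.shape)  # torch.Size([3])
```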
import torch
torch.normal(mean=torch.tensor([1., 2., 3.]))
# tensor([1.2713, 0.7271, 3.5027])
torch.normal(mean=torch.tensor([1.+0.j, 2.+0.j, 3.+0.j]))
# tensor([1.1918-0.9001j, 2.3555+0.2956j, 2.5479-0.4672j])
torch.normal(mean=torch.tensor([1., 2., 3.]),
std=torch.tensor([4., 5., 6.]))
# tensor([2.0851, -4.3646, 6.0162])
torch.normal(mean=torch.tensor([1.+0.j, 2.+0.j, 3.+0.j]),
std=torch.tensor([4., 5., 6.]))
# tensor([1.7673-3.6004j, 3.7773+1.4781j, 0.2872-2.8034j])
torch.normal(mean=torch.tensor([1., 2., 3.]), std=4.)
# tensor([2.0851, -3.0917, 5.0108])
torch.normal(mean=torch.tensor([1.+0.j, 2.+0.j, 3.+0.j]), std=4.)
# tensor([1.7673-3.6004j, 3.4218+1.1825j, 1.1914-1.8689j])
torch.normal(mean=1., std=torch.tensor([4., 5., 6.]))
# tensor([2.0851, -5.3646, 4.0162])
torch.normal(mean=1., std=4., size=())
torch.normal(mean=1., std=4., size=torch.tensor(8).size())
torch.normal(mean=torch.tensor(1.), std=torch.tensor(4.), size=())
# tensor(2.0851)
torch.normal(mean=1., std=4., size=(3,))
torch.normal(mean=1., std=4., size=torch.tensor([8, 3, 6]).size())
torch.normal(mean=torch.tensor(1.), std=torch.tensor(4.), size=(3,))
# tensor([2.0851, -4.0917, 3.0108])
torch.normal(mean=1., std=4., size=(3, 2))
torch.normal(mean=1., std=4.,
size=torch.tensor([[8, 3], [6, 0], [2, 9]]).size())
torch.normal(mean=torch.tensor(1.), std=torch.tensor(4.), size=(3, 2))
# tensor([[2.0851, -4.0917],
# [3.0108, 2.6723],
# [-1.5577, -1.6431]])
torch.normal(mean=1., std=4., size=(3, 2, 4))
torch.normal(mean=torch.tensor(1.), std=torch.tensor(4.), size=(3, 2, 4))
# tensor([[[-3.7568, 6.5729, 9.4236, -0.4183],
# [2.4840, 5.3827, 9.5657, 1.5267]],
# [[8.0575, -0.5000, -0.3416, 5.3502],
# [-4.3835, 1.6974, 2.6226, -1.9671]],
# [[1.1422, 1.7790, 4.5886, -0.3273],
# [2.8941, -3.3046, 1.1336, 2.8792]]])