Super Kai (Kazuya Ito)
Posted on May 26, 2024
*My post explains how to create and access a tensor.
__version__ can check the PyTorch version as shown below. *__version__ can be used with torch but not with a tensor:
import torch
torch.__version__ # 2.2.1+cu121
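*A minimal sketch, assuming a CUDA build of PyTorch (the commented values are only examples): the CUDA and cuDNN versions the build was compiled with can also be checked:
import torch
torch.version.cuda # '12.1' (None for a CPU-only build)
torch.backends.cudnn.version() # e.g. 8902 (None if cuDNN is unavailable)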
cpu.is_available(), cpu.device_count() or cpu.current_device() can check if CPU is available, getting a scalar as shown below:
*Memos:
- cpu.is_available(), cpu.device_count() or cpu.current_device() can be used with torch but not with a tensor.
- cpu.device_count() can get the number of CPUs. *It always gets 1.
- cpu.current_device() can get the index of a currently selected CPU. *It always gets cpu:
import torch
torch.cpu.is_available() # True
torch.cpu.device_count() # 1
torch.cpu.current_device() # cpu
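*A minimal sketch (assuming default settings): a tensor created without a device argument is placed on the CPU, which can be checked with its device attribute:
import torch

my_tensor = torch.tensor([0, 1, 2]) # Created on the CPU by default
my_tensor.device # device(type='cpu')
my_tensor.device.type # 'cpu'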
cuda.is_available() or cuda.device_count() can check if a GPU (CUDA) is available, getting a scalar as shown below:
*Memos:
- cuda.is_available() or cuda.device_count() can be used with torch but not with a tensor.
- cuda.device_count() can get the number of GPUs:
import torch
torch.cuda.is_available() # True
torch.cuda.device_count() # 1
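*A common pattern, shown here as a minimal sketch: pick a device with cuda.is_available() and create a tensor on it:
import torch

# Use the GPU if CUDA is available, otherwise fall back to the CPU.
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

my_tensor = torch.tensor([0, 1, 2], device=device)
my_tensor.device # device(type='cuda', index=0) if CUDA is available, otherwise device(type='cpu')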
In addition, you can use cuda.current_device(), cuda.get_device_name() or cuda.get_device_properties() to get information about the currently selected GPU as shown below:
*Memos:
- cuda.current_device(), cuda.get_device_name() or cuda.get_device_properties() can be used with torch but not with a tensor.
- cuda.current_device() can get the index of a currently selected GPU.
- cuda.get_device_name() can get the name of a GPU.
- cuda.get_device_properties() can get the properties of a GPU:
import torch

torch.cuda.current_device() # 0
torch.cuda.get_device_name()
torch.cuda.get_device_name(device='cuda:0')
torch.cuda.get_device_name(device='cuda')
torch.cuda.get_device_name(device=0)
torch.cuda.get_device_name(device=torch.device(device='cuda:0'))
torch.cuda.get_device_name(device=torch.device(device='cuda'))
torch.cuda.get_device_name(device=torch.device(device=0))
torch.cuda.get_device_name(device=torch.device(type='cuda'))
torch.cuda.get_device_name(device=torch.device(type='cuda', index=0))
# Tesla T4
torch.cuda.get_device_properties(device='cuda:0')
torch.cuda.get_device_properties(device='cuda')
torch.cuda.get_device_properties(device=0)
torch.cuda.get_device_properties(device=torch.device(device='cuda:0'))
torch.cuda.get_device_properties(device=torch.device(device='cuda'))
torch.cuda.get_device_properties(device=torch.device(device=0))
torch.cuda.get_device_properties(device=torch.device(type='cuda'))
torch.cuda.get_device_properties(device=torch.device(type='cuda', index=0))
# _CudaDeviceProperties(name='Tesla T4', major=7, minor=5,
# total_memory=15102MB, multi_processor_count=40)
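*The object returned by cuda.get_device_properties() exposes its fields as attributes. A minimal sketch (the commented values are from the Tesla T4 above and will differ on other GPUs):
import torch

props = torch.cuda.get_device_properties(device=0)
props.name # 'Tesla T4'
(props.major, props.minor) # (7, 5) <- compute capability
props.total_memory # Total GPU memory in bytes
props.multi_processor_count # 40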
!nvidia-smi can get the information about GPUs as shown below:
!nvidia-smi
Wed May 15 13:18:15 2024
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 535.104.05 Driver Version: 535.104.05 CUDA Version: 12.2 |
|-----------------------------------------+----------------------+----------------------+
| GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|=========================================+======================+======================|
| 0 Tesla T4 Off | 00000000:00:04.0 Off | 0 |
| N/A 56C P0 28W / 70W | 105MiB / 15360MiB | 0% Default |
| | | N/A |
+-----------------------------------------+----------------------+----------------------+
+---------------------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=======================================================================================|
+---------------------------------------------------------------------------------------+
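*A rough Python-side counterpart to the memory column of nvidia-smi, as a minimal sketch (assuming a PyTorch version that provides cuda.mem_get_info()):
import torch

free, total = torch.cuda.mem_get_info(device=0) # Free and total GPU memory in bytes
free // (1024 ** 2), total // (1024 ** 2) # e.g. roughly (15255, 15360) in MiB on a Tesla T4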