Waji
Posted on February 25, 2023
Introduction
RAID (redundant array of independent disks) is a way of storing the same data in different places on multiple hard disks or solid-state drives (SSDs) to protect data in the case of a drive failure.
There are different RAID levels:
RAID Level 0 (Stripe Volume)
Combines empty space from two or more disks (up to a maximum of 32) into a single volume
When data is written to the stripe volume, it is distributed evenly across all disks in 64 KB blocks
Provides improved performance, but no fault tolerance
Performance improves as more disks are added
RAID Level 1 (Mirror Volume)
Requires an even number of disks
Mirrors an existing simple volume
Provides fault tolerance
Available disk capacity is half the total disk capacity
RAID Level 5 (Stripe with Parity)
Requires a minimum of three disks
Tolerates the failure of a single disk, using one disk's worth of capacity for parity
Uses parity bits for error checking
Available disk capacity is the total disk capacity minus one disk's capacity
RAID Level 6 (Dual Parity)
Requires a minimum of four disks
Can recover from the failure of up to two disks, addressing the single-failure limit of RAID 5
Uses dual parity bits for error checking
Available disk capacity is the total disk capacity minus two disks' capacity
RAID Level 1+0
Requires a minimum of four disks
Mirrors disks into RAID 1 pairs and then stripes across those pairs as RAID 0
Provides excellent reliability and performance, but is less space-efficient
Available disk capacity is half the total disk capacity
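As a quick worked example of the capacity rules above: with two 1 TB disks, RAID 0 gives 2 TB of usable space while RAID 1 gives 1 TB; with four 1 TB disks, RAID 5 gives 3 TB (total minus one disk), while RAID 6 and RAID 1+0 each give 2 TB (total minus two disks, or half the total).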
A simple hands-on test of each level
I will be using CentOS 7 installed in a VM.
We can first check whether the mdadm package is installed,
[root@Linux-1 ~]# rpm -qa | grep mdadm
As nothing is returned, we will install the package using 'yum',
[root@Linux-1 ~]# yum -y install mdadm
Now,
[root@Linux-1 ~]# rpm -qa | grep mdadm
mdadm-4.1-9.el7_9.x86_64
RAID 0 Configuration
To test RAID 0, add two new HDDs to the VM.
(We already have two, so we can start right away!)
We will first create RAID-type partitions on /dev/sdb and /dev/sdc,
[root@Linux-1 ~]# fdisk /dev/sdb
Command (m for help): n
Select (default p): p
Partition number (1-4, default 1): 1
First sector (2048-2097151, default 2048):
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-2097151, default 2097151):
Using default value 2097151
Partition 1 of type Linux and of size 1023 MiB is set
Command (m for help): t
Selected partition 1
Hex code (type L to list all codes): fd
Changed type of partition 'Linux' to 'Linux raid autodetect'
# Checking with p
Command (m for help): p
Device Boot Start End Blocks Id System
/dev/sdb1 2048 2097151 1047552 fd Linux raid
# Same for /dev/sdc
[root@Linux-1 ~]# fdisk /dev/sdc
Command (m for help): n
Select (default p): p
Partition number (1-4, default 1): 1
First sector (2048-2097151, default 2048):
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-2097151, default 2097151):
Using default value 2097151
Partition 1 of type Linux and of size 1023 MiB is set
Command (m for help): t
Selected partition 1
Hex code (type L to list all codes): fd
Changed type of partition 'Linux' to 'Linux raid autodetect'
# Checking with p
Command (m for help): p
Device Boot Start End Blocks Id System
/dev/sdc1 2048 2097151 1047552 fd Linux raid
We will save and exit using 'w'.
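As an aside, the same interactive answers can be scripted; a minimal sketch that feeds them to fdisk through a here-document (assuming the same two 1 GB disks and accepting the default sector bounds):

# Answers in order: n, p, 1, default first sector, default last sector, t, fd, w
for DISK in /dev/sdb /dev/sdc; do
fdisk "$DISK" <<EOF
n
p
1


t
fd
w
EOF
done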
In Linux, a device file is normally required to control a storage device. At RAID configuration time, however, no device file for the RAID array exists yet, so we create one manually. The command used for this is mknod, and its basic form is: mknod [device file name] [device file type] [major number] [minor number].
The device file type is b, (c, u), or p, where b means a block device, p a FIFO, and c or u a character special file.
The major and minor numbers carry no special meaning on their own; device files that serve a similar role share the same major number and are distinguished from one another by their minor numbers. Conventionally, md devices all use major number 9.
mknod /dev/md0 b 9 0
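We can verify that the node was created as a block device with the expected numbers:

[root@Linux-1 ~]# ls -l /dev/md0

The listing should show a mode string beginning with b (block device) and the pair 9, 0 where a regular file would show its size.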
Now, we will use the 'mdadm' command to create the RAID device,
[root@Linux-1 ~]# mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdb1 /dev/sdc1
mdadm: Fail to create md0 when using /sys/module/md_mod/parameters/new_array, fallback to creation via node
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
This writes the RAID 0 information into the device file created above.
[root@Linux-1 ~]# mdadm --detail --scan
ARRAY /dev/md0 metadata=1.2 name=Linux-1:0 UUID=d35b06b8:3ba0c441:8ba52bb1:02fa155d
Checking after the information has been written, we can confirm that the device's UUID and other values are displayed based on what was entered.
[root@Linux-1 ~]# mdadm --query --detail /dev/md0
/dev/md0:
Version : 1.2
Creation Time : Wed Jan 25 12:41:04 2023
Raid Level : raid0
Array Size : 2091008 (2042.00 MiB 2141.19 MB)
Raid Devices : 2
Total Devices : 2
Persistence : Superblock is persistent
Update Time : Wed Jan 25 12:41:04 2023
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
Chunk Size : 512K
Consistency Policy : none
Name : Linux-1:0 (local to host Linux-1)
UUID : d35b06b8:3ba0c441:8ba52bb1:02fa155d
Events : 0
Number Major Minor RaidDevice State
0 8 17 0 active sync /dev/sdb1
1 8 33 1 active sync /dev/sdc1
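As a quick aside (not shown in the original transcript), the kernel's live view of all md arrays is also available at any time:

[root@Linux-1 ~]# cat /proc/mdstat

For this setup it should list md0 as an active raid0 array with sdb1 and sdc1 as members.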
Next, we format the RAID device with the XFS file system,
[root@Linux-1 ~]# mkfs.xfs /dev/md0
meta-data=/dev/md0 isize=512 agcount=8, agsize=65408 blks
= sectsz=512 attr=2, projid32bit=1
= crc=1 finobt=0, sparse=0
data = bsize=4096 blocks=522752, imaxpct=25
= sunit=128 swidth=256 blks
naming =version 2 bsize=4096 ascii-ci=0 ftype=1
log =internal log bsize=4096 blocks=2560, version=2
= sectsz=512 sunit=8 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
Finally, we make the /raid0 directory and mount /dev/md0 on it.
[root@Linux-1 ~]# mkdir /raid0
[root@Linux-1 ~]# mount /dev/md0 /raid0
[root@Linux-1 ~]# df -h
Filesystem Size Used Avail Use% Mounted on
devtmpfs 475M 0 475M 0% /dev
tmpfs 487M 0 487M 0% /dev/shm
tmpfs 487M 7.6M 479M 2% /run
tmpfs 487M 0 487M 0% /sys/fs/cgroup
/dev/mapper/centos_linux--1-root 17G 1.6G 16G 10% /
/dev/sda1 1014M 168M 847M 17% /boot
tmpfs 98M 0 98M 0% /run/user/0
/dev/md0 2.0G 33M 2.0G 2% /raid0
We save the md information into /etc/mdadm.conf, since the device number can change when the system reboots.
[root@Linux-1 ~]# mdadm --detail --scan > /etc/mdadm.conf
Now, let's configure the auto mount,
[root@Linux-1 ~]# vi /etc/fstab
# /etc/fstab
# Created by anaconda on Tue Jan 10 10:45:44 2023
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/centos_linux--1-root / xfs defaults 0 0
UUID=2d2f3276-dc8a-403c-bb04-53e472b9184c /boot xfs defaults 0 0
/dev/mapper/centos_linux--1-swap swap swap defaults 0 0
/dev/md0 /raid0 xfs defaults 0 0
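As an optional variation (my own aside, not part of the original walkthrough), the filesystem can instead be mounted by UUID, which stays stable even if the md device number changes; blkid prints the value to use:

[root@Linux-1 ~]# blkid -s UUID -o value /dev/md0

The /etc/fstab line would then begin with UUID=<value-printed-above> instead of /dev/md0.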
Now, we can reboot the system to check if the auto mount works.
RAID 1 Configuration
Revert the VM snapshot to its initial state before starting!
We will first add three 1 GB hard disks.
Then we will create RAID-type partitions on sdb, sdc, and sdd with the fdisk command, as we did above.
[root@Linux-1 ~]# fdisk /dev/sdb
[root@Linux-1 ~]# fdisk /dev/sdc
[root@Linux-1 ~]# fdisk /dev/sdd
We will use the mknod command here,
[root@Linux-1 ~]# mknod /dev/md1 b 9 1
Now we will use the mdadm command.
[root@Linux-1 ~]# mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
mdadm: Note: this array has metadata at the start and
may not be suitable as a boot device. If you plan to
store '/boot' on this device please ensure that
your boot-loader understands md/v1.x metadata, or use
--metadata=0.90
Continue creating array? Y
mdadm: Fail to create md1 when using /sys/module/md_mod/parameters/new_array, fallback to creation via node
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md1 started.
Because RAID 1 stores duplicated data, the /boot area should not, as a rule, be configured on RAID. In other words, boot-related data is not a good fit for a RAID 1 device.
[root@Linux-1 ~]# mdadm --detail --scan
ARRAY /dev/md1 metadata=1.2 name=Linux-1:1 UUID=91c13c6f:95cda8a6:28b59cb1:4c2b4cf0
[root@Linux-1 ~]# mdadm --query --detail /dev/md1
/dev/md1:
Version : 1.2
Creation Time : Wed Jan 25 14:10:18 2023
Raid Level : raid1
Array Size : 1046528 (1022.00 MiB 1071.64 MB)
Used Dev Size : 1046528 (1022.00 MiB 1071.64 MB)
Raid Devices : 2
Total Devices : 2
Persistence : Superblock is persistent
Update Time : Wed Jan 25 14:10:23 2023
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
Consistency Policy : resync
Name : Linux-1:1 (local to host Linux-1)
UUID : 91c13c6f:95cda8a6:28b59cb1:4c2b4cf0
Events : 17
Number Major Minor RaidDevice State
0 8 17 0 active sync /dev/sdb1
1 8 33 1 active sync /dev/sdc1
Formatting using the XFS file system,
[root@Linux-1 ~]# mkfs.xfs /dev/md1
meta-data=/dev/md1 isize=512 agcount=4, agsize=65408 blks
= sectsz=512 attr=2, projid32bit=1
= crc=1 finobt=0, sparse=0
data = bsize=4096 blocks=261632, imaxpct=25
= sunit=0 swidth=0 blks
naming =version 2 bsize=4096 ascii-ci=0 ftype=1
log =internal log bsize=4096 blocks=855, version=2
= sectsz=512 sunit=0 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
Finally, we will mount the /dev/md1 partition,
[root@Linux-1 ~]# mkdir /raid1
[root@Linux-1 ~]# mount /dev/md1 /raid1
We can confirm,
[root@Linux-1 ~]# df -h
Filesystem Size Used Avail Use% Mounted on
devtmpfs 475M 0 475M 0% /dev
tmpfs 487M 0 487M 0% /dev/shm
tmpfs 487M 7.6M 479M 2% /run
tmpfs 487M 0 487M 0% /sys/fs/cgroup
/dev/mapper/centos_linux--1-root 17G 1.6G 16G 9% /
/dev/sda1 1014M 168M 847M 17% /boot
tmpfs 98M 0 98M 0% /run/user/0
/dev/md1 1019M 33M 987M 4% /raid1
To save the md information to the .conf file,
[root@Linux-1 ~]# mdadm --detail --scan > /etc/mdadm.conf
Configuring auto mount,
[root@Linux-1 ~]# vi /etc/fstab
# /etc/fstab
# Created by anaconda on Tue Jan 10 10:45:44 2023
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/centos_linux--1-root / xfs defaults 0 0
UUID=2d2f3276-dc8a-403c-bb04-53e472b9184c /boot xfs defaults 0 0
/dev/mapper/centos_linux--1-swap swap swap defaults 0 0
/dev/md1 /raid1 xfs defaults 0 0
Now, we can reboot the system to check if the auto mount works.
We can now test whether our RAID 1 is working. We will shut down the Linux system and remove one hard disk from the VM (one of the disks backing the RAID 1 partitions).
This creates a fault in the RAID 1 array, and we will then use a new 1 GB HDD to perform the recovery.
[root@Linux-1 ~]# mdadm --query --detail /dev/md1
/dev/md1:
Version : 1.2
Creation Time : Wed Jan 25 14:10:18 2023
Raid Level : raid1
Array Size : 1046528 (1022.00 MiB 1071.64 MB)
Used Dev Size : 1046528 (1022.00 MiB 1071.64 MB)
Raid Devices : 2
Total Devices : 1
Persistence : Superblock is persistent
Update Time : Wed Jan 25 14:29:00 2023
State : clean, degraded
Active Devices : 1
Working Devices : 1
Failed Devices : 0
Spare Devices : 0
Consistency Policy : resync
Name : Linux-1:1 (local to host Linux-1)
UUID : 91c13c6f:95cda8a6:28b59cb1:4c2b4cf0
Events : 19
Number Major Minor RaidDevice State
- 0 0 0 removed
1 8 17 1 active sync /dev/sdb1
One thing to watch out for: after physically deleting an HDD, checking with fdisk -l /dev/sd* shows that the letter names of the disks have changed.
We went from three HDDs to two. They used to appear as /dev/sdb, /dev/sdc, and /dev/sdd, but after deleting the /dev/sdc disk they now appear as /dev/sdb and /dev/sdc rather than /dev/sdb and /dev/sdd. When the system boots, it does not leave the removed disk's name vacant; it reassigns sequential names to the remaining disks.
Because we removed the HDD ourselves, it is shown in the Removed state.
In real operations, however, an HDD is rarely removed outright; in most cases the HDD has developed a fault, and the device is then reported in the Failed state.
When a device is Failed, the failed disk must first be removed from the md device before the recovery can proceed.
Removing a failed device and recovering:
- umount /dev/md1 (release the mount)
- mdadm /dev/md1 -r /dev/sdc1 (remove the failed device from the md device)
- proceed with the recovery (a software-failure sketch follows this list)
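A sketch of that software-failure route for this array, using the same -f/-r options that appear later in the RAID 1+0 section (assuming the member to fail is /dev/sdc1, matching the list above):

[root@Linux-1 ~]# mdadm /dev/md1 -f /dev/sdc1 (mark the member faulty instead of pulling the disk)
[root@Linux-1 ~]# mdadm /dev/md1 -r /dev/sdc1 (then hot-remove it from the array)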
[root@Linux-1 ~]# mdadm /dev/md1 --add /dev/sdc1
mdadm: added /dev/sdc1
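While the mirror resynchronizes onto the re-added member, the rebuild progress can be watched live (a small aside; the query below shows the finished state):

[root@Linux-1 ~]# watch -n 1 cat /proc/mdstat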
Now if we check,
[root@Linux-1 ~]# mdadm --query --detail /dev/md1
/dev/md1:
Version : 1.2
Creation Time : Wed Jan 25 14:10:18 2023
Raid Level : raid1
Array Size : 1046528 (1022.00 MiB 1071.64 MB)
Used Dev Size : 1046528 (1022.00 MiB 1071.64 MB)
Raid Devices : 2
Total Devices : 2
Persistence : Superblock is persistent
Update Time : Wed Jan 25 14:48:40 2023
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
Consistency Policy : resync
Name : Linux-1:1 (local to host Linux-1)
UUID : 91c13c6f:95cda8a6:28b59cb1:4c2b4cf0
Events : 38
Number Major Minor RaidDevice State
2 8 33 0 active sync /dev/sdc1
1 8 17 1 active sync /dev/sdb1
For the final step,
[root@Linux-1 ~]# mdadm --detail --scan > /etc/mdadm.conf
RAID 5 Configuration
Revert the VM snapshot to its initial state before starting!
We will first add five 1 GB hard disks.
Then we will create RAID-type partitions on sdb, sdc, sdd, sde, and sdf with the fdisk command, as before.
[root@Linux-1 ~]# fdisk /dev/sdb
[root@Linux-1 ~]# fdisk /dev/sdc
[root@Linux-1 ~]# fdisk /dev/sdd
[root@Linux-1 ~]# fdisk /dev/sde
[root@Linux-1 ~]# fdisk /dev/sdf
Using the mknod command,
[root@Linux-1 ~]# mknod /dev/md5 b 9 5
Using the mdadm command,
[root@Linux-1 ~]# mdadm --create /dev/md5 --level=5 --raid-devices=4 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
mdadm: Fail to create md5 when using /sys/module/md_mod/parameters/new_array, fallback to creation via node
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md5 started.
Confirming the details,
[root@Linux-1 ~]# mdadm --detail --scan
ARRAY /dev/md5 metadata=1.2 name=Linux-1:5 UUID=00f0e81a:fd3cf4e3:29b61bf1:9fd35847
Confirming query details,
[root@Linux-1 ~]# mdadm --query --detail /dev/md5
.
.
.
Number Major Minor RaidDevice State
0 8 17 0 active sync /dev/sdb1
1 8 33 1 active sync /dev/sdc1
2 8 49 2 active sync /dev/sdd1
4 8 65 3 active sync /dev/sde1
Formatting this partition,
[root@Linux-1 ~]# mkfs.xfs /dev/md5
meta-data=/dev/md5 isize=512 agcount=8, agsize=98048 blks
= sectsz=512 attr=2, projid32bit=1
= crc=1 finobt=0, sparse=0
data = bsize=4096 blocks=784128, imaxpct=25
= sunit=128 swidth=384 blks
naming =version 2 bsize=4096 ascii-ci=0 ftype=1
log =internal log bsize=4096 blocks=2560, version=2
= sectsz=512 sunit=8 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
Making an empty directory and mounting this partition,
[root@Linux-1 ~]# mkdir /raid5
[root@Linux-1 ~]# mount /dev/md5 /raid5
Confirming the mount,
[root@Linux-1 ~]# df -h
Filesystem Size Used Avail Use% Mounted on
devtmpfs 475M 0 475M 0% /dev
tmpfs 487M 0 487M 0% /dev/shm
tmpfs 487M 7.6M 479M 2% /run
tmpfs 487M 0 487M 0% /sys/fs/cgroup
/dev/mapper/centos_linux--1-root 17G 1.6G 16G 9% /
/dev/sda1 1014M 168M 847M 17% /boot
tmpfs 98M 0 98M 0% /run/user/0
/dev/md5 3.0G 33M 3.0G 2% /raid5
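Note the size: four 1 GB members minus one disk's worth of parity leaves roughly 3 GB, matching the "total capacity minus one disk" rule from the introduction.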
Saving md details to the .conf file,
[root@Linux-1 ~]# mdadm --detail --scan > /etc/mdadm.conf
Configuring the auto mount,
[root@Linux-1 ~]# vi /etc/fstab
/dev/md5 /raid5 xfs defaults 0 0
We can confirm after the reboot if the auto mount is working correctly.
RAID 5 Recovery
[root@Linux-1 ~]# halt
Halting and removing a disk will cause a fault in the existing RAID 5 array; we will then use a new 1 GB HDD to carry out the recovery.
[root@Linux-1 ~]# mdadm --query --detail /dev/md5
/dev/md5:
Version : 1.2
Creation Time : Fri Jun 23 12:45:34 2017
Raid Level : raid5
Array Size : 3139584 (2.99 GiB 3.21 GB)
Used Dev Size : 1046528 (1022.00 MiB 1071.64 MB)
Raid Devices : 4
Total Devices : 3
Persistence : Superblock is persistent
Update Time : Fri Jun 23 12:59:29 2017
State : clean, degraded
Active Devices : 3
Working Devices : 3
Failed Devices : 0
Spare Devices : 0
Layout : left-symmetric
Chunk Size : 512K
Name : RAID:5 (local to host RAID)
UUID : eb497ba9:59a635f0:e4a4acc1:4876bb0c
Events : 22
Number Major Minor RaidDevice State
0 8 17 0 active sync /dev/sdb1
1 8 33 1 active sync /dev/sdc1
2 8 49 2 active sync /dev/sdd1
- 0 0 3 removed
Because we removed the HDD ourselves, it is shown in the Removed state. In real operations, however, an HDD is rarely removed outright; in most cases the HDD has developed a fault, and the device is then reported in the Failed state.
When a device is Failed, the failed disk must first be removed from the md device before the recovery can proceed.
Removing a failed device and recovering:
- umount /dev/md5 (release the mount)
- mdadm /dev/md5 -r /dev/sdb1 (remove the failed device from the md device)
- proceed with the recovery
[root@Linux-1 ~]# mdadm /dev/md5 --add /dev/sde1
mdadm: added /dev/sde1
Using the recovery disk created earlier, we designate it as the recovery partition for the md5 device.
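As an optional aside of my own, mdadm can also block until the resync finishes, which is handy in scripts:

[root@Linux-1 ~]# mdadm --wait /dev/md5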
[root@Linux-1 ~]# mdadm --query --detail /dev/md5
/dev/md5:
Version : 1.2
Creation Time : Fri Jun 23 13:51:00 2017
Raid Level : raid5
Array Size : 3139584 (2.99 GiB 3.21 GB)
Used Dev Size : 1046528 (1022.00 MiB 1071.64 MB)
Raid Devices : 4
Total Devices : 4
Persistence : Superblock is persistent
Update Time : Fri Jun 23 13:55:40 2017
State : clean, degraded, recovering
Active Devices : 3
Working Devices : 4
Failed Devices : 0
Spare Devices : 1
Layout : left-symmetric
Chunk Size : 512K
Rebuild Status : 95% complete
Name : RAID:5 (local to host RAID)
UUID : 5b78e0c0:648d86dd:9fa5f44d:fea935de
Events : 39
Number Major Minor RaidDevice State
0 8 17 0 active sync /dev/sdb1
1 8 33 1 active sync /dev/sdc1
2 8 49 2 active sync /dev/sdd1
4 8 65 3 spare rebuilding /dev/sde1
[root@Linux-1 ~]# mdadm --query --detail /dev/md5
/dev/md5:
Version : 1.2
Creation Time : Fri Jun 23 12:45:34 2017
Raid Level : raid5
Array Size : 3139584 (2.99 GiB 3.21 GB)
Used Dev Size : 1046528 (1022.00 MiB 1071.64 MB)
Raid Devices : 4
Total Devices : 4
Persistence : Superblock is persistent
Update Time : Fri Jun 23 13:02:36 2017
State : clean
Active Devices : 4
Working Devices : 4
Failed Devices : 0
Spare Devices : 0
Layout : left-symmetric
Chunk Size : 512K
Name : RAID:5 (local to host RAID)
UUID : eb497ba9:59a635f0:e4a4acc1:4876bb0c
Events : 41
Number Major Minor RaidDevice State
0 8 17 0 active sync /dev/sdb1
1 8 33 1 active sync /dev/sdc1
2 8 49 2 active sync /dev/sdd1
4 8 65 3 active sync /dev/sde1
[root@Linux-1 ~]# mdadm --detail --scan > /etc/mdadm.conf
RAID 6 Configuration
Revert the VM snapshot to its initial state before starting!
We will first add six 1 GB hard disks.
Then we will create RAID-type partitions on sdb, sdc, sdd, sde, sdf, and sdg.
[root@Linux-1 ~]# fdisk /dev/sdb
[root@Linux-1 ~]# fdisk /dev/sdc
[root@Linux-1 ~]# fdisk /dev/sdd
[root@Linux-1 ~]# fdisk /dev/sde
[root@Linux-1 ~]# fdisk /dev/sdf
[root@Linux-1 ~]# fdisk /dev/sdg
Using the mknod command,
[root@Linux-1 ~]# mknod /dev/md6 b 9 6
Using the mdadm command,
[root@Linux-1 ~]# mdadm --create /dev/md6 --level=6 --raid-devices=4 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
mdadm: Fail to create md6 when using /sys/module/md_mod/parameters/new_array, fallback to creation via node
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md6 started.
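Since six disks were prepared but only four are in the array, one hypothetical variation (not what this walkthrough does) is to hand mdadm a hot spare at creation time, so a rebuild starts automatically when a member fails:

[root@Linux-1 ~]# mdadm --create /dev/md6 --level=6 --raid-devices=4 --spare-devices=1 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1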
Confirming the details,
[root@Linux-1 ~]# mdadm --detail --scan
ARRAY /dev/md6 metadata=1.2 name=Linux-1:6 UUID=00f0e81a:fd3cf4e3:29b61bf1:9fd35847
Confirming query details,
[root@Linux-1 ~]# mdadm --query --detail /dev/md6
.
.
.
Number Major Minor RaidDevice State
0 8 17 0 active sync /dev/sdb1
1 8 33 1 active sync /dev/sdc1
2 8 49 2 active sync /dev/sdd1
3 8 65 3 active sync /dev/sde1
Formatting this partition,
[root@Linux-1 ~]# mkfs.xfs /dev/md6
meta-data=/dev/md6 isize=512 agcount=8, agsize=65408 blks
= sectsz=512 attr=2, projid32bit=1
= crc=1 finobt=0, sparse=0
data = bsize=4096 blocks=523264, imaxpct=25
= sunit=128 swidth=256 blks
naming =version 2 bsize=4096 ascii-ci=0 ftype=1
log =internal log bsize=4096 blocks=2560, version=2
= sectsz=512 sunit=8 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
Making an empty directory and mounting this partition,
[root@Linux-1 ~]# mkdir /raid6
[root@Linux-1 ~]# mount /dev/md6 /raid6
Confirming the mount,
[root@Linux-1 ~]# df -h
Filesystem Size Used Avail Use% Mounted on
devtmpfs 475M 0 475M 0% /dev
tmpfs 487M 0 487M 0% /dev/shm
tmpfs 487M 7.6M 479M 2% /run
tmpfs 487M 0 487M 0% /sys/fs/cgroup
/dev/mapper/centos_linux--1-root 17G 1.6G 16G 9% /
/dev/sda1 1014M 168M 847M 17% /boot
tmpfs 98M 0 98M 0% /run/user/0
/dev/md6 2.0G 33M 2.0G 2% /raid6
Saving md details to the .conf file,
[root@Linux-1 ~]# mdadm --detail --scan > /etc/mdadm.conf
Configuring the auto mount,
[root@Linux-1 ~]# vi /etc/fstab
/dev/md6 /raid6 xfs defaults 0 0
We can confirm after the reboot if the auto mount is working correctly.
RAID 6 Recovery
[root@Linux-1 ~]# halt
Halting and removing two disks will cause a fault in the existing RAID 6 array; we will then use two new 1 GB HDDs to carry out the recovery.
[root@Linux-1 ~]# mdadm --query --detail /dev/md6
/dev/md6:
Version : 1.2
Creation Time : Fri Jun 23 14:02:07 2017
Raid Level : raid6
Array Size : 2093056 (2044.00 MiB 2143.29 MB)
Used Dev Size : 1046528 (1022.00 MiB 1071.64 MB)
Raid Devices : 4
Total Devices : 2
Persistence : Superblock is persistent
Update Time : Fri Jun 23 14:09:38 2017
State : clean, degraded
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
Layout : left-symmetric
Chunk Size : 512K
Name : RAID:6 (local to host RAID)
UUID : d7dfa1f7:3cfbb984:2c40ff2f:d38404f5
Events : 21
Number Major Minor RaidDevice State
0 8 17 0 active sync /dev/sdb1
1 8 33 1 active sync /dev/sdc1
- 0 0 2 removed
- 0 0 3 removed
Because we removed the HDDs ourselves, they are shown in the Removed state.
In real operations, however, an HDD is rarely removed outright; in most cases the HDD has developed a fault, and the device is then reported in the Failed state. When devices are Failed, the failed disks must first be removed from the md device before the recovery can proceed.
Removing a failed device and recovering:
- umount /dev/md6 (release the mount)
- mdadm /dev/md6 -r /dev/sdb1 (remove the failed device from the md device)
- proceed with the recovery
[root@Linux-1 ~]# mdadm /dev/md6 --add /dev/sdd1
mdadm: added /dev/sdd1
[root@Linux-1 ~]# mdadm /dev/md6 --add /dev/sde1
mdadm: added /dev/sde1
Using the recovery HDDs, we proceed with the recovery.
[root@Linux-1 ~]# mdadm --query --detail /dev/md6
/dev/md6:
Version : 1.2
Creation Time : Fri Jun 23 14:02:07 2017
Raid Level : raid6
Array Size : 2093056 (2044.00 MiB 2143.29 MB)
Used Dev Size : 1046528 (1022.00 MiB 1071.64 MB)
Raid Devices : 4
Total Devices : 4
Persistence : Superblock is persistent
Update Time : Fri Jun 23 14:14:09 2017
State : clean
Active Devices : 4
Working Devices : 4
Failed Devices : 0
Spare Devices : 0
Layout : left-symmetric
Chunk Size : 512K
Name : RAID:6 (local to host RAID)
UUID : d7dfa1f7:3cfbb984:2c40ff2f:d38404f5
Events : 58
Number Major Minor RaidDevice State
0 8 17 0 active sync /dev/sdb1
1 8 33 1 active sync /dev/sdc1
4 8 49 2 active sync /dev/sdd1
5 8 65 3 active sync /dev/sde1
Checking the state after the recovery is complete,
[root@Linux-1 ~]# mdadm --detail --scan > /etc/mdadm.conf
RAID 1+0 Configuration
Revert the VM snapshot to its initial state before starting!
We will first add six 1 GB hard disks.
Then we will create RAID-type partitions on sdb, sdc, sdd, sde, sdf, and sdg.
[root@Linux-1 ~]# fdisk /dev/sdb
[root@Linux-1 ~]# fdisk /dev/sdc
[root@Linux-1 ~]# fdisk /dev/sdd
[root@Linux-1 ~]# fdisk /dev/sde
[root@Linux-1 ~]# fdisk /dev/sdf
[root@Linux-1 ~]# fdisk /dev/sdg
Using the mknod command,
[root@Linux-1 ~]# mknod /dev/md10 b 9 10
Using the mdadm command,
[root@Linux-1 ~]# mdadm --create /dev/md10 --level=10 --raid-devices=4 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md10 started.
Confirming the details,
[root@Linux-1 ~]# mdadm --detail --scan
ARRAY /dev/md10 metadata=1.2 name=RAID:10 UUID=3d4080a1:2669cb55:1411317c:dcdf8fbd
Confirming query details,
[root@Linux-1 ~]# mdadm --query --detail /dev/md10
.
.
.
Number Major Minor RaidDevice State
0 8 17 0 active sync set-A /dev/sdb1
1 8 33 1 active sync set-B /dev/sdc1
2 8 49 2 active sync set-A /dev/sdd1
3 8 65 3 active sync set-B /dev/sde1
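The set-A/set-B labels mark the two copies inside each mirrored pair: sdb1 (set-A) and sdc1 (set-B) hold the same data, as do sdd1 and sde1. Losing both members of one pair destroys the array, which is why the recovery section below fails /dev/sdb1 and /dev/sde1, one from each pair.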
Formatting this partition,
[root@Linux-1 ~]# mkfs.xfs /dev/md10
meta-data=/dev/md10 isize=512 agcount=8, agsize=65408 blks
= sectsz=512 attr=2, projid32bit=1
= crc=1 finobt=0, sparse=0
data = bsize=4096 blocks=523264, imaxpct=25
= sunit=128 swidth=256 blks
naming =version 2 bsize=4096 ascii-ci=0 ftype=1
log =internal log bsize=4096 blocks=2560, version=2
= sectsz=512 sunit=8 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
Making an empty directory and mounting this partition,
[root@Linux-1 ~]# mkdir /raid10
[root@Linux-1 ~]# mount /dev/md10 /raid10
Confirming the mount,
[root@Linux-1 ~]# df -h
Filesystem Size Used Avail Use% Mounted on
devtmpfs 475M 0 475M 0% /dev
tmpfs 487M 0 487M 0% /dev/shm
tmpfs 487M 7.6M 479M 2% /run
tmpfs 487M 0 487M 0% /sys/fs/cgroup
/dev/mapper/centos_linux--1-root 17G 1.6G 16G 9% /
/dev/sda1 1014M 168M 847M 17% /boot
tmpfs 98M 0 98M 0% /run/user/0
/dev/md10 2.0G 33M 2.0G 2% /raid10
Saving md details to the .conf file,
[root@Linux-1 ~]# mdadm --detail --scan > /etc/mdadm.conf
Configuring the auto mount,
[root@Linux-1 ~]# vi /etc/fstab
/dev/md10 /raid10 xfs defaults 0 0
We can confirm after the reboot if the auto mount is working correctly.
RAID 1+0 Recovery
[root@Linux-1 ~]# halt
In the case of RAID 1+0, forcibly deleting the disks makes the MD device disappear, so instead we force devices into the Failed state and then carry out the recovery.
One caution: never fail both HDDs of the same mirrored set at the same time; be sure to fail only one disk per set.
(Failing /dev/sdb and /dev/sde works.)
For the recoverable levels, RAID 1, RAID 5, and RAID 6, the system can still boot with an HDD removed, but with RAID 0 and RAID 1+0 it cannot. As proof: if you configure RAID 1, 5, or 6, mount it, write data to the device, delete one or two HDDs, and reboot, the data can still be recovered and is displayed normally.
[root@Linux-1 ~]# umount /dev/md10
[root@Linux-1 ~]# mdadm /dev/md10 -f /dev/sdb1 /dev/sde1
[root@Linux-1 ~]# mdadm --query --detail /dev/md10
/dev/md10:
Version : 1.2
Creation Time : Fri Jun 23 16:03:46 2017
Raid Level : raid10
Array Size : 2093056 (2044.00 MiB 2143.29 MB)
Used Dev Size : 1046528 (1022.00 MiB 1071.64 MB)
Raid Devices : 4
Total Devices : 4
Persistence : Superblock is persistent
Update Time : Fri Jun 23 16:08:31 2017
State : clean, degraded
Active Devices : 2
Working Devices : 2
Failed Devices : 2
Spare Devices : 0
Layout : near=2
Chunk Size : 512K
Name : RAID:10 (local to host RAID)
UUID : 0eb90845:5d0cbec1:69c9a33d:0371708c
Events : 19
Number Major Minor RaidDevice State
- 0 0 0 removed
1 8 33 1 active sync set-B /dev/sdc1
2 8 49 2 active sync set-A /dev/sdd1
- 0 0 3 removed
0 8 17 - faulty /dev/sdb1
3 8 65 - faulty /dev/sde1
[root@Linux-1 ~]# mdadm /dev/md10 -r /dev/sdb1 /dev/sde1
mdadm: hot removed /dev/sdb1 from /dev/md10
mdadm: hot removed /dev/sde1 from /dev/md10
[root@Linux-1 ~]# reboot
[root@Linux-1 ~]# mdadm --query --detail /dev/md10
/dev/md10:
Version : 1.2
Creation Time : Fri Jun 23 16:03:46 2017
Raid Level : raid10
Array Size : 2093056 (2044.00 MiB 2143.29 MB)
Used Dev Size : 1046528 (1022.00 MiB 1071.64 MB)
Raid Devices : 4
Total Devices : 2
Persistence : Superblock is persistent
Update Time : Fri Jun 23 16:13:57 2017
State : clean, degraded
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
Layout : near=2
Chunk Size : 512K
Name : RAID:10 (local to host RAID)
UUID : 0eb90845:5d0cbec1:69c9a33d:0371708c
Events : 21
Number Major Minor RaidDevice State
- 0 0 0 removed
1 8 33 1 active sync set-B /dev/sdc1
2 8 49 2 active sync set-A /dev/sdd1
- 0 0 3 removed
[root@Linux-1 ~]# mdadm /dev/md10 --add /dev/sdf1 /dev/sdg1
mdadm: added /dev/sdf1
mdadm: added /dev/sdg1
[root@Linux-1 ~]# mdadm --query --detail /dev/md10
/dev/md10:
Version : 1.2
Creation Time : Fri Jun 23 16:03:46 2017
Raid Level : raid10
Array Size : 2093056 (2044.00 MiB 2143.29 MB)
Used Dev Size : 1046528 (1022.00 MiB 1071.64 MB)
Raid Devices : 4
Total Devices : 4
Persistence : Superblock is persistent
Update Time : Fri Jun 23 16:29:22 2017
State : clean, degraded, recovering
Active Devices : 2
Working Devices : 4
Failed Devices : 0
Spare Devices : 2
Layout : near=2
Chunk Size : 512K
Name : RAID:10 (local to host RAID)
UUID : 0eb90845:5d0cbec1:69c9a33d:0371708c
Events : 48
Number Major Minor RaidDevice State
5 8 97 0 active sync set-A /dev/sdg1
1 8 33 1 active sync set-B /dev/sdc1
2 8 49 2 active sync set-A /dev/sdd1
4 8 81 3 active sync set-B /dev/sdf1
Checking the state after the recovery is complete,
[root@Linux-1 ~]# mdadm --detail --scan > /etc/mdadm.conf
Deleting the RAID array
After this hands-on, I needed to delete the RAID system that we configured.
- Delete the mount-related information:
[root@Linux-1 ~]# umount /dev/md10
[root@Linux-1 ~]# vi /etc/fstab
(remove the /dev/md10 line from /etc/fstab)
- Delete the md device:
[root@Linux-1 ~]# mdadm -S /dev/md10
mdadm: stopped /dev/md10
- Initialize the superblocks of the partitions the md device used:
[root@Linux-1 ~]# mdadm --zero-superblock /dev/sdb1
[root@Linux-1 ~]# mdadm --zero-superblock /dev/sdc1
[root@Linux-1 ~]# mdadm --zero-superblock /dev/sdd1
[root@Linux-1 ~]# mdadm --zero-superblock /dev/sde1
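As a final check of my own (not in the original post), we can confirm that nothing is left behind:

[root@Linux-1 ~]# cat /proc/mdstat (should list no active arrays)
[root@Linux-1 ~]# lsblk (the member partitions appear as plain partitions again)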
In this post, I discussed what RAID is and how to set up each level. I also covered how to remove a failed device and rebuild the array using a replacement disk.