
How To Replace A Failed Hard Disk In A Linux Software RAID
1 Preliminary Note
In this example I have two hard drives, /dev/sda and /dev/sdb, with the partitions /dev/sda1 and /dev/sda2 as well as /dev/sdb1 and /dev/sdb2.
/dev/sda1 and /dev/sdb1 make up the RAID1 array /dev/md0.
/dev/sda2 and /dev/sdb2 make up the RAID1 array /dev/md1.
/dev/sda1 + /dev/sdb1 = /dev/md0
/dev/sda2 + /dev/sdb2 = /dev/md1
/dev/sdb has failed, and we want to replace it.
2 How Do I Tell If A Hard Disk Has Failed?
If a disk has failed, you will probably find a lot of error messages in the log files, e.g. /var/log/messages or /var/log/syslog.
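For example, a quick scan for messages about the failing drive might look like this (the log file name and the search patterns are only illustrative; adjust them to your distribution and to the actual errors you see):
grep -iE 'sdb|I/O error' /var/log/syslog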
You can also run
cat /proc/mdstat
and instead of the string [UU] you will see [U_] if you have a degraded RAID1 array.
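You can also query an array directly with mdadm; the output shows the array state (e.g. degraded) and lists any member device that is marked as faulty:
mdadm --detail /dev/md0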
3 Removing The Failed Disk
To remove /dev/sdb, we will mark /dev/sdb1 and /dev/sdb2 as failed and remove them from their respective RAID arrays (/dev/md0 and /dev/md1).
First we mark /dev/sdb1 as failed:
mdadm --manage /dev/md0 --fail /dev/sdb1
The output of
cat /proc/mdstat
should look like this:
server1:~# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid5] [raid4] [raid6] [raid10]
md0 : active raid1 sda1[0] sdb1[2](F)
24418688 blocks [2/1] [U_]
md1 : active raid1 sda2[0] sdb2[1]
24418688 blocks [2/2] [UU]
unused devices: <none>
Then we remove /dev/sdb1 from /dev/md0:
mdadm --manage /dev/md0 --remove /dev/sdb1
The output should be like this:
server1:~# mdadm --manage /dev/md0 --remove /dev/sdb1
mdadm: hot removed /dev/sdb1
And
cat /proc/mdstat
should show this:
server1:~# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid5] [raid4] [raid6] [raid10]
md0 : active raid1 sda1[0]
24418688 blocks [2/1] [U_]
md1 : active raid1 sda2[0] sdb2[1]
24418688 blocks [2/2] [UU]
unused devices: <none>
Now we do the same steps again for /dev/sdb2 (which is part of /dev/md1):
mdadm --manage /dev/md1 --fail /dev/sdb2
cat /proc/mdstat
server1:~# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid5] [raid4] [raid6] [raid10]
md0 : active raid1 sda1[0]
24418688 blocks [2/1] [U_]
md1 : active raid1 sda2[0] sdb2[2](F)
24418688 blocks [2/1] [U_]
unused devices: <none>
mdadm --manage /dev/md1 --remove /dev/sdb2
server1:~# mdadm --manage /dev/md1 --remove /dev/sdb2
mdadm: hot removed /dev/sdb2
cat /proc/mdstat
server1:~# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid5] [raid4] [raid6] [raid10]
md0 : active raid1 sda1[0]
24418688 blocks [2/1] [U_]
md1 : active raid1 sda2[0]
24418688 blocks [2/1] [U_]
unused devices: <none>
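If a failed disk holds more partitions than in this example, the fail-and-remove sequence can also be scripted. A minimal sketch, assuming the same array/partition pairing as above:
for pair in md0:sdb1 md1:sdb2; do
  md=${pair%%:*}
  part=${pair##*:}
  mdadm --manage /dev/$md --fail /dev/$part
  mdadm --manage /dev/$md --remove /dev/$part
done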
Then power down the system:
shutdown -h now
and replace the old /dev/sdb hard drive with a new one (it must be at least as large as the old one - if it is even a few MB smaller, rebuilding the arrays will fail).
4 Adding The New Hard Disk
After you have changed the hard disk /dev/sdb, boot the system.
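Before recreating the partitioning, you may want to confirm that the new /dev/sdb really is at least as large as /dev/sda (a quick sanity check; blockdev prints the disk size in bytes):
blockdev --getsize64 /dev/sda
blockdev --getsize64 /dev/sdb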
The first thing we must do now is create the exact same partitioning as on /dev/sda. We can do this with the sgdisk command from the gdisk package. If you haven't installed gdisk yet, run this command to install it on Debian and Ubuntu:
apt-get install gdisk
For Red Hat-based Linux distributions like CentOS, use:
yum install gdisk
and for OpenSuSE use:
yast install gdisk
The next step is optional but recommended. To make sure that you have a backup of the partition scheme, you can use sgdisk to write the partition tables of both disks into files. I will store the backups in the /root folder.
sgdisk --backup=/root/sda.partitiontable /dev/sda
sgdisk --backup=/root/sdb.partitiontable /dev/sdb
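In case of a failure you can restore a partition table later on with the --load-backup option of the sgdisk command; restoring the table of /dev/sdb from the backup created above would, for example, look like this:
sgdisk --load-backup=/root/sdb.partitiontable /dev/sdb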
Now we copy the partition scheme from /dev/sda to /dev/sdb (note the argument order: the target disk /dev/sdb comes first, the source disk /dev/sda last):
sgdisk -R /dev/sdb /dev/sda
Then you have to randomize the GUIDs on the new hard disk to make sure that they are unique:
sgdisk -G /dev/sdb
You can run
sgdisk -p /dev/sda
sgdisk -p /dev/sdb
to check whether both hard drives now have the same partitioning.
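If you would rather not compare the two listings by eye, you can diff the outputs (bash syntax; the disk identifier GUID lines will of course differ because of sgdisk -G):
diff <(sgdisk -p /dev/sda) <(sgdisk -p /dev/sdb)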
Next we add /dev/sdb1 to /dev/md0 and /dev/sdb2 to /dev/md1:
mdadm --manage /dev/md0 --add /dev/sdb1
server1:~# mdadm --manage /dev/md0 --add /dev/sdb1
mdadm: re-added /dev/sdb1
mdadm --manage /dev/md1 --add /dev/sdb2
server1:~# mdadm --manage /dev/md1 --add /dev/sdb2
mdadm: re-added /dev/sdb2
Now both arrays (/dev/md0 and /dev/md1) will be synchronized. Run
cat /proc/mdstat
to see when it's finished.
While the synchronization is taking place, the output will look like this:
server1:~# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid5] [raid4] [raid6] [raid10]
md0 : active raid1 sda1[0] sdb1[1]
24418688 blocks [2/1] [U_]
[=>...................] recovery = 9.9% (2423168/24418688) finish=2.8min speed=127535K/sec
md1 : active raid1 sda2[0] sdb2[1]
24418688 blocks [2/1] [U_]
[=>...................] recovery = 6.4% (1572096/24418688) finish=1.9min speed=196512K/sec
unused devices: <none>
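Instead of re-running cat /proc/mdstat by hand, you can also let watch refresh the progress display for you (optional, purely a convenience):
watch -n 2 cat /proc/mdstat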
When the synchronization has finished, the output will look like this:
server1:~# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid5] [raid4] [raid6] [raid10]
md0 : active raid1 sda1[0] sdb1[1]
24418688 blocks [2/2] [UU]
md1 : active raid1 sda2[0] sdb2[1]
24418688 blocks [2/2] [UU]
unused devices: <none>
That's it, you have successfully replaced /dev/sdb!