RAID 1 & RAID 5
=================================================================
RAID 1
1) View the partition information
[root@zyl mnt]# fdisk -l|grep /dev/sd
Disk /dev/sda: 21.5 GB, 21474836480 bytes
/dev/sda1 * 1 26 204800 83 Linux
/dev/sda2 26 154 1024000 82 Linux swap / Solaris
/dev/sda3 154 2611 19741696 83 Linux
Disk /dev/sdb: 5368 MB, 5368709120 bytes
Disk /dev/sdc: 5368 MB, 5368709120 bytes
Disk /dev/sdd: 5368 MB, 5368709120 bytes
Disk /dev/sde: 5368 MB, 5368709120 bytes
-----------------------------------------------------------------
2) Partition the disks with fdisk
The step that matters is the partition type:
Command (m for help): t ====>>RAID member partitions must be type fd
Selected partition 1
Hex code (type L to list codes): fd
Changed system type of partition 1 to fd (Linux raid autodetect)
The other fdisk steps are straightforward; the full dialog is sketched below.
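For reference, a minimal fdisk dialog for one member disk (a sketch, assuming one primary partition covering all of /dev/sdb; the prompts vary slightly between fdisk versions):
fdisk /dev/sdb
  n         # new partition
  p         # primary
  1         # partition number 1
  <Enter>   # accept the default first cylinder
  <Enter>   # accept the default last cylinder (use the whole disk)
  t         # change the partition type
  fd        # Linux raid autodetect
  w         # write the partition table and exit
Repeat the same dialog for /dev/sdc, /dev/sdd and /dev/sde.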
-----------------------------------------------------------------
3) Verify the new partitions
[root@zyl mnt]# fdisk -l|grep /dev/sd
Disk /dev/sdb: 5368 MB, 5368709120 bytes
/dev/sdb1 1 652 5237158+ fd Linux raid autodetect
Disk /dev/sdc: 5368 MB, 5368709120 bytes
/dev/sdc1 1 652 5237158+ fd Linux raid autodetect
Disk /dev/sdd: 5368 MB, 5368709120 bytes
/dev/sdd1 1 652 5237158+ fd Linux raid autodetect
Disk /dev/sde: 5368 MB, 5368709120 bytes
/dev/sde1 1 652 5237158+ fd Linux raid autodetect
-----------------------------------------------------------------
4) Check whether the partitions are ready for RAID: the aa55 MBR magic together with the fd partition type indicates they are
[root@zyl mnt]# mdadm -E /dev/sd[b-e]
/dev/sdb:
MBR Magic : aa55
Partition[0] : 10474317 sectors at 63 (type fd)
/dev/sdc:
MBR Magic : aa55
Partition[0] : 10474317 sectors at 63 (type fd)
/dev/sdd:
MBR Magic : aa55
Partition[0] : 10474317 sectors at 63 (type fd)
/dev/sde:
MBR Magic : aa55
Partition[0] : 10474317 sectors at 63 (type fd)
-----------------------------------------------------------------
5) Create the RAID 1 array by adding the disk partitions to an md device.
[root@zyl mnt]# mdadm -C md1 -l 1 -n 2 /dev/sdb1 /dev/sdc1 ===>>-l sets the RAID level; see man mdadm
mdadm: Note: this array has metadata at the start and
may not be suitable as a boot device. If you plan to
store '/boot' on this device please ensure that
your boot-loader understands md/v1.x metadata, or use
--metadata=0.90
Continue creating array? y
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md/md1 started.
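The same create command written with long options makes the flags self-explanatory (a sketch, equivalent to the short form above):
mdadm --create /dev/md/md1 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1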
-----------------------------------------------------------------
6) View the RAID status. Note that the array created as md1 shows up here as md127: with v1.2 metadata and no /etc/mdadm.conf entry yet, the kernel picks a free device number counting down from 127, and /dev/md/md1 is a symlink to /dev/md127.
[root@zyl mnt]# cat /proc/mdstat
Personalities : [raid1]
md127 : active raid1 sdc1[1] sdb1[0]
5233024 blocks super 1.2 [2/2] [UU]
unused devices: <none>
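While a new mirror is still resynchronizing, the progress can be followed live (a sketch):
watch -n 1 cat /proc/mdstat              # refresh the sync status every second
mdadm -D /dev/md/md1 | grep -i state     # or query the array state directly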
-----------------------------------------------------------------
7) Format the RAID device with ext4
[root@zyl mnt]# mkfs.ext4 /dev/md/md1
mke2fs 1.41.12 (17-May-2010)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
327680 inodes, 1308256 blocks
65412 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=1342177280
40 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done
This filesystem will be automatically checked every 26 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.
[root@zyl mnt]#
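If the periodic checks mentioned at the end of the mke2fs output are unwanted, they can be disabled with the tune2fs options the message names (a sketch):
tune2fs -c 0 -i 0 /dev/md/md1   # -c 0: no forced check by mount count; -i 0: none by time interval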
-----------------------------------------------------------------
8) Create a mount point and mount the formatted RAID device manually. (Alternatively, skip straight to step 10 for automatic mounting.)
[root@zyl mnt]# mkdir m1
[root@zyl mnt]# mount /dev/md/md1 /mnt/m1/
[root@zyl mnt]# ls m1
lost+found ====>>the mount succeeded
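The mount can also be confirmed explicitly (a sketch):
df -hT /mnt/m1          # shows the backing device, filesystem type and size
mount | grep /mnt/m1    # shows the active mount entry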
-----------------------------------------------------------------
9) Test writing a file to the RAID device
[root@zyl m1]# touch raid1
[root@zyl m1]# ls
lost+found raid1
-----------------------------------------------------------------
10) Add a mount entry for the RAID device to /etc/fstab for automatic mounting, and record the array in /etc/mdadm.conf so it is assembled under a stable name at boot
[root@zyl m1]# vim /etc/fstab
/dev/md/md1 /mnt/m1 ext4 defaults 0 0
[root@zyl m1]# ls /etc/md*
ls: cannot access /etc/md*: No such file or directory
[root@zyl m1]# mdadm -Dsv >>/etc/mdadm.conf
[root@zyl m1]# cat /etc/mdadm.conf
ARRAY /dev/md/md1 level=raid1 num-devices=2 metadata=1.2 name=zyl:md1 UUID=30fd9147:48f07e11:18858067:2b77b173
devices=/dev/sdb1,/dev/sdc1
[root@zyl m1]#
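Since the kernel device number is not guaranteed to be stable (md1 vs md127 above), an alternative is an fstab entry keyed on the filesystem UUID (a sketch; substitute the UUID that blkid actually reports):
blkid /dev/md/md1                                    # prints UUID="..." TYPE="ext4"
UUID=<uuid-from-blkid>  /mnt/m1  ext4  defaults 0 0  # fstab line using that UUID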
-----------------------------------------------------------------
11) Reboot and check that the RAID device is mounted automatically and that data can still be read and written normally
[root@zyl m1]# touch nice
[root@zyl m1]# ls
lost+found nice raid1
-----------------------------------------------------------------
RAID 1 setup complete.
=================================================================
RAID 5
Task: build a RAID 5 from four disks, using /dev/sd[d-f]1 as the three active members and /dev/sdg1 as the hot spare
[root@zyl ~]# mdadm -C md5 -l 5 -n 3 -x 1 /dev/sd[d-g]1
mdadm: largest drive (/dev/sdd1) exceeds size (2095104K) by more than 1%
Continue creating array? y
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md/md5 started.
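The long-option form of the same command (a sketch, equivalent to the short flags above):
mdadm --create /dev/md/md5 --level=5 --raid-devices=3 --spare-devices=1 /dev/sd[d-g]1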
-----------------------------------------------------------------
[root@zyl ~]# cat /proc/mdstat
Personalities : [raid1] [raid6] [raid5] [raid4]
md127 : active raid5 sdf1[4] sdg1[3](S) sde1[1] sdd1[0]
4190208 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU]
-----------------------------------------------------------------
[root@zyl ~]# ls /dev/md*
/dev/md1 /dev/md127
/dev/md:
md1 md5 md-device-map
-----------------------------------------------------------------
[root@zyl ~]# mdadm -D /dev/md/md5
/dev/md/md5:
Version : 1.2
Creation Time : Fri Jun 2 19:10:29 2017
Raid Level : raid5
Array Size : 4190208 (4.00 GiB 4.29 GB)
Used Dev Size : 2095104 (2046.34 MiB 2145.39 MB)
Raid Devices : 3
Total Devices : 4
Persistence : Superblock is persistent
Update Time : Fri Jun 2 19:10:40 2017
State : clean
Active Devices : 3
Working Devices : 4
Failed Devices : 0
Spare Devices : 1
Layout : left-symmetric
Chunk Size : 512K
Name : zyl:md5 (local to host zyl)
UUID : 02eb1056:090fe9c0:7d8b10bc:7e8ab7e8
Events : 18
Number Major Minor RaidDevice State
0 8 49 0 active sync /dev/sdd1
1 8 65 1 active sync /dev/sde1
4 8 81 2 active sync /dev/sdf1
3 8 97 - spare /dev/sdg1
===>>In the md5 detail above, /dev/sdg1 was automatically designated as the spare (hot standby)
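To watch the hot spare actually take over, a failure can be simulated on an active member (a sketch; the array runs degraded until the rebuild finishes):
mdadm /dev/md/md5 --fail /dev/sdd1   # mark an active member as failed
cat /proc/mdstat                     # sdg1 should drop its (S) flag and start rebuilding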
=================================================================
Extension: simulating a disk failure in RAID 1
[root@zyl ~]# cat /proc/mdstat
Personalities : [raid1] [raid6] [raid5] [raid4]
md127 : active raid5 sdf1[4] sdg1[3](S) sde1[1] sdd1[0]
4190208 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU]
md1 : active raid1 sdc1[1] sdb1[0]
5233024 blocks super 1.2 [2/2] [UU]
unused devices: <none>
[root@zyl ~]# ls /dev/sd* ===>>check that the disk device files exist
[root@zyl ~]# mdadm -D /dev/md/md1 ===>>check the status of the md1 device
[root@zyl ~]# ls /mnt/m1/
lost+found nice raid1
[root@zyl ~]# touch /mnt/m1/kk
[root@zyl ~]# mdadm /dev/md/md1 --fail /dev/sdb1 ===>>mark the sdb1 partition in md1 as failed
mdadm: set /dev/sdb1 faulty in /dev/md/md1
[root@zyl m1]# mdadm -D /dev/md1 ===>>check md1's status again; sdb1 now shows as faulty
/dev/md1:
Version : 1.2
Creation Time : Fri Jun 2 18:32:45 2017
Raid Level : raid1
Array Size : 5233024 (4.99 GiB 5.36 GB)
Used Dev Size : 5233024 (4.99 GiB 5.36 GB)
Raid Devices : 2
Total Devices : 2
Persistence : Superblock is persistent
Update Time : Fri Jun 2 19:36:38 2017
State : clean, degraded
Active Devices : 1
Working Devices : 1
Failed Devices : 1
Spare Devices : 0
Name : zyl:md1 (local to host zyl)
UUID : 30fd9147:48f07e11:18858067:2b77b173
Events : 29
Number Major Minor RaidDevice State
0 0 0 0 removed
1 8 33 1 active sync /dev/sdc1
0 8 17 - faulty /dev/sdb1
[root@zyl m1]# touch kkk
[root@zyl m1]# ls
kk kkk lost+found nice raid1 ===>>the touch and ls confirm that the surviving disk in the RAID 1 still serves reads and writes normally
[root@zyl m1]# mdadm /dev/md/md1 --remove /dev/sdb1
mdadm: hot removed /dev/sdb1 from /dev/md/md1 ===>>sdb1 has been removed from the md1 array
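mdadm also accepts several manage operations in one invocation, so the fail and remove above can be combined (a sketch):
mdadm /dev/md/md1 --fail /dev/sdb1 --remove /dev/sdb1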
[root@zyl m1]# mdadm /dev/md/md1 -a /dev/sdh1
mdadm: /dev/sdh1 not large enough to join array ===>>just a test; sdh1 is too small to join the array
[root@zyl m1]# mdadm /dev/md/md1 -a /dev/sdb1
mdadm: added /dev/sdb1
[root@zyl m1]# mdadm -D /dev/md/md1
/dev/md/md1:
Version : 1.2
Creation Time : Fri Jun 2 18:32:45 2017
Raid Level : raid1
Array Size : 5233024 (4.99 GiB 5.36 GB)
Used Dev Size : 5233024 (4.99 GiB 5.36 GB)
Raid Devices : 2
Total Devices : 2
Persistence : Superblock is persistent
Update Time : Fri Jun 2 19:48:04 2017
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
Name : zyl:md1 (local to host zyl)
UUID : 30fd9147:48f07e11:18858067:2b77b173
Events : 57
Number Major Minor RaidDevice State
2 8 17 0 active sync /dev/sdb1
1 8 33 1 active sync /dev/sdc1
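For completeness, the usual teardown sequence when an array is no longer needed (a sketch; zeroing the superblocks destroys the array metadata):
umount /mnt/m1                                 # unmount the filesystem first
mdadm --stop /dev/md/md1                       # stop the array
mdadm --zero-superblock /dev/sdb1 /dev/sdc1    # wipe the md superblocks from the members
Also remove the matching lines from /etc/fstab and /etc/mdadm.conf.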
=================================================================