Linux Software Raid 1 - Creating a Raid 1 Mirror with mdadm


Ever wanted the benefits of Raid 1 without the cost of extra hardware? As the steps below show, it's very simple to create a software Raid 1 mirror on Linux with mdadm. The system used in this guide is a CentOS 7 PC; however, mdadm works the same on any Linux flavour.


STEP#1 Install mdadm on your system

CentOS & Fedora

yum install mdadm

Ubuntu & Debian

apt-get update && apt-get install mdadm
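
Whichever distribution you are on, you can confirm the install afterwards by checking the installed version:

mdadm --version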



STEP#2 Create Software Raid Partitions (Type 0xfd)

To set up software Raid 1 you need partitions of a specific type: "0xfd" - "Linux raid autodetect". These partitions also need to be exactly the same size. As shown below, the system used in this guide has 4 x 500GB disks (/dev/sda, /dev/sdb, /dev/sdc & /dev/sdd), of which only 2 (/dev/sdc & /dev/sdd) are set up with "Linux raid autodetect" partitions.

[root@localhost /]# fdisk -l

Disk /dev/sda: 500.1 GB, 500106780160 bytes, 976771055 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x00092b7c

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *        2048   975128575   487563264   83  Linux

Disk /dev/sdb: 500.1 GB, 500106780160 bytes, 976771055 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x00092b7c

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1            2048   975128575   487563264   83  Linux


Disk /dev/sdc: 500.1 GB, 500106780160 bytes, 976771055 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x00092b7c

   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1            2048   975128575   487563264   fd  Linux raid autodetect

Disk /dev/sdd: 500.1 GB, 500106780160 bytes, 976771055 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x0001c3e1

   Device Boot      Start         End      Blocks   Id  System
/dev/sdd1            2048   975128575   487563264   fd  Linux raid autodetect

Disk /dev/md0: 487.6 GB, 487587643392 bytes, 952319616 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

[root@localhost /]#

To set up raid partitions on your own system using fdisk, please refer to our other guide on using fdisk. A brief sketch of the key sequence is also shown below.
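
For reference, a minimal sketch of the interactive fdisk session used to create a type "fd" partition on an empty disk (accepting the defaults for partition number, first sector and last sector so the partition spans the whole disk) looks something like this:

[root@localhost /]# fdisk /dev/sdc
Command (m for help): n          (create a new primary partition, accepting the defaults)
Command (m for help): t          (change the partition type)
Hex code (type L to list all codes): fd
Command (m for help): w          (write the partition table and exit)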


STEP#3 Overwrite any old md superblocks

If you are setting up software Raid 1 for the first time on 2 brand new disks then you can skip this step. If you are setting it up on used disks that previously held an mdadm raid array, you'll need to perform this step to clear the old superblock. Note this command permanently clears the mdadm superblock from the listed disks, so any existing raid arrays on those disks will be erased. You only need to run it once, at the start, listing every disk you intend to reuse.

[root@localhost /]# mdadm --zero-superblock /dev/sdc /dev/sdd  
[root@localhost /]# 
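
If you're not sure whether a disk still holds an old superblock, mdadm --examine will tell you; on a clean device it reports that no md superblock was found:

[root@localhost /]# mdadm --examine /dev/sdc1
mdadm: No md superblock detected on /dev/sdc1.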



STEP#4 Create Raid 1 Array with mdadm

Simply run the following mdadm command to create the Raid 1 array, then verify the array status with /proc/mdstat:

[root@localhost /]# mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdc1 /dev/sdd1
[root@localhost /]#
[root@localhost /]# cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sdc1[0] sdd1[1]
      943349760 blocks super 1.1 [2/2] [UU]
      [========>............]  check = 42.1% (397700160/943349760) finish=2.1min speed=60028K/sec
      bitmap: 7/8 pages [28KB], 65536KB chunk

unused devices: <none>
[root@localhost /]# 

Note you can watch the syncing of the disks by repeating the "cat /proc/mdstat" command. Even though it's syncing, the array is still active and ready to use.
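
A convenient way to monitor the initial sync is with watch, which re-runs the command every couple of seconds until you press Ctrl+C:

[root@localhost /]# watch -n 2 cat /proc/mdstat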


STEP#5 Create a Filesystem on the Raid Array

You now need to create a filesystem on the raid array so you can store directories and files on it. You can use any filesystem you like, but ext4 is the most commonly used filesystem on Linux systems and is the one we will create here:

[root@localhost /]# mkfs.ext4 /dev/md0
mke2fs 1.41.3 (18-Jan-2016)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
92060160 inodes, 487563264 blocks
50411726 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=887563264
6355 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks: 
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208, 
        4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968, 
        102400000

Writing inode tables: done                            
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done
[root@localhost /]# 



STEP#6 Mount Raid 1 array /dev/md0

Now we simply need to create a mount point for the Raid 1 array and mount it. For this example we have created the directory /raid1. The "df -h" command can then be used to verify the mount was successful.

[root@localhost /]# mkdir /raid1
[root@localhost /]# mount /dev/md0 /raid1
[root@localhost /]# df -h

Filesystem      Size  Used Avail Use% Mounted on
/dev/sda1       454G   80G  375G  18% /
devtmpfs        3.8G     0  3.8G   0% /dev
tmpfs           3.8G   77M  3.7G   2% /dev/shm
tmpfs           3.8G  9.0M  3.8G   1% /run
tmpfs           3.8G     0  3.8G   0% /sys/fs/cgroup
/dev/sdb1       997M  183M  814M  19% /boot
tmpfs           772M   48K  772M   1% /run/user/1000
/dev/md0        487G     0  480G   0% /raid1
[root@localhost /]#



STEP#7 Update /etc/mdadm.conf File

Recording the array in mdadm's configuration file ensures it is assembled with the same /dev/md0 name on every boot. The "mdadm --detail --scan" command prints the required ARRAY line, which we simply append to the config file.

CentOS & Fedora

[root@localhost ~]# mdadm --detail --scan >>/etc/mdadm.conf
[root@localhost ~]# 

Ubuntu & Debian

[root@localhost ~]# mdadm --detail --scan >>/etc/mdadm/mdadm.conf
[root@localhost ~]# 
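
In both cases the appended line will look something like the following (the metadata version, name and UUID come from your own array, as reported by mdadm --detail --scan):

ARRAY /dev/md0 metadata=1.1 name=localhost:2 UUID=1e4872b5:e4d38a6d:9ee1be63:b2a49125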



STEP#8 Automount on Reboot with /etc/fstab

First, check the details and health of the new Raid 1 array with mdadm --detail.

[root@localhost ~]# mdadm --detail /dev/md0
/dev/md0:
        Version : 1.1
  Creation Time : Mon Jan 18 21:34:38 2016
     Raid Level : raid1
     Array Size : 486859712 (486.90 GiB 490.90 GB)
  Used Dev Size : 479929856 (457.70 GiB 491.45 GB)
   Raid Devices : 2
  Total Devices : 2
    Persistence : Superblock is persistent

  Intent Bitmap : Internal

    Update Time : Mon Jan 18 21:53:01 2016
          State : active 
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

         Layout : near=2
     Chunk Size : 512K

           Name : localhost:2  (local to host localhost)
           UUID : 1e4872b5:e4d38a6d:9ee1be63:b2a49125
         Events : 962261

    Number   Major   Minor   RaidDevice State
       0       8        3        0      active sync   /dev/sdc1
       1       8       19        1      active sync   /dev/sdd1
[root@localhost ~]#

From this we can see the array UUID = 1e4872b5:e4d38a6d:9ee1be63:b2a49125. Note this is the md array UUID that mdadm uses in /etc/mdadm.conf; it is not the filesystem UUID that /etc/fstab expects, so the simplest option is to mount the array by its device path. Now we add the below line to the bottom of the /etc/fstab file.

/dev/md0                /raid1                  ext4    defaults        0 0
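
If you prefer to mount by UUID in /etc/fstab, use the filesystem UUID reported by blkid rather than the mdadm array UUID (the value shown below is just a placeholder; yours will differ):

[root@localhost ~]# blkid /dev/md0
/dev/md0: UUID="xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" TYPE="ext4"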

Now each time your system reboots the /dev/md0 Raid 1 array will be auto-mounted on the /raid1 directory. You can also test the new fstab entry straight away without rebooting, as shown below.
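
To verify the fstab entry works before relying on a reboot, unmount the array and let mount re-read /etc/fstab:

[root@localhost ~]# umount /raid1
[root@localhost ~]# mount -a
[root@localhost ~]# df -h /raid1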


And that's it! You now have a working Linux Software Raid 1 array!



