Install mdadm:
sudo apt-get install mdadm
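You can confirm the tool is installed by checking its version:
mdadm --version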
The RAID 1 array type is implemented by mirroring data across all available disks. Each disk in a RAID 1 array gets a full copy of the data, providing redundancy in the event of a device failure.
To get started, find the identifiers for the raw disks that you will be using:
lsblk -o NAME,SIZE,FSTYPE,TYPE,MOUNTPOINT
-----------------------------------------
Output
NAME   SIZE FSTYPE TYPE MOUNTPOINT
sda    100G ext4   disk
sdb    100G        disk
sdc    100G        disk
As you can see above, we have two disks without a filesystem, each 100G in size. In this example, these devices have been given the /dev/sdb and /dev/sdc identifiers for this session. These will be the raw components we will use to build the array.
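If these disks have been used before, they may still carry old filesystem or RAID signatures that can confuse mdadm. As an optional precaution, you can check for leftover md metadata and, if necessary, wipe all old signatures. The wipefs step is destructive, so only run it against disks whose contents you no longer need:
sudo mdadm --examine /dev/sdb /dev/sdc
sudo wipefs -a /dev/sdb /dev/sdc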
To create a RAID 1 array with these components, pass them to the mdadm --create command. You will have to specify the device name you wish to create (/dev/md0 in our case), the RAID level, and the number of devices:
sudo mdadm --create --verbose /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
If the component devices you are using are not partitions with the boot flag enabled, you will likely be given the following warning. It is safe to type y to continue:
Output
mdadm: Note: this array has metadata at the start and
may not be suitable as a boot device. If you plan to
store '/boot' on this device please ensure that
your boot-loader understands md/v1.x metadata, or use
--metadata=0.90
mdadm: size set to 104792064K
Continue creating array? y
The mdadm tool will start to mirror the drives. This can take some time to complete, but the array can be used during this time. You can monitor the progress of the mirroring by checking the /proc/mdstat file:
cat /proc/mdstat
------------------
Output
Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
md0 : active raid1 sdc[1] sdb[0]
      104792064 blocks super 1.2 [2/2] [UU]
      [====>................]  resync = 20.2% (21233216/104792064) finish=6.9min speed=199507K/sec
      bitmap: 0/1 pages [0KB], 65536KB chunk
unused devices: <none>
As you can see in the md0 line, the /dev/md0 device has been created in the RAID 1 configuration using the /dev/sdb and /dev/sdc devices. The resync line below it shows the progress of the mirroring. You can continue the guide while this process completes.
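If you prefer to follow the resync continuously rather than re-running the command, watch can refresh the view every second (press Ctrl+C to exit):
watch -n1 cat /proc/mdstat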
Next, create a filesystem on the array:
sudo mkfs.ext4 -F /dev/md0
Create a mount point to attach the new filesystem:
sudo mkdir -p /mnt/md0
You can mount the filesystem by typing:
sudo mount /dev/md0 /mnt/md0
Check whether the new space is available by typing:
df -h -x devtmpfs -x tmpfs
--------------------------
Output
Filesystem      Size  Used Avail Use% Mounted on
/dev/vda1        20G  1.1G   18G   6% /
/dev/md0         99G   60M   94G   1% /mnt/md0
The new filesystem is mounted and accessible.
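As a quick sanity check, you can write a test file to the array and read it back (the file name here is just an example):
echo 'raid test' | sudo tee /mnt/md0/raid-test.txt
cat /mnt/md0/raid-test.txt
sudo rm /mnt/md0/raid-test.txt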
To make sure that the array is reassembled automatically at boot, we will have to adjust the /etc/mdadm/mdadm.conf file. You can scan the active array and append its definition to the file by typing:
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
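On Ubuntu and Debian, the initramfs keeps its own copy of mdadm.conf, so update it afterwards to make sure the array is available during early boot:
sudo update-initramfs -u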
Add the new filesystem mount options to the /etc/fstab file for automatic mounting at boot:
echo '/dev/md0 /mnt/md0 ext4 defaults,nofail,discard 0 0' | sudo tee -a /etc/fstab
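You can verify the fstab entry without rebooting by unmounting the filesystem and letting mount re-read /etc/fstab:
sudo umount /mnt/md0
sudo mount -a
df -h /mnt/md0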
Your RAID 1 array should now be assembled and mounted automatically at each boot. You can check the array's state at any time with:
cat /proc/mdstat
-----------------
Output
Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
md0 : active raid1 sdc[1] sdb[0]
      104792064 blocks super 1.2 [2/2] [UU]
      bitmap: 0/1 pages [0KB], 65536KB chunk
unused devices: <none>
To view a detailed summary of the array, pass the -D (--detail) flag to mdadm. The output will look similar to this:
sudo mdadm -D /dev/md0
------------------
Output
/dev/md0:
Version : 1.2
Creation Time : Sat Sep 2 18:49:16 2017
Raid Level : raid1
Array Size : 1953382464 (1862.89 GiB 2000.26 GB)
Used Dev Size : 1953382464 (1862.89 GiB 2000.26 GB)
Raid Devices : 2
Total Devices : 2
Persistence : Superblock is persistent
Intent Bitmap : Internal
Update Time : Fri Apr 16 17:01:07 2021
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
Name : backupserver:0 (local to host backupserver)
UUID : 2de2b9e9:b9d74a05:32c8e471:8a6437b2
Events : 4925
Number Major Minor RaidDevice State
0       8       16        0      active sync   /dev/sdb
1       8       32        1      active sync   /dev/sdc
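If you only need the one-line summary in the format used by mdadm.conf, mdadm can print it directly:
sudo mdadm --detail --brief /dev/md0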
To obtain detailed information about each RAID component device, run mdadm --examine on it:
sudo mdadm --examine /dev/sdb /dev/sdc
-----------------------------------------
Output
/dev/sdb:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x1
Array UUID : 2de2b9e9:b9d74a05:32c8e471:8a6437b2
Name : backupserver:0 (local to host backupserver)
Creation Time : Sat Sep 2 18:49:16 2017
Raid Level : raid1
Raid Devices : 2
Avail Dev Size : 3906764976 (1862.89 GiB 2000.26 GB)
Array Size : 1953382464 (1862.89 GiB 2000.26 GB)
Used Dev Size : 3906764928 (1862.89 GiB 2000.26 GB)
Data Offset : 262144 sectors
Super Offset : 8 sectors
Unused Space : before=262056 sectors, after=48 sectors
State : clean
Device UUID : c22a3b70:493acf3d:caa4fd40:1c3787c3
Internal Bitmap : 8 sectors from superblock
Update Time : Fri Apr 16 22:29:11 2021
Bad Block Log : 512 entries available at offset 72 sectors
Checksum : 3dc3d8da - correct
Events : 4943
Device Role : Active device 0
Array State : AA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdc:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x1
Array UUID : 2de2b9e9:b9d74a05:32c8e471:8a6437b2
Name : backupserver:0 (local to host backupserver)
Creation Time : Sat Sep 2 18:49:16 2017
Raid Level : raid1
Raid Devices : 2
Avail Dev Size : 3906764943 (1862.89 GiB 2000.26 GB)
Array Size : 1953382464 (1862.89 GiB 2000.26 GB)
Used Dev Size : 3906764928 (1862.89 GiB 2000.26 GB)
Data Offset : 262144 sectors
Super Offset : 8 sectors
Unused Space : before=262056 sectors, after=15 sectors
State : clean
Device UUID : b6935080:9c85db10:9ba82f62:e28ae364
Internal Bitmap : 8 sectors from superblock
Update Time : Fri Apr 16 22:29:11 2021
Bad Block Log : 512 entries available at offset 72 sectors
Checksum : e373e497 - correct
Events : 4943
Device Role : Active device 1
Array State : AA ('A' == active, '.' == missing, 'R' == replacing)
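On a healthy mirror, the Array UUID is identical on every member and the Events counters match; a member whose Events count lags behind has missed writes and will need to be resynced. You can compare these fields at a glance with, for example:
sudo mdadm --examine /dev/sdb /dev/sdc | grep -E 'Array UUID|Events'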
You can force a check of the entire array while it is online. For example, to check the array on /dev/md0, run the following as root (the redirection into /sys has to happen in a root shell, so prefixing echo with sudo is not enough):
echo check > /sys/block/md0/md/sync_action
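The check runs in the background: its progress appears in /proc/mdstat just like a resync, and any inconsistencies found are counted in a sysfs counter:
cat /proc/mdstat
cat /sys/block/md0/md/mismatch_cnt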
If the array fails to assemble after a reboot, it may come up inactive, sometimes under a different device name such as /dev/md_d0. You should see your inactive RAID with:
cat /proc/mdstat
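If /proc/mdstat lists the array as inactive, stop it before attempting to reassemble it:
sudo mdadm --stop /dev/md_d0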
Reassemble the array, then print its configuration line:
sudo mdadm --assemble /dev/md_d0 /dev/sdb /dev/sdc
sudo mdadm --detail --scan
The output on my system is:
ARRAY /dev/md/d0 level=raid1 num-devices=2 metadata=00.90 UUID=bcb40263:0fe2be0e:4a925ea7:19eea675
Copy the whole ARRAY line to the end of /etc/mdadm/mdadm.conf. Your RAID configuration is now preserved, and you can use /dev/md0 as a normal block device. For example:
sudo mount /dev/md0 /mnt/raid