mdadm is the software RAID tool used on Linux systems. One key problem with software RAID is that its resync is utterly slow compared with the speed of modern drives (SSD or NVMe). The resync speed limits default to the same values regardless of the drive type you have. To view the default values, you may run the following:
[root@172 ~]# sysctl dev.raid.speed_limit_min
dev.raid.speed_limit_min = 1000
[root@172 ~]# sysctl dev.raid.speed_limit_max
dev.raid.speed_limit_max = 200000
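The same tunables are also exposed directly under /proc, so you can read them without sysctl (a commenter below uses this form):

cat /proc/sys/dev/raid/speed_limit_min
cat /proc/sys/dev/raid/speed_limit_max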
As you can see, the minimum value starts at 1000 and the maximum caps out at 200K. Although the maximum can go up to 200K, the minimum value is so low that mdadm tends to keep the resync speed well below what your drives can deliver. To speed things up, we want to raise both numbers. To change them, you can run something like the following:
sysctl -w dev.raid.speed_limit_min=500000
sysctl -w dev.raid.speed_limit_max=5000000
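You can verify that the new limits took effect by reading the variables back (sysctl accepts several names at once):

sysctl dev.raid.speed_limit_min dev.raid.speed_limit_max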
Once done, you can check that the resync speed goes up almost immediately:
[root@172 ~]# cat /proc/mdstat
Personalities : [raid1]
md3 : active raid1 sdb5[1] sda5[0]
      1916378112 blocks super 1.2 [2/2] [UU]
      [===============>.....]  resync = 76.2% (1461787904/1916378112) finish=27.4min speed=276182K/sec
      bitmap: 5/15 pages [20KB], 65536KB chunk

md0 : active raid1 sdb1[1] sda1[0]
      1046528 blocks super 1.2 [2/2] [UU]
      bitmap: 0/1 pages [0KB], 65536KB chunk

md2 : active raid1 sda2[0] sdb2[1]
      78576640 blocks super 1.2 [2/2] [UU]
      bitmap: 1/1 pages [4KB], 65536KB chunk

md1 : active raid1 sdb3[1] sda3[0]
      4189184 blocks super 1.2 [2/2] [UU]

unused devices: <none>
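If you want to follow the resync continuously instead of re-running cat, the standard watch utility works well; the 5-second interval here is just a suggestion:

watch -n 5 cat /proc/mdstat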
One thing to keep in mind: if you set the values too high, as we did here, the resync can put considerable load on your system. If the load becomes unmanageable, you should reduce the minimum to something like 50K-100K.
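For example, to dial the minimum back down to 100K while leaving the maximum alone:

sysctl -w dev.raid.speed_limit_min=100000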
Making The Sysctl Values Permanent
Since we set these kernel variables at runtime, they will revert to the defaults once the server is restarted/rebooted. If you want the values to persist, you need to put them in the /etc/sysctl.conf file. Open it:
nano /etc/sysctl.conf
Add the following lines at the end of the file:
dev.raid.speed_limit_min = 500000
dev.raid.speed_limit_max = 5000000
Save the file, and run the following command:
sysctl -p
This applies the values immediately and keeps them in place after a reboot as well.
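On distributions that read /etc/sysctl.d/ (most modern ones do), a drop-in file is a tidier alternative to editing sysctl.conf itself; the file name below is only an example:

cat > /etc/sysctl.d/90-raid-speed.conf <<'EOF'
dev.raid.speed_limit_min = 500000
dev.raid.speed_limit_max = 5000000
EOF
sysctl -p /etc/sysctl.d/90-raid-speed.conf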
Quoting:
“Although the maximum can go up to 200K, the minimum value is so low that mdadm tends to keep the resync speed well below what your drives can deliver”
Sauce please? 😀
hmm, what’s up with the sauce? ^)
Thanks, unfortunately setting these values doesn’t work in some or many cases, like the current RAID 10 recovery I’m doing, which is only running at 15-25 MB/sec. Since I’m adding a single drive to the RAID 10, I’d have expected close to the maximum HDD write speed (100 MB/sec or more), but nope.
First of all, you should never do RAID 10 with software mdadm. If you do, the choice of 10 vs. 01 layout is made by the software, so if the configuration is somehow lost, you will lose the array.
Before these mods, I was getting slow rebuild speed:
finish=3175.7min speed=11066K/sec
After the mods, and making sure nothing else was using the disks:
finish=315.8min speed=110349K/sec
🙂 yay!
awesome!
These proc values are in Kbps already! Not in bps!

```
root@lian:etc# echo 2000 > /proc/sys/dev/raid/speed_limit_max
root@lian:etc# cat /proc/mdstat
Personalities : [raid0] [linear] [multipath] [raid1] [raid6] [raid5] [raid4] [raid10]
md125 : active raid1 sde5[2] sdd5[1]
      15621042624 blocks super 1.2 [2/1] [_U]
      [>....................]  recovery =  1.4% (222551808/15621042624) finish=121376.5min speed=2113K/sec
```

```
root@lian:etc# echo 200000 > /proc/sys/dev/raid/speed_limit_max
root@lian:etc# cat /proc/mdstat
Personalities : [raid0] [linear] [multipath] [raid1] [raid6] [raid5] [raid4] [raid10]
md125 : active raid1 sde5[2] sdd5[1]
      15621042624 blocks super 1.2 [2/1] [_U]
      [>....................]  recovery =  1.4% (233290048/15621042624) finish=1895.6min speed=135287K/sec
```