I'm currently expanding my local NAS, which runs on an HP N40L micro server (yes, it still runs!) with Debian as its operating system. The NAS is primarily used for media backups and sees few write operations.
There are four 3.5 inch drives in this micro server, and I'm replacing each drive with a larger one, followed by growing the (mdadm) raid. See this article for more information on how to replace the drives and grow the mdadm raid.
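In short, the per-drive swap looks roughly like this (a minimal sketch; the device and partition names are examples and the filesystem resize at the end depends on what sits on top of the array):

# mark the old drive as failed and remove it from the array
mdadm /dev/md1 --fail /dev/sdd1
mdadm /dev/md1 --remove /dev/sdd1
# power off, physically swap the drive, partition the new disk, then re-add it
mdadm /dev/md1 --add /dev/sdd1
# once both member drives are replaced and synced, grow the array to the new size
mdadm --grow /dev/md1 --size=max
# finally grow the filesystem on top (ext4 example)
resize2fs /dev/md1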
Note: Always make sure to use CMR drives (not SMR drives!) in a RAID setup.
This post is not very technical (see the link above for details) but is more about the surprisingly huge speed difference between the drives while rebuilding the raid array.
First I replaced an old Seagate ST3000VN000-1HJ166 3TB drive with a newer Toshiba N300 (HDWG460) 6TB drive. The raid recovery ran at a pretty fast speed:
root@nas:~# cat /proc/mdstat
Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
md0 : active raid1 sdb1[3] sda1[2]
      2930264519 blocks super 1.2 [2/2] [UU]

md1 : active raid1 sdd1[3] sdc1[2]
      2930134272 blocks super 1.2 [2/1] [U_]
      [>....................]  recovery =  0.0% (537536/2930134272) finish=272.4min speed=179178K/sec

unused devices: <none>
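To keep an eye on the rebuild without retyping the command, something like this works:

# refresh the recovery status every 60 seconds
watch -n 60 cat /proc/mdstat
# or query the array directly
mdadm --detail /dev/md1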
According to the kernel log, it took 5 hours and ~50min to fully recover the 2.7TB md1 array:
Apr 22 15:42:13 nas kernel: [ 547.773285] md: recovery of RAID array md1
Apr 22 15:42:13 nas kernel: [ 547.773288] md: minimum _guaranteed_ speed: 1000 KB/sec/disk.
Apr 22 15:42:13 nas kernel: [ 547.773291] md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for recovery.
Apr 22 15:42:13 nas kernel: [ 547.773297] md: using 128k window, over a total of 2930134272k.
Apr 22 21:35:55 nas kernel: [21771.495736] md: md1: recovery done.
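The 1000 KB/sec floor and 200000 KB/sec ceiling in that log come from the md sysctl limits. If a rebuild crawls on an otherwise idle machine, the floor can be raised temporarily (the values shown are the kernel defaults):

# per-disk recovery speed limits in KB/sec
sysctl dev.raid.speed_limit_min   # dev.raid.speed_limit_min = 1000
sysctl dev.raid.speed_limit_max   # dev.raid.speed_limit_max = 200000
# force a faster minimum rebuild speed until the next reboot
sysctl -w dev.raid.speed_limit_min=50000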
Then it was time to replace the second drive in that array: the older Toshiba DT01ACA300 3TB drive was replaced with a newer Seagate SkyHawk (ST6000VX001-2BD186) 6TB drive. Already at the start of the raid recovery, I could see a big difference in speed:
root@nas:~# cat /proc/mdstat
Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
md1 : active raid1 sdc1[2] sdd1[3]
      2930134272 blocks super 1.2 [2/1] [_U]
      [>....................]  recovery =  0.0% (99264/2930134272) finish=983.7min speed=49632K/sec

md0 : active raid1 sdb1[3] sda1[2]
      2930264519 blocks super 1.2 [2/2] [UU]

unused devices: <none>
It took the Seagate drive much longer (almost 24h) to be synced:
Apr 23 07:18:51 nas kernel: [ 125.633162] md: bind<sdc1>
Apr 23 07:18:51 nas kernel: [ 125.654644] md: recovery of RAID array md1
Apr 23 07:18:51 nas kernel: [ 125.654649] md: minimum _guaranteed_ speed: 1000 KB/sec/disk.
Apr 23 07:18:51 nas kernel: [ 125.654651] md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for recovery.
Apr 23 07:18:51 nas kernel: [ 125.654657] md: using 128k window, over a total of 2930134272k.
Apr 24 06:52:57 nas kernel: [84976.664487] md: md1: recovery done.
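The bracketed numbers in the kernel log are seconds of uptime, so the rebuild time falls straight out of a quick subtraction:

root@nas:~# echo $(( (84976 - 125) / 3600 ))h $(( (84976 - 125) % 3600 / 60 ))min
23h 34min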
Yes, there's a major difference in the rotational speed of the two new drives: the Toshiba N300 spins at 7200 rpm, while the 6TB Seagate SkyHawk is a 5400 rpm drive.
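You can verify this yourself, as smartctl reports the rotation rate (illustrative output, assuming the drives still sit at sdd and sdc as in the mdstat output above):

root@nas:~# smartctl -i /dev/sdd | grep -i rotation
Rotation Rate:    7200 rpm
root@nas:~# smartctl -i /dev/sdc | grep -i rotation
Rotation Rate:    5400 rpm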
Why am I surprised? Actually I'm not surprised on a technical level, but rather on an economical one: both drives cost more or less the same when I bought them online. As I write this, the Seagate is even a few bucks more expensive (at the same online shop).
How Toshiba is able to pull this off, I don't know. But hey, their drives work, and the previous Toshiba drives in my NAS have shown themselves to be resilient (definitely not worse than Western Digital or Seagate).