There are a couple of excellent mdadm cheat sheets on the Internet. If you have them in your bookmarks, you're (mostly) safe. I personally have used them for over a decade - because who can remember the correct syntax if you only use mdadm once every couple of months, right?
However, today I ran into a problem when I wanted to create a new RAID-1 array with two NVMe drives:
root@bullseye:~# mdadm --create --verbose /dev/md0 --level=1 /dev/nvme0n1p1 /dev/nvme1n1p1
mdadm: no raid-devices specified.
I've used the command in the exact same way as in the cheat sheet. And I know this has worked in the past. Has something changed in mdadm since?
Looking closer at the mdadm help, the following syntax is shown:
root@bullseye:~# mdadm --create --help | head
Usage: mdadm --create device --chunk=X --level=Y --raid-devices=Z devices
This usage will initialise a new md array, associate some
devices with it, and activate the array. In order to create an
array with some devices missing, use the special word 'missing' in
place of the relevant device name.
Before devices are added, they are checked to see if they already contain
raid superblocks or filesystems. They are also checked to see if
the variance in device size exceeds 1%.
The --raid-devices parameter now shows up in the usage line. Further down in the help output, this parameter is explained:
--raid-devices= -n : number of active devices in array
As this is a RAID-1 with 2 active raid devices, the mdadm command is adjusted accordingly:
root@bullseye:~# mdadm --create --verbose /dev/md0 --raid-devices=2 --level=1 /dev/nvme0n1p1 /dev/nvme1n1p1
mdadm: /dev/nvme0n1p1 appears to be part of a raid array:
level=raid0 devices=2 ctime=Wed Jan 17 15:15:36 2018
mdadm: Note: this array has metadata at the start and
may not be suitable as a boot device. If you plan to
store '/boot' on this device please ensure that
your boot-loader understands md/v1.x metadata, or use
--metadata=0.90
mdadm: /dev/nvme1n1p1 appears to be part of a raid array:
level=raid0 devices=2 ctime=Wed Jan 17 15:15:36 2018
mdadm: size set to 781279040K
mdadm: automatically enabling write-intent bitmap on large array
Continue creating array? yes
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
Note: mdadm detected existing raid metadata on the drives. These partitions were previously used as a RAID-0 array and I didn't remove the superblock before creating the new array (check the cheat sheet for more info). This warning does not appear on completely new drives.
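To avoid that warning in the first place, the old RAID superblock can be wiped from the partitions before creating the new array. A minimal sketch, using the same device names as above (only run this against partitions you are certain are no longer part of an active array):

mdadm --zero-superblock /dev/nvme0n1p1
mdadm --zero-superblock /dev/nvme1n1p1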
After this, the new RAID-1 device /dev/md0 is created and the status can be checked:
root@bullseye:~# cat /proc/mdstat
Personalities : [raid0] [linear] [multipath] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : active raid1 nvme1n1p1[1] nvme0n1p1[0]
781279040 blocks super 1.2 [2/2] [UU]
[===>.................] resync = 16.4% (128675328/781279040) finish=50.7min speed=214358K/sec
bitmap: 6/6 pages [24KB], 65536KB chunk
unused devices: <none>
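The array can already be used while the initial resync is still running. A typical follow-up, sketched here under the assumption of a Debian-based system such as this bullseye host (config paths may differ on other distributions, and ext4 is just an example filesystem), is to persist the array definition and create a filesystem on it:

mdadm --detail --scan >> /etc/mdadm/mdadm.conf
update-initramfs -u
mkfs.ext4 /dev/md0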