One of the biggest shortcomings of UEFI is the lack of RAID support for the EFI System Partition. I had to find this out myself when I built a lab server back in 2017.
Fast-forward eight years: it's now 2025 (and sadly the world is not in a better place). New lab server. This time I'm going with UEFI, as modern motherboards often (unfortunately) lack a BIOS option such as "Legacy Boot".
The lab server consists of the following hardware:
SSD1: 1TB Western Digital Blue
SSD2: 1TB Western Digital Red
Debian 12 (Bookworm) was installed with the following partition layout:
Software RAID
Note: EFI partitions can't be used in a software RAID, as mentioned at the beginning!
LVM Setup
Filesystems
Once the Debian setup was completed, the server booted up quickly, as expected.
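A quick look at the block device tree confirms the layout (assuming the two SSDs enumerate as /dev/sda and /dev/sdb, as they did in this setup):
root@debian ~ # lsblk -o NAME,SIZE,TYPE,FSTYPE,MOUNTPOINT /dev/sda /dev/sdb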
RAID sync completed after a few minutes:
root@debian ~ # cat /proc/mdstat
Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
md2 : active raid1 sda4[0] sdb4[1]
917840896 blocks super 1.2 [2/2] [UU]
bitmap: 1/7 pages [4KB], 65536KB chunk
md1 : active raid1 sdb3[1] sda3[0]
29279232 blocks super 1.2 [2/2] [UU]
md0 : active raid1 sda2[0] sdb2[1]
29279232 blocks super 1.2 [2/2] [UU]
unused devices: <none>
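For a more detailed view of a single array than /proc/mdstat offers, mdadm can be queried directly (md0, the root filesystem array from this setup, taken as an example):
root@debian ~ # mdadm --detail /dev/md0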
Filesystems were correctly mounted as expected from the partitioning setup:
root@debian ~ # df -t ext4
Filesystem Type Size Used Avail Use% Mounted on
/dev/md0 ext4 28G 2.3G 24G 9% /
/dev/mapper/vgsystem-lvvar ext4 3.6G 514M 3.1G 15% /var
/dev/mapper/vgsystem-lvtmp ext4 1.8G 400K 1.8G 1% /tmp
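The LVM part of the setup can be inspected with the standard LVM tools (vgsystem is the volume group name from this install):
root@debian ~ # pvs
root@debian ~ # vgs vgsystem
root@debian ~ # lvs vgsystem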
Everything seems good so far. Except one to-do item still sits in my brain: the EFI partition!
If one of the two SSDs decides to make my life harder, the server should still survive thanks to the RAID-1 setup. However, the EFI partition is not in a RAID, remember?
So if, by bad luck, the SSD holding the EFI data dies, the server won't be able to boot. This means I need to make sure that the data from the active EFI partition is synced to the second SSD.
To find out which SSD was used by the Debian setup to install data into the first (EFI) partition, we can check the mount points:
root@debian ~ # mount|grep efi
efivarfs on /sys/firmware/efi/efivars type efivarfs (rw,nosuid,nodev,noexec,relatime)
/dev/sda1 on /boot/efi type vfat (rw,relatime,fmask=0077,dmask=0077,codepage=437,iocharset=ascii,shortname=mixed,utf8,errors=remount-ro)
In this case /dev/sda1 is clearly the one used as the (working) EFI partition. To make sure the server is able to boot if this SSD dies, we can copy the whole partition to the second SSD:
root@debian ~ # dd if=/dev/sda1 of=/dev/sdb1
This transfers the blocks of the whole 200M partition from SSD1 to SSD2.
To my current understanding, this should be enough as a one-time action.
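That said, bootloader updates (the GRUB or shim packages) can change the contents of the active EFI partition again. If the copy should be repeated automatically, a minimal sketch for a weekly cron job could look like this (the device names /dev/sda1 and /dev/sdb1 are assumptions from this particular setup):
#!/bin/sh
# Sketch: re-copy the active EFI partition (/dev/sda1) to the standby
# partition on the second SSD (/dev/sdb1). Adjust the device names if
# the disks enumerate differently on your system.
set -eu
dd if=/dev/sda1 of=/dev/sdb1 bs=1M conv=fsync
Dropped into /etc/cron.weekly/ and made executable, this would keep the second EFI partition reasonably up to date.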
Let me know in the comments if there's a better approach. Or maybe Debian 12 automatically detects and installs the UEFI boot information on both EFI partitions; however, a manual comparison of the "dd" output showed a different amount of data on the two partitions.
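One detail that might matter regardless of the sync approach: the firmware also needs to know that it can boot from the second SSD. Depending on the firmware, it may fall back to the removable media path (\EFI\BOOT\BOOTX64.EFI) on its own, but an explicit UEFI boot entry is safer. A hedged sketch with efibootmgr (the shim loader path is an assumption based on a default Debian install):
root@debian ~ # efibootmgr --create --disk /dev/sdb --part 1 --label "debian (SSD2)" --loader '\EFI\debian\shimx64.efi'
This adds a second boot entry pointing at the copied EFI partition on /dev/sdb1, which the firmware can try if the first SSD is gone.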