How to find the active (U)EFI partition and sync partition to secondary SSD in Debian Linux



One of the biggest shortcomings of UEFI is the lack of (software) RAID support for EFI partitions. I had to find this out myself when I built a lab server back in 2017.

Fast-forward eight years: it's now 2025 (and sadly the world is not in a better place), and it's time for a new lab server. This time I'm going with UEFI, as modern motherboards often (unfortunately) no longer offer a BIOS option like "Legacy Boot" or something similar.

New lab server with Debian 12

Debian 12 with RAID-1 partition setup

The lab server consists of the following hardware:

  • Intertech 2U rack chassis
  • Asus Pro A620M-DASH-CSM motherboard
  • AMD Ryzen 9 7900 CPU
  • 64 GB DDR5 RAM
  • 2 x 1 TB SATA SSDs from Western Digital

Debian 12 (Bookworm) was installed with the following partition layout:

SSD1: 1 TB Western Digital Blue

  • Partition 1: 200 MB, used as: EFI partition (sda1)
  • Partition 2: 30 GB, used as: RAID partition (sda2)
  • Partition 3: 30 GB, used as: RAID partition (sda3)
  • Partition 4: remaining space, used as: RAID partition (sda4)

SSD2: 1 TB Western Digital Red

  • Partition 1: 200 MB, used as: EFI partition (sdb1)
  • Partition 2: 30 GB, used as: RAID partition (sdb2)
  • Partition 3: 30 GB, used as: RAID partition (sdb3)
  • Partition 4: remaining space, used as: RAID partition (sdb4)
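
This layout was created in the Debian installer's partitioner. Purely as an illustration, recreating the same GPT layout by hand would look roughly like the following sgdisk sketch (the partition names are made up by me; the sizes and type codes are based on the list above):

# Sketch only - the Debian installer created these partitions for me
# ef00 = EFI System Partition, fd00 = Linux RAID
sgdisk --zap-all /dev/sda
sgdisk -n 1:0:+200M -t 1:ef00 -c 1:"efi"       /dev/sda
sgdisk -n 2:0:+30G  -t 2:fd00 -c 2:"raid-root" /dev/sda
sgdisk -n 3:0:+30G  -t 3:fd00 -c 3:"raid-lvm1" /dev/sda
sgdisk -n 4:0:0     -t 4:fd00 -c 4:"raid-lvm2" /dev/sda
# copy the partition table to the second SSD and give it new GUIDs
sgdisk -R /dev/sdb /dev/sda
sgdisk -G /dev/sdb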

Software RAID

  • /dev/md0: RAID-1 using sda2 + sdb2
  • /dev/md1: RAID-1 using sda3 + sdb3
  • /dev/md2: RAID-1 using sda4 + sdb4
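
The installer assembled these arrays during setup; done manually, the mdadm commands would look roughly like this (a sketch, not a transcript from this server):

# Sketch only - the installer created md0, md1 and md2 for me
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda3 /dev/sdb3
mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sda4 /dev/sdb4
# make sure the arrays are known at boot time
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
update-initramfs -u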

Note: EFI partitions can't be used in a software RAID, as mentioned at the beginning!

LVM Setup

  • Volume Group "vgsystem" using /dev/md1 as Physical Volume
  • Volume Group "vgdata" using /dev/md2 as Physical Volume
  • Logical Volume "lvswap" in VG "vgsystem", 8GB
  • Logical Volume "lvvar" in VG "vgsystem", 4GB
  • Logical Volume "lvtmp" in VG "vgsystem", 2GB

Filesystems

  • /dev/md0, mounted on "/", formatted as ext4, 5% reserve
  • LV "lvswap" mounted as swap
  • LV "lvvar" mounted on "/var", formatted as ext4, 0% reserve
  • LV "lvtmp" mounted on "/tmp", formatted as ext4, 1% reserve

Running server

Once the Debian setup was completed, the server spun up quickly, as expected.

RAID sync completed after a few minutes:

root@debian ~ # cat /proc/mdstat
Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
md2 : active raid1 sda4[0] sdb4[1]
      917840896 blocks super 1.2 [2/2] [UU]
      bitmap: 1/7 pages [4KB], 65536KB chunk

md1 : active raid1 sdb3[1] sda3[0]
      29279232 blocks super 1.2 [2/2] [UU]

md0 : active raid1 sda2[0] sdb2[1]
      29279232 blocks super 1.2 [2/2] [UU]

unused devices: <none>

Filesystems were correctly mounted as expected from the partitioning setup:

root@debian ~ # df -hT -t ext4
Filesystem                    Type  Size  Used Avail Use% Mounted on
/dev/md0                      ext4   28G  2.3G   24G   9% /
/dev/mapper/vgsystem-lvvar    ext4  3.6G  514M  3.1G  15% /var
/dev/mapper/vgsystem-lvtmp    ext4  1.8G  400K  1.8G   1% /tmp

Everything looks good so far. Except for one to-do item that still sits in my brain: the EFI partition!

UEFI redundancy (kind of)

If one of the two SSDs decides to make my life harder, the server should still survive thanks to the RAID-1 setup. However, the EFI partition is not in a RAID, remember?

So if by bad luck the SSD with the EFI data dies, the server won't be able to boot. This means I need to make sure that the data from the active EFI partition is synced to the second SSD.

To find out on which of the two SSDs the Debian installer actually put the EFI data, we can check the mount points:

root@debian ~ # mount|grep efi
efivarfs on /sys/firmware/efi/efivars type efivarfs (rw,nosuid,nodev,noexec,relatime)
/dev/sda1 on /boot/efi type vfat (rw,relatime,fmask=0077,dmask=0077,codepage=437,iocharset=ascii,shortname=mixed,utf8,errors=remount-ro)
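
As an optional cross-check, findmnt and lsblk show the same information; the ESP on the second SSD should show up with the same partition type but without a mount point:

findmnt /boot/efi
lsblk -o NAME,SIZE,PARTTYPENAME,MOUNTPOINT /dev/sda /dev/sdb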

In this case, /dev/sda1 is clearly the one used as the (working) EFI partition. To make sure the server is able to boot even if this SSD dies, we can copy the whole partition over to the second SSD:

root@debian ~ # dd if=/dev/sda1 of=/dev/sdb1

This transfers the blocks of the whole 200 MB partition from SSD1 to SSD2.
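
If you want to play it a bit safer, a slightly more defensive variant (just a sketch, not the exact command I ran) would first confirm that both partitions are the same size and then copy with a larger block size and progress output:

# Sketch only - a more defensive version of the same copy
blockdev --getsize64 /dev/sda1 /dev/sdb1   # both should report the same size in bytes
dd if=/dev/sda1 of=/dev/sdb1 bs=1M status=progress conv=fsync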

To my current understanding, this should be enough as a one-time action.

Let me know in the comments if there's a better approach. Or maybe Debian 12 automatically detects both EFI partitions and installs the UEFI boot information into each of them - however, a manual comparison using the "dd" output showed a different amount of data on the two partitions.
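
For anyone who wants to verify this on their own system: a quick way to compare the contents of the two EFI partitions is to mount the second one read-only and diff it against the active one (the mount point /mnt/efi2 is just an arbitrary choice in this sketch):

# Sketch only - compare the contents of both EFI partitions
mkdir -p /mnt/efi2
mount -o ro /dev/sdb1 /mnt/efi2
diff -r /boot/efi /mnt/efi2 && echo "both EFI partitions contain the same files"
umount /mnt/efi2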

