UEFI boot does not like software RAID at all (GRUB Error 17)



A couple of days ago I built a new home server for testing purposes with the following components:

  • Chassis: BitFenix Phenom Mini ITX
  • Power Supply: Enermax Revolution X't II 550W
  • Motherboard: Gigabyte F2A88XN-WIFI (although I didn't care about the WIFI, important to me was the FM2+ socket)
  • CPU: AMD A10-7860K
  • Memory: 2 x 8GB Corsair Vengeance Pro DDR3
  • Hard Drives: 2 x 500 GB Western Digital Caviar Blue (yes they're slow)

Just for documentation purposes, here's what the final result looks like:

[Image: the finished Debian Jessie test server with the Gigabyte F2A88XN-WIFI and AMD A10-7860K]

For the setup of Debian Jessie, I created a bootable memory stick with the netinst image. The setup started smoothly; the installer mentioned missing firmware for the WIFI chip (iwlwifi-7620-9/iwlwifi-7620-8) and for the Realtek NIC (rtl_nic/rtl8168e-3), but the network card worked fine with the default drivers. I went through the different steps and set up my partitions like this (a rough command-line sketch of the same layout follows the list):

  • HDD1: 3 partitions (20G, 25G, max size), each selected to be used as a RAID device
  • HDD2: 3 partitions (20G, 25G, max size), each selected to be used as a RAID device
  • Raid Device 1: /dev/sda1 + /dev/sdb1, RAID-1, ext4 defaults, mounted as /
  • Raid Device 2: /dev/sda2 + /dev/sdb2, RAID-1, Use as LVM physical volume
  • Raid Device 3: /dev/sda3 + /dev/sdb3, RAID-1, Use as LVM physical volume
  • PV /dev/md1 used for VG vgsystem. Created three LVs: lvvar (mounted on /var), lvtmp (mounted on /tmp), and lvswap as swap
  • PV /dev/md2 used for VG vgdata.
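
For reference, this is roughly what the installer sets up behind its menus. The mdadm/LVM commands below are a sketch, not a transcript; the LV sizes in particular are just examples, since I only fixed them later in the installer:

mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sda3 /dev/sdb3

mkfs.ext4 /dev/md0                    # becomes /

pvcreate /dev/md1 /dev/md2
vgcreate vgsystem /dev/md1
vgcreate vgdata /dev/md2

lvcreate -L 10G -n lvvar vgsystem     # mounted on /var
lvcreate -L 2G -n lvtmp vgsystem      # mounted on /tmp
lvcreate -L 4G -n lvswap vgsystem     # swap
mkswap /dev/vgsystem/lvswap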

I wanted to continue, but the Debian installer informed me that "You haven't set up an EFI partition".

[Image: Debian installer warning about a missing EFI System Partition]

Oh jeez, right. This is a new motherboard which supports (U)EFI boot. I had to read some docs to find out what exactly is meant by an EFI partition and how to set one up, but the Debian installer pretty much does the job automatically once a partition is selected to be used as EFI System Partition (ESP). It did mean, however, that I had to destroy my partition layout and create a 500MB partition for the ESP (250MB would probably be enough) at the beginning of the disks.
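
For context: an ESP is nothing exotic, just a FAT32 partition on a GPT disk with the ESP flag set, which the firmware reads directly. A minimal sketch of creating one by hand (the installer does this for you; /dev/sda is only an example, and mklabel wipes the disk):

parted -s /dev/sda mklabel gpt
parted -s /dev/sda mkpart ESP fat32 1MiB 501MiB
parted -s /dev/sda set 1 boot on    # on GPT, parted's "boot" flag marks the ESP
mkfs.vfat -F 32 /dev/sda1           # the ESP must be FAT-formatted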

The new partition layout with the UEFI partition looked like this:

  • HDD1: 4 partitions (500M, 20G, 25G, max size), each selected to be used as a RAID device
  • HDD2: 4 partitions (500M, 20G, 25G, max size), each selected to be used as a RAID device
  • Raid Device 1: /dev/sda1 + /dev/sdb1, RAID-1, Use as EFI System Partition
  • Raid Device 2: /dev/sda2 + /dev/sdb2, RAID-1, ext4 defaults, mounted as /
  • Raid Device 3: /dev/sda3 + /dev/sdb3, RAID-1, Use as LVM physical volume
  • Raid Device 4: /dev/sda4 + /dev/sdb4, RAID-1, Use as LVM physical volume
  • PV /dev/md2 used for VG vgsystem. Created three LVs: lvvar (mounted on /var), lvtmp (mounted on /tmp), and lvswap as swap
  • PV /dev/md3 used for VG vgdata.

[Image: Debian partitioning with the EFI System Partition]

The Debian installer continued with the remaining steps, installed the base system and eventually finished. Time to reboot. But at that point I was in for a surprise: the system didn't boot. Right after the motherboard's UEFI/BIOS screen, the following error was shown:

GRUB Loading stage1.5.
GRUB loading, please wait...
Error 17

[Image: GRUB Error 17 at UEFI boot]

According to http://www.uruk.org/orig-grub/errors.html, Error 17 means:

17: Invalid device requested
 This error is returned if a device string is recognizable but does not fall under the other device errors.

For some reason, it looks like the UEFI loader can't find the EFI System Partition. Some additional research revealed the following very important information (from https://wiki.debian.org/UEFI):

RAID for the EFI System Partition
This is arguably a mis-design in the UEFI specification - the ESP is a single point of failure on one disk. For systems with hardware RAID, that will provide some backup in case of disk failure. But for software RAID systems there is currently no support for putting the ESP on two separate disks in RAID. There might be a way to do something useful with fallback options, but this will need some investigation...

What the %"*+&@??!! Are you seriously telling me that the Unified Extensible Firmware Interface (UEFI), 20 years newer than the BIOS, cannot boot from an EFI System Partition that sits on a software RAID? This is not arguably a mis-design, this is clearly a no-go! Even for a test server, I don't want to invest any time in making the system boot again when an HDD fails. I could set up /dev/sda1 + /dev/sdb1 as two normal, independent ESP partitions and maybe run a cronjob to sync them manually, but that's a hack/workaround; a sketch of it follows below.
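
Just to illustrate that workaround (untested; the device names and mount points are my assumptions): with /dev/sda1 as the active ESP mounted on /boot/efi and /dev/sdb1 as a standby copy on the second disk, a cron-driven script along these lines would keep the two in sync:

#!/bin/bash
# Hypothetical standby-ESP sync script -- run from cron, e.g. once a day.
set -euo pipefail

MIRROR=/mnt/esp-mirror
mkdir -p "$MIRROR"
mount /dev/sdb1 "$MIRROR"
# -rt instead of -a: FAT32 stores no ownership or permissions to preserve.
rsync -rt --delete /boot/efi/ "$MIRROR"/
umount "$MIRROR"

Even then, the firmware boot entry would still point at the first disk, so a dead HDD1 would mean fixing boot entries from a rescue system anyway. I finally decided to ditch UEFI and switch to the Legacy (BIOS) boot mode: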

[Image: Gigabyte UEFI firmware switched to legacy boot]

Of course I needed to adapt the partition layout once again; I went back to the layout from the beginning. I let Debian finish the installation and then rebooted. This time, booting (in legacy BIOS mode with GRUB as bootloader) worked like a charm.
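
Once the system is up, the state of the mirrors and volumes can be checked with the usual tools:

cat /proc/mdstat             # all three md devices should show [UU]
mdadm --detail /dev/md0      # details for the root mirror
lsblk                        # partitions, md devices and LVs at a glance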


