A couple of days ago I built a new home server for testing purposes with the following components:
Just for documentation purposes, here's what the final result looks like:
For the setup of Debian Jessie, I created a bootable memory stick with the netinst image. The setup started smoothly (missing firmware was mentioned for the WIFI chip (iwlwifi-7620-9/iwlwifi-7620-8) and for the Realtek NIC (rtl_nic/rtl8168e-3), but the network card worked fine with the default drivers) and I went through the different steps. I set up my partitions like this:
I wanted to move on, but the Debian installer informed me: "You haven't set up an EFI partition".
Oh jeez, right. This is a new motherboard which supports (U)EFI boot. I had to read some docs on what exactly is meant by an EFI partition and how to set it up, but the Debian installer pretty much does the job automatically once the partition type EFI System Partition (ESP) is selected. This meant, however, that I had to destroy my partition layout and create a 500MB partition (250MB would probably be enough) for the ESP at the beginning of the disk.
The new partition layout with the UEFI partition looked like this:
The Debian installer continued with the remaining steps of installing the base system and eventually finished. Time to reboot. But at that point I could hardly believe my eyes - the system didn't boot. Right after the UEFI/BIOS screen of the motherboard, the following error was shown:
GRUB Loading stage1.5.
GRUB loading, please wait...
Error 17
According to http://www.uruk.org/orig-grub/errors.html, Error 17 means:
17: Invalid device requested
This error is returned if a device string is recognizable but does not fall under the other device errors.
For some reason it looks like the UEFI loader can't find the EFI System Partition.
Some additional research revealed the following very important information (from https://wiki.debian.org/UEFI):
RAID for the EFI System Partition
This is arguably a mis-design in the UEFI specification - the ESP is a single point of failure on one disk. For systems with hardware RAID, that will provide some backup in case of disk failure. But for software RAID systems there is currently no support for putting the ESP on two separate disks in RAID. There might be a way to do something useful with fallback options, but this will need some investigation...
What the %"*+&@??!! Are you seriously telling me that the Unified Extensible Firmware Interface (UEFI), 20 years newer than the BIOS, cannot boot from an EFI System Partition that sits on a software RAID device? This is not "arguably a mis-design", this is clearly a no-go! Even for a test server, I don't want to invest any time in making the system boot again when a HDD fails. I could set up the partitions /dev/sda1 and /dev/sdb1 as normal ESP partitions and maybe run a cronjob to sync them manually, but that's a hack/workaround. I finally decided to ditch UEFI and switch to the Legacy (BIOS) boot mode:
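For the record, the cronjob workaround dismissed above could look something like this sketch. The mount points are assumptions - adjust them to wherever the two ESPs (/dev/sda1 and /dev/sdb1) are actually mounted.

```shell
#!/bin/sh
# Sketch of the ESP sync hack: keep the second (backup) ESP in sync
# by copying the contents of the primary one over it.
# Mount points are assumptions - adjust to your system.
sync_esp() {
    src="${1:-/boot/efi}"
    dst="${2:-/boot/efi2}"
    cp -a "$src/." "$dst/"
}

# Example crontab entry (daily at 03:00), path is hypothetical:
# 0 3 * * * /usr/local/sbin/sync-esp.sh
```

But as said: this keeps the files in sync, not the boot entries in the firmware's NVRAM, so it remains a workaround, not a fix.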
Of course I had to adapt the partition layout once again; I went back to the layout I had used at the beginning. I let Debian finish the installation and then rebooted. This time, booting (in legacy BIOS mode with GRUB as bootloader) worked like a charm.
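To double-check which mode a Linux system actually booted in, there is a common idiom: the kernel creates the directory /sys/firmware/efi only when it was started via UEFI.

```shell
# Prints the firmware boot mode of the running system.
# /sys/firmware/efi exists only on systems booted via UEFI.
boot_mode() {
    if [ -d /sys/firmware/efi ]; then
        echo "UEFI"
    else
        echo "Legacy BIOS"
    fi
}

echo "Booted in $(boot_mode) mode"
```

On the reinstalled server this should now report "Legacy BIOS".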