Back in June I ran my first server tests with Debian 11 (Bullseye), then at Release Candidate 1. I quickly realized that something was off when using bonded interfaces with 802.3ad (LACP) bonding.
Basically the same bond interface config was taken over from a Debian 10 (Buster) machine:
cka@buster:~$ cat /etc/network/interfaces
# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).
source /etc/network/interfaces.d/*
# The loopback network interface
auto lo
iface lo inet loopback
# First interface enp3s0f0
auto enp3s0f0
iface enp3s0f0 inet manual
bond-master bond0
# Second interface enp3s0f1
auto enp3s0f1
iface enp3s0f1 inet manual
bond-master bond0
# Frontend bond interface
auto bond0
iface bond0 inet static
address 192.168.100.4/24
gateway 192.168.100.1
bond-mode 802.3ad
bond-miimon 100
bond-lacp-rate 1
In this config the bond membership is defined in the physical interface stanzas: both enp3s0f0 and enp3s0f1 point to their bond-master interface, in this case bond0.
But using this config does not work in Debian 11!
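Whether an interface actually got enslaved can be checked from the kernel's point of view. A quick sketch, assuming iproute2 is installed and enp3s0f0 is the slave in question (the sysfs master symlink only exists while the interface is enslaved):

```shell
# Check whether enp3s0f0 is currently enslaved to a bond.
readlink /sys/class/net/enp3s0f0/master 2>/dev/null || echo "no master symlink"
# iproute2 shows the master in the one-line link output as well.
ip -o link show enp3s0f0 2>/dev/null | grep -o 'master [^ ]*' || echo "not enslaved"
```

On Debian 11 with the broken config above, both checks come back empty because enslaving never happens.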
Trying the slaves keyword (from the official documentation)
The official Debian bonding documentation suggests defining the physical interfaces on the bonding interface itself, using the slaves keyword:
auto enp3s0f0
iface enp3s0f0 inet manual
auto enp3s0f1
iface enp3s0f1 inet manual
auto bond0
iface bond0 inet static
address 192.168.100.4/24
gateway 192.168.100.1
slaves enp3s0f0 enp3s0f1
bond-mode 802.3ad
bond-miimon 100
bond-downdelay 200
bond-updelay 200
But this config does not work on Bullseye, either!
After testing multiple configuration options, I was finally able to get LACP working with the following config:
auto enp4s0f0
iface enp4s0f0 inet manual
bond-master bond0
auto enp4s0f1
iface enp4s0f1 inet manual
bond-master bond0
auto bond0
iface bond0 inet static
address 192.168.100.4/24
gateway 192.168.100.1
bond-slaves none
bond-mode 802.3ad
bond-miimon 100
bond-downdelay 200
bond-updelay 200
Here bond-master is defined in both physical interface stanzas, yet on the bond interface itself bond-slaves is set to none.
This kinda worked - but it is still a weird workaround, mixing up master and slave configuration.
Hence I opened up Debian bug report #990428.
My hope was obviously that this would be fixed before the official Debian 11 (Bullseye) release. But here we are, a few months later, with Debian 11.0 released and the same bug still in place :-(.
In the meantime Oleander Reis shared some insight into why this happens. Debian 11 ships a newer version (2.12) of the ifenslave package, which contains the code that enslaves interfaces to a bond interface. Unfortunately this newer version introduces a couple of issues concerning interface bonding, failing on bond-master configurations, for example:
Sep 01 15:58:23 bullseye ifup[1251]: No iface stanza found for master bond0
Sep 01 15:58:23 bullseye ifup[1249]: run-parts: /etc/network/if-pre-up.d/ifenslave exited with return code 1
Sep 01 15:58:23 bullseye ifup[1242]: ifup: failed to bring up enp3s0f0
Sep 01 15:58:23 bullseye ifup[1256]: No iface stanza found for master bond0
Sep 01 15:58:23 bullseye ifup[1254]: run-parts: /etc/network/if-pre-up.d/ifenslave exited with return code 1
Sep 01 15:58:23 bullseye ifup[1242]: ifup: failed to bring up enp3s0f1
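To see whether a machine runs the affected ifenslave release, the installed package version can be queried. A small sketch (the 2.12 threshold comes from the bug report above; the output wording is my own):

```shell
# Check which ifenslave version is installed; 2.12 is the release that
# introduced the "No iface stanza found for master" failure.
ver=$(dpkg-query -W -f='${Version}' ifenslave 2>/dev/null || echo unknown)
case "$ver" in
    2.12*) echo "ifenslave $ver - affected by bug #990428" ;;
    *)     echo "ifenslave version: $ver" ;;
esac
```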
Another important hint from Oleander was to remove the physical interface stanzas from /etc/network/interfaces and keep only the bond interfaces.
This leads to the following config:
root@bullseye:~# cat /etc/network/interfaces
# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).
source /etc/network/interfaces.d/*
# The loopback network interface
auto lo
iface lo inet loopback
# Frontend bond interface
auto bond0
iface bond0 inet static
address 192.168.100.4/24
gateway 192.168.100.1
bond-slaves enp3s0f0 enp3s0f1
bond-mode 802.3ad
bond-miimon 100
bond-lacp-rate 1
Note: There are no physical interfaces (enp3s0f0 and enp3s0f1) defined in the interfaces file!
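After editing /etc/network/interfaces, the bond can be recreated without a full reboot. A minimal sketch, assuming classic ifupdown (run it from a local console or IPMI session, since the link drops briefly):

```shell
# Tear down and re-create the bond with the new configuration.
ifdown bond0 2>/dev/null || true
ifup bond0 || true
# Quick sanity check that the bond exists and runs in 802.3ad mode.
grep 'Bonding Mode' /proc/net/bonding/bond0 2>/dev/null || echo "bond0 not up"
```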
This works surprisingly well, and systemctl status networking shows everything OK:
root@bullseye:~# systemctl status networking
- networking.service - Raise network interfaces
Loaded: loaded (/lib/systemd/system/networking.service; enabled; vendor preset: enabled)
Active: active (exited) since Wed 2021-09-01 16:07:52 CEST; 50min ago
Docs: man:interfaces(5)
Process: 688 ExecStart=/sbin/ifup -a --read-environment (code=exited, status=0/SUCCESS)
Main PID: 688 (code=exited, status=0/SUCCESS)
CPU: 135ms
Sep 01 16:07:51 bullseye systemd[1]: Starting Raise network interfaces...
Sep 01 16:07:52 bullseye systemd[1]: Finished Raise network interfaces.
And the bond interface is correctly up, which can be checked using /proc/net/bonding/bond0:
root@bullseye:~# cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v5.10.0-8-amd64
Bonding Mode: IEEE 802.3ad Dynamic link aggregation
Transmit Hash Policy: layer2 (0)
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0
Peer Notification Delay (ms): 0
802.3ad info
LACP rate: fast
Min links: 0
Aggregator selection policy (ad_select): stable
System priority: 65535
System MAC address: XX:XX:XX:XX:XX:XX
Active Aggregator Info:
Aggregator ID: 2
Number of ports: 2
Actor Key: 9
Partner Key: 32787
Partner Mac Address: XX:XX:XX:XX:XX:XX
Slave Interface: enp3s0f0
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 1
Permanent HW addr: XX:XX:XX:XX:XX:XX
Slave queue ID: 0
Aggregator ID: 2
Actor Churn State: none
Partner Churn State: none
Actor Churned Count: 0
Partner Churned Count: 0
details actor lacp pdu:
system priority: 65535
system mac address: XX:XX:XX:XX:XX:XX
port key: 9
port priority: 255
port number: 1
port state: 63
details partner lacp pdu:
system priority: 32667
system mac address: XX:XX:XX:XX:XX:XX
oper key: 32787
port priority: 32768
port number: 16659
port state: 61
Slave Interface: enp3s0f1
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: XX:XX:XX:XX:XX:XX
Slave queue ID: 0
Aggregator ID: 2
Actor Churn State: none
Partner Churn State: none
Actor Churned Count: 0
Partner Churned Count: 0
details actor lacp pdu:
system priority: 65535
system mac address: XX:XX:XX:XX:XX:XX
port key: 9
port priority: 255
port number: 2
port state: 63
details partner lacp pdu:
system priority: 32667
system mac address: XX:XX:XX:XX:XX:XX
oper key: 32787
port priority: 32768
port number: 275
port state: 61
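The status file above is quite verbose. For monitoring or a quick glance, it can be condensed to one line per slave; a small awk sketch of my own (it relies on the "Slave Interface:" line preceding each slave's "MII Status:" line, as in the output above):

```shell
# Print each slave's MII link state, one line per slave. The bond's own
# "MII Status:" line appears before any "Slave Interface:" line and is
# therefore skipped.
awk '/^Slave Interface:/ { iface = $3 }
     /^MII Status:/ && iface { print iface ": " $3; iface = "" }' \
    /proc/net/bonding/bond0 2>/dev/null || echo "no bond0 status file"
```

For the output shown above this prints "enp3s0f0: up" and "enp3s0f1: up".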
Update October 17th 2021: The mentioned Debian bug #990428 was closed today; the bug has been fixed in the ifenslave 2.13 package.
Right now the ifenslave version on Bullseye is still 2.12, but 2.13 should be available in the next few days.
Update January 4th 2022: Even though Debian 11.2 (Bullseye point release 2) is meanwhile released, the ifenslave package still does not contain the fixes. As Olivier mentioned in the comments (see below this article), the fixed package can (so far) only be found in Debian testing.
Debian's point release 11.5 was announced on September 10th 2022 and finally contains the long-awaited ifenslave bug fix.
The changelog now contains the above mentioned fixes, back-ported to Bullseye:
ifenslave (2.13~deb11u1) bullseye; urgency=medium
* Rebuild for bullseye
* Revert "Bump Standards-Version to 4.6.0 (no changed needed)"
-- Salvatore Bonaccorso <carnil@debian.org> Sat, 25 Jun 2022 09:45:37 +0200
Jacek wrote on Mar 25th, 2022:
If someone has problems setting up bonding with VLANs after a reboot on Debian 11, try installing ifupdown2 instead of the default ifupdown.
In my case installing ifenslave 2.13 from the testing repo didn't change anything, but switching to ifupdown2 allowed me to use a simple (and working) /etc/network/interfaces config for bonding of VLAN interfaces.
Changes in ifupdown[2]: https://docs.nvidia.com/networking-ethernet-software/knowledge-base/Configuration-and-Usage/Network-Interfaces/Compare-ifupdown2-with-ifupdown/
My case and solution: https://unix.stackexchange.com/questions/696582/bonding-with-vlan-and-bridge-on-debian-11/696804#696804
Alex from Stuttgart wrote on Jan 24th, 2022:
I always had success with bonding when I changed the predictable network device names back to the traditional naming style (eth0, eth1)!
It is explained very well here: https://www.itzgeek.com/how-tos/linux/debian/change-default-network-name-ens33-to-old-eth0-on-debian-9.html
Olivier from France wrote on Nov 14th, 2021:
Nice work! This is the perfect answer to my current concerns.
Even the Debian wiki shows a weird, non-working configuration.
Updating to Bullseye on my production server was a mess... but not anymore. Many thanks to you.
P.S.: unfortunately the current stable version in Bullseye is 2.12; the version from testing is needed.
Clay wrote on Oct 30th, 2021:
Thanks for submitting the bug report and providing this excellent write-up. Really helped me out after I ran into this on a Debian 10 -> 11 upgrade, having missed it in the upgrade documentation. Thank you.
b1001101 wrote on Oct 18th, 2021:
I stumbled across your article while investigating a bizarre behaviour of bonded (and, in my case, bridged) interfaces on one of my boxes. Although it turned out to be something entirely unrelated, thank you for emphasizing a couple of points which I integrated into my configuration. The original configuration (Debian 10) already lacked the physical interface stanzas (fun fact: when they were defined, the individual interfaces ended up getting an unwanted address from isc-dhcp-server). Anyway, in case it is useful to anyone, this is mine (Debian 11, ifenslave 2.12):
auto bond0
iface bond0 inet manual
bond-mode 4
bond-primary eno1
bond-slaves eno1 enp2s0
bond-miimon 100
bond-lacp-rate 1
auto br0
iface br0 inet static
address 192.168.16.16/24
gateway 192.168.16.1
dns-nameservers 192.168.16.32
dns-search whatever.lan
bridge_ports bond0
bridge_stp off
bridge_waitport 0
bridge_fd 0
No issue whatsoever since Debian 10. Keep up the good work, mate!
ck from Switzerland wrote on Oct 12th, 2021:
Hi Luka. Unfortunately I do not know why this setup would not work anymore. I suggest asking on the mailing list (debian-user@lists.debian.org). Maybe the VLAN configuration has also changed in Bullseye.
Luka from Slovakia wrote on Oct 11th, 2021:
I have tried your example, but it is still not working. :( I checked the networking status:
"ifup: Failed to enslave eno1.100 to bond0. Is bond0 ready and a bonding interface?"
Thanks.
ck from Switzerland wrote on Oct 11th, 2021:
If I remember correctly, each VLAN needs its own bonding interface, tagged with the VLAN ID. For example, for VLAN 100:
auto bond0
iface bond0 inet manual
bond-slaves eno1.100 eno2.100
auto bond0.100
iface bond0.100 inet static
address x.x.x.x
netmask 255.255.255.0
gateway x.x.x.x
vlan-raw-device bond0
Something like this. I did not test the config above.
Lukas Mastilak from Slovakia wrote on Oct 10th, 2021:
Sorry, I removed the configuration of the physical interfaces from /etc/network/interfaces. In my setup I have two physical interfaces and two VLANs. For example:
auto bond0
iface bond0 inet static
bond-slaves eno1.100 eno2.100
address x.x.x.x
netmask 255.255.255.0
gateway x.x.x.x
My notes:
1. without VLANs the networking is working after reboot
2. with VLANs, the network is not working after a reboot. I must add the physical interfaces to /etc/network/interfaces and restart networking, then remove the physical interfaces again and restart networking once more. Only then does the networking work.
I also checked whether the bonding module is loaded after reboot.
ck from Switzerland wrote on Oct 10th, 2021:
Side note: make sure that the bonding kernel module is loaded during boot (initramfs). Maybe this is missing in your setup?
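That side note can be sketched as a quick check, assuming a standard Debian setup where boot-time modules are listed in /etc/modules:

```shell
# Load the bonding module now (needs root) and make it persistent
# across reboots via /etc/modules, which Debian reads at boot.
lsmod | grep -q '^bonding' || modprobe bonding 2>/dev/null || echo "could not load bonding module"
if [ -w /etc/modules ] && ! grep -qx bonding /etc/modules; then
    echo bonding >> /etc/modules
fi
```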
ck from Switzerland wrote on Oct 10th, 2021:
Luka, how do you want to create a bonding interface without physical interfaces? The config I shared in the post works and survives a reboot. It is working correctly with Debian Bullseye on a HP DL380 server (4 onboard NICs).
Luka from Slovakia wrote on Oct 10th, 2021:
Hi, I have tried to configure the bond interface without physical interfaces. After rebooting the system, this config is not working. :(