Network connectivity problems when running LXC (with veth) in VMware VM


Last updated on March 22nd 2023 - Listed in VMware Linux Virtualization Network LXC


And then there was this saying "you never stop learning". Indeed.

I've been using VMware since 2005 and Linux containers (LXC) since 2013, and have managed hundreds of machines, both VMs running on VMware ESX/i and containers. But I had yet to combine the two and run containers inside a VMware VM. And promptly I ran into problems.

Let me first explain the (basic) setup:

VMware VM1 (.51)          VMware VM2 (.52)
    |_ lxc1 (.53)              |_ lxc3 (.54)   
    |_ lxc2 (.55)              |_ lxc4 (.56)

There is one network interface configured on each VM; it is enslaved to a bridge named virbr0, which the containers use as their link. The VM itself has its IP address configured on virbr0, so the host's own traffic goes through the bridge as well. The virtual switch was already configured to allow multiple MAC addresses behind an interface (see Problems on LXC in same network as its host which is a VMware VM).
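
For reference, the bridge setup on such a VM looks roughly like this in /etc/network/interfaces (a sketch only; Debian-style ifupdown is assumed from the configs shown further down, and the gateway address is an assumption):

# Physical NIC of the VM, enslaved to the bridge
iface eth0 inet manual

# virbr0 carries the VM's own IP (here VM1, .51) and is used by the containers as link
auto virbr0
iface virbr0 inet static
    address 192.168.253.51
    netmask 255.255.255.0
    gateway 192.168.253.1
    bridge_ports eth0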

Connectivity tests using ping

The VMs themselves have no connectivity issues at all and communicate with each other correctly. However, as soon as a connection goes to (or comes from) an LXC container, there are connectivity problems.

Ping tests from the VMs (the LXC hosts):

VM1 -> VM2  : success
VM2 -> VM1  : success
VM1 -> lxc1 : fail (12 packets transmitted, 0 received, 100% packet loss, time 11087ms)
VM2 -> lxc1 : fail (12 packets transmitted, 0 received, 100% packet loss, time 10999ms)
VM1 -> lxc2 : success (12 packets transmitted, 12 received, 0% packet loss, time 10999ms)
VM2 -> lxc2 : success (12 packets transmitted, 12 received, 0% packet loss, time 10997ms)
VM1 -> lxc3 : fail (12 packets transmitted, 0 received, 100% packet loss, time 11086ms)
VM2 -> lxc3 : fail (12 packets transmitted, 2 received, 83% packet loss, time 11070ms)
VM1 -> lxc4 : fail (13 packets transmitted, 0 received, 100% packet loss, time 12095ms)
VM2 -> lxc4 : fail (13 packets transmitted, 4 received, 69% packet loss, time 11998ms)

This is very surprising. Even the VM which hosts a container sees packet loss when pinging it. The exception is lxc2; for some reason, connectivity to lxc2 works.
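
For reference, the tests above are plain bounded pings along these lines:

# ping container lxc1 (.53) from VM1 with a fixed count of 12 probes
ping -c 12 192.168.253.53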

arp -a shows the following:

VM1:
? (192.168.253.56) at 00:16:3e:8f:5d:39 [ether] on virbr0
? (192.168.253.52) at 00:50:56:99:06:b8 [ether] on virbr0
? (192.168.253.53) at 00:16:3e:bf:72:12 [ether] on virbr0
? (192.168.253.54) at 00:16:3e:ef:70:14 [ether] on virbr0
? (192.168.253.55) at 00:16:3e:bf:76:71 [ether] on virbr0

VM2:
? (192.168.253.54) at 00:16:3e:ef:70:14 [ether] on virbr0
? (192.168.253.51) at 00:50:56:99:55:89 [ether] on virbr0
? (192.168.253.56) at 00:16:3e:8f:5d:39 [ether] on virbr0
? (192.168.253.53) at 00:16:3e:bf:72:12 [ether] on virbr0
? (192.168.253.55) at 00:16:3e:bf:76:71 [ether] on virbr0

So ARP resolution worked...
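
To narrow down where the packets actually get lost, ARP and ICMP can be watched directly on the bridge of the LXC host while the pings run, for example:

# show link-level (MAC) headers, don't resolve names, listen on the bridge
tcpdump -e -n -i virbr0 arp or icmp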

The picture looks completely different when the pings are launched from within a container (tested on lxc1 and lxc3). Here we even get ping errors:

lxc1 -> VM1 : success
lxc3 -> VM1 : fail (12 packets transmitted, 0 received, +12 errors, 100% packet loss, time 10999ms)
lxc1 -> VM2 : success
lxc3 -> VM2 : success
lxc1 -> lxc3: fail (12 packets transmitted, 0 received, +12 errors, 100% packet loss, time 11055ms)
lxc3 -> lxc1: fail (12 packets transmitted, 0 received, 100% packet loss, time 11086ms)
lxc1 -> lxc2: fail (12 packets transmitted, 0 received, 100% packet loss, time 11063ms)
lxc3 -> lxc2: fail (12 packets transmitted, 0 received, 100% packet loss, time 10999ms)
lxc1 -> lxc4: fail (12 packets transmitted, 0 received, +12 errors, 100% packet loss, time 11004ms)
lxc3 -> lxc4: success

The ARP tables on the containers lxc1 and lxc3 showed incomplete entries:

lxc1:
? (192.168.253.56) at <incomplete> on eth0
? (192.168.253.51) at 00:50:56:99:55:89 [ether] on eth0
? (192.168.253.52) at 00:50:56:99:06:b8 [ether] on eth0
? (192.168.253.55) at 00:16:3e:bf:76:71 [ether] on eth0
? (192.168.253.54) at 00:16:3e:ef:70:14 [ether] on eth0

lxc3:
? (192.168.253.53) at 00:16:3e:bf:72:12 [ether] on eth0
? (192.168.253.52) at 00:50:56:99:06:b8 [ether] on eth0
? (192.168.253.51) at <incomplete> on eth0
? (192.168.253.56) at 00:16:3e:8f:5d:39 [ether] on eth0
? (192.168.253.55) at 00:16:3e:bf:76:71 [ether] on eth0

I suspected an issue with the ARP announcements from the host (VM) to the network. Following the kernel documentation for arp_accept and arp_notify, I ran another set of tests with both values set to 1:

sysctl -w net.ipv4.conf.all.arp_accept=1
sysctl -w net.ipv4.conf.all.arp_notify=1
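
For completeness, the same toggles also exist per interface; they can be checked and set on the bridge itself as well:

# check the current values on the bridge interface
sysctl net.ipv4.conf.virbr0.arp_accept net.ipv4.conf.virbr0.arp_notify

# and set them there too
sysctl -w net.ipv4.conf.virbr0.arp_accept=1
sysctl -w net.ipv4.conf.virbr0.arp_notify=1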

But this didn't change anything either. There was still packet loss, although some LXCs responded to the ping, meaning communication did work to some degree.
I also saw very strange results when I launched another set of ping tests: the first few pings failed, then suddenly the pings went through. After waiting a few seconds and re-launching the same ping, I got packet loss again. Call that stable network communication...

Switch from veth to macvlan

After a lot of troubleshooting it turned out that this has nothing to do with the kernel settings for ARP; rather, it is a limitation of ESXi when it comes to bridges inside a guest VM. After a while, I finally found the important piece of information in an old lxc-users mailing list post.
The author of the post, Olivier Mauras, mentions that he needed to use MACVLAN (instead of veth) virtual interfaces for the containers and that he needed to configure two interfaces on the LXC host (the VM). The first NIC is used for the VM's own communication, the second NIC is used for the bridge.

Good hint. Let's try it with MACVLAN interfaces then.
The original configuration used veth interfaces with virbr0 (the bridge on the VM's one and only NIC) as link:

grep network /var/lib/lxc/lxc3/config
lxc.network.type = veth
lxc.network.flags = up
lxc.network.link = virbr0
lxc.network.ipv4 = 192.168.253.54/24
lxc.network.hwaddr = 00:16:3e:ef:70:54
lxc.network.ipv4.gateway = 192.168.253.1

I added a new virtual NIC to the VM and edited /etc/network/interfaces on the VM. The virtual bridge should use eth1 (the newly added NIC).

# The secondary network interface is used as bridge
iface eth1 inet manual

# Virtual bridge
auto virbr1
iface virbr1 inet manual
    bridge_ports eth1
    # bridge control
    pre-up brctl addbr virbr1
    pre-down brctl delbr virbr1
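
To verify the new bridge without waiting for the reboot mentioned further down, it can be brought up by hand (assuming ifupdown with bridge-utils installed; the pre-up/pre-down brctl lines are strictly speaking optional, since bridge_ports already takes care of creating and removing the bridge):

# bring up the new bridge without a full reboot (bridge_ports also enslaves eth1)
ifup virbr1

# verify that eth1 is attached to virbr1
brctl show virbr1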

Then I modified the LXC configurations to use the macvlan interface type, set macvlan to bridge mode, and use the newly created virbr1:

grep network /var/lib/lxc/lxc3/config
lxc.network.type = macvlan
lxc.network.macvlan.mode = bridge

lxc.network.flags = up
lxc.network.link = virbr1
lxc.network.ipv4 = 192.168.253.54/24
lxc.network.hwaddr = 00:16:3e:ef:70:54
lxc.network.ipv4.gateway = 192.168.253.1
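
Each container then needs to be restarted to pick up the new interface type. A quick way to check the result (assuming the classic LXC 1.x command line tools matching the lxc.network.* keys above):

# restart the container with the new network configuration
lxc-stop -n lxc3
lxc-start -n lxc3 -d

# inside the container, eth0 should now show up as "macvlan mode bridge"
lxc-attach -n lxc3 -- ip -d link show eth0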

After a reboot of the VMs, I ran the ping tests again:

VM1 -> VM2  : success
VM2 -> VM1  : success
VM1 -> lxc1 : success
VM2 -> lxc1 : success
VM1 -> lxc2 : success
VM2 -> lxc2 : success
VM1 -> lxc3 : success
VM2 -> lxc3 : success
VM1 -> lxc4 : success
VM2 -> lxc4 : success

From the VMs it looks good: all containers are pingable. Now the same pings from inside the containers:

lxc1 -> VM1  : success
lxc1 -> VM2  : success
lxc3 -> VM1  : success
lxc3 -> VM2  : success
lxc1 -> lxc3 : success
lxc3 -> lxc1 : success
lxc1 -> lxc2 : success
lxc3 -> lxc2 : success
lxc1 -> lxc4 : success
lxc3 -> lxc4 : success

TL;DR: Do not use veth inside a VMware VM

Finally! All containers are now reachable and the communication between them works as it should.

Now I should get a tattoo on my arm saying "Do not use veth interfaces when running Linux Containers on VMware".

Do not use veth interfaces on LXC in VM

That will spare me several hours of troubleshooting next time.



Comments (newest first)

Tourman from South Pole wrote on Apr 21st, 2018:

Thanks so much! Saved me days' worth of troubleshooting. I tried promiscuous mode at first and was really confused as to why that didn't fix it.


Andrea from Bologna (IT) wrote on Oct 19th, 2016:

Thank you !
You saved my life; I had been struggling with erratic network behaviour for 3 days.
I just wonder how you can ping the host from the guest, as far as I know with macvlan the host can't see the guests.
Also I had to create the macvlan "bridge" with ip link add link eth0 br0 type macvlan
which does not seem to be supported by centos/rhel initscripts. You seem to create a regular bridge, but lxc refused to start using a regular bridge together with lxc.network.type = macvlan
lxc.network.macvlan.mode = bridge

thanks again !


ck from Switzerland wrote on Sep 5th, 2016:

Hello Dariusz. Yes, they were both enabled. Actually all three security settings were enabled on the virtual switch, as described in the article: "(see Problems on LXC in same network as its host which is a VMware VM)."


Dariusz Z.K. from Poland wrote on Sep 5th, 2016:

Have you tried to enable 'mac spoofing' and 'promiscuous mode' on the VM virtual switch?


ck from Switzerland wrote on Dec 11th, 2015:

Hello Pete. Occasionally (but rarely) I also hit problems with macvlans after a recreation of a container. In this case I think that fixed mac addresses in the lxc config could help. Assigning a physical interface will of course work, too. But in my environment, additional containers can be deployed automatically and this would require manual intervention (by adding a new interface to the VM).


Pete from Ottawa wrote on Dec 11th, 2015:

I spent several hours fighting with the configuration you listed, with no success.

In the end I added a NIC to the VM for each container and then passed them through with:

lxc.network.type = phys
lxc.network.link = eth1

Hoping this finds someone who can't get this to work.

Thanks.

