You have a cloud server and you have more than one public IP address. How do you get those additional IP addresses associated with specific LXD containers? That is, how do you get your LXD container to use a public IP address?
This post has been tested on a packet.net bare-metal server.
Prerequisites
You have configured a cloud server and arranged to have at least one additional public IP address.
In the following, we assume that
- the gateway of your cloud server is 100.100.100.97
- the unused public IP address is 100.100.100.98
- the network is 100.100.100.96/29
- the default network interface on the host is enp0s100 (if you have a bonded interface, the name would be something like bond0)
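If you are not sure which one is the default network interface on the host, the default route shows it. A quick check (the gateway and interface below are the example values assumed above; yours will differ):

$ ip route show default
# expect a line such as: default via 100.100.100.97 dev enp0s100
# the name after "dev" is what goes into parent= when creating the profile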
Creating a macvlan LXD profile
Create a new LXD profile and set up a macvlan interface. The name of the interface in the container will be eth0, the nictype is macvlan and the parent points to the default network interface on the host.
$ lxc profile create macvlan
$ lxc profile device add macvlan eth0 nic nictype=macvlan parent=enp0s100
Here is what the macvlan profile looks like.
ubuntu@myserver:~$ lxc profile show macvlan
config: {}
description: ""
devices:
  eth0:
    nictype: macvlan
    parent: enp0s100
    type: nic
name: macvlan
used_by:
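A profile can also be attached to an existing container rather than a freshly launched one. A sketch, assuming a container named c1:

$ lxc profile add c1 macvlan
# if the new eth0 does not appear immediately, restart the container
$ lxc restart c1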
Launching the container
Launch the container by specifying the macvlan profile stacked on top of the default profile. The container is called c1public.
$ lxc launch --profile default --profile macvlan ubuntu:18.04 c1public
Get a shell into the container and view the network interfaces.
ubuntu@myserver:~$ lxc exec c1public bash
root@c1public:~# ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet6 fe80::216:3eff:fe55:1930  prefixlen 64  scopeid 0x20<link>
        ether 00:16:3e:55:19:30  txqueuelen 1000  (Ethernet)
        RX packets 82  bytes 5200 (5.2 KB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 16  bytes 2788 (2.7 KB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
....
root@c1public:~# ip link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
8: eth0@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 00:16:3e:55:19:30 brd ff:ff:ff:ff:ff:ff link-netnsid 0
At this stage, we can manually configure the appropriate public IP address on the container's eth0 network interface and it will work. If you are familiar with /etc/network/interfaces, you can go ahead and make the static network configuration there; a sketch is shown below. In the next section we are going to see how to use netplan to configure the network.
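For reference, here is a minimal ifupdown sketch; it only applies if the container image still uses ifupdown rather than netplan (Ubuntu 18.04 images use netplan, covered next), and the file name is hypothetical:

# /etc/network/interfaces.d/50-static-public-ip
auto eth0
iface eth0 inet static
    address 100.100.100.98
    netmask 255.255.255.248
    gateway 100.100.100.97
    # dns-nameservers requires the resolvconf package
    dns-nameservers 8.8.8.8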
Configuring the public IP with netplan
In the container, create a file /etc/netplan/50-static-public-ip.yaml with the following content. There are two options for the renderer: networkd (systemd-networkd, which is available on Ubuntu 18.04) and NetworkManager. We then specify the public IP address, the gateway and finally the DNS server IP addresses. You may want to replace the DNS server with that of your cloud provider.
root@c1public:~# cat /etc/netplan/50-static-public-ip.yaml
network:
  version: 2
  renderer: networkd
  ethernets:
    eth0:
      dhcp4: no
      dhcp6: no
      addresses:
        - 100.100.100.98/29
      gateway4: 100.100.100.97
      nameservers:
        addresses:
          - 8.8.8.8
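Note that on newer netplan releases (0.103 and later) the gateway4 key is deprecated in favour of an explicit default route. An equivalent sketch for those versions, replacing the gateway4 line:

      routes:
        - to: default
          via: 100.100.100.97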
Applying the netplan network configuration
Run the following command to apply the netplan network configuration. Alternatively, you can restart the container.
root@c1public:~# netplan --debug apply
** (generate:294): DEBUG: 15:46:19.174: Processing input file //etc/netplan/50-cloud-init.yaml..
** (generate:294): DEBUG: 15:46:19.174: starting new processing pass
** (generate:294): DEBUG: 15:46:19.174: Processing input file //etc/netplan/50-static-public-ip.yaml..
** (generate:294): DEBUG: 15:46:19.174: starting new processing pass
** (generate:294): DEBUG: 15:46:19.174: eth0: setting default backend to 1
** (generate:294): DEBUG: 15:46:19.175: Generating output files..
** (generate:294): DEBUG: 15:46:19.175: NetworkManager: definition eth0 is not for us (backend 1)
DEBUG:netplan generated networkd configuration exists, restarting networkd
DEBUG:no netplan generated NM configuration exists
DEBUG:device lo operstate is unknown, not replugging
DEBUG:netplan triggering .link rules for lo
DEBUG:device eth0 operstate is up, not replugging
DEBUG:netplan triggering .link rules for eth0
root@c1public:~#
Here is the network interface with the new IP address,
root@c1public:~# ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 100.100.100.98  netmask 255.255.255.255  broadcast 0.0.0.0
        inet6 fe80::216:3eff:fe55:1930  prefixlen 64  scopeid 0x20<link>
        ether 00:16:3e:55:19:30  txqueuelen 1000  (Ethernet)
        RX packets 489  bytes 30168 (30.1 KB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 18  bytes 1356 (1.3 KB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
...
root@c1public:~# route
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
default         _gateway        0.0.0.0         UG    0      0        0 eth0
100.100.100.97  0.0.0.0         255.255.255.240 U     0      0        0 eth0
root@c1public:~# ping -c 3 www.ubuntu.com
PING www.ubuntu.com (91.189.89.118) 56(84) bytes of data.
64 bytes from www-ubuntu-com.nuno.canonical.com (91.189.89.118): icmp_seq=1 ttl=53 time=8.10 ms
64 bytes from www-ubuntu-com.nuno.canonical.com (91.189.89.118): icmp_seq=2 ttl=53 time=8.77 ms
64 bytes from www-ubuntu-com.nuno.canonical.com (91.189.89.118): icmp_seq=3 ttl=53 time=9.81 ms

--- www.ubuntu.com ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2003ms
rtt min/avg/max/mdev = 8.106/8.896/9.810/0.701 ms
root@c1public:~#
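You can also verify from the host that LXD sees the new address; the IPV4 column of the container listing should now show the public IP (the exact table layout varies between LXD versions),

ubuntu@myserver:~$ lxc list c1public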
Testing the public IP address
Let’s test that the public IP address of the LXD container works. We install nginx and modify the default HTML page a bit.
ubuntu@c1public:~$ sudo apt update
...
ubuntu@c1public:~$ sudo apt install nginx
...
ubuntu@c1public:~$ cat /var/www/html/index.nginx-debian.html
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
ubuntu@c1public:~$ sudo sed -i 's/to nginx/to nginx running in a LXD container with public IP address/g' /var/www/html/index.nginx-debian.html
ubuntu@c1public:~$
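Before opening the browser, a quick check from any other machine should already return the modified heading (assuming the example public IP from above):

$ curl -s http://100.100.100.98/ | grep '<h1>'
# expected: <h1>Welcome to nginx running in a LXD container with public IP address!</h1>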
Let’s visit the public IP address with our browser!
[Screenshot: the modified nginx welcome page, served from the container's public IP address]
It worked!
Troubleshooting
Help! I can see the IP address but there is no route?!?
Most likely you misconfigured the network prefix in the netplan configuration file. You can verify the details of your network with ipcalc,
ubuntu@myserver:~$ sudo apt install ipcalc
ubuntu@myserver:~$ ipcalc 100.100.100.96/29
Address:   100.100.100.96       01100100.01100100.01100100.01100 000
Netmask:   255.255.255.248 = 29 11111111.11111111.11111111.11111 000
Wildcard:  0.0.0.7              00000000.00000000.00000000.00000 111
=>
Network:   100.100.100.96/29    01100100.01100100.01100100.01100 000
HostMin:   100.100.100.97       01100100.01100100.01100100.01100 001
HostMax:   100.100.100.102      01100100.01100100.01100100.01100 110
Broadcast: 100.100.100.103      01100100.01100100.01100100.01100 111
Hosts/Net: 6                    Class A
The public IP addresses have the range 100.100.100.[97-102]. Both the gateway (100.100.100.97) and the LXD container public IP address (100.100.100.98) are in this range, therefore all is fine.
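Inside the container, ip route helps to tell the two failure modes apart. A quick check (the file name is the one created earlier in this post):

root@c1public:~# ip route
# A healthy setup shows both a default route and the on-link subnet route:
#   default via 100.100.100.97 dev eth0
#   100.100.100.96/29 dev eth0
# If the subnet route is missing, fix the /29 prefix in
# /etc/netplan/50-static-public-ip.yaml and run "netplan apply" again.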
Comments
Hey Simos,
thank you for this great article! Do you know of a way that the LXC guests and the LXC host can communicate over the network? Currently I’m able to reach all other hosts, LXC and non-LXC, on my network 192.168.178.0/24, but not the LXC host my LXC guests are running on.
Author
Hi!
It is a feature of macvlan that the host and the containers cannot communicate with each other. It has to do with the implementation of macvlan in the Linux kernel. Some users like this behaviour because it is a free facility that isolates the host from the containers. It is good for security.
However, if you would rather have the host and the containers communicate with each other over the network, you can do so using a bridge instead. See this post for details,
https://blog.simos.info/how-to-make-your-lxd-containers-get-ip-addresses-from-your-lan-using-a-bridge/
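A commonly used workaround, not covered in this post, is to give the host its own macvlan interface so that host-to-container traffic has a path through the macvlan switch. A sketch, assuming the interface name from this post and a spare, unused address on the same subnet (this does not persist across reboots):

$ sudo ip link add macvlan-shim link enp0s100 type macvlan mode bridge
$ sudo ip addr add 100.100.100.99/29 dev macvlan-shim
$ sudo ip link set macvlan-shim up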
Hi Simos
Thanks for this great blog. I configured my server as you describe in this blog and everything works fine. But I have an issue with my public IPs.
Let me explain my situation. I have a pool of 5 public IPs, distributed as follows:
1 IP for the host (Ubuntu 18.04), e.g. 100.100.100.97
4 IPs, one for each container, e.g. (100.100.100.98 – 81)
I have full access to each container and no problems with them connecting to each other. My big problem is with the host IP: no container can connect to or ping the host (IP 100.100.100.97), and the same from the other side.
Something weird that I’ve noticed with the arp command is that it cannot complete the MAC addresses in the table, and my firewall is not the problem.
root@host:~# arp -n
Address          HWtype  HWaddress          Flags Mask   Iface
100.100.100.98           (incomplete)                    bond-wan
100.100.100.99           (incomplete)                    bond-wan
100.100.100.96   ether   ac:64:62:e1:ee:52  C            bond-wan
Do you know what could be wrong? I appreciate any help that you can provide.
Best Regards
Darwin
Author
Hi Darwin,
It is a feature of macvlan to isolate the host from the containers. Some users prefer it for security reasons.
However, in your case, you can still configure public IP addresses in the containers if you use a bridge instead of macvlan. In that way, the containers and the host will be able to reach each other.
See more at
https://blog.simos.info/how-to-make-your-lxd-containers-get-ip-addresses-from-your-lan-using-a-bridge/
Hi Simos,
Thank you for your quick response. Everything works like I expected when I use bridge mode. You have an excellent blog to guide us.
Just one more thing. I would like to make backups and snapshots of my containers and then restore them when necessary. Do you know how to do that?
Thank you in advance.
Darwin Lemoine
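For reference, LXD has built-in commands that cover snapshots and simple backups. A sketch, assuming the container name used in this post:

$ lxc snapshot c1public before-change   # take a snapshot named "before-change"
$ lxc restore c1public before-change    # roll the container back to it
$ lxc copy c1public c1public-backup     # keep a full copy as a simple backup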
I tried the instructions here step by step, but the container still does not have an internet “connection”… I have a single network card with 2 public IPs (Hetzner) (I have made the necessary edits in sysctl.conf) and I followed this guide. LXD/LXC installed with snap. (The same server is running Plesk, with Imunify360.)
Author
Hi!
Each virtualization environment has some security mechanism that restricts assigning additional public IP addresses to the same network interface.
In the case of Hetzner, you need to arrange for a MAC address for the container that will get the second public IP address. This is done through the Hetzner management interface.
The documentation about this issue is at the very end of this page, https://wiki.hetzner.de/index.php/Netzkonfiguration_Debian/en#Bridged
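Once Hetzner assigns the separate MAC address, one way to use it is to pin that MAC on the container's NIC, so that traffic for the second public IP is accepted. A sketch, with 00:50:56:00:11:22 as a hypothetical Hetzner-assigned MAC:

$ lxc stop c1public
$ lxc config set c1public volatile.eth0.hwaddr 00:50:56:00:11:22   # hypothetical MAC
$ lxc start c1public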
Thank you Simos, you are very fast…. Well, I have seen in the Robot: Request Separate MAC. If I make that request and follow your guide again, do I have a chance of success? — Well, I see that link from the wiki… Unfortunately I’m running Ubuntu 18.04 and it uses netplan for the networking. I’m a bit confused, because I am comfortable with netplan. Anyway, thank you again; we all thank you for helping us and for your work 😀
I sent a reply to what you said; I don’t know what happened.
Well, in the “Robot” I’m seeing the option: Request Separate MAC. If I do that, and follow your guide again from the start, do I have any chance of success? — I have also seen the link that you posted. I think my current networking configuration uses netplan and not the legacy way. I feel comfortable with netplan. If you don’t mind, and you have the will and time, can you please convert the configuration from that URL to netplan?
Author
The reply got caught in the spam filter ;-(.
I think I already use netplan above. What do you mean by legacy?
I have an Ubuntu 18.04 server and I can’t configure a public IP in LXD 3.03 with macvlan. An example network is this:
Host: 32.25.128.17
netmask 255.255.255.224
gateway 32.25.128.1
Public IP (to configure in 4 containers)
4 static IPs, 114.107.83.16 – 19 / 255.255.255.255
The public IPs do not work in the containers.
Author
Most likely it is an issue with your Internet provider.
The IP address belongs to AT&T. Try to find documentation on how to use your static IP addresses with them. Is your server located at your premises, and have you ordered four static IP addresses?