Configuring public IP addresses on cloud servers for LXD containers

You have a cloud server and you have more than one public IP address. How do you associate those additional IP addresses with specific LXD containers? That is, how do you get an LXD container to use a public IP address?

This post has been tested with a bare-metal server.


You have configured a cloud server and arranged to have at least one additional public IP address.

In the following, we assume that

  • the gateway of your cloud server is
  • the unused public IP address is
  • the network is
  • the default network interface on the host is enp0s100 (if you have a bonded interface, the name would be something like bond0)

Creating a macvlan LXD profile

Create a new LXD profile and set up a macvlan interface. The name of the interface in the container will be eth0, the nictype is macvlan and the parent points to the default network interface on the host.

$ lxc profile create macvlan
$ lxc profile device add macvlan eth0 nic nictype=macvlan parent=enp0s100

Here is what the macvlan profile looks like.

ubuntu@myserver:~$ lxc profile show macvlan
config: {}
description: ""
    nictype: macvlan
    parent: enp0s100
    type: nic
name: macvlan

Launching the container

Launch the container, stacking the macvlan profile on top of the default profile. The container is called c1public.

$ lxc launch --profile default --profile macvlan ubuntu:18.04 c1public

Get a shell into the container and view the network interfaces.

ubuntu@myserver:~$ lxc exec c1public bash
root@c1public:~# ifconfig 
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
        inet6 fe80::216:3eff:fe55:1930 prefixlen 64 scopeid 0x20<link>
        ether 00:16:3e:55:19:30 txqueuelen 1000 (Ethernet)
        RX packets 82 bytes 5200 (5.2 KB)
        RX errors 0 dropped 0 overruns 0 frame 0
        TX packets 16 bytes 2788 (2.7 KB)
        TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
root@c1public:~# ip link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
8: eth0@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
link/ether 00:16:3e:55:19:30 brd ff:ff:ff:ff:ff:ff link-netnsid 0

At this stage, we can manually configure the appropriate public IP address on the container's eth0 network interface and it will work. If you are familiar with /etc/network/interfaces, you can go ahead and make a static network configuration. In the next section we will see how to configure the network with netplan.
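For a quick manual test before touching any configuration files, you can set the address with the ip tool inside the container — a sketch with placeholder values standing in for your actual public IP and gateway:

```shell
# Temporary manual configuration (lost on reboot); replace the
# placeholders with your unused public IP, its prefix and your gateway.
ip addr add <public-IP>/29 dev eth0
ip link set eth0 up
ip route add default via <gateway-IP> dev eth0
```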

Configuring the public IP with netplan

In the container, create the file /etc/netplan/50-static-public-ip.yaml so that it reads as follows. There are two options for the renderer: networkd (systemd-networkd, which is available on Ubuntu 18.04) and NetworkManager. We then specify the public IP address, the gateway and finally the DNS server IP addresses. You may want to replace the DNS server with that of your cloud provider.

root@c1public:~# cat /etc/netplan/50-static-public-ip.yaml
  version: 2
  renderer: networkd
      dhcp4: no
      dhcp6: no
        - <public-IP>/29
      gateway4: <gateway-IP>
        addresses: [<DNS-IP>]

Applying the netplan network configuration

Run the following command to apply the netplan network configuration. Alternatively, you can restart the container.

root@c1public:~# netplan --debug apply
** (generate:294): DEBUG: 15:46:19.174: Processing input file //etc/netplan/50-cloud-init.yaml..
** (generate:294): DEBUG: 15:46:19.174: starting new processing pass
** (generate:294): DEBUG: 15:46:19.174: Processing input file //etc/netplan/50-static-public-ip.yaml..
** (generate:294): DEBUG: 15:46:19.174: starting new processing pass
** (generate:294): DEBUG: 15:46:19.174: eth0: setting default backend to 1
** (generate:294): DEBUG: 15:46:19.175: Generating output files..
** (generate:294): DEBUG: 15:46:19.175: NetworkManager: definition eth0 is not for us (backend 1)
DEBUG:netplan generated networkd configuration exists, restarting networkd
DEBUG:no netplan generated NM configuration exists
DEBUG:device lo operstate is unknown, not replugging
DEBUG:netplan triggering .link rules for lo
DEBUG:device eth0 operstate is up, not replugging
DEBUG:netplan triggering .link rules for eth0
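If you are connected over SSH and worried about locking yourself out, `netplan try` is a more cautious alternative to `netplan apply` — it reverts the changes automatically unless you confirm them. A sketch, assuming your netplan version ships the `try` subcommand:

```shell
root@c1public:~# netplan try
# Applies the new configuration, then waits 120 seconds;
# press ENTER to keep it, or let it time out to roll back.
```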

Here is the network interface with the new IP address:

root@c1public:~# ifconfig 
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
        inet netmask broadcast
        inet6 fe80::216:3eff:fe55:1930 prefixlen 64 scopeid 0x20<link>
        ether 00:16:3e:55:19:30 txqueuelen 1000 (Ethernet)
        RX packets 489 bytes 30168 (30.1 KB)
        RX errors 0 dropped 0 overruns 0 frame 0
        TX packets 18 bytes 1356 (1.3 KB)
        TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
root@c1public:~# route
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
default         _gateway         UG    0      0        0 eth0   U     0      0        0 eth0
root@c1public:~# ping -c 3
PING ( 56(84) bytes of data.
64 bytes from ( icmp_seq=1 ttl=53 time=8.10 ms
64 bytes from ( icmp_seq=2 ttl=53 time=8.77 ms
64 bytes from ( icmp_seq=3 ttl=53 time=9.81 ms

--- ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2003ms
rtt min/avg/max/mdev = 8.106/8.896/9.810/0.701 ms

Testing the public IP address

Let’s test that the public IP address of the LXD container works. We install nginx and modify the default HTML page a bit.

ubuntu@c1public:~$ sudo apt update
ubuntu@c1public:~$ sudo apt install nginx
ubuntu@c1public:~$ cat /var/www/html/index.nginx-debian.html 
<!DOCTYPE html>
<title>Welcome to nginx!</title>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
ubuntu@c1public:~$ sudo sed -i 's/to nginx/to nginx running in a LXD container with public IP address/g' /var/www/html/index.nginx-debian.html 
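The sed invocation is a plain text substitution; applied to a sample line from the page, it shows exactly what changes:

```shell
# Demonstrate the substitution on a single sample line from the page.
echo '<title>Welcome to nginx!</title>' \
  | sed 's/to nginx/to nginx running in a LXD container with public IP address/g'
# → <title>Welcome to nginx running in a LXD container with public IP address!</title>
```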

Let’s visit the public IP address with our browser!

It worked!


Help! I can see the IP address but there is no route?!?

Most likely you misconfigured the network prefix in the netplan configuration file. You can check the prefix and the usable address range with ipcalc.

ubuntu@myserver:~$ sudo apt install ipcalc
ubuntu@myserver:~$ ipcalc
Address:        01100100.01100100.01100100.01100 000
Netmask: = 29   11111111.11111111.11111111.11111 000
Wildcard:          00000000.00000000.00000000.00000 111
Network:    01100100.01100100.01100100.01100 000
HostMin:        01100100.01100100.01100100.01100 001
HostMax:       01100100.01100100.01100100.01100 110
Broadcast:      01100100.01100100.01100100.01100 111
Hosts/Net: 6                     Class A

The usable public IP addresses have the range 100.100.100.[97-102]. Both the gateway and the LXD container's public IP address are in this range, therefore everything is fine.




    • Szop on March 31, 2019 at 16:35

    Hey Simos,

    thank you for this great article! Do you know of a way that the LXC guests and the LXC host can communicate over the network? Currently I'm able to reach all other hosts, LXC and non-LXC, in my network, but not the LXC host my LXC guests are running on.

    1. Hi!

      It is a feature of macvlan that the host and the containers cannot communicate with each other. It has to do with the implementation of macvlan in the Linux kernel. Some users like this behavior because it is a free facility that isolates the host from the containers. It is good for security.

      However, if you would rather have the host and the containers communicate with each other over the network, you can use a bridge instead. See this post for details,
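For reference, the bridged equivalent of the macvlan profile looks roughly like this — a sketch that assumes you have already created a bridge br0 on the host that enslaves the external interface; the bridge name br0 and the container name c2public are placeholders:

```shell
# Create a profile that attaches containers to the host bridge br0.
# (br0 must already exist on the host and face the external network.)
lxc profile create bridged
lxc profile device add bridged eth0 nic nictype=bridged parent=br0
# Launch a container with the bridged profile stacked on default:
lxc launch --profile default --profile bridged ubuntu:18.04 c2public
```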

    • Darwin Lemoine on August 21, 2019 at 21:44

    Hi Simos

    Thanks for this great blog. I configured my server as you describe in this post and everything works fine. But I have an issue with my public IPs.

    Let me explain my situation. I have a pool of 5 public IPs, distributed as follows:

    1 IP for the host (Ubuntu 18.04), e.g.
    4 IPs, one for each container, e.g. ( – 81)

    I have full access to each container and there are no problems connecting between them. My big problem is with the host IP: no container can connect or ping the host (IP, and the other direction does not work either.

    Something weird that I've noticed with the arp command is that it cannot complete the MAC addresses in the table, and my firewall is not the problem.

    root@host:~# arp -n
    Address        HWtype  HWaddress           Flags Mask   Iface
                           (incomplete)                     bond-wan
                           (incomplete)                     bond-wan
                   ether   ac:64:62:e1:ee:52   C            bond-wan

    Do you know what could be wrong? I appreciate any help you can provide.

    Best Regards

    1. Hi Darwin,

      It is a feature of macvlan to isolate the host from the containers. Some users prefer it for security reasons.
      However, in your case, you can still configure public IP addresses in the containers if you use a bridge instead of macvlan. That way, the containers and the host will be able to reach each other.
      See more at

        • Darwin Lemoine on August 25, 2019 at 20:14

        Hi Simos,

        Thank you for your quick response. Everything works as I expected when I use bridge mode. You have an excellent blog guiding us.

        Just one more thing. I would like to make backups and snapshots of my containers and then restore them when necessary. Do you know how to do that?

        Thank you in advance.

        Darwin Lemoine

  1. I tried the instructions here step by step, but the container still does not have an internet “connection”… I have a single network card with 2 public IPs (Hetzner) (I have made the necessary edits in sysctl.conf) and I followed the guide here. LXD/LXC is installed with snap. (The same server runs Plesk, with Imunify360.)

    1. Hi!

      Each virtualization environment has some security mechanism that restricts assigning multiple public IP addresses to the same network interface.

      In the case of Hetzner, you need to arrange for a MAC address for the container that will get the second public IP address. This is done through the Hetzner management interface.

      The documentation about this issue is at the very end of this page,
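Once Hetzner has issued the separate MAC address, it needs to be set on the container's network interface — a sketch using LXD's per-container device override, with a placeholder MAC address:

```shell
# Copy the profile's eth0 device into the container and pin the
# Hetzner-issued MAC address to it (placeholder value shown).
lxc config device override c1public eth0 hwaddr=00:51:82:11:22:33
# Verify the device settings:
lxc config device show c1public
```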

        • Κωνσταντίνος Γιαγκίδης on June 1, 2020 at 21:34

        Thank you Simos, you are very fast…. Well, I see in the Robot: Request Separate MAC. If I make that request and follow your guide again, do I have a chance of success? — Well, I see that link from the wiki… Unfortunately I'm running Ubuntu 18.04, which uses netplan for networking, and I'm a bit confused because I only feel comfortable with netplan. Anyway, thank you again; we all thank you for helping us and for your work 😀

    • Κωνσταντίνος Γιαγκίδης on June 1, 2020 at 22:12

    I sent a reply to what you said; I don't know what happened.

    Well, in the “Robot” I see the option: Request Separate MAC. If I do that and follow your guide again from the start, do I have any chance of success? — I have also seen the link that you posted. I think my current networking configuration uses netplan and not the legacy way, and I feel comfortable with netplan. If you don't mind, and have the will and time, can you please convert the configuration from that URL to netplan?

    1. The reply got caught in the spam filter ;-(.

      I think I already use netplan above. What do you mean by legacy?

    • Claudio Guzman on October 6, 2020 at 05:55

    I have an Ubuntu 18.04 server and I can't configure a public IP in LXD 3.0.3 with macvlan. My network is like this:


    Public IP (to configure in 4 containers)
    4 static ip – 19 /

    The public IP does not work in the container.

    1. Most likely it is an issue with your Internet provider.

      The IP address belongs to AT&T. Try to find documentation on how to use your static IP addresses with them. Is your server located at your premises, and have you ordered four static IP addresses?
