Configuring public IP addresses on cloud servers for LXD containers

You have a cloud server and you have more than one public IP address. How do you associate those additional IP addresses with specific LXD containers? That is, how do you get your LXD container to use a public IP address?

This post has been tested with a bare-metal server.


You have configured a cloud server and you arranged to have at least one additional public IP address.

In the following, we assume that

  • the gateway of your cloud server is
  • the unused public IP address is
  • the network is
  • the default network interface on the host is enp0s100 (if you have a bonded interface, the name would be something like bond0)

Creating a macvlan LXD profile

Create a new LXD profile and set up a macvlan interface. The name of the interface in the container will be eth0, the nictype is macvlan and the parent points to the default network interface on the host.

$ lxc profile create macvlan
$ lxc profile device add macvlan eth0 nic nictype=macvlan parent=enp0s100

Here is what the profile macvlan looks like.

ubuntu@myserver:~$ lxc profile show macvlan
config: {}
description: ""
devices:
  eth0:
    nictype: macvlan
    parent: enp0s100
    type: nic
name: macvlan
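As a usage note, the profile can also be attached to an existing container rather than only at launch time. A minimal sketch, assuming a container named c1 already exists (the name is a placeholder):

```shell
# Attach the macvlan profile to an existing container (placeholder name: c1),
# then restart it so the new eth0 device is picked up.
lxc profile add c1 macvlan
lxc restart c1
```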

Launching the container

Launch the container, specifying the macvlan profile stacked on top of the default profile. The container is called c1public.

$ lxc launch --profile default --profile macvlan ubuntu:18.04 c1public

Get a shell into the container and view the network interfaces.

ubuntu@myserver:~$ lxc exec c1public bash
root@c1public:~# ifconfig 
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
        inet6 fe80::216:3eff:fe55:1930 prefixlen 64 scopeid 0x20<link>
        ether 00:16:3e:55:19:30 txqueuelen 1000 (Ethernet)
        RX packets 82 bytes 5200 (5.2 KB)
        RX errors 0 dropped 0 overruns 0 frame 0
        TX packets 16 bytes 2788 (2.7 KB)
        TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
root@c1public:~# ip link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
8: eth0@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
link/ether 00:16:3e:55:19:30 brd ff:ff:ff:ff:ff:ff link-netnsid 0

At this stage, we can manually configure the appropriate public IP address for the container's network interface eth0, and it will work. If you are familiar with /etc/network/interfaces, you can go ahead and make the static network configuration there. In the next section we are going to see how to use netplan to configure the network.
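As a sketch of that manual approach (non-persistent; the settings are lost on reboot), using placeholder addresses from the RFC 5737 documentation range rather than the real ones:

```shell
# Inside the container: assign the public IP to eth0 by hand.
# 203.0.113.98/29 and 203.0.113.97 are placeholders; substitute your own
# public IP, prefix length, and gateway.
ip addr add 203.0.113.98/29 dev eth0
ip route add default via 203.0.113.97 dev eth0
# Optionally set a resolver, if /etc/resolv.conf is not managed for you.
echo 'nameserver 1.1.1.1' > /etc/resolv.conf
```

Because these settings do not survive a reboot, the netplan configuration below is the persistent way to do it.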

Configuring the public IP with netplan

In the container, create a file /etc/netplan/50-static-public-ip.yaml so that it reads as follows. There are two options for the renderer: networkd (systemd-networkd, which is available on Ubuntu 18.04) and NetworkManager. We then specify the public IP address, the gateway, and finally the DNS server IP addresses. You may want to replace the DNS server with that of your cloud provider.

root@c1public:~# cat /etc/netplan/50-static-public-ip.yaml
network:
  version: 2
  renderer: networkd
  ethernets:
    eth0:
      dhcp4: no
      dhcp6: no
      addresses:
      gateway4:
      nameservers:
        addresses:
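For clarity, here is a complete version of such a netplan file, written from the shell with a heredoc. All addresses are placeholders from the RFC 5737 documentation range; replace them with your own public IP, gateway, and preferred DNS servers:

```shell
# Write a complete static-IP netplan configuration (placeholder values).
cat > /etc/netplan/50-static-public-ip.yaml <<'EOF'
network:
  version: 2
  renderer: networkd
  ethernets:
    eth0:
      dhcp4: no
      dhcp6: no
      addresses:
        - 203.0.113.98/29
      gateway4: 203.0.113.97
      nameservers:
        addresses: [1.1.1.1, 8.8.8.8]
EOF
```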

Applying the netplan network configuration

Run the following command to apply the netplan network configuration. Alternatively, you can restart the container.

root@c1public:~# netplan --debug apply
** (generate:294): DEBUG: 15:46:19.174: Processing input file //etc/netplan/50-cloud-init.yaml..
** (generate:294): DEBUG: 15:46:19.174: starting new processing pass
** (generate:294): DEBUG: 15:46:19.174: Processing input file //etc/netplan/50-static-public-ip.yaml..
** (generate:294): DEBUG: 15:46:19.174: starting new processing pass
** (generate:294): DEBUG: 15:46:19.174: eth0: setting default backend to 1
** (generate:294): DEBUG: 15:46:19.175: Generating output files..
** (generate:294): DEBUG: 15:46:19.175: NetworkManager: definition eth0 is not for us (backend 1)
DEBUG:netplan generated networkd configuration exists, restarting networkd
DEBUG:no netplan generated NM configuration exists
DEBUG:device lo operstate is unknown, not replugging
DEBUG:netplan triggering .link rules for lo
DEBUG:device eth0 operstate is up, not replugging
DEBUG:netplan triggering .link rules for eth0

Here is the network interface with the new IP address,

root@c1public:~# ifconfig 
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
        inet netmask broadcast
        inet6 fe80::216:3eff:fe55:1930 prefixlen 64 scopeid 0x20<link>
        ether 00:16:3e:55:19:30 txqueuelen 1000 (Ethernet)
        RX packets 489 bytes 30168 (30.1 KB)
        RX errors 0 dropped 0 overruns 0 frame 0
        TX packets 18 bytes 1356 (1.3 KB)
        TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
root@c1public:~# route
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
default         _gateway                        UG    0      0        0 eth0
                                                U     0      0        0 eth0
root@c1public:~# ping -c 3
PING ( 56(84) bytes of data.
64 bytes from ( icmp_seq=1 ttl=53 time=8.10 ms
64 bytes from ( icmp_seq=2 ttl=53 time=8.77 ms
64 bytes from ( icmp_seq=3 ttl=53 time=9.81 ms

--- ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2003ms
rtt min/avg/max/mdev = 8.106/8.896/9.810/0.701 ms

Testing the public IP address

Let’s test that the public IP address of the LXD container works. We install nginx and modify the default HTML page a bit.

ubuntu@c1public:~$ sudo apt update
ubuntu@c1public:~$ sudo apt install nginx
ubuntu@c1public:~$ cat /var/www/html/index.nginx-debian.html 
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href=""></a>.<br/>
Commercial support is available at
<a href=""></a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
ubuntu@c1public:~$ sudo sed -i 's/to nginx/to nginx running in a LXD container with public IP address/g' /var/www/html/index.nginx-debian.html 

Let’s visit the public IP address with our browser!
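The same check can also be done from the command line. A sketch with curl, using a placeholder address in place of the real public IP:

```shell
# Fetch the page via the container's public IP (203.0.113.98 is a
# placeholder) and confirm the modified title is being served.
curl -s http://203.0.113.98/ | grep '<title>'
```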

It worked!


Help! I can see the IP address but there is no route?!?

Most likely you misconfigured the network prefix in the netplan configuration file. Find the details at

ubuntu@myserver:~$ sudo apt install ipcalc
ubuntu@myserver:~$ ipcalc
Address: 01100100.01100100.01100100.01100 000
Netmask: = 29 11111111.11111111.11111111.11111 000
Wildcard: 00000000.00000000.00000000.00000 111
Network: 01100100.01100100.01100100.01100 000
HostMin: 01100100.01100100.01100100.01100 001
HostMax: 01100100.01100100.01100100.01100 110
Broadcast: 01100100.01100100.01100100.01100 111
Hosts/Net: 6 Class A

The public IP addresses have the range 100.100.100.[97-102]. Both the gateway ( and the LXD container's public IP address ( are in this range, therefore everything is fine.
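The same sanity check can be done without ipcalc. A small shell sketch, using placeholder addresses from the RFC 5737 documentation range, that verifies two addresses share a network under a given prefix length:

```shell
# Check whether two IPv4 addresses fall in the same network for a given
# prefix length, using only shell arithmetic.
# All addresses below are placeholders (203.0.113.0/24, RFC 5737),
# not the real ones from the post.

ip_to_int() {
  # Convert a dotted-quad IPv4 address to a 32-bit integer.
  IFS=. read -r a b c d <<EOF
$1
EOF
  echo $(( (a << 24) + (b << 16) + (c << 8) + d ))
}

same_network() {
  # usage: same_network IP1 IP2 PREFIXLEN
  mask=$(( (0xFFFFFFFF << (32 - $3)) & 0xFFFFFFFF ))
  [ $(( $(ip_to_int "$1") & mask )) -eq $(( $(ip_to_int "$2") & mask )) ]
}

if same_network 203.0.113.97 203.0.113.98 29; then
  echo "same /29 network"
else
  echo "different networks: check your netplan prefix length"
fi
```

If the gateway and the container address do not land in the same network, the container gets the IP but no route, which is exactly the symptom described above.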


    • Szop on March 31, 2019 at 16:35

    Hey Simos,

    thank you for this great article! Do you know of a way that the LXC guests and the LXC host can communicate over the network? Currently I'm able to reach all other hosts, LXC and non-LXC, in my network, but not the LXC host my LXC guests are running on.

    1. Hi!

      It is a feature of macvlan that the host and the containers cannot communicate with each other. It has to do with the implementation of macvlan in the Linux kernel. Some users like this feature because it is a free facility that isolates the host from the containers, which is good for security.

      However, if you would rather have the host and the containers communicate with each other over the network, you can do so by using a bridge instead. See this post for details,

    • Darwin Lemoine on August 21, 2019 at 21:44

    Hi Simos

    Thanks for this great blog. I configured my server as you describe on this blog and everything works fine. But I have an issue with my public IPs.

    Let me explain my situation. I have a pool of 5 public IPs, distributed as follows:

    1 IP for the host (Ubuntu 18.04), e.g.
    4 IPs, one for each container, e.g. ( – 81)

    I have full access to each container and there are no problems connecting between them. My big problem is with the host IP: no container can connect or ping to the host (IP, and the same applies in the other direction.

    Something weird that I've noticed with the arp command is that it cannot complete the MAC addresses in the table, and my firewall is not the problem.

    root@host:~# arp -n
    Address              HWtype  HWaddress           Flags Mask  Iface
                         (incomplete)                            bond-wan
                         (incomplete)                            bond-wan
                         ether   ac:64:62:e1:ee:52   C           bond-wan

    Do you know what could be wrong? I appreciate any help that you can provide.

    Best Regards

    1. Hi Darwin,

      It is a feature of macvlan to isolate the host from the containers. Some users prefer it in terms of security.
      However, in your case, you can configure public IP addresses in the containers if you use a bridge instead of macvlan. That way, the containers and the host will be able to reach each other.
      See more at

        • Darwin Lemoine on August 25, 2019 at 20:14

        Hi Simos,

        Thank you for your quick response. Everything works as I expected when I use bridge mode. You have an excellent blog to guide us.

        Just one more thing. I would like to make backups and snapshots of my containers and then restore them when necessary. Do you know how to do that?

        Thank you in advance.

        Darwin Lemoine

  1. I tried the instructions here step by step, but the container still does not have an internet connection… I have a single network card with 2 public IPs (Hetzner) (I have made the necessary edits in sysctl.conf) and I followed the guide here. LXD/LXC installed with snap. (The same server runs Plesk, with Imunify360.)

    1. Hi!

      Each virtualization environment has some security mechanism around assigning public IP addresses to the same network interface.

      In the case of Hetzner, you need to arrange for a MAC address for the container that will get the second public IP address. This is done through the Hetzner management interface.

      The documentation about this issue is at the very end of this page,

        • Κωνσταντίνος Γιαγκίδης on June 1, 2020 at 21:34

        Thank you Simos, you are very fast…. Well, I see in the Robot: Request Separate MAC. If I make that request and follow your guide again, do I have a chance of success? — well, I see that link is from the wiki… Unfortunately I'm running Ubuntu 18.04 and it uses netplan for the networking. I'm a bit confused, because I feel comfortable with netplan. Anyway, thank you again; we all thank you for helping us and for your work 😀

    • Κωνσταντίνος Γιαγκίδης on June 1, 2020 at 22:12

    I sent a reply to what you said; I don't know what happened.

    Well, in the “Robot” I'm seeing the option: Request Separate MAC. If I do that, and follow your guide again from the start, do I have any chance of success? — I have also seen that link that you posted. I think my current networking configuration uses netplan and not the legacy way. I feel comfortable with netplan. If you don't mind, and you have the will and time, can you please convert the configuration from that URL to netplan?

    1. The reply got caught in the spam filter ;-(.

      I think I use netplan already above. What do you mean by legacy?

    • Claudio Guzman on October 6, 2020 at 05:55

    I have an Ubuntu 18.04 server and I can't configure a public IP in LXD 3.03 with macvlan; my example network is this:


    Public IP (to configure in 4 containers)
    4 static IPs – 19 /

    The public IP does not work in the container.

    1. Most likely it is an issue with your Internet provider.

      The IP address belongs to AT&T. Try to find documentation on how to use your static IP addresses with them. Is your server located at your premises and you have ordered four static IP addresses?

    • 512YiB on March 10, 2021 at 10:22

    Hello, Simos.
    Tried this way, as you have advised. I ordered a new clean VPS with 2 IPs to try. Unfortunately nothing worked. ip a shows the correct IP, but I can't even ping from inside the container.
    Maybe it's because I have only one MAC? Unfortunately, the hosting company doesn't allow ordering a separate MAC per IP.

    1. Setting up more than one public IP address on a cloud server is a complex process. You would need to look into the documentation of the cloud provider on how to set up those IP addresses, and whether they require you to register the additional MAC addresses that you will be using.

      Which cloud provider are you using? What is their documentation page for multiple IP addresses?

    • 512YiB on March 10, 2021 at 16:45

    Thank you for answering me, Simos.
    For this project I need only Ukrainian IPs.
    Currently I'm using and they don't have documentation at all; they gave me a link to a third-party forum.
    Neither, nor, nor provides additional MAC for IP.
    Maybe I don't need such a complicated solution at all?
    What I am trying to do: each user must have their own public IP, and all traffic must go through that unique public IP. My idea is to connect each user's PC to a VPN which leads to their own LXC container with its own IP.

    1. If the requirement is to really have different public IP addresses per customer, then you would need to do that. You could always create a separate VPS per customer but this incurs extra cost.

      There are only a few ways to implement additional IP addresses on a VPS. Therefore, if the VPS company is unable to support you but still offers the service to buy additional public IP addresses, then you can try this.

      One way to implement public IP addresses is with IP aliasing. That's how they do it at Linode, and here are their instructions. Try to follow their guide and see if it works with your VPS provider,

      What you would need to do is set up the IP aliases to enable the second IP address. Then, try to ping both IP addresses from your own computer. If they respond, it's working, and you can next figure out how to assign the address to a container.
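A sketch of the IP-aliasing step described in the reply above; the address and interface name are placeholders (RFC 5737 range, and the enp0s100 interface from the post), and whether this works at all depends on the provider:

```shell
# On the host: add the second public IP as an alias on the main interface.
# 203.0.113.99/32 and enp0s100 are placeholders for your extra IP and NIC.
ip addr add 203.0.113.99/32 dev enp0s100
# Then, from another machine, check that both public IPs answer, e.g.:
#   ping -c 3 203.0.113.99
```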

    • 512YiB on March 12, 2021 at 14:01

    Simos, thank you.
    Yes, I want to save money, because I need a lot of containers. 70+% of the weakest VPS's power would be wasted, but I would still have to pay for it.
    I've already done the same as in, but what's next?
    I’ve tried:

    iptables -t nat -A PREROUTING -p tcp -d $EXTERNAL_IP -j DNAT --to-destination $CONTAINER_IP
    iptables -t nat -A POSTROUTING -p tcp -d $CONTAINER_IP -j SNAT --to-source $EXTERNAL_IP
    iptables -t nat -A PREROUTING -p udp -d $EXTERNAL_IP -j DNAT --to-destination $CONTAINER_IP
    iptables -t nat -A POSTROUTING -p udp -d $CONTAINER_IP -j SNAT --to-source $EXTERNAL_IP
    iptables -t nat -A POSTROUTING -o enp35s0 -j MASQUERADE

    and it worked, I can ssh via those IPs, but each container itself still seems to use the host's primary IP as the default.

    Also, I have another similar task, but on Hetzner. There I've ordered separate MACs for the IPs. I did as in your manual, but something is wrong — I have the IP at eth0, but "Network is unreachable".
    I used the primary IP as the gateway for the container; is that right?

    • 512YiB on March 12, 2021 at 14:28

    Yes, the containers really use the host's primary IP as the default.
    links always gives me the host's primary IP, never the additional ones.

    1. You should be able to use those additional public IP addresses through IP aliases, in this way,

      I have not tested this though, because the trend with the VPS companies is not to offer additional IPv4 addresses. You can't get them at DigitalOcean, nor at Scaleway.

    • David Lee on April 8, 2021 at 16:38

    Hello, Simos.

    How can I configure public IP addresses on servers for LXC containers (Ubuntu 20.04)? I hope to get your help, thank you very much!

    1. Hi David!

      First of all, there is a difference between LXC and LXD. I think you mean LXD here, which is easier to use. Read about the differences at Note that both LXC and LXD are developed by the same team!

      There are several ways to give a public IP address to a container.

      First, check whether you really need to give a public IP address to a container. Most cloud providers do not give out easily extra IP addresses, and there are techniques to avoid the need for extra public IP addresses. If you really need a public IP address to a container, read on.

      Second, some cloud providers only accept a single MAC (Ethernet) address coming from your cloud server. Other cloud providers are OK with multiple MAC addresses. In the first case, you can use ipvlan or routed. In the second case, you can use macvlan or a public bridge. There are tutorials for all four cases. Start at
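As a sketch of the 'routed' option mentioned above, which works even when the provider allows only the host's own MAC address on the wire; the interface name, container name, and ipv4.address value are placeholders:

```shell
# Create a profile using the 'routed' NIC type (LXD 4.x / recent 3.x).
# 203.0.113.98 is a placeholder for the container's public IP, and
# enp0s100 is a placeholder for the host's network interface.
lxc profile create routed
lxc profile device add routed eth0 nic nictype=routed parent=enp0s100 ipv4.address=203.0.113.98
lxc launch --profile default --profile routed ubuntu:20.04 c2public
```

With routed, the host answers ARP for the container's IP and forwards the traffic, so no extra MAC address ever appears on the provider's network.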

      Third, if you can share the name of the cloud provider, I can have a look at what is available to them.

    • Sidou on September 20, 2021 at 21:36

    Do the container and the host have to be in the same range?
    I tried your config and I still can’t make it work.
    My host's public IP is something like 47.xx.xx.82 and it has a DNS server (which is set up to allow requests from the container's public IP, by the way), and the public IP I want to use for the container is something like 216.xx.xx.12.
    I do have an LXD bridge profile that runs the other containers just fine through the local 10.3.xx.xx range. They communicate with each other and with the host and also have access to the internet, but I can't figure out how to configure the network for that particular container that must be visible to the outside world with a public IP through the macvlan profile.
    Any idea?
