How to get LXD containers to obtain an IP from the LAN with ipvlan networking

You are using LXD containers and you want one or more containers to use an IP address from the LAN (that is, to get an IP address just like the host does).

LXD currently supports four ways to do that; depending on your needs, you select the appropriate one.

  1. Using macvlan. See https://blog.simos.info/how-to-make-your-lxd-container-get-ip-addresses-from-your-lan/
  2. Using bridged. See https://blog.simos.info/how-to-make-your-lxd-containers-get-ip-addresses-from-your-lan-using-a-bridge/
  3. Using routed. See https://blog.simos.info/how-to-get-lxd-containers-get-ip-from-the-lan-with-routed-network/
  4. Using ipvlan. This is the tutorial you are reading now.

Why use the ipvlan networking?

You would use ipvlan networking if you want to expose containers to the local network (the LAN, or the Internet if you are using an Internet server and have allocated several public IPs).

Any container with ipvlan will appear on the network with the MAC address of the host. Therefore, it works even when your laptop is connected to the network over WiFi (or through any router with port security). That is, you can use ipvlan where macvlan and bridged cannot work.

You have to use static network configuration for these containers. This means:

  1. You need to make sure that the IP address you give to the ipvlan container will not be handed out by the router to another device in the future; otherwise there will be an IP conflict. You can do so by going into the configuration of the router and reserving that IP address as in use.
  2. The container (i.e. the services running in the container) should not make changes to the network interface.

You can verify whether your LXD installation supports ipvlan by running the following command and checking for the ipvlan API extensions and the network_ipvlan LXC feature:

$ lxc info
...
api_extensions:
...
- container_nic_ipvlan
- container_nic_ipvlan_gateway
- container_nic_ipvlan_host_table
- container_nic_ipvlan_mode
...
  lxc_features:
    network_ipvlan: "true"
...
$

Special requirements for container images

The default network configuration in Ubuntu 18.04 or newer uses netplan, with eth0 getting its configuration over DHCP. The way netplan does this interferes with ipvlan, so we use a workaround. Depending on the Linux distribution in the container, you may need special configuration. The Ubuntu workaround is based on cloud-init and makes up the whole cloud-init section of the profile below. Below is the list of LXD profiles per Linux distribution of the container image.

  1. Ubuntu container images
  2. Debian container images
  3. Fedora container images

ipvlan LXD profile for Ubuntu container images

Here is the ipvlan profile, which has been tested on Ubuntu. Create a profile with this name. Then, for each container that uses ipvlan networking, we will create a new individual profile based on this initial one. The reason we create such individual profiles is that we need to hard-code the IP address in them. The values that change are the IP address (in two locations; replace with your own addresses), the parent interface (on the host), and the nameserver IP addresses (these are public DNS servers from Google and Cloudflare). You can create an empty profile, then edit it and replace the existing content with the following (lxc profile create ipvlan, lxc profile edit ipvlan).

config:
  user.network-config: |
    #cloud-config
    version: 2
    ethernets:
      eth0:
        addresses:
          - 192.168.1.200/32
        dhcp4: no
        dhcp6: no
        nameservers:
          addresses: [8.8.8.8, 1.1.1.1]
        routes:
         - to: 0.0.0.0/0
           via: 169.254.0.1
           on-link: true
description: "ipvlan LXD profile"
devices:
  eth0:
    ipv4.address: 192.168.1.200
    nictype: ipvlan
    parent: enp3s0
    type: nic
name: ipvlan
used_by:

We are going to make copies of the ipvlan profile, one for each IP address. Therefore, let’s create the LXD profiles for 192.168.1.200 and 192.168.1.201. When you edit them, change the IP address in both locations: the cloud-init addresses line and the device’s ipv4.address.

$ lxc profile copy ipvlan ipvlan_192.168.1.200
$ EDITOR=nano lxc profile edit ipvlan_192.168.1.200
$ lxc profile copy ipvlan ipvlan_192.168.1.201
$ EDITOR=nano lxc profile edit ipvlan_192.168.1.201
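If you prefer not to edit each copy by hand, you can generate the per-IP profiles non-interactively. The following is a sketch that assumes the base ipvlan profile above carries 192.168.1.200 as its placeholder address; sed rewrites both occurrences (the cloud-init address and the device’s ipv4.address) in one pass.

```shell
# Sketch: create one profile per IP without opening an editor.
# Assumes the base "ipvlan" profile uses 192.168.1.200 in both places.
for ip in 192.168.1.200 192.168.1.201; do
  lxc profile copy ipvlan "ipvlan_${ip}"
  # Rewrite every occurrence of the placeholder address with the target IP,
  # then feed the result back into the profile.
  lxc profile show "ipvlan_${ip}" \
    | sed "s/192\.168\.1\.200/${ip}/g" \
    | lxc profile edit "ipvlan_${ip}"
done
```

The /32 CIDR suffix in the cloud-init section is preserved, since sed only replaces the address part.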

Skip to the next main section to test the profile.

ipvlan LXD profile for Debian container images

The following is an alternative LXD ipvlan profile that can be used on Debian 10 (buster). It might be useful for other Linux distributions as well; if this specific LXD profile works for a distribution other than Debian, please report it below so that I can update the post. It explicitly disables DHCP network configuration in the container. It further uses cloud-init instructions to manually create /etc/resolv.conf, because without DHCP there would be no such file in the container. The suggested DNS server is 8.8.8.8 (Google); change it if you would like. The two items you need to update for your case are the IP address for the container and the network interface of the host that this container will attach to (through ipvlan). Note that without the dhcp4: false instruction, the container takes a minute or two to complete startup: it tries to get a DHCP lease until it times out, and only then does cloud-init set up the nameserver.

config:
  user.network-config: |
    #cloud-config
    version: 2
    ethernets:
        eth0:
          dhcp4: false
          dhcp6: false
  user.user-data: |
    #cloud-config
    bootcmd:
      - echo 'nameserver 8.8.8.8' > /etc/resolvconf/resolv.conf.d/tail
      - systemctl restart resolvconf
description: ipvlan profile for Debian container images
devices:
  eth0:
    ipv4.address: 192.168.1.201
    name: eth0
    nictype: ipvlan
    parent: enp3s0
    type: nic
name: ipvlan_debian

You can launch such a Debian container with ipvlan using a command line like the following.

lxc launch images:debian/10/cloud mydebian --profile default --profile ipvlan_debian

Note that for Debian 11 (not yet released at the time of writing) the above does not work. If you can figure out a way to make it work for Debian 11, please write a comment.

ipvlan LXD profile for Fedora container images

The following is an alternative LXD ipvlan profile that can be used on Fedora. It might be useful for other Linux distributions as well; if this specific LXD profile works for a distribution other than Fedora, please report it below so that I can update the post. The profile has two sections: the cloud-init section, which configures the networking in the container once, using NetworkManager, and the LXD device configuration, which tells LXD how to set up the ipvlan networking on the host. The suggested DNS server is 8.8.8.8 (Google); you may change it to another free public DNS server. The two items you need to update for your case are the IP address for the container and the network interface of the host that this container will attach to (through ipvlan).

Note that you would launch the container with a command line like the following.

lxc launch images:fedora/33/cloud myfedora --profile default --profile ipvlan_fedora

Here is the ipvlan_fedora profile. Create an empty profile with lxc profile create ipvlan_fedora, then edit it (lxc profile edit ipvlan_fedora) and replace the content with the following.

config:
  user.user-data: |
    #cloud-config
    bootcmd:
      - nmcli connection modify "System eth0" ipv4.addresses 192.168.1.202/32
      - nmcli connection modify "System eth0" ipv4.gateway 169.254.0.1
      - nmcli connection modify "System eth0" ipv4.dns 8.8.8.8
      - nmcli connection modify "System eth0" ipv4.method manual
      - nmcli connection down "System eth0"
      - nmcli connection up "System eth0"
description: ipvlan profile for Fedora container images
devices:
  eth0:
    ipv4.address: 192.168.1.202
    name: eth0
    nictype: ipvlan
    parent: enp3s0
    type: nic
name: ipvlan_fedora

Using the ipvlan networking in LXD

We create a container called myipvlan using the default profile and on top of that the ipvlan profile.

$ lxc launch ubuntu:20.04 myipvlan --profile default --profile ipvlan
Creating myipvlan
Starting myipvlan
$ lxc list myipvlan 
+----------+---------+----------------------+-----------+-----------+
|   NAME   |  STATE  |         IPV4         |   TYPE    | SNAPSHOTS |
+----------+---------+----------------------+-----------+-----------+
| myipvlan | RUNNING | 192.168.1.200 (eth0) | CONTAINER | 0         |
+----------+---------+----------------------+-----------+-----------+
$ 

According to LXD, the container has configured the IP address that was packaged into the cloud-init configuration.

Get a shell into the container and ping:

  1. other IP addresses on your LAN
  2. an Internet host such as www.google.com.
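A quick sketch of such checks, using the names and addresses from this tutorial (the exact route output depends on your profile and distribution):

```shell
# Inside the container (lxc shell myipvlan), verify the static configuration:
ip -4 addr show eth0       # should list 192.168.1.200/32
ip route                   # should show a default route via 169.254.0.1 (onlink)
ping -c 3 192.168.1.1      # another address on your LAN, e.g. the router
ping -c 3 www.google.com   # an Internet host, which also exercises DNS
```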

Here is a test run using a Fedora container image.

$ lxc launch images:fedora/33/cloud myfedora --profile default --profile ipvlan_fedora
Creating myfedora
Starting myfedora                         
$ lxc list myfedora
+----------+---------+----------------------+-----------+-----------+
|   NAME   |  STATE  |         IPV4         |   TYPE    | SNAPSHOTS |
+----------+---------+----------------------+-----------+-----------+
| myfedora | RUNNING | 192.168.1.202 (eth0) | CONTAINER | 0         |
+----------+---------+----------------------+-----------+-----------+
$ lxc shell myfedora
[root@myfedora ~]# ping -c 3 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_seq=1 ttl=111 time=12.1 ms
64 bytes from 8.8.8.8: icmp_seq=2 ttl=111 time=12.2 ms
64 bytes from 8.8.8.8: icmp_seq=3 ttl=111 time=12.1 ms

--- 8.8.8.8 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2002ms
rtt min/avg/max/mdev = 12.148/110.215/201.306/117.007 ms
[root@myfedora ~]# logout
$ 

Conclusion

We have seen how to set up and use ipvlan in LXD when launching Ubuntu, Debian 10, and Fedora container images (Debian 11 is still pending; if you figure it out, please write a comment).

We showed how to use LXD profiles to simplify the creation of such containers; the profile carries the IP address of the container, which means that each container needs its own individual LXD profile. Note that an LXD profile can be attached to several containers, so if you edit a profile for one container, the change will apply to every existing container that uses it (i.e. a mess). You could also create the containers without an additional LXD profile, by performing lxc config commands on the host and networking commands inside the container. We do not show that here.
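For completeness, the manual alternative might look roughly like this; the container name (mycontainer) and the address are examples, and the in-container commands are left to you (or to cloud-init):

```shell
# Sketch: attach an ipvlan NIC directly to an existing container,
# without creating a dedicated profile.
lxc config device add mycontainer eth0 nic \
    nictype=ipvlan \
    parent=enp3s0 \
    ipv4.address=192.168.1.203
# The container still needs its default route and nameserver configured
# inside (e.g. with ip/nmcli/resolv.conf), since no cloud-init profile
# is doing that for it.
```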

You get a similar result when using ipvlan and routed. I do not go into detail about the practical differences between the two.

Permanent link to this article: https://blog.simos.info/how-to-get-lxd-containers-obtain-ip-from-the-lan-with-ipvlan-networking/

23 comments


    • Luken on February 27, 2021 at 12:00

    What exactly doesn’t work with Debian? If we knew it would be easier to help.

    1. Thanks for the reply.

      If you run the commands for Debian, you will notice that the networking in the container is not set up correctly; the container has no network connectivity. This is likely related to Debian's network subsystem, which apparently resets some of the network configuration provided by LXD.

    • Łukasz Zaroda on March 1, 2021 at 20:14

    I tested Debian 10 image and networking works just fine. I used debian/buster/cloud image, and two profiles:

    default:
    
    
    config:
      user.user-data: |
        #cloud-config
        bootcmd:
          - echo 'nameserver 8.8.8.8' > /etc/resolvconf/resolv.conf.d/tail
          - systemctl restart resolvconf
    description: Default LXD profile
    devices:
      root:
        path: /
        pool: default
        type: disk
    name: default
    

    debian-test:

    config: {}
    description: 'Debian Test #1 LXD profile'
    devices:
      eth0:
        ipv4.address: 192.168.7.205
        nictype: ipvlan
        parent: eth1
        type: nic
    name: debian-test
    

    I didn’t notice any issues with networking.

    1. Thanks Łukasz!

      I just realized that Debian 11 is still in development and Debian 10 is the latest version.
      I updated the LXD profile for Debian 10 and now it works with ipvlan.
      Compared to your version, I just added dhcp4: false in the profile so that the container is usable as soon as it is started.
      If DHCP is not disabled in the container, the container tries to get a DHCP lease, which takes about a minute to time out.
      Only then does it run the bootcmd instruction to set up the nameserver.

    • Łukasz Zaroda on March 1, 2021 at 20:26

    I also tested the Debian 11 image, and it also kind of works. Kind of, because there was one difference: DNS resolution didn’t work. It turns out that /etc/resolv.conf is not updated by systemctl restart resolvconf for some reason, so I had to manually put nameserver 8.8.8.8 there, and name resolution started working.

    1. I think that was the issue when I was testing with Debian 11.
      The networking changes in Debian 11 make it so that any external configuration of the nameserver
      is reset as soon as the container starts up.
      The requirements are

      1. the container has to be usable as soon as it starts up (has networking setup properly, including the nameserver).
      2. if you restart the container, it should still work.
    • Łukasz Zaroda on March 1, 2021 at 20:33

    Maybe this user.network-config part broke the networking, but it doesn’t seem to be required.

    1. If there is no user.network-config, then it works for Debian 11?
      If you have a working profile for Debian 11, I am happy to add above.

    • Andrew on March 3, 2021 at 15:10

    Hi, please help! I have a CentOS 8 container running with an ipvlan device config. Everything works fine but there is an issue with setting the nameservers. The problem is that I can’t ping google.com but I can ping 8.8.8.8.
    I tried the “fedora” approach but had no luck.

    /etc/resolv.conf doesn’t change:

    cat /etc/resolv.conf
    
    # Generated by NetworkManager
    
    search lxd
    nameserver 10.96.201.1
    

    Thanks!

    1. Most likely CentOS 8 (Stream or non-Stream?) has small but significant differences from Fedora.

      What is happening is that NetworkManager in CentOS 8 has the final say and configures a nameserver (overwriting our setting).
      That nameserver is the host, which likely does not respond due to ipvlan.
      What you need is to get the container not to set a DNS server, and set one yourself.

      What’s your container image (images:centos/8/cloud ?)?

    • Andrew on March 4, 2021 at 07:31

    Thanks for your reply. My container image is images:centos/8, not cloud and not stream. I put nameserver 8.8.8.8 into /etc/resolv.conf by hand and with that, pinging google.com works fine. However, I don’t think it’s right to add the nameserver manually.

    1. The images:centos/8 container image does not have support for cloud-init. This means that it is not possible to automate the creation of such a container with ipvlan using just an LXD profile.

      In that ipvlan LXD profile we have added instructions for both LXD and the container to setup themselves and enable ipvlan networking. The instructions for the container are written in cloud-init instructions (upper part), and the container has to support cloud-init in order to parse them and execute them. The alternative is to type these commands destined for the container as soon as you get a shell into the container, manually.

      Note that in some container images, the default networking subsystem will reset some networking parameters, which will make the container lose the networking configuration that LXD applied in the very beginning. With cloud-init, we can instruct the container image not to do that.
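      For reference, the manual equivalent of the cloud-init section, typed in a shell inside such a container, would be roughly the following. The connection name "System eth0" and the addresses come from the Fedora profile above and may differ in CentOS.

```shell
# Run inside the container, since images:centos/8 cannot execute
# the cloud-init instructions itself.
nmcli connection modify "System eth0" ipv4.addresses 192.168.1.202/32
nmcli connection modify "System eth0" ipv4.gateway 169.254.0.1
nmcli connection modify "System eth0" ipv4.dns 8.8.8.8
nmcli connection modify "System eth0" ipv4.method manual
# Bounce the connection so the static configuration takes effect.
nmcli connection down "System eth0" && nmcli connection up "System eth0"
```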

    • Andrew on March 4, 2021 at 09:06

    Thanks a lot for the clarification. In my next container I will try images:centos/8/cloud. For now hardcoded resolv.conf suits me for current needs. Thanks!

    • MCZ on April 13, 2021 at 02:28

    Ping and the Internet connection work from within the container, but I cannot ping into the container from the LAN… any clues?

    1. Hi!

      You should be able to ping from another computer on the LAN to the IP address of the ipvlan LXD container.
      Verify first that you can ping the LXD host from the other computers on the LAN.

      Use tcpdump, first on the host, then in the ipvlan LXD container to follow the ICMP packets.

      I pinged from my phone to the ipvlan LXD container. Here is the output of tcpdump on the host:

      ...
      01:00:43.687787 IP 192.168.1.6 > 192.168.1.200: ICMP echo request, id 3, seq 3, length 64
      01:00:43.687862 IP 192.168.1.200 > 192.168.1.6: ICMP echo reply, id 3, seq 3, length 64
      ...
      

      And here is the output of tcpdump in the ipvlan LXD container,

      21:57:10.822322 IP 192.168.1.200 > 192.168.1.6: ICMP echo reply, id 2, seq 1, length 64
      21:57:11.825202 IP 192.168.1.200 > 192.168.1.6: ICMP echo reply, id 2, seq 2, length 64
      21:57:12.828097 IP 192.168.1.200 > 192.168.1.6: ICMP echo reply, id 2, seq 3, length 64
      ...
      
    • Sergey on May 5, 2021 at 11:42

    Thank you very much for such helpful articles!

    I managed to set up an ipvlan network, but one problem remains:
    I cannot access the container (ssh, ftp, http, …) from another computer on the same local network, but ping works.

    It turned out that this issue can be solved by disabling UFW on the host with LXD. However, I would like to keep UFW enabled.

    Is it possible to somehow configure it for the container in the same way as for the host itself?
    For example, I want only port 22 to be accessible on the container from my local network.

    1. Thank you!

      You are describing a common issue with firewalls and their interactions with network services on a host.
      That is, these two are different and independent, and in a way that is the way it should be.

      Potentially, LXD could open some firewall rules for you when you launch an ipvlan container.
      It is not unprecedented: when you create a proxy device with the nat=true flag, LXD creates the firewall rules
      for the port forwarding for you. Conveniently, these rules are attached to the container:
      if the container is gone, the rules are gone as well, without further interaction from you.

      You can ask for such a feature at https://github.com/lxc/lxd/issues
      In any case, LXD already sets up the IP address for you with ipvlan,
      so such a feature is not completely out of the question.
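      Until such a feature exists, an untested sketch of a host-side UFW rule scoped to the container's IP would look like the following; whether ipvlan traffic actually traverses the host's filter chains should be verified with tcpdump or UFW logging.

```shell
# Allow only SSH to the ipvlan container while keeping UFW enabled on the host.
sudo ufw allow proto tcp to 192.168.1.200 port 22
sudo ufw status numbered   # confirm the rule was added
```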

    • Sergey on May 6, 2021 at 11:35

    What you are describing sounds very interesting and really convenient, but I’m not sure if I understood you correctly.

    Let me ask you a clarifying question: let’s say that my host computer has IP 192.168.1.100 and the container running on it has IP 192.168.1.200 via ipvlan. If I understood correctly (or just heard what I wanted to hear, not the truth), by adding a properly configured proxy device to this container, I can open access to 192.168.1.200:22 (tcp) for computers on the same local network while UFW is enabled?

    Or is it about proxying some other (free) port, for example 2222 (to port 22 of the container)?

    I did a similar trick earlier, with the host IP.

    I tried adding a proxy device to forward container port 22 to host port 22, but got an “address already in use” error.

    I have only a superficial understanding of networking technologies, I came from the side of the problem and try to solve it by poking at things with sticks. 🙂

    1. I mentioned LXD proxy devices and the ability for LXD to manage some iptables rules for your container as a way to show that it might be possible to add some firewall support (iptables) to ipvlan. It was more of a thought experiment rather than a suggestion that you can use proxy devices with ipvlan containers. In fact, you cannot use proxy devices with ipvlan containers, hence the errors that you get.

      If you use ipvlan containers just as a way to make a network service in that container available to the LAN, then you could consider using proxy devices instead (and let the container use lxdbr0 networking).

      See https://blog.simos.info/how-to-use-the-lxd-proxy-device-to-map-ports-between-the-host-and-the-containers/ which explains how to make a Web server running in a standard container (lxdbr0) accessible from the LAN. The downside here is that the network port on the host must be free in order to create a proxy device. That is, if you want to create two web servers with proxy devices, then you need to use two different ports (such as 80 and 81) on the host.
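      As a sketch, such a proxy device for a web server in a standard lxdbr0 container (hypothetical name web1) would be created like this:

```shell
# Forward host port 80 to port 80 inside the container over lxdbr0.
# The host port must be free; a second web container would need e.g. port 81.
lxc config device add web1 http80 proxy \
    listen=tcp:0.0.0.0:80 \
    connect=tcp:127.0.0.1:80
```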

    • Sergey on May 6, 2021 at 17:02

    Ah, now I get it!

    And yes, I already did that earlier as described in the article you linked; it helped me a lot and works fine. However, the problem is that a standard port for a service can be used only once per IP 🙁 Not that this is a huge problem, but I would like to make the services in my containers available on their default ports (each service on its own IP).

    Also, there is the limitation that my host is connected to the local network only via WiFi (which, as far as I know, restricts the options for obtaining unique IPs for containers).

    By the way, does the routed network have the same behavior as ipvlan with regard to UFW?
    That is, by default I also cannot implement what I described earlier without special settings?
    Unfortunately, I cannot verify this myself yet.

    1. If you can set up local DNS, then you can host all websites on a single server. That is, if your host is 192.168.1.10, and you are able to assign DNS names at your LAN’s DNS server such as:

      192.168.1.10 lxd web1 web2 web3
      

      then you can use a reverse proxy with lxdbr0 and avoid setting up ipvlan/routed altogether.

      See https://www.linode.com/docs/guides/beginners-guide-to-lxd-reverse-proxy/ on how it can be done. Note that on a LAN you do not set up Let’s Encrypt, so the steps are further simplified.

      Otherwise, you would use routed or ipvlan. These two are different, and I think you need to set up firewall rules on different chains for each.

    • Sergey on May 9, 2021 at 11:12

    Thanks for the suggestion, it solves some of the problems 🙂

    • Thanh Tung on August 17, 2021 at 10:45

    Please update the post with IPv6 instructions. Thank you in advance.
