How to get LXD containers to get an IP from the LAN with the routed network

UPDATE #3 – 27 January 2021: Fedora requires special instructions for routed to work. See the section The routed LXD profile for Fedora below for more.

UPDATE #2 – 26 January 2021: Debian requires special instructions for routed to work. See the section The routed LXD profile for Debian below for more.

UPDATE #1 – 11 August 2020: Ubuntu 20.04 as a container did not work with the original instructions in this post. An addition to the profile is needed to make it work for both older and newer versions of Ubuntu (18.04 and 20.04): the line on-link: true in the cloud-init section of the LXD profile. This post has been updated to cover this addition.

You are using LXD containers and you want a container (or more) to get an IP address from the LAN (or, get an IP address just like the host does).

LXD currently supports four ways to do that, and depending on your needs, you select the appropriate way.

  1. Using macvlan. See https://blog.simos.info/how-to-make-your-lxd-container-get-ip-addresses-from-your-lan/
  2. Using bridged. See https://blog.simos.info/how-to-make-your-lxd-containers-get-ip-addresses-from-your-lan-using-a-bridge/
  3. Using routed. It is this post, read on.
  4. Using ipvlan. This tutorial is pending.

For more on the routed network option, see the LXD documentation on routed and this routed thread on the LXD discussion forum.

Why use the routed network?

You would use the routed network if you want to expose containers to the local network (the LAN; or the Internet, if you are using an Internet server and have been allocated several public IPs).

Any container with routed appears on the network with the MAC address of the host. Therefore, this will work even when you use it on a laptop that is connected to the network over WiFi (or through any router with port security). That is, you can use routed where macvlan and bridged cannot work.
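
Under the hood, routed uses proxy ARP: the host answers ARP queries for the container's IP address with its own MAC address. Once a routed container is running, you can see the proxy entries on the host. This is a sketch of the expected shape (taken from the comment logs further down); your veth and parent interface names will differ:

$ ip neigh show proxy
169.254.0.1 dev veth8b310893 proxy
192.168.1.200 dev enp6s0 proxy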

You have to use static network configuration for these containers, which means:

  1. You need to make sure that the IP address that you give to the routed container will not be assigned by the router to some other device in the future; otherwise, there will be an IP conflict. You can prevent this by going into the configuration of the router and marking the IP address as in use. A quick probe from the host, shown right after this list, also helps verify that the address is currently free.
  2. The container (i.e. the services running in the container) should not be performing changes to the network interface, as it may mess up the setup.
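
Here is such a probe, a minimal sketch that assumes the example address 192.168.1.200 and the host interface enp6s0 used later in this post; no replies suggests the address is currently free:

$ ping -c 3 -W 1 192.168.1.200
$ sudo arping -c 3 -I enp6s0 192.168.1.200   # ARP-level probe, if arping is installed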

Requirements for Ubuntu containers

The default network configuration in Ubuntu 18.04 or newer uses netplan, with eth0 configured over DHCP. The way netplan does this interferes with routed, so we use a workaround. This workaround is required only for the Ubuntu container images; other distributions such as CentOS do not need it. The workaround is based on cloud-init, and it makes up the whole cloud-init section in the profile below.

The routed LXD profile

Here is the routed profile, which has been tested on Ubuntu. Create a profile with this name. Then, for each container that uses the routed network, we will create a new individual profile based on this initial one. The reason we create such individual profiles is that we need to hard-code the IP address in them. The values that you need to change are the IP address (it appears in two locations; replace both with your own), the parent interface (on the host), and the nameserver IP address (the one below is a public DNS server from Google). You can create an empty profile, then edit it and replace the existing content with the following (lxc profile create routed, lxc profile edit routed).

config:
  user.network-config: |
    version: 2
    ethernets:
        eth0:
            addresses:
            - 192.168.1.200/32
            nameservers:
                addresses:
                - 8.8.8.8
                search: []
            routes:
            -   to: 0.0.0.0/0
                via: 169.254.0.1
                on-link: true
description: Default LXD profile
devices:
  eth0:
    ipv4.address: 192.168.1.200
    nictype: routed
    parent: enp6s0
    type: nic
name: routed
used_by:

We are going to make copies of the routed profile, one for each IP address. Therefore, let’s create the LXD profiles for 192.168.1.200 and 192.168.1.201. When you edit them, set the appropriate IP address in both locations (the cloud-init section and the devices section):

$ lxc profile copy routed routed_192.168.1.200
$ EDITOR=nano lxc profile edit routed_192.168.1.200
$ lxc profile copy routed routed_192.168.1.201
$ EDITOR=nano lxc profile edit routed_192.168.1.201
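
If you prefer not to edit the whole profile by hand, the NIC device part can be updated non-interactively (a sketch; depending on your LXD version, the syntax is key=value or key value). The IP address inside user.network-config (the cloud-init section) still needs to be edited as shown above:

$ lxc profile device set routed_192.168.1.201 eth0 ipv4.address=192.168.1.201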

We are ready to test the profiles.

(alternative) The routed LXD profile for Debian

The following is an alternative LXD routed profile that can be used on Debian; it was created by tomp. It might be useful for other Linux distributions as well; if this specific LXD profile works for a distribution other than Debian, please report it below so that I can update the post. It explicitly stops the container from configuring the network through DHCP. It further uses cloud-init instructions to manually create an /etc/resolv.conf, because without DHCP there would not be such a file in the container. The suggested DNS server is 8.8.8.8 (Google); you may change it if you like. The two items that you need to update for your case are the IP address for the container, and the network interface of the host that this container will attach to (through routed).

config:
  user.network-config: |
    #cloud-config
    version: 2
    ethernets:
        eth0:
          dhcp4: false
          dhcp6: false
          routes:
          - to: 0.0.0.0/0
            via: 169.254.0.1
            on-link: true
  user.user-data: |
    #cloud-config
    bootcmd:
      - echo 'nameserver 8.8.8.8' > /etc/resolvconf/resolv.conf.d/tail
      - systemctl restart resolvconf
description: Default LXD profile
devices:
  eth0:
    ipv4.address: 192.168.1.201
    name: eth0
    nictype: routed
    parent: enp3s0
    type: nic
name: routed_debian
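
Since this profile relies on cloud-init, you need a cloud-enabled container image. A sketch of the launch command, assuming the Debian 10 cloud variant from the images: remote:

lxc launch images:debian/10/cloud mydebian --profile default --profile routed_debian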

(alternative) The routed LXD profile for Fedora

The following is an alternative LXD routed profile that can be used on Fedora; it was created by tomp. It might be useful for other Linux distributions as well; if this specific LXD profile works for a distribution other than Fedora, please report it below so that I can update the post. The profile has two sections: the cloud-init section, which configures the networking in the container once (on first boot) using NetworkManager, and the LXD device configuration, which tells LXD how to set up the routed networking on the host. The suggested DNS server is 8.8.8.8 (Google); you may replace it with another free public DNS server if you like. The two items that you need to update for your case are the IP address for the container, and the network interface of the host that this container will attach to (through routed).

Note that you would launch the container with a command line like the following:

lxc launch images:fedora/33/cloud fedora --profile default --profile routed_for_fedora

config:
  user.user-data: |
    #cloud-config
    bootcmd:
      - nmcli connection modify "System eth0" ipv4.addresses 192.168.1.201/32
      - nmcli connection modify "System eth0" ipv4.gateway 169.254.0.1
      - nmcli connection modify "System eth0" ipv4.dns 8.8.8.8
      - nmcli connection modify "System eth0" ipv4.method manual
      - nmcli connection down "System eth0"
      - nmcli connection up "System eth0"
description: Default LXD profile
devices:
  eth0:
    ipv4.address: 192.168.1.201
    name: eth0
    nictype: routed
    parent: enp3s0
    type: nic
name: routed_for_fedora

Using the routed network in LXD

We create a container called myrouted using the default profile and on top of that the routed_192.168.1.200 profile.

$ lxc launch ubuntu:18.04 myrouted --profile default --profile routed_192.168.1.200
Creating myrouted
Starting myrouted
$ lxc list -c ns4t 
+------+---------+----------------------+-----------+
| NAME |  STATE  |         IPV4         |   TYPE    |
+------+---------+----------------------+-----------+
| myr..| RUNNING | 192.168.1.200 (eth0) | CONTAINER |
+------+---------+----------------------+-----------+
$ 

According to LXD, the container has configured the IP address that was supplied through the cloud-init configuration.
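
You can also verify from inside the container that the configuration was applied; you should see the /32 address on eth0 and a default route via 169.254.0.1, the link-local gateway that LXD sets up on the host side of the veth pair (the expected route is sketched below; flags such as proto static and onlink may also appear):

$ lxc exec myrouted -- ip -4 addr show eth0
$ lxc exec myrouted -- ip route
default via 169.254.0.1 dev eth0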

Get a shell into the container and ping:

  1. your host
  2. your router
  3. an Internet host such as www.google.com.

All of the above should work. Finally, ping from the host to the IP address of the container. It should work as well.
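
For instance, with the addresses used in this post (substitute your own router and host IPs):

$ lxc exec myrouted -- ping -c 2 192.168.1.1     # the router
$ lxc exec myrouted -- ping -c 2 www.google.com  # an Internet host
$ ping -c 2 192.168.1.200                        # from the host to the container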

Conclusion

You have configured routed in LXD so that one or more containers can get IP addresses from the LAN. Using a profile helps automate the process. Still, if you want to set things up manually, see the references above for instructions.

Permanent link to this article: https://blog.simos.info/how-to-get-lxd-containers-get-ip-from-the-lan-with-routed-network/

Comments

    • Brudi on May 9, 2020 at 10:15

    lmao, I’m pretty sure your blog is far more useful and comprehensive than the LXD documentation itself

    • Aanjaneya on May 25, 2020 at 17:25

    This does not work with Ubuntu 20.04. It does work with Ubuntu 18.04, but it is very picky or sensitive, meaning that I do get the required IP but cannot ping until I delete the container and launch it again with a different routed profile (with a different IP). I have not tested further than ping on 18.04.

    • Aanjaneya on May 25, 2020 at 20:37

    By the way, I am using Arch Linux-based Manjaro Linux as the host and Ubuntu 20.04 as the LXC container. It did work with Ubuntu 18.04, even though a bit unpredictably.

    • Aanjaneya on May 25, 2020 at 22:02

    After some digging, I found that something has changed in Ubuntu 20.04. Here are the results of the relevant network commands after creating and launching the 20.04 and 18.04 containers:

    Ubuntu 20.04:


    ip route
    default via 192.168.0.1 dev wlp2s0 proto static metric 600
    10.10.10.0/24 dev lxdbr0 proto kernel scope link src 10.10.10.1 linkdown
    192.168.0.0/24 dev wlp2s0 proto kernel scope link src 192.168.0.2 metric 600
    192.168.0.51 dev vethe1bcc11c scope link
    192.168.122.0/24 dev virbr0 proto kernel scope link src 192.168.122.1 linkdown

    ip neigh show proxy
    169.254.0.1 dev vethe1bcc11c proxy
    192.168.0.51 dev wlp2s0 proxy

    lxc exec focalheadless-1 ip r
    (no output: the default route is missing inside the 20.04 container)

    Now on Ubuntu 18.04:


    ip route
    default via 192.168.0.1 dev wlp2s0 proto static metric 600
    10.10.10.0/24 dev lxdbr0 proto kernel scope link src 10.10.10.1 linkdown
    192.168.0.0/24 dev wlp2s0 proto kernel scope link src 192.168.0.2 metric 600
    192.168.0.51 dev veth8b310893 scope link
    192.168.122.0/24 dev virbr0 proto kernel scope link src 192.168.122.1 linkdown

    ip neigh show proxy
    169.254.0.1 dev veth8b310893 proxy
    192.168.0.51 dev wlp2s0 proxy

    lxc exec bionic-1 ip r
    default via 169.254.0.1 dev eth0 ----> we receive a result here inside the container

    Also, you may notice a slight difference between vethe1bcc11c and veth8b310893, though I am not sure about this.

    • sdfsasfsda on July 2, 2020 at 13:44

    For years I’ve been coming to your blog for guidance on this issue of exposing LXD to the LAN. Thank you so much!

    1. Thank you for your kind words!

    • Ponder Muse on August 9, 2020 at 12:55

    I am new to lxc, and I am trying to set up a routed profile for an lxc container in order to make the container reachable from the host, as well as from other servers on the local network, over a WiFi interface.

    I am following the steps on this page, and although the profile was created successfully, when I add it to the lxc container and restart the container, no IP address is ever assigned to the container.

    Where would I need to look for logs to get an idea of where the setup is going wrong?

    I am trying this on Ubuntu 20.04 using lxc version 4.0.2.

    Thanks in advance,
    PM

    1. Hi!

      You mention version 4.0.2. Do you use the LXD snap package, and are you tracking the 4.0/stable channel?

      If instead you are using LXC (see https://blog.simos.info/comparison-between-lxc-and-lxd/), then this post does not apply there.

      Can you show the full LXD profile for the routed network, plus the command you use to launch the LXD container?
      You can view any container errors by running the command lxc info --show-log myrouted.

        • Ponder Muse on August 9, 2020 at 18:12

        Hello Simos,

        Sorry, yes. I am using lxd snap package 4.0/stable.

        Profile for routed network:

        $ lxc profile show testRoutedProfile
        config:
          user.network-config: |
            version: 2
            ethernets:
              eth0:
                addresses:
                - 192.168.1.100/32
                nameservers:
                  addresses:
                  - 8.8.8.8
                    search: []
                routes:
                - to: 0.0.0.0/0
                  via: 192.168.1.1
        description: Test Server Routed Profile
        devices:
          eth0:
            ipv4.address: 192.168.1.100
            nictype: routed
            parent: wlo1
            type: nic
        name: testRoutedProfile
        used_by:
        - /1.0/instances/test-server
        

        Command to add the created routed network profile:

        $ lxc profile add test-server testRoutedProfile
        

        Command to launch the LXD container:

        $ lxc start test-server
        

        Command to show container log:

        $ lxc info --show-log test-server
        Name: test-server
        Location: none
        Remote: unix://
        Architecture: x86_64
        Created: 2020/08/08 18:03 UTC
        Status: Running
        Type: container
        Profiles: default, testRoutedProfile
        Pid: 38634
        Ips:
          lo:   inet    127.0.0.1
          lo:   inet6   ::1
          eth0: inet6   fe80::cc47:37ff:febc:933e   veth67de2a5b
        Resources:
          Processes: 45
          CPU usage:
            CPU usage (in seconds): 4
          Memory usage:
            Memory (current): 135.88MB
            Memory (peak): 167.17MB
          Network usage:
            lo:
              Bytes received: 316B
              Bytes sent: 316B
              Packets received: 4
              Packets sent: 4
            eth0:
              Bytes received: 7.48kB
              Bytes sent: 4.49kB
              Packets received: 47
              Packets sent: 24
        

        Log:

        lxc test-server 20200809165847.736 WARN cgfsng - cgroups/cgfsng.c:mkdir_eexist_on_last:1152 - File exists - Failed to create directory "/sys/fs/cgroup/cpuset//lxc.monitor.test-server"
        lxc test-server 20200809165847.738 WARN cgfsng - cgroups/cgfsng.c:mkdir_eexist_on_last:1152 - File exists - Failed to create directory "/sys/fs/cgroup/cpuset//lxc.payload.test-server"
        lxc test-server 20200809165847.741 WARN cgfsng - cgroups/cgfsng.c:fchowmodat:1573 - No such file or directory - Failed to fchownat(17, memory.oom.group, 1000000000, 0, AT_EMPTY_PATH | AT_SYMLINK_NOFOLLOW )

        Command showing container list:

        $ lxc list
        +-------------+---------+------+------+-----------+-----------+
        |    NAME     |  STATE  | IPV4 | IPV6 |   TYPE    | SNAPSHOTS |
        +-------------+---------+------+------+-----------+-----------+
        | test-server | RUNNING |      |      | CONTAINER | 0         |
        +-------------+---------+------+------+-----------+-----------+
        

        Command showing profiles used:

        $ lxc profile list
        +-------------------+---------+
        |       NAME        | USED BY |
        +-------------------+---------+
        | default           | 1       |
        +-------------------+---------+
        | testRoutedProfile | 1       |
        +-------------------+---------+
        

        There isn’t much shown in the log in terms of networking errors as far as I can see.

        Thanks in advance,
        PM

      1. The difference that I notice is that in your profile the via: has a different IP address from the one I give above. The 169.x.x.x that I give in the post is a workaround for an issue in the configuration of the Linux distribution in the container. Perhaps because 192.168.1.1 is a valid IP address on your network, it causes an issue.

        • Ponder Muse on August 11, 2020 at 12:36

        When I first configured the routed profile, I wasn’t clear on the meaning of the address 169.254.0.1, so I incorrectly guessed that it needed to be my network’s gateway address. I have since corrected it, and it is now set back to via: 169.254.0.1. I am learning about networking in general at the same time as I am learning all about lxc.

      2. I have updated this post with information about the issue with Ubuntu 20.04 LTS in the routed container: it requires the addition of the line on-link: true in the LXD profile to make it work with Ubuntu 20.04 LTS.

    • bmullan on August 10, 2020 at 00:22

    @simos

    In your routed config there is a line:

    via: 169.254.0.1

    But that IP is not highlighted, and nowhere does it say what it is or where it’s from.

    Thanks for any info

    Brian Mullan

      • bmullan on August 10, 2020 at 01:03

      @simos
      Never mind, I searched and found out that 169.254.0.1 gets assigned by LXD for the routed interface.

    • bmullan on August 10, 2020 at 02:22

    Simos

    I am on an Ubuntu 20.04 system with my WiFi interface on the host = wlp3s0,

    whose IP is 192.168.1.81.

    I tried your profile but changed the parent to wlp3s0.

    = = = = = = =

    config:
      user.network-config: |
        version: 2
        ethernets:
            eth0:
                addresses:
                - 192.168.1.200/32
                nameservers:
                    addresses:
                    - 8.8.8.8
                    search: []
                routes:
                -   to: 0.0.0.0/0
                    via: 169.254.0.1
    description: Default LXD profile
    devices:
      eth0:
        ipv4.address: 192.168.1.200
        nictype: routed
        parent: wlp3s0
        type: nic
    name: routed
    used_by:

    = = = = = = =

    Created a test Ubuntu 20.04 container named “test”:

    $ lxc launch ubuntu:18.04 test --profile default --profile routed_200

    $ lxc list
    shows that container test has IP address 192.168.1.200.

    From the HOST I can ping the container’s IP, 192.168.1.200.

    From container TEST I can ping my host: 192.168.1.81

    From container TEST I can ping my router: 192.168.1.1

    From my container I CANNOT ping www.google.com or any other Internet site or IP.

    Any ideas?

    Thanks
    Brian Mullan

    1. I have also tested this post on Ubuntu 18.04 with a WiFi interface, and it worked for me.

      Have a look at /etc/resolv.conf in the container. It should mention the stub resolver IP address, 127.0.0.53.

      Then, run systemd-resolve --status and verify that the actual DNS server is 8.8.8.8. It should look like:

      Link 2 (eth0)
            Current Scopes: DNS
             LLMNR setting: yes
      MulticastDNS setting: no
            DNSSEC setting: no
          DNSSEC supported: no
               DNS Servers: 8.8.8.8
      
    2. Apparently, the failure to work was related to the newer container images, based on Ubuntu 20.04 LTS. The host being 20.04 LTS does not appear to matter here (correct me if I am wrong).

      I have updated the post, per the discussion on DLO (discuss.linuxcontainers.org), to note that Ubuntu 20.04 (in the container) requires on-link: true.

    • Ponder Muse on August 10, 2020 at 13:17

    @simos, @bmullan,

    Having re-read this blog and having read bmullan’s comments, I have now been able to get the container networked over WiFi using the routed profile. I can list two things that I changed that made things work but, I don’t know what it is in particular about each of them that was causing the issues beforehand.

    Issue 1 – My container existed before creating the routed profile, so after creating the routed profile I was simply stopping the container (lxc stop test-server), adding the routed profile to the container (lxc profile add test-server testRoutedProfile) and thereafter starting the container (lxc start test-server). This was causing the container to never pick up the routed profile’s configuration for some reason, so I then stopped and deleted the container altogether (lxc stop test-server, lxc delete test-server) and recreated it using the command included in this blog page (lxc launch ubuntu:20.04 test-server --profile default --profile testRoutedProfile). Using the launch command, the container did get the IP address assigned but, exec-ing into the container, I could still not ping the host from the container. But it was progress at least…

    Issue 2 – As bmullan pointed out, something was not quite right with his setup using Ubuntu 20.04, so I followed the advice given and recreated the container using Ubuntu 18.04 instead (lxc launch ubuntu:18.04 test-server --profile default --profile testRoutedProfile). I can confirm that with Ubuntu 18.04 the networking is set up correctly, and I can now ping the host, my gateway and the web (e.g. www.google.com) from inside the container. Any ideas why an Ubuntu 18.04 container works with the routed profile but not Ubuntu 20.04?

    Cheers,
    PM

    1. Regarding the first issue: the configuration in the cloud-init section only runs when the container starts for the first time. Therefore, it is expected not to run when you apply the profile to an existing container. There are instructions on how to reset a container/VM so that cloud-init runs again on the next restart; a sketch follows below.
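
      A sketch of such a reset, assuming the container image ships the cloud-init CLI (container name as in this thread):

      $ lxc exec test-server -- cloud-init clean --logs
      $ lxc restart test-server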

      Regarding the second issue: I launched a routed 20.04 container on Ubuntu 18.04 and got the same issue. There is no default route, hence the problem. The DNS server is configured properly, so if the default route were to get set, DNS would work fine.
      Still investigating this.

      1. I have updated this post with the fix for Ubuntu 20.04 LTS in the container. You would need to update your routed profile by adding the line shown in the post, then recreate the container.
        If you already have a container and you do not want to recreate it, you can edit /etc/netplan/50-cloud-init.yaml inside the container and add the line there (see the sketch below). Finally, restart the container.
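
        The addition is the on-link: true line in the routes entry of that file; a sketch of how the stanza should end up (the rest of the file is as generated by cloud-init):

            routes:
            -   on-link: true
                to: 0.0.0.0/0
                via: 169.254.0.1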

    • Ponder Muse on August 17, 2020 at 17:34

    Hello @simos,

    I updated the profiles to include on-link: true as you suggested, and I can confirm that Ubuntu 20.04 containers using routed profiles are now networked as expected.

    Cheers,
    PM

  1. @Ponder Muse: Thanks for verifying!

    • bmullan on August 18, 2020 at 19:36

    @Ponder Muse, does your host use WiFi or Ethernet?

    • Sigve Holmen on September 28, 2020 at 12:30

    @simos Please advise how this should be configured for IPv6. I tried the following, but I suspect the routes for IPv6 are not correct:

    network:
        version: 2
        ethernets:
            eth0:
                    addresses: [ 62.92.68.75/29, "2001:4610:004b::75/64"]
                    gateway4: 62.92.68.73
                    gateway6: 2001:4610:004b::1
                    nameservers:
                     addresses: [1.1.1.1, 1.0.0.1, "2606:4700:4700::1111"]
                    routes:
                    -  to: 0.0.0.0/0
                       via: 169.254.0.1
                       on-link: true
                    -  to:  "::0/0"
                       via: "fe80::fc09:6bff:fec1:4120/64"
                       on-link: true
    
    • Ponder Muse on December 3, 2020 at 12:52

    Hi @bmullan, only just reading this now. My LXC containers are all networked over wifi.

    • rfruit on January 7, 2021 at 09:43

    This blog is really helpful! And very well explained!

    I just had one technical question to improve my understanding of how things work: in the documentation (https://linuxcontainers.org/lxd/docs/master/instances#nic-routed) it is mentioned that:

    “It requires the following sysctls to be set:
    If using IPv4 addresses:
    net.ipv4.conf.<parent>.forwarding=1”

    But in the blog this is never set, and still everything works perfectly fine.

  2. In Ubuntu and derivatives, forwarding is enabled by default.
    I did not see complaints on this, which probably means that it is a default setting in most mainstream Linux distributions.
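
    You can check, and if needed enable, forwarding on the host; a quick sketch with the parent interface name used in this post:

    $ sysctl net.ipv4.conf.enp6s0.forwarding
    $ sudo sysctl -w net.ipv4.conf.enp6s0.forwarding=1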

    • rpitester on January 14, 2021 at 21:10

    Hello, first of all, great blog; I am learning how to use LXD from your posts.
    I am trying to follow your approach, and so far it works on my RPi 4 using the wired interface, but it is not working at all over WiFi. I modified the profile as you pointed out, using on-link, but it does not make any difference; in my case I can’t even ping 8.8.8.8 inside the container. My profile is just like this:

    config:
      user.network-config: |
        version: 2
        ethernets:
            eth0:
                addresses:
                - 192.168.1.10/32
                nameservers:
                    addresses:
                    - 8.8.8.8
                    search: []
                routes:
                -   to: 0.0.0.0/0
                    via: 169.254.0.1
                    on-link: true
    description: Default LXD profile
    devices:
      eth0:
        ipv4.address: 192.168.1.10
        nictype: routed
        parent: wlan0
        type: nic
    name: routed_10
    used_by:
    

    I also checked the post on the Linux containers forum where they ask if there is any firewall running; there isn’t in my case. Any idea what could be wrong?

  3. I tried the routed network and it worked, but I was not able to reach the Internet from the container (e.g. ping google.com), although other hosts are reachable from the container.

    My host machine is not a VM; it is a bare-metal box running Ubuntu 20.04.

    ~ lxc list
    +------+---------+----------------------+------+-----------+-----------+
    | NAME |  STATE  |         IPV4         | IPV6 |   TYPE    | SNAPSHOTS |
    +------+---------+----------------------+------+-----------+-----------+
    | u1   | RUNNING | 192.168.1.200 (eth0) |      | CONTAINER | 0         |
    +------+---------+----------------------+------+-----------+-----------+
    
    ~ lxc exec u1 bash
    root@u1:~# ping 192.168.1.36 
    PING 192.168.1.36 (192.168.1.36) 56(84) bytes of data.
    64 bytes from 192.168.1.36: icmp_seq=1 ttl=64 time=0.116 ms
    64 bytes from 192.168.1.36: icmp_seq=2 ttl=64 time=0.069 ms
    --- 192.168.1.36 ping statistics ---
    2 packets transmitted, 2 received, 0% packet loss, time 1027ms
    rtt min/avg/max/mdev = 0.069/0.092/0.116/0.023 ms
    
    root@u1:~# ping google.com
    ping: google.com: Temporary failure in name resolution
    

    My profile config

    lxc profile show routed
    config:
      user.network-config: |
        version: 2
        ethernets:
            eth0:
                addresses:
                - 192.168.1.200/24
                nameservers:
                    addresses:
                    - 8.8.8.8
                    search: []
                routes:
                -   to: 0.0.0.0/0
                    via: 169.254.0.1
                    on-link: true
    description: Default LXD profile
    devices:
      eth0:
        ipv4.address: 192.168.1.200
        nictype: routed
        parent: wlp3s0
        type: nic
    name: routed
    used_by:
    - /1.0/instances/u1
    
    lxc exec u1 bash
    root@u1:~# ip link
    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    2: eth0@if28: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 9a:0e:27:a5:33:d9 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    root@u1:~#
    
    root@u1:~# ping 1.1.1.1
    PING 1.1.1.1 (1.1.1.1) 56(84) bytes of data.
    64 bytes from 1.1.1.1: icmp_seq=1 ttl=56 time=1001 ms
    64 bytes from 1.1.1.1: icmp_seq=2 ttl=56 time=102 ms
    64 bytes from 1.1.1.1: icmp_seq=3 ttl=56 time=23.5 ms
    64 bytes from 1.1.1.1: icmp_seq=4 ttl=56 time=46.3 ms
    64 bytes from 1.1.1.1: icmp_seq=5 ttl=56 time=70.8 ms
    
  4. This is an issue with name resolution (DNS configuration) in the container.

    Can you verify that you tried the above with the container images ubuntu:18.04 or ubuntu:20.04?
    There are more container images in the images: repository, including Ubuntu images. If you have selected a Debian or Fedora container image, you would need to look above in this post for the separate LXD profiles for them.

    You can ping the Internet (the IP address 1.1.1.1) but not hostnames, which is a DNS issue. The cloud-config section above has a setting that enables DNS for you, using one of the public DNS servers (8.8.8.8 is provided by Google).

    First, can you run the following command in the container? It will try to make a name resolution using specifically the Google DNS server. If that works, then DNS resolutions are not blocked somehow. You should get the IP address of www.google.com in the output, both IPv4 and IPv6.

    host www.google.com 8.8.8.8
    

    Second, if the above works, verify that the cloud-init information was parsed correctly and can be found in the container.

    $ cat /etc/netplan/50-cloud-init.yaml 
    # This file is generated from information provided by the datasource.  Changes
    # to it will not persist across an instance reboot.  To disable cloud-init's
    # network configuration capabilities, write a file
    # /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg with the following:
    # network: {config: disabled}
    network:
        ethernets:
            eth0:
                addresses:
                - 192.168.1.200/32
                nameservers:
                    addresses:
                    - 8.8.8.8
                    search: []
                routes:
                -   on-link: true
                    to: 0.0.0.0/0
                    via: 169.254.0.1
        version: 2
    $ 
    

    Third, verify that the resolver (in the case of Ubuntu, systemd-resolved) is aware of the DNS server. The last lines should mention the public Google DNS server, 8.8.8.8.

    $ systemd-resolve --status
    ...
    Link 2 (eth0)
          Current Scopes: DNS    
    DefaultRoute setting: yes    
           LLMNR setting: yes    
    MulticastDNS setting: no     
      DNSOverTLS setting: no     
          DNSSEC setting: no     
        DNSSEC supported: no     
      Current DNS Server: 8.8.8.8
             DNS Servers: 8.8.8.8
    $ 
    
  5. I am using an ubuntu:20.04 container. Let me try the other steps explained above. Thanks again, Simos.

  6. Tried these steps

    root@u1:~# host www.google.com 8.8.8.8
    ;; connection timed out; no servers could be reached
    
    root@u1:~# cat /etc/netplan/50-cloud-init.yaml
    
    # This file is generated from information provided by the datasource.  Changes
    # to it will not persist across an instance reboot.  To disable cloud-init's
    # network configuration capabilities, write a file
    # /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg with the following:
    # network: {config: disabled}
    
    network:
        ethernets:
            eth0:
                addresses:
                - 192.168.1.200/32
                nameservers:
                    addresses:
                    - 8.8.8.8
                    search: []
                routes:
                -   on-link: true
                    to: 0.0.0.0/0
                    via: 169.254.0.1
        version: 2
    
    root@u1:~# systemd-resolve --status
    ...
    Link 2 (eth0)
          Current Scopes: DNS
    DefaultRoute setting: yes
           LLMNR setting: yes
    MulticastDNS setting: no
      DNSOverTLS setting: no
          DNSSEC setting: no
        DNSSEC supported: no
      Current DNS Server: 8.8.8.8
             DNS Servers: 8.8.8.8
    
  7. You get the following, which does not make sense, given that earlier you could ping 1.1.1.1:

    $ host www.google.com 8.8.8.8
    ;; connection timed out; no servers could be reached
    

    Can you try to ping to 8.8.8.8?

    $ ping 8.8.8.8
    

    If you can ping 8.8.8.8 but you cannot make name resolutions, then there is some weird filtering between you and Google’s public DNS server. You can try with another one, like 1.1.1.1. You need to be able to make name resolutions with one of those name servers. If your ISP is being weird and only allows name resolutions through its own name server, find out which one it is and use it instead.

  8. root@u1:~# ping 1.1.1.1
    PING 1.1.1.1 (1.1.1.1) 56(84) bytes of data.
    64 bytes from 1.1.1.1: icmp_seq=1 ttl=56 time=880 ms
    64 bytes from 1.1.1.1: icmp_seq=2 ttl=56 time=85.3 ms
    64 bytes from 1.1.1.1: icmp_seq=3 ttl=56 time=28.6 ms

    root@u1:~# ping 8.8.8.8
    PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
    64 bytes from 8.8.8.8: icmp_seq=1 ttl=116 time=44.6 ms
    64 bytes from 8.8.8.8: icmp_seq=2 ttl=116 time=66.7 ms
    64 bytes from 8.8.8.8: icmp_seq=3 ttl=116 time=88.7 ms

    1. Then try running host against 1.1.1.1 as well, as in host www.google.com 1.1.1.1.

      If you still get an error that the name server does not answer name resolutions, then you can use tshark or tcpdump to figure out whether you get any DNS replies at all. You can run tshark or tcpdump in the container and also on the host.

      Also, on the host, try with host www.google.com 8.8.8.8. If you still get an error that no servers could be reached, then your ISP is blocking name resolutions.

    • 512YiB on April 4, 2021 at 00:38

    It did not work for me. The container can’t be started at all with such a profile:

    Name: net-test-01
    Location: none
    Remote: unix://
    Architecture: x86_64
    Created: 2021/04/03 21:41 UTC
    Status: Stopped
    Type: container
    Profiles: default, net-01-ramesses

    Log:

    lxc net-test-01 20210403232713.311 WARN cgfsng - cgroups/cgfsng.c:mkdir_eexist_on_last:1129 - File exists - Failed to create directory "/sys/fs/cgroup/cpuset//lxc.monitor.net-test-01"
    lxc net-test-01 20210403232713.312 WARN cgfsng - cgroups/cgfsng.c:mkdir_eexist_on_last:1129 - File exists - Failed to create directory "/sys/fs/cgroup/cpuset//lxc.payload.net-test-01"
    lxc net-test-01 20210403232713.317 ERROR network - network.c:lxc_setup_l2proxy:2924 - File exists - Failed to add ipv4 dest "192.168.1.200" for network device "lo"
    lxc net-test-01 20210403232713.317 ERROR network - network.c:lxc_create_network_priv:3064 - File exists - Failed to setup l2proxy
    lxc net-test-01 20210403232713.317 ERROR start - start.c:lxc_spawn:1786 - Failed to create the network
    lxc net-test-01 20210403232713.317 ERROR lxccontainer - lxccontainer.c:wait_on_daemonized_start:860 - Received container state "ABORTING" instead of "RUNNING"
    lxc net-test-01 20210403232713.318 ERROR start - start.c:__lxc_start:1999 - Failed to spawn container "net-test-01"
    lxc net-test-01 20210403232713.318 WARN start - start.c:lxc_abort:1013 - No such process - Failed to send SIGKILL via pidfd 31 for process 3348791
    lxc 20210403232713.795 WARN commands - commands.c:lxc_cmd_rsp_recv:126 - Connection reset by peer - Failed to receive response for command "get_state"

    1. ERROR network - network.c:lxc_setup_l2proxy:2924 - File exists - Failed to add ipv4 dest "192.168.1.200" for network device "lo"

      I got this error as well on one of my computers. It talks about the loopback interface (lo) being given the IP address 192.168.1.200, which is weird.
      In my case, I had a bridged interface on the host and was trying to use routed on that bridged interface. You need to check your host’s network configuration; see the commands below.
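
      Two quick things to inspect on the host before retrying (the same commands used elsewhere in these comments); a stale route or proxy entry for 192.168.1.200, left behind by a previous failed start, can cause exactly this kind of "File exists" error:

      $ ip route
      $ ip neigh show proxy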

    • 512YiB on April 4, 2021 at 00:58

    Tried Debian and Fedora. They also didn’t work.
    Unlike Ubuntu 20.04, Debian 10 and Fedora 33 at least booted, but they got “network unreachable”.

    • Sergey on May 4, 2021 at 01:27

    config:
      user.network-config: |
        version: 2
        ethernets:
            eth0:
                addresses:
                - 192.168.100.200/32
                nameservers:
                    addresses:
                    - 8.8.8.8
                    search: []
                routes:
                -   to: 0.0.0.0/0
                    via: 169.254.0.1
                    on-link: true
    description: Default LXD profile
    devices:
      eth0:
        ipv4.address: 192.168.100.200
        nictype: routed
        parent: wlan0
        type: nic
    name: routed_192.168.100.200
    used_by:
    
    
    lxc launch ubuntu:20.04 myrouted --profile default --profile routed_192.168.100.200
    Creating myrouted
    Starting myrouted
    Error: Error setting up reverse path filter: Failed adding reverse path filter rules for instance device "myrouted.eth0" (inet): Failed apply nftables config: Failed to run: nft 
    table inet lxd {
    chain prert.myrouted.eth0 {
        type filter hook prerouting priority -300; policy accept;
        iif "vethab605f49" fib saddr . iif oif missing drop
    }
    }
    
    Error: Could not process rule: No such file or directory
    
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    Try lxc info --show-log local:myrouted for more info
    
    lxc info --show-log local:myrouted
    Name: myrouted
    Location: none
    Remote: unix://
    Architecture: aarch64
    Created: 2021/05/03 21:36 UTC
    Status: Stopped
    Type: container
    Profiles: default, routed_192.168.100.200
    
    Log:
    
    lxc myrouted 20210503213622.158 WARN     cgfsng - cgroups/cgfsng.c:mkdir_eexist_on_last:1129 - File exists - Failed to create directory "/sys/fs/cgroup/cpuset//lxc.monitor.myrouted"
    lxc myrouted 20210503213622.167 WARN     cgfsng - cgroups/cgfsng.c:mkdir_eexist_on_last:1129 - File exists - Failed to create directory "/sys/fs/cgroup/cpuset//lxc.payload.myrouted"
    lxc myrouted 20210503213622.182 WARN     cgfsng - cgroups/cgfsng.c:fchowmodat:1550 - No such file or directory - Failed to fchownat(17, memory.oom.group, 1000000000, 0, AT_EMPTY_PATH | AT_SYMLINK_NOFOLLOW )
    lxc myrouted 20210503213623.711 ERROR    network - network.c:lxc_ovs_delete_port:2353 - Failed to delete "vethab605f49" from openvswitch bridge "wlan0": ovs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory
    lxc myrouted 20210503213623.711 WARN     network - network.c:lxc_delete_network_priv:3236 - Failed to remove port "vethab605f49" from openvswitch bridge "wlan0"
    

    Can you help me with this problem? Your ipvlan tutorial works fine, but I have this problem with routed network 🙁

    1. lxc myrouted 20210503213623.711 ERROR    network - network.c:lxc_ovs_delete_port:2353 - Failed to delete "vethab605f49" from openvswitch bridge "wlan0": ovs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory
      lxc myrouted 20210503213623.711 WARN     network - network.c:lxc_delete_network_priv:3236 - Failed to remove port "vethab605f49" from openvswitch bridge "wlan0"
      

      It talks here about an openvswitch bridge wlan0. Do you use openvswitch at all? (If not, then it’s a bug in LXD.)

    • Sergey on May 4, 2021 at 09:28

    Truth be told, this is my first time hearing about openvswitch. If I use it, I don’t know about it. At least:

    $ ovs-vswitchd --version
    Command 'ovs-vswitchd' not found, but can be installed with:
    sudo apt install openvswitch-switch       # version 2.15.0-0ubuntu3, or
    sudo apt install openvswitch-switch-dpdk  # version 2.15.0-0ubuntu3
    
    1. First of all, I see that you are running on the aarch64 architecture and that the host’s main interface is wlan0. Being wlan0 (and not some name like wlx00112233..) suggests that you are not running Ubuntu. Is that the case?

      I mention Ubuntu above as a way to deduce whether you have installed the snap package of LXD or not.

      Also, the IP address you are trying to set, 192.168.100.200, implies that wlan0 has an IP address of the form 192.168.100.x. Is that the case? If not, LXD would complain, but the error would not talk about openvswitch.

    • Sergey on May 4, 2021 at 12:49

    For some reason I can’t see my comment, so I’ll post it again. If this turns out to be a duplicate, you can delete it.

    Comment:

    Oh, sorry, the main interface is actually wlxXXXXXXXXXXXX (I copied the logs to a shared file, where I accidentally replaced it with wlan0).
    The profile I provided above actually contains the wlx… interface, not wlan0.
    However, with regard to the IP, everything is correct: in my local network, devices have addresses of the form 192.168.100.xxx.
    DHCP on my router works in the range 192.168.100.20 – 192.168.100.199.

    However, if it matters, my system also has a wlan0 interface, but it is disabled (state DOWN). wlan0 is a WiFi module built into the device.
    The wlx… interface on my system is an external USB device.

    The host device itself is a Raspberry Pi 4 Model B (8 GB).
    The operating system installed on it is Ubuntu 21.04 Server (64 bit).

    $ snap list
    Name    Version   Rev    Tracking         Publisher   Notes
    core18  20210309  2002   latest/stable    canonical   base
    lxd     4.13      20227  latest/stable/   canonical   -
    snapd   2.49.2    11584  latest/stable    canonical   snapd
    
    1. Indeed there was a comment that was held back by the filter of WordPress.

      Since you are running the snap package of LXD, those openvswitch programs come from within the LXD package. That is, even if you have not installed any openvswitch programs yourself, the LXD package has them available for the LXD server, in case you make an LXD configuration that uses openvswitch.

      When you run snap info lxd, you will get the following. These are the publicly exposed commands for LXD. As you can see, these are LXD commands.

      ...
      commands:
      - lxd.benchmark
      - lxd.buginfo
      - lxd.check-kernel
      - lxd.lxc
      - lxd.lxc-to-lxd
      - lxd
      - lxd.migrate
      services:
      lxd.activate: oneshot, enabled, inactive
      lxd.daemon: simple, enabled, active
      ...

      The same executables can be found in /snap/bin/.

      But if you go deeper, into the bin directory of LXD at /snap/lxd/current/bin/, you will see many more programs.
      Anything you see there can be used by the LXD programs.

      How can we get a view of what the LXD programs can see? With snap run. Use the following; you get a shell into the innards of LXD and can look around.

      $ snap run --shell lxd
      bash-4.4$ ovsdb-client list-dbs
      ovsdb-client: failed to connect to "unix:/var/run/openvswitch/db.sock" (No such file or directory)

      This means that there is no configuration for openvswitch. You have not been using openvswitch, and your current task does not require it.
      Therefore, the message about openvswitch missing is weird, and most likely some bug.

      You can report this issue at https://github.com/lxc/lxd/issues
      Mention that you are trying to use routed networking, and it fails with an error message about openvswitch.
      Provide the exact command outputs, and remove any privacy-related information.

    • Sergey on May 5, 2021 at 11:26

    Thank you for such a detailed explanation! Everything (about snap/LXD) is exactly as you said.

    Regarding the problem itself: it turned out to be a Raspberry Pi kernel config problem.

    https://github.com/lxc/lxd/issues/8735#issuecomment-831968950

    1. Thanks for writing the bug report and getting this solved.

    • Alexei on May 28, 2021 at 07:38

    Hi, thank you for the wonderful guides. I am having a hard time figuring out what option to use when configuring the networking for my lxd container.

    My host machine has 2 physical network adapters: an Ethernet one and a separate wireless one. My ISP also provides two separate public IP addresses. My host is already set up in such a way that if I connect with the Ethernet connection I get one IP, and if I connect with the WiFi I get the other one.

    I have an lxd container already set up and working perfectly with macvlan using your guide and the ethernet adapter.

    I want a second container that uses the wifi adapter to get the second ip address. I tried using the bridge networking but it seems that I can’t because “Device does not allow enslaving to a bridge.”

    Is routed the way to go? What about the “physical” nic type?

    Thanks!

    1. Thanks!

      From what I understand, your Internet connection is not one of ADSL/VDSL/FTTH, which would require a router, but something else. That something else is likely your ISP providing you with an Ethernet port at your location, and having a WiFi network near you with some authentication method. Is that the case? I am asking this because in most cases your ISP would give you a router with NAT networking (a local network), and all your home devices would get a private IP address of the sort 192.168.x.y or 10.x.y.z.

      When you use macvlan, the upstream router sees two (or more) MAC addresses. If you configured the macvlan container to automatically get an IP address, then the network configuration for the container came from the ISP.

      The point of using bridged, macvlan, ipvlan, or routed is that you get access to additional IP addresses from the network. If you have an appropriate router (such as one that can run OpenWrt), you can set it up to act as a wireless router: the router itself would be a client to the ISP’s WiFi network and would then set up its own WiFi network for any clients.

      Having said that, you can deal with the WiFi connection separately from the wired Internet setup.

      You can use nic=physical so that the WiFi adapter will vanish from the host and appear in a specific container of your choosing. Only one container can use the WiFi adapter in such a setup (see the sketch after the links below).
      See more at https://linuxcontainers.org/lxd/docs/master/instances#nic-physical
      See an example of nic=physical at https://blog.simos.info/using-the-lxd-kali-container-image/
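
      A sketch of attaching a WiFi adapter this way; the interface name wlp3s0 and the container name second are hypothetical here:

      $ lxc config device add second eth1 nic nictype=physical parent=wlp3s0 name=eth1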

      The reason why you cannot use bridged or macvlan on a WiFi interface is that WPA/WPA2 (i.e. any WiFi security) enforces a rule that only a single MAC address can appear on such a connection. With macvlan or bridged, both the MAC address of the host and that of the container would appear.

    • Alexei on June 27, 2021 at 21:14

    Thank you for the in depth reply Simos! My apologies for not seeing it earlier.

    My setup is a bit weird, so I’ll try to make it clearer for those who might stumble upon this post later. My goal was to use the two public IP addresses provided by my ISP: I wanted two containers, each using a separate IP address.

    I have a fibre-to-the-home connection. The optical fibre cable comes into my house and plugs into the Optical Network Terminal (ONT). This device has 4 Ethernet ports. Usually the ISP-provided router is plugged into the first port; I then had my own router plugged into the ISP router, and all my devices connect to that personal router via Ethernet or WiFi.

    To get the second public IP address, I had to get a switch and plug this switch into the ONT. Then I plug the ISP router and my own router into two ports of the switch. This gives my personal router and the ISP router two separate IP addresses.

    Ideally my computer would have two network cards and I’d connect each card to one router to get the containers on my machine access to the two separate IP addresses. Unfortunately I couldn’t do this because my computer is in a different room and I only have one cable leading to it.

    I figured an alternate solution would be to use my motherboard’s onboard wifi to connect to the ISP router and the motherboard’s ethernet port to connect to my own router.

    That’s what I was having trouble with. I think in the end I set up my second container with the bridged mode for the WiFi. I think that I could have also used the physical type in this particular configuration.

    Anyway, thank you again for the wonderful posts!

    • admincmpeng on October 15, 2021 at 01:01

    I have set things up on an Ubuntu 20.04 host per the above, and I can ping the LXD container by IP but not by name. Likewise, the containers can ping the host by IP but not by name. Pinging external sites by name works for both the containers and the host. Is this expected behaviour?

    • Sergey on November 28, 2021 at 10:42

    What if my Ethernet device is named “eth0” (not enp6s0, for example)?
    What should I change in my LXD profile?

    I tried to change only “parent: enp6s0” to “parent: eth0” (in the suggested profile for Debian from the post), but something strange happened: after the first ping test of the container from the host, the container lost its IPv4 address and I was not able to ping it again.

    My host has “Armbian 21.08.5 Focal” and my container has “Ubuntu 20.04”.

    • jarod on January 21, 2022 at 19:46

    Impressive tutorials.
    I have followed this tutorial in an Ubuntu 18.04 server VM (in VirtualBox on Windows); I could create Debian and Ubuntu containers easily using the profiles in this tutorial.
    I recently switched to Kubuntu on my laptop and installed LXC 4.22.
    I couldn’t get DNS working in the Debian container: I can ping IP addresses but can’t ping domain names, so I can’t update or install anything.
    I’m connected over wireless (wlp2s0) to an ADSL router (192.168.1.1), so 192.168.1.xxx addresses are available; I could use 192.168.1.200 to 210 while using it in the VM.
    Thanks a lot

  9. This and your similar post on LXD macvlan are proving to be amazingly helpful to me. I need to assign static Internet IPs to a handful of containers. For context, my Ubuntu server with LXD is running on bare metal in a datacenter. It seems like both macvlan and routed could meet my needs; however, I’d like to go for the simplest to manage and troubleshoot. There will be no WiFi situation, so virtual NICs with different MACs are not a concern. I imagine macvlan and routed have similar overheads and latency, but if you think there is a difference there, please let me know.

