How to add both a private and public network to LXD using cloud-init

When you launch a new LXD container, LXD applies the default LXD profile unless you specify a different profile. By adding configuration to an LXD profile, you can launch containers with specific parameters, such as a particular network configuration.

In the following, we see what the default LXD profile looks like, and then use cloud-init instructions in a new profile to launch a container that has two network interfaces, both getting an IP address over DHCP: one from LXD's DHCP server (private) and the other from the LAN's DHCP server (public).

LXD profiles and the default LXD profile

You can view the list of LXD profiles by running the lxc profile list command.

$ lxc profile list
+----------------------+---------+
|         NAME         | USED BY |
+----------------------+---------+
| default              | 36      |
+----------------------+---------+

Then, you can view the contents of an LXD profile with lxc profile show. There are two devices that will be made available to a newly created container: a network device eth0, which takes an IP address from LXD's private bridge, and a disk device from a previously configured storage pool that happens to be named default. That is, there is storage for the container, and a network device that the container can try to configure with DHCP.

$ lxc profile show default
 config: {}
 description: Default LXD profile
 devices:
   eth0:
     name: eth0
     nictype: bridged
     parent: lxdbr0
     type: nic
   root:
     path: /
     pool: default
     type: disk
 name: default
 used_by:
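
If you are curious which storage pools exist on your LXD installation (the pool named default above was most likely created when you ran lxd init), you can list them; the output shows each pool's name, storage driver and source.

$ lxc storage list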

For further management of LXD profiles, you can use lxc profile as follows. You can create new profiles, copy an existing profile, delete them, edit them, assign a profile to an existing container, and so on.

$ lxc profile
Description:
   Manage profiles

Usage:
   lxc profile [command]

Available Commands:
   add         Add profiles to containers
   assign      Assign sets of profiles to containers
   copy        Copy profiles
   create      Create profiles
   delete      Delete profiles
   device      Manage container devices
   edit        Edit profile configurations as YAML
   get         Get values for profile configuration keys
   list        List profiles
   remove      Remove profiles from containers
   rename      Rename profiles
   set         Set profile configuration keys
   show        Show profile configurations
   unset       Unset profile configuration keys

Global Flags:
       --debug            Show all debug messages
       --force-local      Force using the local unix socket
   -h, --help             Print help
       --project string   Override the source project
   -q, --quiet            Don't show progress information
   -v, --verbose          Show all information messages
       --version          Print version number

Use "lxc profile [command] --help" for more information about a command.

LXD profiles and cloud-init

In the config section of an LXD profile, you can add cloud-init instructions that get passed (currently verbatim) to the container; if the container image has support for cloud-init, it will use them.

We are going to start with the default LXD profile and then add the cloud-init instructions to enable two network interfaces, private and public.

$ lxc profile copy default privatepublicnetwork

Then, edit the new profile by running the following. It opens up a text-mode editor. If you would like a different text editor, set the EDITOR environment variable beforehand to something else, such as nano, vi or emacs.

$ lxc profile edit privatepublicnetwork
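
For example, to use nano for just this one invocation (assuming nano is installed), you can set the variable inline:

$ EDITOR=nano lxc profile edit privatepublicnetwork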

Adapt the following profile content to your own profile. First, see the devices: section. eth0 is attached to the private bridge that is managed by LXD. eth1 is a macvlan interface that uses as parent the (in my case) enp16s0 network interface of the host; see below for one way to find the name of your host's interface. This means that eth1 has access to the LAN, and it will work fine as long as the host's interface is connected to the router with an Ethernet cable. If the host is connected over WiFi, macvlan will definitely not work, and if the host runs inside VMware/VirtualBox/Hyper-V, you have to deal with the intricacies of each virtualization platform.
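
One way to find the name of your host's wired interface (enp16s0 in my case; yours will likely differ) is to check which interface carries the host's default route. The sample output below is just an illustration from a host on a 192.168.1.0/24 LAN.

$ ip route show default
default via 192.168.1.1 dev enp16s0 ...
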
Then, the user.network-config key under the config: section contains the cloud-init instructions that get passed verbatim to the container. We specify version: 1 so that it works even on Ubuntu 16.04 container images (they do not support version 2). There are two physical interfaces, eth0 and eth1, both getting their network configuration by asking for it over DHCP.

$ lxc profile show privatepublicnetwork 
config:
  user.network-config: |
    version: 1
    config:
      - type: physical
        name: eth0
        subnets:
          - type: dhcp
            ipv4: true
      - type: physical
        name: eth1
        subnets:
          - type: dhcp
            ipv4: true
description: LXD profile with private and public networks
devices:
  eth0:
    name: eth0
    nictype: bridged
    parent: lxdbr0
    type: nic
  eth1:
    name: eth1
    nictype: macvlan
    parent: enp16s0
    type: nic
  root:
    path: /
    pool: default
    type: disk
name: privatepublicnetwork
used_by:
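
For newer container images that support it (Ubuntu 18.04 and later), the same instructions could also be written in the netplan-style version: 2 format. A rough equivalent sketch of the user.network-config value:

user.network-config: |
  version: 2
  ethernets:
    eth0:
      dhcp4: true
    eth1:
      dhcp4: true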

Launching a container with two network interfaces

We launch a container with this privatepublicnetwork profile.

$ lxc launch ubuntu:18.04 mycontainer --profile privatepublicnetwork
Creating mycontainer
Starting mycontainer
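
Before getting a shell, you can optionally verify from the host that both interfaces received an address. Once the container has finished booting, the IPV4 column of the following command should show one address for eth0 (from lxdbr0) and one for eth1 (from the LAN).

$ lxc list mycontainer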

Then, get a shell into the container.

$ lxc exec mycontainer -- sudo --user ubuntu --login

Let's see what we have in terms of network configuration. eth0 got an IP address from the LXD-managed network. eth1 got an IP address from the LAN.

ubuntu@mycontainer:~$ ifconfig 
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
         inet 10.10.10.30  netmask 255.255.255.0  broadcast 10.10.10.255
         inet6 fe80::216:3eff:fe81:bca6  prefixlen 64  scopeid 0x20
         ether 00:16:3e:81:bc:a6  txqueuelen 1000  (Ethernet)
         RX packets 423  bytes 541699 (541.6 KB)
         RX errors 0  dropped 0  overruns 0  frame 0
         TX packets 258  bytes 20599 (20.5 KB)
         TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

eth1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
         inet 192.168.1.10  netmask 255.255.255.0  broadcast 192.168.1.255
         inet6 fe80::216:3eff:fe7f:c7a7  prefixlen 64  scopeid 0x20
         ether 00:16:3e:7f:c7:a7  txqueuelen 1000  (Ethernet)
         RX packets 14  bytes 1706 (1.7 KB)
         RX errors 0  dropped 0  overruns 0  frame 0
         TX packets 22  bytes 2518 (2.5 KB)
         TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
         inet 127.0.0.1  netmask 255.0.0.0
         inet6 ::1  prefixlen 128  scopeid 0x10
         loop  txqueuelen 1000  (Local Loopback)
         RX packets 19  bytes 1604 (1.6 KB)
         RX errors 0  dropped 0  overruns 0  frame 0
         TX packets 19  bytes 1604 (1.6 KB)
         TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

Here is the routing table.

$ route
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
default         myhost          0.0.0.0         UG    100    0        0 eth0
default         OpenWRT.Home    0.0.0.0         UG    100    0        0 eth1
10.10.10.0      0.0.0.0         255.255.255.0   U     0      0        0 eth0
myhost          0.0.0.0         255.255.255.255 UH    100    0        0 eth0
192.168.1.0     0.0.0.0         255.255.255.0   U     0      0        0 eth1
OpenWRT.Home    0.0.0.0         255.255.255.255 UH    100    0        0 eth1
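
To double-check that cloud-init consumed the instructions, you can also run the following inside the container (a sketch; the exact netplan filename may differ between images). The first command reports whether cloud-init completed, and the second shows the network configuration it rendered.

ubuntu@mycontainer:~$ cloud-init status
ubuntu@mycontainer:~$ cat /etc/netplan/50-cloud-init.yaml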

Conclusion

In this post we saw how to set up cloud-init in an LXD profile in order to configure two network interfaces in an LXD container. You can read the cloud-init documentation to find out what else is possible for the network configuration of LXD containers.


11 comments

  1. Haven't as yet managed to get it to work, as the YAML editor keeps removing all the extra text required to make it work, and I can't really see where the error is (if any). Even doing a cut-and-paste job with your config, which ought to work, doesn't (my eth0 is different of course). I am using the snap version of LXD on Debian 10. Interested to know your views on using Vagrant to manage LXC containers rather than LXD.

    1. You can choose between “nano” and “vim”. I think “vim” needs some help to avoid using tabs instead of spaces.

      If you create your final profile in a text editor of your choosing (for example, gedit), you can also do this:

      lxc profile create myprofile

      cat myprofile.txt | lxc profile edit myprofile

      By piping to “lxc profile edit”, you can push in the text file.

      I tried Vagrant last year in an effort to write a tutorial on using it with LXD. I noticed a few usability issues just for getting Vagrant to work in Ubuntu. Both the Vagrant LXC and LXD plugins are somewhat old. Are they being actively developed?

        • said on January 30, 2020 at 15:12

        Hey Simos,
        First of all, great blog thanks!

        Have you tried to assign eth0 a static IPv4 using this procedure? I have, but it doesn't work:

        config:
          user.network-config: |
            version: 1 # as explained by Simos, for backward compatibility
            config:
              - type: physical
                name: eth0
                subnets:
                  - type: static
                    ipv4: true
                  - type: static
                    address: 192.168.123.10
                    netmask: 255.255.255.0
                    gateway: 192.168.123.1
                    control: auto # not sure what this does.
              - type: nameserver
                address: 192.168.123.2

        Thanks

      1. Hi!

        I have tried setting a static IP address as you describe and the interfaces got their static IP addresses correctly.

        You mention that it did not work for you. Did your interfaces not get their given IP addresses, or was the container not able to reach the Internet? To reach the Internet, you need to add a route as well.
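
        For reference, a rough sketch of a version: 1 static configuration (the addresses are just examples for that LAN; the gateway and nameserver entries are what let the container reach the Internet):

        config:
          user.network-config: |
            version: 1
            config:
              - type: physical
                name: eth0
                subnets:
                  - type: static
                    address: 192.168.123.10
                    netmask: 255.255.255.0
                    gateway: 192.168.123.1
              - type: nameserver
                address: 192.168.123.2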

    • bhsi on April 19, 2020 at 16:39

    How can I apply the privatepublicnetwork profile to an existing container?
    I tried with assign, and it creates the interface, but DHCP doesn't assign an IP.

    1. Hi!

      It worked for me. See the following actual example. Note that we use cloud-init to instruct the container to enable DHCP on both interfaces. By default, Ubuntu container images are set up to enable DHCP for eth0 only. cloud-init runs by default on the first boot of a computer/container. It is a bit messy to get it to run a second time; therefore, we edit the netplan configuration to let the container use DHCP on eth1 as well. You may want to look online for instructions on how to get cloud-init to run a second time (one possible approach is sketched after the netplan file below).

      $ lxc launch ubuntu: mycontainer
      Creating mycontainer
      Starting mycontainer
      $ lxc profile assign mycontainer privatepublicnetwork 
      Profiles privatepublicnetwork applied to mycontainer
      $ lxc ubuntu mycontainer
      ubuntu@mycontainer:~$ sudo vi /etc/netplan/50-cloud-init.yaml 
      ubuntu@mycontainer:~$ logout
      $ lxc restart mycontainer
      $ lxc ubuntu mycontainer
      ubuntu@mycontainer:~$ ifconfig 
      ...
      

      Here is how the new netplan file should look:

      ubuntu@mycontainer:~$ cat /etc/netplan/50-cloud-init.yaml 
      # This file is generated from information provided by the datasource.  Changes
      # to it will not persist across an instance reboot.  To disable cloud-init's
      # network configuration capabilities, write a file
      # /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg with the following:
      # network: {config: disabled}
      network:
          version: 2
          ethernets:
              eth0:
                  dhcp4: true
              eth1:
                  dhcp4: true
      ubuntu@mycontainer:~$ 
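
      As an alternative to editing the netplan file by hand, a rough sketch of getting cloud-init to run a second time (cloud-init clean removes the per-instance state so that cloud-init reruns on the next boot, which may be enough for it to render the new network configuration):

      ubuntu@mycontainer:~$ sudo cloud-init clean --logs
      ubuntu@mycontainer:~$ logout
      $ lxc restart mycontainer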
      
        • bhsi on May 7, 2020 at 19:06

        Thanks, it works nicely 🙂
        By the way, a small correction: the lxc exec command has the profile name instead of the container name…

      1. Thanks!

        If you are referring to the lxc ubuntu mycontainer, it is a reference to the aliases listed at https://blog.simos.info/using-command-aliases-in-lxd-to-exec-a-shell/

        • bhsi on May 11, 2020 at 14:43

        I am referring to when you get a shell under the heading Launching a container with two network interfaces in the above post.

        Then, get a shell into the container.

        $ lxc exec privatepublicnetwork -- sudo --user ubuntu --login
        Here I think it should be the container name (mycontainer) instead of the profile name (privatepublicnetwork).

      2. You are absolutely right. I corrected the mistake. Thanks for the catch.

    • Guillaume Yziquel on February 21, 2024 at 00:41

    I tried to follow that tutorial with incus instead of lxd.

    The /var/lib/cloud/instance/cloud-config.txt has the two interfaces eth0 and eth1 declared as ipv4 dhcp. However, the /etc/netplan/50-cloud-init.yaml only has enp5s0 declared as dhcp. Only one address. And the version is 1 in cloud-config.txt and 2 in netplan.

    So I’m likely not understanding something correctly, but it seems to me that the cloud-init to netplan conversion is not going ok.

    I’m using images:ubuntu/23.10/cloud.
