How to add both a private and public network to LXD using cloud-init

When you launch a new LXD container, LXD applies the default profile unless you specify a different one. By adding configuration to a LXD profile, you can launch containers with specific parameters, such as a particular network configuration.

In the following we see what this default LXD profile looks like, and then use cloud-init instructions in a new profile to launch a container that has two network interfaces, both getting an IP address over DHCP: one from LXD’s DHCP server (private) and the other from the LAN’s DHCP server (public).

LXD profiles and the default LXD profile

You can view the list of LXD profiles by running the lxc profile list command.

$ lxc profile list
+----------------------+---------+
|         NAME         | USED BY |
+----------------------+---------+
| default              | 36      |
+----------------------+---------+

Then, you can view the contents of a LXD profile with lxc profile show. The default profile makes two devices available to a newly created container: a network device eth0, which takes an IP address from LXD’s private bridge, and a disk device from a previously configured storage pool that happens to be named default. That is, the container gets storage, and a network device that it can try to configure with DHCP.

$ lxc profile show default
 config: {}
 description: Default LXD profile
 devices:
   eth0:
     name: eth0
     nictype: bridged
     parent: lxdbr0
     type: nic
   root:
     path: /
     pool: default
     type: disk
 name: default
 used_by:

For further management of LXD profiles, you can use the lxc profile command. You can create new profiles, copy existing ones, delete them, edit them, assign a profile to an existing container, and so on.

$ lxc profile
Description:
   Manage profiles

Usage:
   lxc profile [command]

Available Commands:
   add         Add profiles to containers
   assign      Assign sets of profiles to containers
   copy        Copy profiles
   create      Create profiles
   delete      Delete profiles
   device      Manage container devices
   edit        Edit profile configurations as YAML
   get         Get values for profile configuration keys
   list        List profiles
   remove      Remove profiles from containers
   rename      Rename profiles
   set         Set profile configuration keys
   show        Show profile configurations
   unset       Unset profile configuration keys

Global Flags:
       --debug            Show all debug messages
       --force-local      Force using the local unix socket
   -h, --help             Print help
       --project string   Override the source project
   -q, --quiet            Don't show progress information
   -v, --verbose          Show all information messages
       --version          Print version number

Use "lxc profile [command] --help" for more information about a command.
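For example, here is how the assign and add subcommands look in practice. This is a quick sketch; mycontainer stands in for the name of one of your existing containers.

```shell
# Replace the container's profile list with these two profiles.
# The list you give to "assign" is the complete new set of profiles.
lxc profile assign mycontainer default,privatepublicnetwork

# Alternatively, append a single profile to the container's existing list.
lxc profile add mycontainer privatepublicnetwork
```

Note that assign replaces the whole profile list, while add appends to it.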

LXD profiles and cloud-init

In the config section of a LXD profile, you can add cloud-init instructions that get passed (currently verbatim) to the container; if the container image has support for cloud-init, it will apply them.

We are going to start with the default LXD profile and then add the cloud-init instructions to enable two network interfaces, private and public.

$ lxc profile copy default privatepublicnetwork

Then, edit the new profile by running the following command. It opens a text-mode editor. If you would like a different text editor, set the EDITOR environment variable beforehand to something else, such as nano, vi, or emacs.

$ lxc profile edit privatepublicnetwork
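If you prefer to prepare the profile in a file with an editor of your choosing, lxc profile edit also accepts YAML on standard input, so you can pipe the file in. Here, myprofile.yaml is a hypothetical filename for your prepared profile.

```shell
# Use a specific editor for just this one invocation.
EDITOR=nano lxc profile edit privatepublicnetwork

# Or skip the interactive editor entirely and push in a prepared YAML file.
cat myprofile.yaml | lxc profile edit privatepublicnetwork
```

Piping avoids editor issues such as tabs being inserted instead of spaces, which break the YAML.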

Adapt the following profile content to your own profile. First, look at the devices: section. eth0 is attached to the private bridge that is managed by LXD. eth1 is a macvlan interface whose parent is (in my case) the enp16s0 network interface of the host. This means that eth1 has access to the LAN, and it will work fine if the host interface is connected to the router with an Ethernet cable. Macvlan definitely does not work over a WiFi connection, and if you use VMware/VirtualBox/Hyper-V as the host, you have to deal with the intricacies of each virtualization platform.
Then, user.network-config under the config: section contains the cloud-init instructions that get passed verbatim to the container. We specify version: 1 so it works even on Ubuntu 16.04 container images (they do not support version 2). There are two physical interfaces, eth0 and eth1, both configured to request their network settings over DHCP.

$ lxc profile show privatepublicnetwork 
config:
  user.network-config: |
    version: 1
    config:
      - type: physical
        name: eth0
        subnets:
          - type: dhcp
            ipv4: true
      - type: physical
        name: eth1
        subnets:
          - type: dhcp
            ipv4: true
description: LXD profile with private and public networks
devices:
  eth0:
    name: eth0
    nictype: bridged
    parent: lxdbr0
    type: nic
  eth1:
    name: eth1
    nictype: macvlan
    parent: enp16s0
    type: nic
  root:
    path: /
    pool: default
    type: disk
name: privatepublicnetwork
used_by:
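To find the name of the host interface to use as the macvlan parent (enp16s0 above), you can list the host’s interfaces and check which one carries the default route. A quick sketch:

```shell
# List the host's network interfaces, one per line.
ip -brief link

# Show which interface the host's default route goes through;
# that interface is usually the right macvlan parent.
ip route show default
```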

Launching a container with two network interfaces

We launch a container with this privatepublicnetwork profile.

$ lxc launch ubuntu:18.04 mycontainer --profile privatepublicnetwork
Creating mycontainer
Starting mycontainer
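Before getting a shell, you can verify from the host that the container received both IP addresses; lxc list shows them in the IPV4 column. The actual addresses depend on your LXD bridge and your LAN.

```shell
# Both the private (lxdbr0) address and the LAN address
# should appear in the IPV4 column once DHCP completes.
lxc list mycontainer
```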

Then, get a shell into the container.

$ lxc exec mycontainer -- sudo --user ubuntu --login

Let’s see what we have in terms of network configuration. eth0 got an IP address from the LXD-managed network. eth1 got an IP address from the LAN.

ubuntu@mycontainer:~$ ifconfig 
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
         inet 10.10.10.30  netmask 255.255.255.0  broadcast 10.10.10.255
         inet6 fe80::216:3eff:fe81:bca6  prefixlen 64  scopeid 0x20<link>
         ether 00:16:3e:81:bc:a6  txqueuelen 1000  (Ethernet)
         RX packets 423  bytes 541699 (541.6 KB)
         RX errors 0  dropped 0  overruns 0  frame 0
         TX packets 258  bytes 20599 (20.5 KB)
         TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

eth1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
         inet 192.168.1.10  netmask 255.255.255.0  broadcast 192.168.1.255
         inet6 fe80::216:3eff:fe7f:c7a7  prefixlen 64  scopeid 0x20<link>
         ether 00:16:3e:7f:c7:a7  txqueuelen 1000  (Ethernet)
         RX packets 14  bytes 1706 (1.7 KB)
         RX errors 0  dropped 0  overruns 0  frame 0
         TX packets 22  bytes 2518 (2.5 KB)
         TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
         inet 127.0.0.1  netmask 255.0.0.0
         inet6 ::1  prefixlen 128  scopeid 0x10<host>
         loop  txqueuelen 1000  (Local Loopback)
         RX packets 19  bytes 1604 (1.6 KB)
         RX errors 0  dropped 0  overruns 0  frame 0
         TX packets 19  bytes 1604 (1.6 KB)
         TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

Here is the routing table.

ubuntu@mycontainer:~$ route
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
default         myhost          0.0.0.0         UG    100    0        0 eth0
default         OpenWRT.Home    0.0.0.0         UG    100    0        0 eth1
10.10.10.0      0.0.0.0         255.255.255.0   U     0      0        0 eth0
myhost          0.0.0.0         255.255.255.255 UH    100    0        0 eth0
192.168.1.0     0.0.0.0         255.255.255.0   U     0      0        0 eth1
OpenWRT.Home    0.0.0.0         255.255.255.255 UH    100    0        0 eth1
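Note that the container ends up with two default routes with the same metric. If you want to check which interface the kernel actually picks for outbound traffic, ip route get shows the selected route for a given destination; 8.8.8.8 here is just an arbitrary external address.

```shell
# Ask the kernel which route (and therefore which interface)
# it would use to reach an external address.
ip route get 8.8.8.8
```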

Conclusion

In this post we saw how to add cloud-init instructions to a LXD profile in order to set up two network interfaces in a LXD container. You can read the cloud-init documentation to find out what else is possible for the network configuration of LXD containers.

Permanent link to this article: https://blog.simos.info/how-to-add-both-a-private-and-public-network-to-lxd-using-cloud-init/

2 comments

  1. I haven’t yet managed to get it to work: the YAML editor keeps removing all the extra text required to make it work, and I can’t really see where the error is (if any). Even doing a cut-and-paste of your config, which ought to work, doesn’t (my eth0 is different of course). I am using the snap version of LXD on Debian 10. Interested to know your views on using Vagrant to manage LXC containers rather than LXD.

    1. You can choose between “nano” and “vim”. I think “vim” needs some help to avoid using tabs instead of spaces.

      If you create your final profile in a text editor of your choosing (for example, gedit), you can also do this:

      lxc profile create myprofile

      cat myprofile.txt | lxc profile edit myprofile

      By piping to “lxc profile edit”, you can push in the text file.

      I tried Vagrant last year in an effort to write a tutorial on using it with LXD. I noticed a few usability issues just for getting Vagrant to work in Ubuntu. Both the Vagrant LXC and LXD plugins are somewhat old. Are they being actively developed?
