Configuring LXD on an AMD EPYC server at

This is the third post in the series.

  1. A closer look at AMD EPYC baremetal servers at
  2. Booting up the AMD EPYC baremetal server at
  3. Configuring LXD on the AMD EPYC baremetal server at (this post)
  4. Benchmarking LXD on an AMD EPYC server at

We have booted the AMD EPYC server and we are about to get a shell through SSH. From the management page, we get the IP address and the SSH command. No non-root account was auto-generated; we would have to use cloud-init for that.

Clicking on “SSH ACCESS” (not shown because it’s behind the popup) we get the SSH command.
$ ssh root@
The authenticity of host ' (' can't be established.
ECDSA key fingerprint is SHA256:a5fGzd19whai5fMew6plOEKkMaae8WN6QHFJpJs3TZM.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '' (ECDSA) to the list of known hosts.
Welcome to Ubuntu 18.04 LTS (GNU/Linux 4.15.0-20-generic x86_64)

* Documentation:
* Management:
* Support:

Last login: Tue Oct 2 10:10:10 2018

We update the package list and have a look at the available updates.

root@myserver:~# apt update
Get:1 bionic-security InRelease [83.2 kB]
Get:7 bionic-security/main Translation-en [68.3 kB]
Get:8 bionic-security/universe amd64 Packages [83.9 kB]
Get:18 bionic-updates/multiverse Translation-en [3124 B]
Fetched 1871 kB in 2s (1030 kB/s)
Reading package lists… Done
Building dependency tree
Reading state information… Done
118 packages can be upgraded. Run 'apt list --upgradable' to see them.
root@myserver:~# apt list --upgradeable
Listing… Done
118 packages can be upgraded. Run 'apt list --upgradable' to see them.

There are quite a few packages that can be upgraded. Let’s upgrade them.

root@myserver:~# apt upgrade

If you are prompted about what to do with the grub configuration, select the option to keep the local version currently installed.

An issue about ZFS and the Linux kernel

There was an issue relating to ZFS in the original version of the Linux kernel in Ubuntu 18.04. The issue has been fixed in the updated kernels, but the current Ubuntu image still defaults to an older kernel version. Therefore, check the running kernel as shown below; if the Linux kernel version is lower than 4.15.0-33-generic, then you need to upgrade. The example below shows a kernel that is old and still has the bug.

root@myserver:~# uname -a
Linux myserver 4.15.0-20-generic #21-Ubuntu SMP Tue Apr 24 06:16:15 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux

To upgrade the Linux kernel, run the following command,

root@myserver:~# apt install -y linux-image-generic

Then, reboot the server.
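The version check above can also be scripted. Below is a small sketch (the `needs_upgrade` function name is ours) that uses `sort -V` for a version-aware comparison against the 4.15.0-33 threshold mentioned above:

```shell
#!/bin/sh
# Return success (0) if the given kernel version is older than 4.15.0-33,
# the first version with the ZFS fix. sort -V sorts version numbers
# numerically, so the older version sorts first.
needs_upgrade() {
    required="4.15.0-33"
    [ "$1" != "$required" ] && \
        [ "$(printf '%s\n' "$required" "$1" | sort -V | head -n 1)" = "$1" ]
}

running="$(uname -r | sed 's/-generic$//')"   # e.g. 4.15.0-20
if needs_upgrade "$running"; then
    echo "Kernel $running is older than 4.15.0-33: upgrade needed"
else
    echo "Kernel $running is recent enough"
fi
```

Run it before and after the `apt install -y linux-image-generic` step to confirm the reboot picked up the new kernel.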

Installing LXD

We have chosen Ubuntu 18.04, which means that it comes with LXD 3.0.2 in the deb repositories. There is also a snap package but, for now, deb package it is.

NOTE: At the time of writing this, LXD 3.0.2 is only available in the bionic-proposed repository. It should make it to the main repository in a day or two. The issue with the older versions (LXD 3.0.0 and 3.0.1) is that they do not perform well when creating hundreds of containers. Do not install LXD 3.0.0 or 3.0.1; if you are trying this post before the final release of LXD 3.0.2 in the bionic main repository, then use instead the snap package of LXD.

Let’s verify the LXD version and see whether the package is already installed. We can see that there is the initial LXD 3.0.0 that was first released with Ubuntu 18.04, and then an update to LXD 3.0.2 as part of the Ubuntu 18.04 updates (bionic-updates). When we install LXD, we get the updated 3.0.2 version.

root@myserver:~# apt policy lxd
Installed: (none)
Candidate: 3.0.2-0ubuntu1~18.04.1
Version table:
3.0.2-0ubuntu1~18.04.1 500
500 bionic-updates/main amd64 Packages
3.0.0-0ubuntu4 500
500 bionic/main amd64 Packages

We can now install LXD (the deb package). The installer instructs us to initialize LXD with lxd init; we are going to do this after we have created the non-root account.

root@myserver:~# apt install lxd
Setting up lxd (3.0.2-0ubuntu1~18.04.1) …
Created symlink /etc/systemd/system/ → /lib/systemd/system/lxd-containers.service.
Created symlink /etc/systemd/system/ → /lib/systemd/system/lxd.socket.
Setting up lxd dnsmasq configuration.
To go through the initial LXD configuration, run: lxd init
Processing triggers for systemd (237-3ubuntu10) …
Processing triggers for libc-bin (2.27-3ubuntu1) …

By installing LXD, the installer created the Unix group lxd. The non-root account needs to be a member of this group in order to fully access the LXD server. We take a note of the group lxd and are ready to create the non-root account.

Creating a non-root account

We are going to create a non-root account and use that account for all further tasks. A new account requires a password, and it is important to check first whether the SSH server forbids password authentication. Indeed it does: only public-key authentication is allowed, as the following attempt shows.

root@myserver:~# ssh localhost
The authenticity of host 'localhost (::1)' can't be established.
ECDSA key fingerprint is SHA256:a5fGzd19whai5fMew6plOEKkMaae8WN6QHFJpJs3TZM.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'localhost' (ECDSA) to the list of known hosts.
root@localhost: Permission denied (publickey).

Does /etc/ssh/sshd_config corroborate this? It does. The default in the OpenSSH server is to allow PasswordAuthentication, but in this configuration it has been explicitly set to no.

root@myserver:~# grep PasswordAuthentication /etc/ssh/sshd_config
# PasswordAuthentication yes
#  PasswordAuthentication. Depending on your PAM configuration,
#  PAM authentication, then enable this but set PasswordAuthentication
PasswordAuthentication no
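When a config file mixes commented-out defaults with real settings like this, eyeballing the `grep` output can mislead. A small sketch (the `effective_password_auth` helper is ours) that skips comment lines; note that sshd uses the first obtained value for a keyword, hence `head`. Asking the daemon itself with `sudo sshd -T` is the authoritative check.

```shell
#!/bin/sh
# Extract the effective PasswordAuthentication value from sshd_config
# text, ignoring commented-out lines. sshd uses the first occurrence
# of a keyword, so we take the first match.
effective_password_auth() {
    printf '%s\n' "$1" \
        | grep -i '^[[:space:]]*PasswordAuthentication[[:space:]]' \
        | head -n 1 | awk '{print $2}'
}

cfg='# PasswordAuthentication yes
PasswordAuthentication no'
effective_password_auth "$cfg"    # prints: no
```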

We can now create the non-root account. Let’s call it ubuntu. We copy over the SSH public key that is found in root’s home directory, fix the permissions, and finally add the new ubuntu account to the sudo group (so that it can sudo to root) and the lxd group (so that it can access the LXD server over the Unix socket).

root@myserver:~# adduser ubuntu
Adding user `ubuntu' …
Adding new group `ubuntu' (1000) …
Adding new user `ubuntu' (1000) with group `ubuntu' …
Creating home directory `/home/ubuntu' …
Copying files from `/etc/skel' …
Enter new UNIX password: *******
Retype new UNIX password: *******
passwd: password updated successfully
Changing the user information for ubuntu
Enter the new value, or press ENTER for the default
Full Name []:
Room Number []:
Work Phone []:
Home Phone []:
Other []:
Is the information correct? [Y/n] Y
root@myserver:~# cp -a ~/.ssh/ ~ubuntu/
root@myserver:~# chown -R ubuntu:ubuntu ~ubuntu/.ssh/
root@myserver:~# usermod -a -G sudo,lxd ubuntu

Here are those last three commands again, in a form that is easy to copy and paste.

cp -a ~/.ssh/ ~ubuntu/
chown -R ubuntu:ubuntu ~ubuntu/.ssh/
usermod -a -G sudo,lxd ubuntu
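Note that group membership only takes effect on the next login. A quick sanity check that the account ended up in the right groups can be scripted; below is a sketch (the `in_group` helper is ours, and the account name ubuntu matches what we created above):

```shell
#!/bin/sh
# Check whether a group appears in a space-separated group list,
# matching whole words only (so "lxd" does not match "lxd-foo").
in_group() {
    case " $1 " in
        *" $2 "*) return 0 ;;
        *)        return 1 ;;
    esac
}

# id -nG prints the group names of the given account.
groups="$(id -nG ubuntu 2>/dev/null || true)"
for g in sudo lxd; do
    if in_group "$groups" "$g"; then
        echo "ubuntu is in $g"
    else
        echo "ubuntu is NOT in $g"
    fi
done
```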

Finally, let’s verify that the new ubuntu account can connect to the server over SSH. It works, and we are connected.

root@myserver:~# logout
$ ssh ubuntu@
Welcome to Ubuntu 18.04 LTS (GNU/Linux 4.15.0-20-generic x86_64)

Preparing to initialize LXD

When we initialize LXD, we select the location to store the containers and also the network settings. First we deal with the storage, then with the networking.

Planning for the LXD storage

Here are the available disks, as listed by the lsblk command. There are four disks: two at 500GB and two at 120GB. Only one of them, /dev/sdd, is in use.

ubuntu@myserver:~$ lsblk
NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda      8:0    0 447.1G  0 disk
sdb      8:16   0 447.1G  0 disk
sdc      8:32   0 111.8G  0 disk
sdd      8:48   0 111.8G  0 disk
├─sdd1   8:49   0   512M  0 part /boot/efi
├─sdd2   8:50   0   1.9G  0 part [SWAP]
└─sdd3   8:51   0 109.4G  0 part /

NOTE: The order of the disks might be different for you. In the above case, the two big disks are sda and sdb. One time when I launched a new server, the two big disks were sdb and sdc.

Let’s get some idea about their performance, using the hdparm utility. From the previous post with the technical details of the AMD EPYC baremetal server (c2.medium.x86), we saw that it has two 500GB SSD disks and two 120GB M.2 (SATA interface) storage devices. The measurements below show that they are roughly equal in terms of performance.

ubuntu@myserver:~$ sudo apt install hdparm
ubuntu@myserver:~$ sudo hdparm -Tt /dev/sda
Timing cached reads: 20610 MB in 2.00 seconds = 10322.25 MB/sec
Timing buffered disk reads: 1632 MB in 3.00 seconds = 543.41 MB/sec
ubuntu@myserver:~$ sudo hdparm -Tt /dev/sdc
Timing cached reads: 19066 MB in 2.00 seconds = 9548.03 MB/sec
Timing buffered disk reads: 1474 MB in 3.00 seconds = 491.20 MB/sec

What we will do is take those two 500GB SSD disks (/dev/sda and /dev/sdb) and use them in LXD in some form of RAID setup. There are several RAID levels: for example, joining the two disks together so that they appear as a single 1TB disk (RAID 0), or mirroring the two disks so that if one fails, the other can continue working (RAID 1). Read more about these in the RAID levels article on Wikipedia. Which one should we choose?

For this post, we are selecting RAID 10, a software RAID using the Linux MD driver. We use the mdadm utility to create the software RAID. The --create and --verbose options create the RAID and add some verbosity, respectively. The RAID device is /dev/md0, and that is the device we will be using when we configure LXD. We are using RAID level 10, so we specify it with --level=10. Finally, we specify the number of devices (2) and list them (/dev/sda and /dev/sdb).

ubuntu@myserver:~$ sudo mdadm --create --verbose /dev/md0 --level=10 --raid-devices=2 /dev/sda /dev/sdb
mdadm: layout defaults to n2
mdadm: layout defaults to n2
mdadm: chunk size defaults to 512K
mdadm: size set to 468719104K
mdadm: automatically enabling write-intent bitmap on large array
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
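With only two member devices, the near-2 layout that mdadm defaulted to above ("layout defaults to n2") keeps two copies of every block, so the array effectively behaves like a mirror and the usable capacity is half of the raw total. A quick sketch of the arithmetic (the function name is ours):

```shell
#!/bin/sh
# Usable capacity (in KiB) of a RAID 10 near-2 array: every block is
# stored twice, so usable = (devices * per-device size) / 2 copies.
raid10_n2_usable_k() {
    devices=$1
    per_device_k=$2
    echo $(( devices * per_device_k / 2 ))
}

raid10_n2_usable_k 2 468719104   # two ~447GiB devices: half the raw total
```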

Note that we did not create partitions in the /dev/sda and /dev/sdb disks. We are using the ZFS filesystem, and LXD will set it up for us.

The stock Ubuntu Linux kernel on the server already has ZFS support (Ubuntu kernels ship with the ZFS modules). What might be missing are the ZFS userspace utilities, and LXD needs them. We install the zfsutils-linux package, which provides them.

ubuntu@myserver:~$ sudo apt install -y zfsutils-linux

With the ZFS utilities installed, we can move to the next section on networking.

Planning for the networking

Let’s see what’s up with the networking. We use route -n to check the IP routing table.

ubuntu@myserver:~$ route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
                            UG    0      0   0   bond0
                            UG    0      0   0   bond0
                            U     0      0   0   bond0
                            U     0      0   0   bond0

There is a single interface, bond0, the already-bonded network interface. These are 2 x 10Gbps bonded ports, which is cool.

However, the whole network is already in use, and it is that network that LXD will try to use by default when we initialize it. LXD will see that there is no available 10.x.x.x subnet, and will not be able to configure the network. The solution is to specify to LXD a different reserved subnet, big enough for the number of containers we will be creating. Our choice results in 65535 IP addresses.
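As a sanity check on subnet sizing, an IPv4 prefix of length N contains 2^(32-N) addresses, minus the network and broadcast addresses. A small sketch (the helper name is ours):

```shell
#!/bin/sh
# Usable host addresses for an IPv4 prefix of length N:
# 2^(32-N) total, minus the network and broadcast addresses.
usable_hosts() {
    echo $(( (1 << (32 - $1)) - 2 ))
}

usable_hosts 16   # a /16 leaves 65534 usable addresses
usable_hosts 24   # a /24 leaves 254
```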

We are ready to initialize LXD.

Initializing LXD

The command lxd init initializes LXD. It is important to run this as root (i.e. with sudo); if you don’t, you will get a series of cryptic errors.

We ran the command sudo lxd init as shown below. We accept the default values for most settings; the answers that we typed in are shown after the respective prompts. Specifically, we use an existing block device for the ZFS pool (/dev/md0). If you are not prompted to select zfs, then you have omitted to install the zfsutils-linux package (in that case, press Ctrl+C, install it, and run lxd init again). For the IPv4 address, we type the subnet that we reserved earlier.

ubuntu@myserver:~$ sudo lxd init
Would you like to use LXD clustering? (yes/no) [default=no]:
Do you want to configure a new storage pool? (yes/no) [default=yes]:
Name of the new storage pool [default=default]:
Name of the storage backend to use (dir, zfs) [default=zfs]:
Create a new ZFS pool? (yes/no) [default=yes]:
Would you like to use an existing block device? (yes/no) [default=no]: yes
Path to the existing block device: /dev/md0
Would you like to connect to a MAAS server? (yes/no) [default=no]:
Would you like to create a new local network bridge? (yes/no) [default=yes]:
What should the new bridge be called? [default=lxdbr0]:
What IPv4 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]:
Would you like LXD to NAT IPv4 traffic on your bridge? [default=yes]: yes
What IPv6 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]:
Would you like LXD to be available over the network? (yes/no) [default=no]:
Would you like stale cached images to be updated automatically? (yes/no) [default=yes]
Would you like a YAML "lxd init" preseed to be printed? (yes/no) [default=no]:

And that’s it. LXD has been configured.
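If you ever need to repeat this setup, the same answers can be fed to LXD non-interactively with a preseed. Below is a sketch, assuming the pool name default on /dev/md0 and the bridge lxdbr0 as configured above; LXD accepts it via lxd init --preseed:

```shell
# Replay the configuration above on a fresh LXD, without the prompts.
# The pool/bridge names match what we answered in lxd init.
cat <<'EOF' | sudo lxd init --preseed
storage_pools:
- name: default
  driver: zfs
  config:
    source: /dev/md0
networks:
- name: lxdbr0
  type: bridge
  config:
    ipv4.address: auto
    ipv6.address: auto
profiles:
- name: default
  devices:
    root:
      path: /
      pool: default
      type: disk
    eth0:
      name: eth0
      nictype: bridged
      parent: lxdbr0
      type: nic
EOF
```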

Let’s verify that everything works by creating a container and then deleting it. The container obtains an IPv4 address and everything looks fine, so we clean up by deleting the container.

ubuntu@myserver:~$ lxc launch ubuntu:18.04 mycontainer
Creating mycontainer
Starting mycontainer
ubuntu@myserver:~$ lxc list -c ns4
| mycontainer | RUNNING | (eth0) |
ubuntu@myserver:~$ lxc delete --force mycontainer


We have configured LXD and verified that we are ready to create containers. The storage backend uses the ZFS filesystem, which we set up on RAID 10 (over the two 500GB SSD disks). The networking configuration allows us to create up to 65535 containers and have each obtain an IP address.

In the next post, we will be doing some LXD benchmarking.





    • James A. Peltier on October 12, 2018 at 03:29

    Why did you use MD to create the device that you put ZFS onto, when you could clearly have gone with a zpool instead? You are abstracting away a layer that ZFS could be using to perform better writes to disk, handling errors and performance characteristics directly within the ZFS ecosystem.

    1. Thanks James for the comment.

      In this benchmark, the containers are instantiated and then deleted. Thanks to ZFS copy-on-write, if an Alpine container takes 8MB in disk space, then 1200 Alpine containers should take about 8MB of disk space in total.
      I do not expect a significant I/O hit in this particular benchmark, and I avoid for now the discussion on RAID-Z.

      Indeed, for a future benchmark with workloads, ZFS should deal directly with the actual disks.

