NOTE: This post explains how to compile ZFS support from source against the default Scaleway Linux kernel. The process works, but it is cumbersome, and LXD also expects other features in the Linux kernel that this kernel lacks. See https://blog.simos.info/how-to-run-the-stock-ubuntu-linux-kernel-on-scaleway-using-kexec-and-server-tags/ for a trick to run the stock Ubuntu Linux kernel on a Scaleway VPS (not baremetal yet).
Scaleway, a subsidiary of Online.net, offers affordable VPSes and baremetal ARM servers. They became rather well known when they first introduced those ARM servers.
When you install Ubuntu 16.04 on a Scaleway VPS, some specific configuration (compiling ZFS as a DKMS module) is required in order to get LXD working. In this post, we go through the additional steps to get LXD up and running on a Scaleway VPS.
An issue with Scaleway is that they heavily modify the configuration of the Linux kernel, so you do not get the stock Ubuntu kernel when you install Ubuntu 16.04. There is a feature request to get ZFS compiled into the kernel at https://community.online.net/t/feature-request-zfs-support/2709/3 but it will most probably take some time to get added.
In this post I do not cover the baremetal ARM or the newer x86 dedicated servers; on those, trying to use LXD fails with an additional error about not being able to create a sparse file.
Creating a VPS on Scaleway
Once we create an account on Scaleway (and add our SSH public key), we click to create a new server. There are several types of VPS; we select the VC1, which comes with 2 x86 64-bit cores, 2GB memory and 50GB disk space, and keep the default settings.
Under Security, there is a default policy to disable «SMTP». This is a set of firewall rules that drop packets destined for ports 25, 465 and 587. If you intend to use SMTP at a later date, it makes sense to disable this security policy now. Otherwise, once your VPS is running, applying this change requires about an hour of downtime (roughly 30 minutes each to archive and then restart your VPS).
Once you click Create, it takes a couple of minutes for the provisioning, the kernel startup and the booting of the VPS.
After the creation, the administrative page shows the IP address that we need to connect to the VPS.
Initial package updates and upgrades
$ ssh firstname.lastname@example.org
The authenticity of host '220.127.116.11 (18.104.22.168)' can't be established.
ECDSA key fingerprint is SHA256:Z4LMCnXUyuvwO16HI763r4h5+mURBd8/4u2bFPLETes.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '22.214.171.124' (ECDSA) to the list of known hosts.

[ASCII-art "scaleway" banner]

Welcome on Ubuntu Xenial (16.04 LTS) (GNU/Linux 4.5.7-std-3 x86_64 )

System information as of: Wed Jul 13 19:46:53 UTC 2016

System load:  0.02                Int IP Address: 10.2.46.19
Memory usage: 0.0%                Pub IP Address: 126.96.36.199
Usage on /:   3%                  Swap usage:     0.0%
Local Users:  0                   Processes:      83
Image build:  2016-05-20          System uptime:  3 min
Disk nbd0:    l_ssd 50G

Documentation: https://scaleway.com/docs
Community:     https://community.scaleway.com
Image source:  https://github.com/scaleway/image-ubuntu

The programs included with the Ubuntu system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Ubuntu comes with ABSOLUTELY NO WARRANTY, to the extent permitted by
applicable law.

root@scw-test:~# apt update
Hit:1 http://archive.ubuntu.com/ubuntu xenial InRelease
Get:2 http://archive.ubuntu.com/ubuntu xenial-updates InRelease [95.7 kB]
Get:3 http://archive.ubuntu.com/ubuntu xenial-security InRelease [94.5 kB]
Get:4 http://archive.ubuntu.com/ubuntu xenial/main Translation-en [568 kB]
...
Reading package lists... Done
Building dependency tree
Reading state information... Done
51 packages can be upgraded. Run 'apt list --upgradable' to see them.
root@scw-test:~# apt upgrade
Reading package lists... Done
Building dependency tree
Reading state information... Done
Calculating upgrade... Done
The following NEW packages will be installed:
  libpython3.5
The following packages will be upgraded:
  apt apt-utils base-files bash bash-completion bsdutils dh-python gcc-5-base
  grep init init-system-helpers libapt-inst2.0 libapt-pkg5.0 libblkid1
  libboost-iostreams1.58.0 libboost-random1.58.0 libboost-system1.58.0
  libboost-thread1.58.0 libexpat1 libfdisk1 libgnutls-openssl27 libgnutls30
  libldap-2.4-2 libmount1 libnspr4 libnss3 libnss3-nssdb libpython2.7-minimal
  libpython2.7-stdlib librados2 librbd1 libsmartcols1 libstdc++6 libsystemd0
  libudev1 libuuid1 lsb-base lsb-release mount python2.7 python2.7-minimal
  systemd systemd-sysv tzdata udev util-linux uuid-runtime vim vim-common
  vim-runtime wget
51 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
Need to get 27.6 MB of archives.
After this operation, 5,069 kB of additional disk space will be used.
Do you want to continue? [Y/n] Y
Get:1 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 base-files amd64 9.4ubuntu4.1 [68.4 kB]
Get:2 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 bash amd64 4.3-14ubuntu1.1 [583 kB]
...
Setting up librados2 (10.2.0-0ubuntu0.16.04.2) ...
Setting up librbd1 (10.2.0-0ubuntu0.16.04.2) ...
Processing triggers for libc-bin (2.23-0ubuntu3) ...
root@scw-test:~#
Installing ZFS as a DKMS module
There are instructions on how to install ZFS as a DKMS module at https://github.com/scaleway/kernel-tools#how-to-build-a-custom-kernel-module
First, we install the build-essential package,
root@scw-test:~# apt install build-essential
Second, we run the script that is provided at https://github.com/scaleway/kernel-tools#how-to-build-a-custom-kernel-module. It takes about a minute to run; it downloads the kernel source and prepares the modules for compilation.
Third, we install the zfsutils-linux package as usual. In this case, it takes more time to install, as it needs to recompile the ZFS modules.
root@scw-test:~# apt install zfsutils-linux
This step takes lots of time. Eight and a half minutes!
Installing the LXD package
The final step is to install the LXD package,
root@scw-test:~# apt install lxd
Initial configuration of LXD
A VPS at Scaleway does not have access to a separate block device (the dedicated servers do). Therefore, we create the ZFS pool on a loopback device, that is, in a file on the root filesystem.
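Conceptually, a loop-backed ZFS pool is just a sparse file used as a vdev. The sketch below only demonstrates the sparse-file part with a temporary file; the path and pool name in the comments are assumptions, and `lxd init` performs the equivalent steps itself, so there is no need to run them by hand.

```shell
#!/bin/sh
# Demo: a sparse file has a large apparent size but allocates almost no blocks.
img=$(mktemp /tmp/zfs-demo.XXXXXX)
truncate -s 36G "$img"     # apparent size: 36 GB; actual disk usage: ~0
stat -c %s "$img"          # apparent size in bytes
du -k "$img"               # blocks actually allocated (close to 0)
# On the VPS, lxd init then does the equivalent of (illustrative paths):
#   zpool create mylxd-pool /var/lib/lxd/zfs.img
rm "$img"
```

Because the file is sparse, blocks are only allocated as containers actually write data, which is why a 36GB pool fits comfortably on the 50GB disk.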
root@scw-test:~# df -h /
Filesystem      Size  Used Avail Use% Mounted on
/dev/vda         46G  2.1G   42G   5% /
We have 42GB of free space, therefore let’s allocate 36GB for the ZFS filesystem.
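The 36GB figure is roughly 85% of the 42GB of free space, leaving headroom for the host system. A small sketch of that sizing rule (the 85% ratio is an assumption, not an LXD requirement):

```shell
#!/bin/sh
# Suggest a pool size of ~85% of the free space on / (integer GB).
avail_gb=$(df -BG --output=avail / | tail -1 | tr -dc '0-9')
pool_gb=$((avail_gb * 85 / 100))
echo "Free space: ${avail_gb} GB; suggested ZFS pool size: ${pool_gb} GB"
```

With 42GB available this yields 35GB, which the post rounds up to 36GB.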
root@scw-test:~# lxd init
Name of the storage backend to use (dir or zfs): zfs
Create a new ZFS pool (yes/no)? yes
Name of the new ZFS pool: mylxd-pool
Would you like to use an existing block device (yes/no)? no
Size in GB of the new loop device (1GB minimum): 36
Would you like LXD to be available over the network (yes/no)? no
Do you want to configure the LXD bridge (yes/no)? yes
...we accept the defaults in creating the LXD bridge...
Warning: Stopping lxd.service, but it can still be activated by:
  lxd.socket
LXD has been successfully configured.
root@scw-test:~#
Create a user to manage LXD
We create a non-root user to manage LXD. It is advised to create such a user and refrain from using root for such tasks.
root@scw-test:~# adduser ubuntu
Adding user `ubuntu' ...
Adding new group `ubuntu' (1000) ...
Adding new user `ubuntu' (1000) with group `ubuntu' ...
Creating home directory `/home/ubuntu' ...
Copying files from `/etc/skel' ...
Enter new UNIX password: *******
Retype new UNIX password: *******
passwd: password updated successfully
Changing the user information for ubuntu
Enter the new value, or press ENTER for the default
	Full Name :
	Room Number :
	Work Phone :
	Home Phone :
	Other :
Is the information correct? [Y/n] Y
root@scw-test:~#
Then, let’s add this user ubuntu to the sudo (ability to run sudo) and lxd (manage LXD containers) groups,
root@scw-test:~# adduser ubuntu sudo   # For Scaleway. On other systems, the group might be 'admin'.
root@scw-test:~# adduser ubuntu lxd
Finally, let’s restart the VPS. Although not strictly necessary, it is good practice to verify that lxd starts automatically even with ZFS compiled through DKMS. A shutdown -r now suffices to restart the VPS. After about 20 seconds, we can ssh back in as the new user ubuntu.
Let’s start up a container
We log in as this new user ubuntu (or, sudo su - ubuntu).
ubuntu@scw-test:~$ lxc launch ubuntu:x mycontainer
Creating mycontainer
Retrieving image: 100%
Starting mycontainer
ubuntu@scw-test:~$ lxc list
+-------------+---------+------+------+------------+-----------+
|    NAME     |  STATE  | IPV4 | IPV6 |    TYPE    | SNAPSHOTS |
+-------------+---------+------+------+------------+-----------+
| mycontainer | RUNNING |      |      | PERSISTENT | 0         |
+-------------+---------+------+------+------------+-----------+
ubuntu@scw-test:~$ lxc list
+-------------+---------+----------------------+------+------------+-----------+
|    NAME     |  STATE  |         IPV4         | IPV6 |    TYPE    | SNAPSHOTS |
+-------------+---------+----------------------+------+------------+-----------+
| mycontainer | RUNNING | 10.181.132.19 (eth0) |      | PERSISTENT | 0         |
+-------------+---------+----------------------+------+------------+-----------+
ubuntu@scw-test:~$
We launched an Ubuntu 16.04 LTS (Xenial: “x”) container and then listed its details. It takes a few moments for the container to boot up; by the second lxc list, the container had finished booting and had obtained an IP address.
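The empty IPV4 column in the first listing simply means the container's DHCP client has not finished yet. Instead of re-running lxc list by hand, a small polling loop can wait for the address; the helper name, number of tries and delay below are illustrative.

```shell
#!/bin/sh
# Retry a command until it succeeds or the attempts run out.
wait_for() {    # usage: wait_for <tries> <delay_seconds> <command...>
    tries=$1; delay=$2; shift 2
    while [ "$tries" -gt 0 ]; do
        "$@" && return 0          # success: stop polling
        tries=$((tries - 1))
        sleep "$delay"
    done
    return 1                      # gave up
}

# On the VPS, this waits up to a minute for the container's IPv4 address:
#   wait_for 30 2 sh -c 'lxc list mycontainer | grep -Eq "[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+"'
```

The same helper is handy for waiting on sshd inside the container or any other slow-to-start service.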
That’s it! LXD is up and running, and we successfully created a container. See these instructions on how to test the container with a Web server.