How to install LXD/LXC containers on Ubuntu on cloudscale.ch

In previous posts, we saw how to configure LXD/LXC containers on a VPS on DigitalOcean and Scaleway. There are many more VPS companies.

cloudscale.ch is one more company that provides Virtual Private Servers (VPS). They are based in Switzerland.

In this post we are going to see how to create a VPS on cloudscale.ch and configure to use LXD/LXC containers.

We now use the term LXD/LXC containers (instead of just LXC containers, as in previous articles) in order to highlight that LXD is a management service for LXC containers; LXD works on top of LXC. It is somewhat similar to GNU/Linux, where the GNU software runs on top of the Linux kernel.

Set up the VPS

[Screenshot: creating the VPS and selecting the compute flavor]

We are creating a VPS called myubuntuserver, using the Flex-2 Compute Flavor. This is the most affordable flavor, with 2GB RAM and 1 vCPU core. It costs 1 CHF per day, which is about 0.92€ (or US$1).

The default disk capacity is 10GB and is included in the 1 CHF per day. If you want more capacity, it is charged extra.

[Screenshot: selecting the Ubuntu 16.04 image and the server location]

We are installing Ubuntu 16.04 and accept the rest of the default settings. Currently, there is only one server location, at Rümlang near Zurich (the largest city in Switzerland).

[Screenshot: summary of the newly launched VPS]

Here is the summary of the freshly launched VPS server. The IP address is shown as well.

Connect and update the VPS

In order to connect, we SSH to that IP address using the fixed username ubuntu. There is an option to use either password authentication or public-key authentication. Let’s connect.

myusername@mycomputer:~$ ssh ubuntu@5.102.145.245
Welcome to Ubuntu 16.04 LTS (GNU/Linux 4.4.0-24-generic x86_64)

* Documentation: https://help.ubuntu.com/

Get cloud support with Ubuntu Advantage Cloud Guest:
 http://www.ubuntu.com/business/services/cloud

0 packages can be updated.
0 updates are security updates.

The programs included with the Ubuntu system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Ubuntu comes with ABSOLUTELY NO WARRANTY, to the extent permitted by
applicable law.

To run a command as administrator (user "root"), use "sudo <command>".
See "man sudo_root" for details.

ubuntu@myubuntuserver:~$

Let’s update the package list,

ubuntu@myubuntuserver:~$ sudo apt update
Hit:1 http://ch.archive.ubuntu.com/ubuntu xenial InRelease
Get:2 http://ch.archive.ubuntu.com/ubuntu xenial-updates InRelease [95.7 kB]
...
Get:31 http://security.ubuntu.com/ubuntu xenial-security/multiverse amd64 Packages [1176 B]
Fetched 10.5 MB in 2s (4707 kB/s) 
Reading package lists... Done
Building dependency tree 
Reading state information... Done
67 packages can be upgraded. Run 'apt list --upgradable' to see them.
ubuntu@myubuntuserver:~$ sudo apt upgrade
Reading package lists... Done
Building dependency tree 
...
Processing triggers for libc-bin (2.23-0ubuntu3) ...
ubuntu@myubuntuserver:~$

In this case, we upgraded 67 packages, among them lxd. It is important to perform this upgrade of packages before we configure LXD.
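To double-check which version of LXD the upgrade brought in, we can query apt (the exact version shown depends on the current xenial-updates packages),

ubuntu@myubuntuserver:~$ apt policy lxd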

Configure LXD/LXC

Let’s see how much free disk space there is,

ubuntu@myubuntuserver:~$ df -h /
Filesystem Size Used Avail Use% Mounted on
/dev/vda1  9.7G 1.2G  8.6G 12%  /
ubuntu@myubuntuserver:~$

There is 8.6GB of free space; let’s allocate 5GB of that for the ZFS pool. First, we need to install the package zfsutils-linux. Then, we initialize lxd.

ubuntu@myubuntuserver:~$ sudo apt install zfsutils-linux
Reading package lists... Done
...
Processing triggers for ureadahead (0.100.0-19) ...
ubuntu@myubuntuserver:~$ sudo lxd init
Name of the storage backend to use (dir or zfs): zfs
Create a new ZFS pool (yes/no)? yes
Name of the new ZFS pool: myzfspool
Would you like to use an existing block device (yes/no)? no
Size in GB of the new loop device (1GB minimum): 5
Would you like LXD to be available over the network (yes/no)? no
Do you want to configure the LXD bridge (yes/no)? yes
...accept the defaults for the network autoconfiguration settings that you will be asked about...
LXD has been successfully configured.
ubuntu@myubuntuserver:~$

That’s it! We are good to go and configure our first LXD/LXC container.
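Before creating any container, we can quickly verify that the new storage pool and the LXD bridge are in place. The bridge is normally called lxdbr0; adjust the name if you chose a different one during lxd init,

ubuntu@myubuntuserver:~$ sudo zpool status myzfspool
ubuntu@myubuntuserver:~$ ifconfig lxdbr0
ubuntu@myubuntuserver:~$ lxc list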

Testing a container as a Web server

Let’s test LXD/LXC by creating a container, installing nginx in it, and accessing it remotely.

ubuntu@myubuntuserver:~$ lxc launch ubuntu:x web
Creating web
Retrieving image: 100%
Starting web
ubuntu@myubuntuserver:~$

We launched a container called web.
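We can already ask LXD for the details of the freshly created container, such as its IP address and resource usage,

ubuntu@myubuntuserver:~$ lxc info web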

Let’s connect to the container, update the package list and upgrade any available packages.

ubuntu@myubuntuserver:~$ lxc exec web -- /bin/bash
root@web:~# apt update
Hit:1 http://archive.ubuntu.com/ubuntu xenial InRelease
...
9 packages can be upgraded. Run 'apt list --upgradable' to see them.
root@web:~# apt upgrade
Reading package lists... Done
...
Processing triggers for initramfs-tools (0.122ubuntu8.1) ...
root@web:~#

Still inside the container, we install nginx.

root@web:~# apt install nginx
Reading package lists... Done
...
Processing triggers for ufw (0.35-0ubuntu2) ...
root@web:~#

Let’s make a small change to the default page that nginx serves, /var/www/html/index.nginx-debian.html,

root@web:/var/www/html# diff -u /var/www/html/index.nginx-debian.html.ORIGINAL /var/www/html/index.nginx-debian.html
--- /var/www/html/index.nginx-debian.html.ORIGINAL 2016-08-09 17:08:16.450844570 +0000
+++ /var/www/html/index.nginx-debian.html 2016-08-09 17:08:45.543247231 +0000
@@ -1,7 +1,7 @@
 <!DOCTYPE html>
 <html>
 <head>
-<title>Welcome to nginx!</title>
+<title>Welcome to nginx on an LXD/LXC container on Ubuntu at cloudscale.ch!</title>
 <style>
 body {
 width: 35em;
@@ -11,7 +11,7 @@
 </style>
 </head>
 <body>
-<h1>Welcome to nginx!</h1>
+<h1>Welcome to nginx on an LXD/LXC container on Ubuntu at cloudscale.ch!</h1>
 <p>If you see this page, the nginx web server is successfully installed and
 working. Further configuration is required.</p>
 
root@web:/var/www/html#
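For reference, the change shown in the diff can be made by first keeping a copy of the original file and then substituting the string, for example with sed (a sketch; any text editor will do),

root@web:/var/www/html# cp index.nginx-debian.html index.nginx-debian.html.ORIGINAL
root@web:/var/www/html# sed -i 's|Welcome to nginx!|Welcome to nginx on an LXD/LXC container on Ubuntu at cloudscale.ch!|' index.nginx-debian.html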

Finally, let’s add a quick and dirty iptables rule to make the container accessible from the Internet.

root@web:/var/www/html# exit
ubuntu@myubuntuserver:~$ lxc list
+------+---------+---------------------+------+------------+-----------+
| NAME | STATE   | IPV4                | IPV6 | TYPE       | SNAPSHOTS |
+------+---------+---------------------+------+------------+-----------+
| web  | RUNNING | 10.5.242.156 (eth0) |      | PERSISTENT | 0         |
+------+---------+---------------------+------+------------+-----------+
ubuntu@myubuntuserver:~$ ifconfig ens3
ens3 Link encap:Ethernet HWaddr fa:16:3e:ad:dc:2c 
 inet addr:5.102.145.245 Bcast:5.102.145.255 Mask:255.255.255.0
 inet6 addr: fe80::f816:3eff:fead:dc2c/64 Scope:Link
 UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
 RX packets:102934 errors:0 dropped:0 overruns:0 frame:0
 TX packets:35613 errors:0 dropped:0 overruns:0 carrier:0
 collisions:0 txqueuelen:1000 
 RX bytes:291995591 (291.9 MB) TX bytes:3265570 (3.2 MB)

ubuntu@myubuntuserver:~$

Therefore, the iptables command that will allow access to the container is,

ubuntu@myubuntuserver:~$ sudo iptables -t nat -I PREROUTING -i ens3 -p TCP -d 5.102.145.245/32 --dport 80 -j DNAT --to-destination 10.5.242.156:80
ubuntu@myubuntuserver:~$
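Keep in mind that this rule lives only in the running kernel and will be lost after a reboot. One common way to keep it across reboots (a sketch, using the iptables-persistent package) is to save the current rules,

ubuntu@myubuntuserver:~$ sudo apt install iptables-persistent
ubuntu@myubuntuserver:~$ sudo netfilter-persistent save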

Here is the result when we visit the new Web server from our computer,

[Screenshot: the modified nginx welcome page, served from the container]

Benchmarks

We are benchmarking the CPU, the memory and the disk. Note that our VPS has a single vCPU.
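sysbench is not part of the default Ubuntu 16.04 server installation; if the command is missing, install it from the repositories first,

ubuntu@myubuntuserver:~$ sudo apt install sysbench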

CPU

We are benchmarking the CPU using sysbench with the following parameters.

ubuntu@myubuntuserver:~$ sysbench --num-threads=1 --test=cpu run
sysbench 0.4.12: multi-threaded system evaluation benchmark

Running the test with following options:
Number of threads: 1

Doing CPU performance benchmark

Threads started!
Done.

Maximum prime number checked in CPU test: 10000


Test execution summary:
 total time: 10.9448s
 total number of events: 10000
 total time taken by event execution: 10.9429
 per-request statistics:
 min: 0.96ms
 avg: 1.09ms
 max: 2.79ms
 approx. 95 percentile: 1.27ms

Threads fairness:
 events (avg/stddev): 10000.0000/0.00
 execution time (avg/stddev): 10.9429/0.00

ubuntu@myubuntuserver:~$

The total time for the CPU benchmark with one thread was 10.94s. With two threads, it was 10.23s. With four threads, it was 10.07s.
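The two-thread and four-thread results were obtained by changing only the --num-threads parameter,

ubuntu@myubuntuserver:~$ sysbench --num-threads=2 --test=cpu run
ubuntu@myubuntuserver:~$ sysbench --num-threads=4 --test=cpu run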

Memory

We are benchmarking the memory using sysbench with the following parameters.

ubuntu@myubuntuserver:~$ sysbench --num-threads=1 --test=memory run
sysbench 0.4.12: multi-threaded system evaluation benchmark

Running the test with following options:
Number of threads: 1

Doing memory operations speed test
Memory block size: 1K

Memory transfer size: 102400M

Memory operations type: write
Memory scope type: global
Threads started!
Done.

Operations performed: 104857600 (1768217.45 ops/sec)

102400.00 MB transferred (1726.77 MB/sec)


Test execution summary:
 total time: 59.3013s
 total number of events: 104857600
 total time taken by event execution: 47.2179
 per-request statistics:
 min: 0.00ms
 avg: 0.00ms
 max: 0.80ms
 approx. 95 percentile: 0.00ms

Threads fairness:
 events (avg/stddev): 104857600.0000/0.00
 execution time (avg/stddev): 47.2179/0.00

ubuntu@myubuntuserver:~$

The total time for the memory benchmark with one thread was 59.30s. With two threads, it was 62.17s. With four threads, it was 62.57s.

Disk

We are benchmarking the disk using dd with the following parameters.

ubuntu@myubuntuserver:~$ dd if=/dev/zero of=testfile bs=1M count=1024 oflag=dsync
1024+0 records in
1024+0 records out
1073741824 bytes (1,1 GB, 1,0 GiB) copied, 21,1995 s, 50,6 MB/s
ubuntu@myubuntuserver:~$

It took about 21 seconds to write a 1GB file (1024 blocks of 1MB each) with the dsync flag. The throughput was 50.6MB/s. Subsequent invocations were around 50MB/s as well.
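The test file can be removed afterwards. If you also want a rough read figure, drop the page cache first so that the re-read does not come straight from memory (a sketch),

ubuntu@myubuntuserver:~$ sync
ubuntu@myubuntuserver:~$ echo 3 | sudo tee /proc/sys/vm/drop_caches
ubuntu@myubuntuserver:~$ dd if=testfile of=/dev/null bs=1M
ubuntu@myubuntuserver:~$ rm testfile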

ZFS pool free space

Here is the free space in the ZFS pool after creating one container, the one with nginx installed and its packages upgraded,

ubuntu@myubuntuserver:~$ sudo zpool list
NAME       SIZE ALLOC  FREE EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
myzfspool 4,97G  811M 4,18G        -  11% 15% 1.00x ONLINE       -
ubuntu@myubuntuserver:~$

And here it is again, just after a second container (new and empty) was created,

ubuntu@myubuntuserver:~$ sudo zpool list
NAME       SIZE ALLOC  FREE EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
myzfspool 4,97G  822M 4,17G        -  11% 16% 1.00x ONLINE       -
ubuntu@myubuntuserver:~$

Thanks to copy-on-write in ZFS, new containers do not take up much additional space. Only the files that are added or modified contribute extra space.
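To see how that space is attributed, we can list the ZFS datasets that LXD created inside the pool; containers and cached images appear as separate datasets (a quick check; the names follow LXD's containers/... and images/... layout),

ubuntu@myubuntuserver:~$ sudo zfs list -r myzfspool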

Conclusion

We saw how to launch an Ubuntu 16.04 VPS on cloudscale.ch, then configure LXD.

We created a container with nginx, and configured iptables so that the Web server is accessible from the Internet.

Finally, we ran some benchmarks for the vCPU, the memory and the disk.
