
How to set up LXD on Packet.net (baremetal servers)

Packet.net has premium baremetal servers that start at $36.50 per month for a quad-core Atom C2550 with 8GB RAM and 80GB SSD, on a 1Gbps Internet connection. On the other end of the scale, there is an option for a 24-core (two Intel CPUs) system with 256GB RAM and a total of 2.8TB SSD disk space at around $1000 per month.

In this post we are trying out the most affordable baremetal server (type 0 from the list) with Ubuntu and LXD.

Starting the server is quite uneventful. Being baremetal, it takes a bit more time to provision than a VPS. Once it is up, we SSH into it.

$ ssh root@ip.ip.ip.ip
Welcome to Ubuntu 16.04.2 LTS (GNU/Linux 4.10.0-24-generic x86_64)

* Documentation: https://help.ubuntu.com
 * Management: https://landscape.canonical.com
 * Support: https://ubuntu.com/advantage

The programs included with the Ubuntu system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Ubuntu comes with ABSOLUTELY NO WARRANTY, to the extent permitted by
applicable law.

root@lxd:~#

Here there is some information about the booted system,

root@lxd:~# lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 16.04.2 LTS
Release: 16.04
Codename: xenial
root@lxd:~#

And the CPU details,

root@lxd:~# cat /proc/cpuinfo 
processor : 0
vendor_id : GenuineIntel
cpu family : 6
model : 77
model name : Intel(R) Atom(TM) CPU C2550 @ 2.40GHz
stepping : 8
microcode : 0x122
cpu MHz : 1200.000
cache size : 1024 KB
physical id : 0
siblings : 4
core id : 0
cpu cores : 4
apicid : 0
initial apicid : 0
fpu : yes
fpu_exception : yes
cpuid level : 11
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm sse4_1 sse4_2 movbe popcnt tsc_deadline_timer aes rdrand lahf_lm 3dnowprefetch epb tpr_shadow vnmi flexpriority ept vpid tsc_adjust smep erms dtherm ida arat
bugs :
bogomips : 4800.19
clflush size : 64
cache_alignment : 64
address sizes : 36 bits physical, 48 bits virtual
power management:

... omitting the other three cores ...

Let’s update the package list,

root@lxd:~# apt update
Hit:1 http://archive.ubuntu.com/ubuntu xenial InRelease
Get:2 http://archive.ubuntu.com/ubuntu xenial-updates InRelease [102 kB]
Get:3 http://archive.ubuntu.com/ubuntu xenial-security InRelease [102 kB]
...

They are using the official Ubuntu repositories instead of caching the packages on local mirrors. In retrospect, this is not an issue because the Internet connectivity is 1Gbps, bonded from two identical interfaces.

Let’s upgrade the packages and deal with any issues. Upgraded packages tend to complain when local configuration files differ from what the new packages expect.

root@lxd:~# apt upgrade
Reading package lists... Done
Building dependency tree 
Reading state information... Done
Calculating upgrade... Done
The following packages will be upgraded:
 apt apt-utils base-files cloud-init gcc-5-base grub-common grub-pc grub-pc-bin grub2-common
 initramfs-tools initramfs-tools-bin initramfs-tools-core kmod libapparmor1 libapt-inst2.0
 libapt-pkg5.0 libasn1-8-heimdal libcryptsetup4 libcups2 libdns-export162 libexpat1 libgdk-pixbuf2.0-0
 libgdk-pixbuf2.0-common libgnutls-openssl27 libgnutls30 libgraphite2-3 libgssapi3-heimdal libgtk2.0-0
 libgtk2.0-bin libgtk2.0-common libhcrypto4-heimdal libheimbase1-heimdal libheimntlm0-heimdal
 libhx509-5-heimdal libisc-export160 libkmod2 libkrb5-26-heimdal libpython3.5 libpython3.5-minimal
 libpython3.5-stdlib libroken18-heimdal libstdc++6 libsystemd0 libudev1 libwind0-heimdal libxml2
 logrotate mdadm ntp ntpdate open-iscsi python3-jwt python3.5 python3.5-minimal systemd systemd-sysv
 tcpdump udev unattended-upgrades
59 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
Need to get 24.3 MB of archives.
After this operation, 77.8 kB of additional disk space will be used.
Do you want to continue? [Y/n] 
...

First is grub, and the diff (not shown here) reveals only a minor issue: the new version of grub.cfg makes the system appear as Debian instead of Ubuntu. We did not investigate this further.

We are then asked where to install grub. We select /dev/sda and hope that the server can successfully reboot. We note that instead of an 80GB SSD as written in the description, we got a 160GB SSD. Not bad.

Setting up cloud-init (0.7.9-233-ge586fe35-0ubuntu1~16.04.2) ...

Configuration file '/etc/cloud/cloud.cfg'
 ==> Modified (by you or by a script) since installation.
 ==> Package distributor has shipped an updated version.
 What would you like to do about it ? Your options are:
 Y or I : install the package maintainer's version
 N or O : keep your currently-installed version
 D : show the differences between the versions
 Z : start a shell to examine the situation
 The default action is to keep your current version.
*** cloud.cfg (Y/I/N/O/D/Z) [default=N] ? N
Progress: [ 98%] [##################################################################################.]

Still during apt upgrade, it complains about /etc/cloud/cloud.cfg. Here is the diff between the installed and packaged versions. We keep the existing file and do not install the new generic packaged version (otherwise the server may not boot properly).

At the end, it complains about

W: Possible missing firmware /lib/firmware/ast_dp501_fw.bin for module ast

Time to reboot the server and check if we messed it up.

root@lxd:~# shutdown -r now

$ ssh root@ip.ip.ip.ip
Welcome to Ubuntu 16.04.3 LTS (GNU/Linux 4.10.0-24-generic x86_64)

* Documentation: https://help.ubuntu.com
 * Management: https://landscape.canonical.com
 * Support: https://ubuntu.com/advantage
Last login: Tue Sep 26 15:29:58 2017 from 1.2.3.4
root@lxd:~#

We are good! Note that now it says Ubuntu 16.04.3 while before it was Ubuntu 16.04.2.

LXD is not installed by default,

root@lxd:~# apt policy lxd
lxd:
      Installed: (none)
      Candidate: 2.0.10-0ubuntu1~16.04.1
      Version table:
              2.0.10-0ubuntu1~16.04.1 500
                      500 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 Packages
              2.0.0-0ubuntu4 500
                      500 http://archive.ubuntu.com/ubuntu xenial/main amd64 Packages

There are two versions: 2.0.0, the stock version released initially with Ubuntu 16.04, and 2.0.10, currently the latest stable version for Ubuntu 16.04. Let’s install it.

root@lxd:~# apt install lxd
...

We are now ready to add the non-root user account.

root@lxd:~# adduser myusername
Adding user `myusername' ...
Adding new group `myusername' (1000) ...
Adding new user `myusername' (1000) with group `myusername' ...
Creating home directory `/home/myusername' ...
Copying files from `/etc/skel' ...
Enter new UNIX password: 
Retype new UNIX password: 
passwd: password updated successfully
Changing the user information for myusername
Enter the new value, or press ENTER for the default
 Full Name []: 
 Room Number []: 
 Work Phone []: 
 Home Phone []: 
 Other []: 
Is the information correct? [Y/n] Y

root@lxd:~# ssh myusername@localhost
Permission denied (publickey).
root@lxd:~# cp -R ~/.ssh/ ~myusername/
root@lxd:~# chown -R myusername:myusername ~myusername/

We added the new username, then verified that password authentication is indeed disabled. Finally, we copied the ~/.ssh/ directory (with the authorized_keys file) from ~root/ to the new non-root account, and adjusted the ownership of those files.

Let’s log out from the server and log in again as the new non-root account.

$ ssh myusername@ip.ip.ip.ip
Welcome to Ubuntu 16.04.3 LTS (GNU/Linux 4.10.0-24-generic x86_64)

* Documentation: https://help.ubuntu.com
 * Management: https://landscape.canonical.com
 * Support: https://ubuntu.com/advantage

The programs included with the Ubuntu system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Ubuntu comes with ABSOLUTELY NO WARRANTY, to the extent permitted by
applicable law.

**************************************************************************
# This system is using the EC2 Metadata Service, but does not appear to #
# be running on Amazon EC2 or one of cloud-init's known platforms that #
# provide a EC2 Metadata service. In the future, cloud-init may stop #
# reading metadata from the EC2 Metadata Service unless the platform can #
# be identified. #
# #
# If you are seeing this message, please file a bug against #
# cloud-init at #
# https://bugs.launchpad.net/cloud-init/+filebug?field.tags=dsid #
# Make sure to include the cloud provider your instance is #
# running on. #
# #
# For more information see #
# https://bugs.launchpad.net/bugs/1660385 #
# #
# After you have filed a bug, you can disable this warning by #
# launching your instance with the cloud-config below, or #
# putting that content into #
# /etc/cloud/cloud.cfg.d/99-ec2-datasource.cfg #
# #
# #cloud-config #
# datasource: #
# Ec2: #
# strict_id: false #
**************************************************************************

Disable the warnings above by:
 touch /home/myusername/.cloud-warnings.skip
or
 touch /var/lib/cloud/instance/warnings/.skip
myusername@lxd:~$

This warning is related to our decision to keep the existing cloud.cfg when we upgraded the cloud-init package. It is something that Packet.net (the provider) should deal with.
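If we want to silence the warning permanently, we can drop in the cloud-config snippet that the message itself suggests; a sketch based on that message, using the file name proposed by cloud-init:

$ sudo tee /etc/cloud/cloud.cfg.d/99-ec2-datasource.cfg <<'EOF'
#cloud-config
datasource:
  Ec2:
    strict_id: false
EOF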

We are ready to try out LXD on packet.net.

Configuring LXD

Let’s configure LXD. First, how much free space do we have?

myusername@lxd:~$ df -h /
Filesystem Size Used Avail Use% Mounted on
/dev/sda3 136G 1.1G 128G 1% /
myusername@lxd:~$

There is plenty of space; we will allocate 100GB to LXD.

We are using ZFS as the LXD storage backend, therefore,

myusername@lxd:~$ sudo apt install zfsutils-linux

Now, we set up LXD.

myusername@lxd:~$ sudo lxd init
Name of the storage backend to use (dir or zfs) [default=zfs]: zfs 
Create a new ZFS pool (yes/no) [default=yes]? yes
Name of the new ZFS pool [default=lxd]: lxd 
Would you like to use an existing block device (yes/no) [default=no]? no
Size in GB of the new loop device (1GB minimum) [default=27]: 100
Would you like LXD to be available over the network (yes/no) [default=no]? no
Do you want to configure the LXD bridge (yes/no) [default=yes]? yes

LXD has been successfully configured.
myusername@lxd:~$ lxc list
+------+-------+------+------+------+-----------+
| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
+------+-------+------+------+------+-----------+
myusername@lxd:~$
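Optionally, we can also verify that the loop-backed ZFS pool was created with the name we chose above (a quick check; output not shown here),

myusername@lxd:~$ sudo zpool list
myusername@lxd:~$ sudo zfs list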

Trying out LXD

Let’s create a container, install nginx and then make the web server accessible through the Internet.

myusername@lxd:~$ lxc launch ubuntu:16.04 web
Creating web
Retrieving image: rootfs: 100% (47.99MB/s) 
Starting web 
myusername@lxd:~$

Let’s see the details of the container, called web.

myusername@lxd:~$ lxc list --columns ns4tS
+------+---------+---------------------+------------+-----------+
| NAME | STATE   | IPV4                | TYPE       | SNAPSHOTS |
+------+---------+---------------------+------------+-----------+
| web  | RUNNING | 10.253.67.97 (eth0) | PERSISTENT | 0         |
+------+---------+---------------------+------------+-----------+
myusername@lxd:~$

We can see the container’s IP address. The parameter ns4tS simply omits the IPv6 column (‘6’) so that the table looks nice in the blog post.

Let’s enter the container and install nginx.

myusername@lxd:~$ lxc exec web -- sudo --login --user ubuntu
To run a command as administrator (user "root"), use "sudo <command>".
See "man sudo_root" for details.

ubuntu@web:~$

We execute, in the web container, the command sudo --login --user ubuntu, which gives us a login shell in the container. All Ubuntu containers have a default non-root account called ubuntu.

ubuntu@web:~$ sudo apt update
Hit:1 http://archive.ubuntu.com/ubuntu xenial InRelease

3 packages can be upgraded. Run 'apt list --upgradable' to see them.
ubuntu@web:~$ sudo apt install nginx
Reading package lists... Done

Processing triggers for ufw (0.35-0ubuntu2) ...
ubuntu@web:~$ sudo vi /var/www/html/index.nginx-debian.html
ubuntu@web:~$ logout

Before installing a package, we must first run apt update. We updated and then installed nginx. Subsequently, we touched up the default HTML file a bit to mention Packet.net and LXD. Finally, we logged out from the container.
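For reference, the same touch-up could be done non-interactively instead of with vi; a hypothetical one-liner, run inside the container:

ubuntu@web:~$ sudo sed -i 's/Welcome to nginx!/Welcome to nginx on Packet.net in an LXD container!/' /var/www/html/index.nginx-debian.html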

Let’s test that the web server in the container is working.

myusername@lxd:~$ curl 10.253.67.97
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx on Packet.net in an LXD container!</title>
<style>
 body {
 width: 35em;
 margin: 0 auto;
 font-family: Tahoma, Verdana, Arial, sans-serif;
 }
</style>
</head>
<body>
<h1>Welcome to nginx on Packet.net in an LXD container!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
myusername@lxd:~$

The last step is to get Ubuntu to forward any Internet connections from port 80 to the container at port 80. For this, we need the public IP of the server and the private IP of the container (it’s 10.253.67.97).

myusername@lxd:~$ ifconfig 
bond0 Link encap:Ethernet HWaddr 0c:c4:7a:de:51:a8 
      inet addr:147.75.82.251 Bcast:255.255.255.255 Mask:255.255.255.254
      inet6 addr: 2604:1380:2000:600::1/127 Scope:Global
      inet6 addr: fe80::ec4:7aff:fee5:4462/64 Scope:Link
      UP BROADCAST RUNNING MASTER MULTICAST MTU:1500 Metric:1
      RX packets:144216 errors:0 dropped:0 overruns:0 frame:0
      TX packets:14181 errors:0 dropped:0 overruns:0 carrier:0
      collisions:0 txqueuelen:1000 
      RX bytes:211518302 (211.5 MB) TX bytes:1443508 (1.4 MB)

The interface is a bond, bond0. Two 1Gbps connections are bonded together.
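Out of curiosity, the bonding details (mode, slave interfaces, link status) can be inspected through the kernel’s bonding interface,

myusername@lxd:~$ cat /proc/net/bonding/bond0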

myusername@lxd:~$ PORT=80 PUBLIC_IP=147.75.82.251 CONTAINER_IP=10.253.67.97 sudo -E bash -c 'iptables -t nat -I PREROUTING -i bond0 -p TCP -d $PUBLIC_IP --dport $PORT -j DNAT --to-destination $CONTAINER_IP:$PORT -m comment --comment "forward to the Nginx container"'
myusername@lxd:~$
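To verify that the DNAT rule is in place, and to be able to remove it later, we can list the PREROUTING chain with line numbers; a sketch,

myusername@lxd:~$ sudo iptables -t nat -L PREROUTING -n --line-numbers
myusername@lxd:~$ # to remove the rule later, delete it by its line number, for example:
myusername@lxd:~$ # sudo iptables -t nat -D PREROUTING 1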

Let’s test it out!

That’s it!


How to run graphics-accelerated GUI apps in LXD containers on your Ubuntu desktop

In How to run Wine (graphics-accelerated) in an LXD container on Ubuntu we had a quick look into how to run GUI programs in an LXD (Lex-Dee) container, and have the output appear on the local X11 server (your Ubuntu desktop).

In this post, we are going to see how to

  1. generalize the instructions in order to run most GUI apps in an LXD container and have them appear on your desktop
  2. have accelerated graphics support and audio
  3. test with Firefox, Chromium and Chrome
  4. create shortcuts to easily launch those apps

The benefits of running GUI apps in an LXD container are

  • clear separation of the installation data and settings, from what we have on our desktop
  • ability to create a snapshot of this container, save, rollback, delete, recreate; all these in a few seconds or less
  • does not mess up your installed package list (for example, all those i386 packages for Wine, Google Earth)
  • ability to create an image of such a perfect container, publish, and have others launch in a few clicks

What we are doing today is similar to having a Virtualbox/VMWare VM and running a Linux distribution in it. Let’s compare,

  • It is similar to the Virtualbox Seamless Mode or the VMWare Unity mode
  • A VM virtualizes a whole machine and has to do a lot of work in order to provide somewhat good graphics acceleration
  • With a container, we directly reuse the graphics card and get graphics acceleration
  • The specific setup we show today can potentially allow a container app to interact with the desktop apps (TODO: show desktop isolation in a future post)

Browsers have started offering containers, specifically in-browser containers. This shows a trend towards containers in general, although that feature is browser-specific and dictated by usability (passwords, form and search data are shared between those containers).

In the following, our desktop computer will be called the host, and the LXD container the container.

Setting up LXD

LXD is supported in Ubuntu and derivatives, as well as other distributions. When you initially set up LXD, you select where to store the containers. See LXD 2.0: Installing and configuring LXD [2/12] about your options. Ideally, if you select to pre-allocate disk space or use a partition, select at least 15GB but preferably more.

If you plan to play games, increase the space by the size of that game. For best results, select ZFS as the storage backend, and place the space on an SSD disk. Also Trying out LXD containers on our Ubuntu may help.

Creating the LXD container

Let’s create the new container for LXD. We are going to call it guiapps, and install Ubuntu 16.04 in it. There are options for other Ubuntu versions, and even other distributions.

$ lxc launch ubuntu:x guiapps
Creating guiapps
Starting guiapps
$ lxc list
+---------------+---------+--------------------+--------+------------+-----------+
|     NAME      |  STATE  |        IPV4        |  IPV6  |    TYPE    | SNAPSHOTS |
+---------------+---------+--------------------+--------+------------+-----------+
| guiapps       | RUNNING | 10.0.185.204(eth0) |        | PERSISTENT | 0         |
+---------------+---------+--------------------+--------+------------+-----------+
$

We created and started an Ubuntu 16.04 (ubuntu:x) container, called guiapps.

Let’s also install our initial testing applications. The first one is xclock, the simplest X11 GUI app. The second is glxinfo, which shows details about graphics acceleration. The third, glxgears, is a minimal graphics-accelerated application. The fourth is speaker-test, to test the audio. We will know that our setup works if all four of xclock, glxinfo, glxgears and speaker-test work in the container!

$ lxc exec guiapps -- sudo --login --user ubuntu
ubuntu@guiapps:~$ sudo apt update
ubuntu@guiapps:~$ sudo apt install x11-apps
ubuntu@guiapps:~$ sudo apt install mesa-utils
ubuntu@guiapps:~$ sudo apt install alsa-utils
ubuntu@guiapps:~$ exit
$

We execute a login shell in the guiapps container as user ubuntu, the default non-root user account in all Ubuntu LXD images. Other distribution images probably have another default non-root user account.

Then, we run apt update in order to update the package list and be able to install the subsequent three packages that provide xclock, glxinfo and glxgears, and speaker-test (or aplay). Finally, we exit the container.

Mapping the user ID of the host to the container (PREREQUISITE)

In the following steps we will be sharing files from the host (our desktop) to the container. There is the issue of what user ID will appear in the container for those shared files.

First, we run on the host (only once) the following command (source),

$ echo "root:$UID:1" | sudo tee -a /etc/subuid /etc/subgid
[sudo] password for myusername: 
root:1000:1
$

The command appends a new entry in both the /etc/subuid and /etc/subgid subordinate UID/GID files. It allows the LXD service (runs as root) to remap our user’s ID ($UID, from the host) as requested.

Then, we specify that we want this feature in our guiapps LXD container, and restart the container for the change to take effect.

$ lxc config set guiapps raw.idmap "both $UID 1000"
$ lxc restart guiapps
$

This “both $UID 1000” syntax is a shortcut that means to map the $UID/$GID of our username in the host, to the default non-root username in the container (which should be 1000 for Ubuntu images, at least).
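We can confirm that the setting took effect by reading it back; assuming a host UID of 1000, the output should be along these lines,

$ lxc config get guiapps raw.idmap
both 1000 1000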

Configuring graphics and graphics acceleration

For graphics acceleration, we are going to use the host graphics card and graphics acceleration. By default, the applications that run in a container do not have access to the host system and cannot start GUI apps.

We need two things: let the container access the GPU devices of the host, and make sure that there are no restrictions because of different user IDs.

Let’s attempt to run xclock in the container.

$ lxc exec guiapps -- sudo --login --user ubuntu
ubuntu@guiapps:~$ xclock
Error: Can't open display: 
ubuntu@guiapps:~$ export DISPLAY=:0
ubuntu@guiapps:~$ xclock
Error: Can't open display: :0
ubuntu@guiapps:~$ exit
$

We run xclock in the container and, as expected, it does not run because we did not indicate where to send the display. We set the DISPLAY environment variable to the default :0 (send to either a Unix socket or port 6000), which does not work either because we have not yet given the container access to the X server. Let’s do that.

$ lxc config device add guiapps X0 disk path=/tmp/.X11-unix/X0 source=/tmp/.X11-unix/X0 
$ lxc config device add guiapps Xauthority disk path=${XAUTHORITY} source=/home/${USER}/.Xauthority

We give the container access to the Unix socket of the X server (/tmp/.X11-unix/X0), and make it available at the exact same path inside the container. In this way, DISPLAY=:0 allows the apps in the container to access our host’s X server through the Unix socket.

Then, we repeat this task with the ~/.Xauthority file that resides in our home directory. This file is for access control; it is what makes our host X server accept connections from applications inside the container. On the host, the location of this file is stored in the $XAUTHORITY variable and should be either ~/.Xauthority or /run/myusername/1000/gdm/Xauthority. We can adapt the path= part as needed, as long as the distribution in the container is able to find the .Xauthority at the given location.
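To check which of the two locations applies on your host, simply print the variable before running the lxc config device add command above; on a typical Ubuntu desktop it points to ~/.Xauthority,

$ echo $XAUTHORITY
/home/myusername/.Xauthority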

How do we get hardware acceleration for the GPU to the container apps? There is a special device for that, and it’s gpu. The hardware acceleration for the graphics card is collectively enabled by running the following,

$ lxc config device add guiapps mygpu gpu
$ lxc config device set guiapps mygpu uid 1000
$ lxc config device set guiapps mygpu gid 1000

We add the gpu device, and we happen to name it mygpu (any name would suffice). In addition to adding the gpu device, we also set its permissions so that it is fully accessible inside the container. The gpu device was introduced in LXD 2.7, so if it is not found, you may have to upgrade your LXD according to https://launchpad.net/~ubuntu-lxc/+archive/ubuntu/lxd-stable Please leave a comment below if this was your case (mention what LXD version you have been running). Note that for Intel GPUs (my case), you may not need to add this device.
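If you do need a newer LXD, here is a sketch of checking your version and upgrading from that PPA (this upgrades the LXD packages on your host, so read the PPA page first),

$ lxd --version
$ sudo add-apt-repository ppa:ubuntu-lxc/lxd-stable
$ sudo apt update
$ sudo apt install lxd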

Let’s see what we got now.

$ lxc exec guiapps -- sudo --login --user ubuntu
ubuntu@guiapps:~$ export DISPLAY=:0
ubuntu@guiapps:~$ xclock

ubuntu@guiapps:~$ glxinfo -B
name of display: :0
display: :0  screen: 0
direct rendering: Yes
Extended renderer info (GLX_MESA_query_renderer):
    Vendor: Intel Open Source Technology Center (0x8086)
...
ubuntu@guiapps:~$ glxgears 

Running synchronized to the vertical refresh.  The framerate should be
approximately the same as the monitor refresh rate.
345 frames in 5.0 seconds = 68.783 FPS
309 frames in 5.0 seconds = 61.699 FPS
300 frames in 5.0 seconds = 60.000 FPS
^C
ubuntu@guiapps:~$ echo "export DISPLAY=:0" >> ~/.profile 
ubuntu@guiapps:~$ exit
$

Looks good, we are good to go! Note that we edited the ~/.profile file in order to set the $DISPLAY variable automatically whenever we connect to the container.

Configuring audio

The audio server on the Ubuntu desktop is Pulseaudio, and Pulseaudio has a feature to allow authenticated access over the network, just like the X11 server we configured earlier. Let’s set this up.

We install the paprefs (PulseAudio Preferences) package on the host.

$ sudo apt install paprefs
...
$ paprefs

This is the only option we need to enable (by default, all other options are not checked and can remain unchecked).

That is, under the Network Server tab, we tick Enable network access to local sound devices.

Then, just like with the X11 configuration, we need to deal with two things: access to the Pulseaudio server of the host (either through a Unix socket or an IP address), and some way to get authorization to access that server. The Unix socket of the Pulseaudio server turned out to be a bit hit and miss (we could not figure out how to use it reliably), so we are going to use the IP address of the host (on the lxdbr0 interface).

First, the IP address of the host (which runs Pulseaudio) is the IP of the lxdbr0 interface; from inside the container, that is the default gateway (ip route show 0/0). Second, the authorization is provided through the cookie found on the host at /home/${USER}/.config/pulse/cookie. Let’s wire these two up inside the container.

$ lxc exec guiapps -- sudo --login --user ubuntu
ubuntu@guiapps:~$ echo export PULSE_SERVER="tcp:`ip route show 0/0 | awk '{print $3}'`" >> ~/.profile

This command will automatically set the variable PULSE_SERVER to a value like tcp:10.0.185.1, which is the IP address of the host, for the lxdbr0 interface. The next time we log in to the container, PULSE_SERVER will be configured properly.

ubuntu@guiapps:~$ mkdir -p ~/.config/pulse/
ubuntu@guiapps:~$ echo export PULSE_COOKIE=/home/ubuntu/.config/pulse/cookie >> ~/.profile
ubuntu@guiapps:~$ exit
$ lxc config device add guiapps PACookie disk path=/home/ubuntu/.config/pulse/cookie source=/home/${USER}/.config/pulse/cookie

Now, this is a tough cookie. By default, the Pulseaudio cookie is found at ~/.config/pulse/cookie. The directory tree ~/.config/pulse/ does not exist in the container, and if we do not create it ourselves, the lxc config device add command would auto-create it with the wrong ownership. So, we create it (mkdir -p), then add the correct PULSE_COOKIE line to the configuration file ~/.profile. Finally, we exit the container and mount-bind the cookie from the host into the container. When we log in to the container again, the cookie variable will be correctly set!

Let’s test the audio!

$ lxc exec guiapps -- sudo --login --user ubuntu
ubuntu@pulseaudio:~$ speaker-test -c6 -twav

speaker-test 1.1.0

Playback device is default
Stream parameters are 48000Hz, S16_LE, 6 channels
WAV file(s)
Rate set to 48000Hz (requested 48000Hz)
Buffer size range from 32 to 349525
Period size range from 10 to 116509
Using max buffer size 349524
Periods = 4
was set period_size = 87381
was set buffer_size = 349524
 0 - Front Left
 4 - Center
 1 - Front Right
 3 - Rear Right
 2 - Rear Left
 5 - LFE
Time per period = 8.687798 ^C
ubuntu@pulseaudio:~$

If you do not have 6-channel audio output, you will hear audio on some of the channels only.

Let’s also test with an MP3 file, like that one from https://archive.org/details/testmp3testfile

ubuntu@pulseaudio:~$ sudo apt install mpg123
...
ubuntu@pulseaudio:~$ wget https://archive.org/download/testmp3testfile/mpthreetest.mp3
...
ubuntu@pulseaudio:~$ mplayer mpthreetest.mp3 
MPlayer 1.2.1 (Debian), built with gcc-5.3.1 (C) 2000-2016 MPlayer Team
...
AO: [pulse] 44100Hz 2ch s16le (2 bytes per sample)
Video: no video
Starting playback...
A:   3.7 (03.7) of 12.0 (12.0)  0.2% 

Exiting... (Quit)
ubuntu@pulseaudio:~$

All nice and loud!

Troubleshooting sound issues

AO: [pulse] Init failed: Connection refused

An application tries to connect to a PulseAudio server, but no PulseAudio server is found (either none autodetected, or the one we specified is not really there).

AO: [pulse] Init failed: Access denied

We specified a PulseAudio server, but we do not have access to connect to it. We need a valid cookie.

AO: [pulse] Init failed: Protocol error

This appears if you were also trying to make the Unix socket work, but something was wrong. If you can make it work, write a comment below.
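For the first two errors, it can help to confirm from the host that Pulseaudio is actually listening for network connections; its native protocol uses TCP port 4713 by default, so a check like the following should show a listener once the paprefs option above is enabled,

$ sudo ss -lntp | grep 4713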

Testing with Firefox

Let’s test with Firefox!

ubuntu@guiapps:~$ sudo apt install firefox
...
ubuntu@guiapps:~$ firefox 
Gtk-Message: Failed to load module "canberra-gtk-module"

We get a message that the GTK+ module is missing. Let’s close Firefox, install the module and start Firefox again.

ubuntu@guiapps:~$ sudo apt-get install libcanberra-gtk3-module
ubuntu@guiapps:~$ firefox

Here we are playing a Youtube music video at 1080p. It works as expected. The Firefox session is separated from the host’s Firefox.

Note that the theming is not exactly what you get with Ubuntu. This is due to the container being so lightweight that it does not have any theming support.

The screenshot may look a bit grainy; this is due to some plugin I use in WordPress that does too much compression.

You may notice that no menubar is showing. Just like with Windows, simply press the Alt key for a second, and the menu bar will appear.

Testing with Chromium

Let’s test with Chromium!

ubuntu@guiapps:~$ sudo apt install chromium-browser
ubuntu@guiapps:~$ chromium-browser
Gtk-Message: Failed to load module "canberra-gtk-module"

So, chromium-browser also needs a libcanberra package, in this case the GTK+ 2 version.

ubuntu@guiapps:~$ sudo apt install libcanberra-gtk-module
ubuntu@guiapps:~$ chromium-browser

There is no menubar and there is no easy way to get to it. The menu on the top-right is available though.

Testing with Chrome

Let’s download Chrome, install it and launch it.

ubuntu@guiapps:~$ wget https://dl.google.com/linux/direct/google-chrome-stable_current_amd64.deb
...
ubuntu@guiapps:~$ sudo dpkg -i google-chrome-stable_current_amd64.deb
...
Errors were encountered while processing:
 google-chrome-stable
ubuntu@guiapps:~$ sudo apt install -f
...
ubuntu@guiapps:~$ google-chrome
[11180:11945:0503/222317.923975:ERROR:object_proxy.cc(583)] Failed to call method: org.freedesktop.UPower.GetDisplayDevice: object_path= /org/freedesktop/UPower: org.freedesktop.DBus.Error.ServiceUnknown: The name org.freedesktop.UPower was not provided by any .service files
[11180:11945:0503/222317.924441:ERROR:object_proxy.cc(583)] Failed to call method: org.freedesktop.UPower.EnumerateDevices: object_path= /org/freedesktop/UPower: org.freedesktop.DBus.Error.ServiceUnknown: The name org.freedesktop.UPower was not provided by any .service files
^C
ubuntu@guiapps:~$ sudo apt install upower
ubuntu@guiapps:~$ google-chrome

These two errors regarding UPower go away once we install the upower package.

Creating shortcuts to the container apps

If we want to run Firefox from the container, we can simply run

$ lxc exec guiapps -- sudo --login --user ubuntu firefox

and that’s it.

To make a shortcut, we create the following file on the host,

$ cat > ~/.local/share/applications/lxd-firefox.desktop
[Desktop Entry]
Version=1.0
Name=Firefox in LXD
Comment=Access the Internet through an LXD container
Exec=/usr/bin/lxc exec guiapps -- sudo --login --user ubuntu firefox %U
Icon=/usr/share/icons/HighContrast/scalable/apps-extra/firefox-icon.svg
Type=Application
Categories=Network;WebBrowser;
^D
$ chmod +x ~/.local/share/applications/lxd-firefox.desktop

We need to make it executable so that it gets picked up and we can then run it by double-clicking.

If it does not appear immediately in the Dash, use your File Manager to locate the directory ~/.local/share/applications/
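If it still does not show up, it may also be worth validating the file; desktop-file-validate comes with the desktop-file-utils package (assuming it is installed on your host),

$ desktop-file-validate ~/.local/share/applications/lxd-firefox.desktop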

This is how the icon looks in a File Manager. The icon comes from the high-contrast set, which, I now remember, means just two colors 🙁

Here is the app on the Launcher. Simply drag from the File Manager and drop to the Launcher in order to get the app at your fingertips.

I hope the tutorial was useful. We explained each command in detail. In a future tutorial, we are going to try to figure out how to automate all this!


How to run Wine (graphics-accelerated) in an LXD container on Ubuntu

Update #1: Added info about adding the gpu configuration device to the container, for hardware acceleration to work (required for some users).

Update #2: Added info about setting the permissions for the gpu device.

Wine lets you run Windows programs on your GNU/Linux distribution.

When you install Wine, it adds all sorts of packages, including 32-bit packages. It looks quite messy; could there be a way to place all those Wine files in a container and keep them there?

This is what we are going to see today. Specifically,

  1. We are going to create an LXD container, called wine-games
  2. We are going to set it up so that it runs graphics-accelerated programs. glxinfo will show the host GPU details.
  3. We are going to install the latest Wine package.
  4. We are going to install and play one of those Windows games.

Creating the LXD container

Let’s create the new container for LXD. If this is the first time you use LXD, have a look at Trying out LXD containers on our Ubuntu.

$ lxc launch ubuntu:x wine-games
Creating wine-games
Starting wine-games
$ lxc list
+---------------+---------+--------------------+--------+------------+-----------+
|     NAME      |  STATE  |        IPV4        |  IPV6  |    TYPE    | SNAPSHOTS |
+---------------+---------+--------------------+--------+------------+-----------+
| wine-games    | RUNNING | 10.0.185.63 (eth0) |        | PERSISTENT | 0         |
+---------------+---------+--------------------+--------+------------+-----------+
$

We created and started an Ubuntu 16.04 (ubuntu:x) container, called wine-games.

Let’s also install our initial testing applications. The first one is xclock, the simplest X11 GUI app. The second is glxinfo, which shows details about graphics acceleration. We will know that our setup for Wine works if both xclock and glxinfo work in the container!

$ lxc exec wine-games -- sudo --login --user ubuntu
ubuntu@wine-games:~$ sudo apt update
ubuntu@wine-games:~$ sudo apt install x11-apps
ubuntu@wine-games:~$ sudo apt install mesa-utils
ubuntu@wine-games:~$ exit
$

We execute a login shell in the wine-games container as user ubuntu, the default non-root username in Ubuntu LXD images.

Then, we run apt update in order to update the package list and be able to install the subsequent two packages that provide xclock and glxinfo respectively. Finally, we exit the container.

Setting up for graphics acceleration

For graphics acceleration, we are going to use the host graphics card and graphics acceleration. By default, the applications that run in a container do not have access to the host system and cannot start GUI apps.

We need two things: let the container access the GPU devices of the host, and make sure that there are no restrictions because of different user IDs.

First, we run (only once) the following command (source),

$ echo "root:$UID:1" | sudo tee -a /etc/subuid /etc/subgid
[sudo] password for myusername: 
root:1000:1
$

The command adds a new entry in both the /etc/subuid and /etc/subgid subordinate UID/GID files. It allows the LXD service (runs as root) to remap our user’s ID ($UID, from the host) as requested.

Then, we specify that we want this feature in our wine-games LXD container, and restart the container for the change to take effect.

$ lxc config set wine-games raw.idmap "both $UID 1000"
$ lxc restart wine-games
$

This “both $UID 1000” syntax is a shortcut that means to map the $UID/$GID of our username in the host, to the default non-root username in the container (which should be 1000 for Ubuntu images, at least).

Let’s attempt to run xclock in the container.

$ lxc exec wine-games -- sudo --login --user ubuntu
ubuntu@wine-games:~$ xclock
Error: Can't open display: 
ubuntu@wine-games:~$ export DISPLAY=:0
ubuntu@wine-games:~$ xclock
Error: Can't open display: :0
ubuntu@wine-games:~$ exit
$

We run xclock in the container and, as expected, it does not run because we did not indicate where to send the display. We set the DISPLAY environment variable to the default :0 (send to either a Unix socket or port 6000), which does not work either because we have not yet given the container access to the X server. Let’s do that.

$ lxc config device add wine-games X0 disk path=/tmp/.X11-unix/X0 source=/tmp/.X11-unix/X0 
$ lxc config device add wine-games Xauthority disk path=/home/ubuntu/.Xauthority source=/home/MYUSERNAME/.Xauthority

We give the container access to the Unix socket of the X server (/tmp/.X11-unix/X0), and make it available at the exact same path inside the container. In this way, DISPLAY=:0 allows the apps in the container to access our host’s X server through the Unix socket.

Then, we repeat this task with the ~/.Xauthority file that resides in our home directory. This file is for access control; it is what makes our host X server accept connections from applications inside the container.

How do we get hardware acceleration for the GPU to the container apps? There is a special device for that, and it’s gpu. The hardware acceleration for the graphics card is collectively enabled by running the following,

$ lxc config device add wine-games mygpu gpu
$ lxc config device set wine-games mygpu uid 1000
$ lxc config device set wine-games mygpu gid 1000

We add the gpu device, and we happen to name it mygpu (any name would suffice). [UPDATED] In addition, we set the uid/gid of the gpu device to 1000 (the default uid/gid of the first non-root account on Ubuntu; adapt accordingly on other distributions). The gpu device was introduced in LXD 2.7, so if it is not found, you may have to upgrade your LXD according to https://launchpad.net/~ubuntu-lxc/+archive/ubuntu/lxd-stable Please leave a comment below if this was your case (mention what LXD version you have been running).

Let’s see what we got now.

$ lxc exec wine-games -- sudo --login --user ubuntu
ubuntu@wine-games:~$ export DISPLAY=:0
ubuntu@wine-games:~$ xclock

ubuntu@wine-games:~$ glxinfo 
name of display: :0
display: :0  screen: 0
direct rendering: Yes
server glx vendor string: SGI
server glx version string: 1.4
...
ubuntu@wine-games:~$ echo "export DISPLAY=:0" >> ~/.profile 
ubuntu@wine-games:~$ exit
$

Looks good, we are good to go! Note that we edited the ~/.profile file in order to set the $DISPLAY variable automatically whenever we connect to the container.

Installing Wine

We install Wine in the container according to the instructions at https://wiki.winehq.org/Ubuntu.

$ lxc exec wine-games -- sudo --login --user ubuntu
ubuntu@wine-games:~$ sudo dpkg --add-architecture i386 
ubuntu@wine-games:~$ wget https://dl.winehq.org/wine-builds/Release.key
--2017-05-01 21:30:14--  https://dl.winehq.org/wine-builds/Release.key
Resolving dl.winehq.org (dl.winehq.org)... 151.101.112.69
Connecting to dl.winehq.org (dl.winehq.org)|151.101.112.69|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 3122 (3.0K) [application/pgp-keys]
Saving to: ‘Release.key’

Release.key                100%[=====================================>]   3.05K  --.-KB/s    in 0s      

2017-05-01 21:30:15 (24.9 MB/s) - ‘Release.key’ saved [3122/3122]

ubuntu@wine-games:~$ sudo apt-key add Release.key
OK
ubuntu@wine-games:~$ sudo apt-add-repository https://dl.winehq.org/wine-builds/ubuntu/
ubuntu@wine-games:~$ sudo apt-get update
...
Reading package lists... Done
ubuntu@wine-games:~$ sudo apt-get install --install-recommends winehq-devel
...
Need to get 115 MB of archives.
After this operation, 715 MB of additional disk space will be used.
Do you want to continue? [Y/n] Y
...
ubuntu@wine-games:~$

715MB?!? Sure, bring it on. Whatever is installed in the container, stays in the container! 🙂
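As a quick sanity check that the installation worked, we can ask Wine for its version (the exact version you get from winehq-devel will vary),

ubuntu@wine-games:~$ wine --version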

Let’s run a game in the container

Here is a game that looks good for our test, Season Match 4. Let’s play it.

ubuntu@wine-games:~$ wget http://cdn.gametop.com/free-games-download/Season-Match4.exe
ubuntu@wine-games:~$ wine Season-Match4.exe 
...
ubuntu@wine-games:~$ cd .wine/drive_c/Program\ Files\ \(x86\)/GameTop.com/Season\ Match\ 4/
ubuntu@wine-games:~/.wine/drive_c/Program Files (x86)/GameTop.com/Season Match 4$ wine SeasonMatch4.exe

Here is the game, and it works. It runs full screen, and it is a bit awkward to navigate between windows. The animations, though, are smooth.

We did not set up sound in this post either, nor did we make nice shortcuts so that we can run these apps with a single click. That’s material for a future tutorial!


A closer look at the new ARM64 Scaleway servers and LXD

Update #1: I posted at the Scaleway Linux kernel discussion thread to add support for the Ubuntu Linux kernel and Add new bootscript with stock Ubuntu Linux kernel #349.

Scaleway has been offering ARM (armv7) cloud servers (baremetal) since 2015 and now they have ARM64 (armv8, from Cavium) cloud servers (through KVM, not baremetal).

But can you run LXD on them? Let’s see.

Launching a new server

We go through the management panel and select to create a new server. At the moment, only the Paris datacenter has availability of ARM64 servers and we select ARM64-2GB.

They use Cavium ThunderX hardware, and those boards have up to 48 cores. You can allocate either 2, 4, or 8 cores, for 2GB, 4GB, and 8GB RAM respectively. KVM is the virtualization platform.

There is an option of either Ubuntu 16.04 or Debian Jessie. We try Ubuntu.

It takes under a minute to provision and boot the server.

Connecting to the server

It runs Linux 4.9.23. Also, the disk is vda, specifically, /dev/vda. That is, there is no partitioning and the filesystem takes over the whole device.

Here is /proc/cpuinfo and uname -a. There are two cores (out of the 48), as provided by KVM. The BogoMIPS are really bogus on these platforms, so do not take them at face value.

Currently, Scaleway does not have its own mirror of the distribution packages but uses ports.ubuntu.com. It’s 16ms away (ping time).

Depending on where you are, the ping times for google.com and www.google.com tend to be different. google.com redirects to www.google.com, so it somewhat makes sense that google.com reacts faster. At other locations (different country), could be the other way round.

This is /var/log/auth.log, and already there are some hackers trying to brute-force SSH. They have been trying with username ubnt. Note to self: do not use ubnt as the username for the non-root account.

The default configuration for the SSH server on Scaleway is to allow password authentication. You need to change this at /etc/ssh/sshd_config to look like

# Change to no to disable tunnelled clear text passwords
PasswordAuthentication no

Originally, it was commented out, and had a default yes.
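If you prefer a one-liner over editing the file by hand, a sed sketch like the following should do it (double-check the file afterwards),

sed -i 's/^#\?PasswordAuthentication.*/PasswordAuthentication no/' /etc/ssh/sshd_config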

Finally, run

sudo systemctl reload sshd

This will not break your existing SSH session (even restart will not break your existing SSH session, how cool is that?). Now, you can create your non-root account. To get that user to sudo as root, you need to usermod -a -G sudo myusername.
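Putting the account creation together, run as root something like the following (myusername is a placeholder),

adduser myusername
usermod -a -G sudo myusername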

There is a recovery console, accessible through the Web management screen. For this to work, it says that "You must first login and set a password via SSH to use this serial console." In reality, the root account already has a password set, and this password is stored in /root/.pw. It is not known how strong this password is, therefore, when you boot a cloud server on Scaleway,

  1. Disable PasswordAuthentication for SSH as shown above and reload the sshd configuration. You are supposed to have already added your SSH public key in the Scaleway Web management screen BEFORE starting the cloud server.
  2. Change the root password so that it is not the one found in /root/.pw. Store that password somewhere safe, because it is needed if you want to connect through the recovery console.
  3. Create a non-root user that can sudo and uses PubkeyAuthentication, preferably with a username other than ubnt.

Setting up ZFS support

The Ubuntu Linux kernels at Scaleway do not have ZFS support, and you need to compile ZFS as a kernel module according to the instructions at https://github.com/scaleway/kernel-tools.

Actually, those instructions are apparently now obsolete with newer versions of the Linux kernel, and you need to compile both spl and zfs manually, and then install them.

Naturally, when you compile spl and zfs, you can create .deb packages that can be installed and removed in a nice and clean way. However, spl and zfs first create .rpm packages and then call alien to convert them to .deb packages. Here we hit some alien bug (no pun intended) that gives the error "zfs-0.6.5.9-1.aarch64.rpm is for architecture aarch64 ; the package cannot be built on this system", which is weird since we are only working on aarch64.

The running Linux kernel on Scaleway for these ARM64 SoCs has its important files (config, Module.symvers) at http://mirror.scaleway.com/kernel/aarch64/4.9.23-std-1/

Therefore, run as root the following:

# Determine versions
arch="$(uname -m)"
release="$(uname -r)"
upstream="${release%%-*}"
local="${release#*-}"

# Get kernel sources
mkdir -p /usr/src
wget -O "/usr/src/linux-${upstream}.tar.xz" "https://cdn.kernel.org/pub/linux/kernel/v4.x/linux-${upstream}.tar.xz"
tar xf "/usr/src/linux-${upstream}.tar.xz" -C /usr/src/
ln -fns "/usr/src/linux-${upstream}" /usr/src/linux
ln -fns "/usr/src/linux-${upstream}" "/lib/modules/${release}/build"

# Get the kernel's .config and Module.symvers files
wget -O "/usr/src/linux/.config" "http://mirror.scaleway.com/kernel/${arch}/${release}/config"
wget -O /usr/src/linux/Module.symvers "http://mirror.scaleway.com/kernel/${arch}/${release}/Module.symvers"

# Set the LOCALVERSION to the locally running local version (or edit the file manually)
printf 'CONFIG_LOCALVERSION="%s"\n' "${local:+-$local}" >> /usr/src/linux/.config

# Let's get ready to compile. The following are essential for the kernel module compilation.
apt install -y build-essential
apt install -y libssl-dev
make -C /usr/src/linux prepare modules_prepare

# Now, let's grab the latest spl and zfs (see http://zfsonlinux.org/).
cd /usr/src/
wget https://github.com/zfsonlinux/zfs/releases/download/zfs-0.6.5.9/spl-0.6.5.9.tar.gz
wget https://github.com/zfsonlinux/zfs/releases/download/zfs-0.6.5.9/zfs-0.6.5.9.tar.gz

# Install some dev packages that are needed for spl and zfs,
apt install -y uuid-dev
apt install -y dh-autoreconf
# Let's do spl first
tar xvfa spl-0.6.5.9.tar.gz
cd spl-0.6.5.9/
./autogen.sh
./configure      # Takes about 2 minutes
make             # Takes about 1:10 minutes
make install
cd ..

# Let's do zfs next
tar xvfa zfs-0.6.5.9.tar.gz
cd zfs-0.6.5.9/
./autogen.sh
./configure      # Takes about 6:10 minutes
make             # Takes about 13:20 minutes
make install

# Let's get ZFS loaded
depmod -a
ldconfig
modprobe zfs
zfs list
zpool list

And that’s it! The last two commands will show that there are no datasets or pools available (yet), which means that everything works.
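If you also want the zfs module to be loaded automatically on the next boot, one option (an optional step; LXD typically loads it on demand anyway) is to add it to /etc/modules,

echo zfs >> /etc/modules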

Setting up LXD

We are going to use a file (a ZFS loop device) for the storage. Let’s check what space we have left for this (out of the 50GB disk),

root@scw-ubuntu-arm64:~# df -h /
Filesystem      Size  Used Avail Use% Mounted on
/dev/vda         46G  2.0G   42G   5% /

Initially, only 800MB were used; now it is 2GB. Let’s allocate 30GB for LXD.

LXD is not preinstalled on the Scaleway image (other VPS providers have LXD already installed). Therefore,

apt install lxd

Then, we can run lxd init. There is a weird situation the first time you run lxd init: it takes quite some time before the first questions appear (choose storage backend, etc.). In fact, it takes 1:42 minutes before you are prompted for the first question. When you subsequently run lxd init, the first question appears at once. lxd init does quite some work the first time around, and I did not look into what it is.

root@scw-ubuntu-arm64:~# lxd init
Name of the storage backend to use (dir or zfs) [default=zfs]: 
Create a new ZFS pool (yes/no) [default=yes]? 
Name of the new ZFS pool [default=lxd]: 
Would you like to use an existing block device (yes/no) [default=no]? 
Size in GB of the new loop device (1GB minimum) [default=15]: 30
Would you like LXD to be available over the network (yes/no) [default=no]? 
Do you want to configure the LXD bridge (yes/no) [default=yes]? 
Warning: Stopping lxd.service, but it can still be activated by:
  lxd.socket

LXD has been successfully configured.
root@scw-ubuntu-arm64:~#

Now, let’s run lxc list. This will first create the client certificate. There is quite a bit of cryptography going on, and it takes a lot of time.

ubuntu@scw-ubuntu-arm64:~$ time lxc list
Generating a client certificate. This may take a minute...
If this is your first time using LXD, you should also run: sudo lxd init
To start your first container, try: lxc launch ubuntu:16.04

+------+-------+------+------+------+-----------+
| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
+------+-------+------+------+------+-----------+

real    5m25.717s
user    5m25.460s
sys    0m0.372s
ubuntu@scw-ubuntu-arm64:~$

It is weird and warrants closer examination. In any case,

ubuntu@scw-ubuntu-arm64:~$ cat /proc/sys/kernel/random/entropy_avail
2446
ubuntu@scw-ubuntu-arm64:~$

Creating containers

Let’s create a container. We are going to do each step at a time, in order to measure the time it takes to complete.

ubuntu@scw-ubuntu-arm64:~$ time lxc image copy ubuntu:x local:
Image copied successfully!         

real    1m5.151s
user    0m1.244s
sys    0m0.200s
ubuntu@scw-ubuntu-arm64:~$

Out of the 65 seconds, 25 seconds was the time to download the image and the rest (40 seconds) was for initialization before the prompt was returned.

Let’s see how long it takes to launch a container.

ubuntu@scw-ubuntu-arm64:~$ time lxc launch ubuntu:x c1
Creating c1
Starting c1
error: Error calling 'lxd forkstart c1 /var/lib/lxd/containers /var/log/lxd/c1/lxc.conf': err='exit status 1'
  lxc 20170428125239.730 ERROR lxc_apparmor - lsm/apparmor.c:apparmor_process_label_set:220 - If you really want to start this container, set
  lxc 20170428125239.730 ERROR lxc_apparmor - lsm/apparmor.c:apparmor_process_label_set:221 - lxc.aa_allow_incomplete = 1
  lxc 20170428125239.730 ERROR lxc_apparmor - lsm/apparmor.c:apparmor_process_label_set:222 - in your container configuration file
  lxc 20170428125239.730 ERROR lxc_sync - sync.c:__sync_wait:57 - An error occurred in another process (expected sequence number 5)
  lxc 20170428125239.730 ERROR lxc_start - start.c:__lxc_start:1346 - Failed to spawn container "c1".
  lxc 20170428125240.408 ERROR lxc_conf - conf.c:run_buffer:405 - Script exited with status 1.
  lxc 20170428125240.408 ERROR lxc_start - start.c:lxc_fini:546 - Failed to run lxc.hook.post-stop for container "c1".

Try `lxc info --show-log local:c1` for more info

real    0m21.347s
user    0m0.040s
sys    0m0.048s
ubuntu@scw-ubuntu-arm64:~$

What this means is that somehow the Scaleway Linux kernel does not have all the AppArmor (“aa”) features that LXD requires. If we want to continue, we must explicitly configure the container to accept this incomplete AppArmor support.

What features are missing?

ubuntu@scw-ubuntu-arm64:~$ lxc info --show-log local:c1
Name: c1
Remote: unix:/var/lib/lxd/unix.socket
Architecture: aarch64
Created: 2017/04/28 12:52 UTC
Status: Stopped
Type: persistent
Profiles: default

Log:

            lxc 20170428125239.730 WARN     lxc_apparmor - lsm/apparmor.c:apparmor_process_label_set:218 - Incomplete AppArmor support in your kernel
            lxc 20170428125239.730 ERROR    lxc_apparmor - lsm/apparmor.c:apparmor_process_label_set:220 - If you really want to start this container, set
            lxc 20170428125239.730 ERROR    lxc_apparmor - lsm/apparmor.c:apparmor_process_label_set:221 - lxc.aa_allow_incomplete = 1
            lxc 20170428125239.730 ERROR    lxc_apparmor - lsm/apparmor.c:apparmor_process_label_set:222 - in your container configuration file
            lxc 20170428125239.730 ERROR    lxc_sync - sync.c:__sync_wait:57 - An error occurred in another process (expected sequence number 5)
            lxc 20170428125239.730 ERROR    lxc_start - start.c:__lxc_start:1346 - Failed to spawn container "c1".
            lxc 20170428125240.408 ERROR    lxc_conf - conf.c:run_buffer:405 - Script exited with status 1.
            lxc 20170428125240.408 ERROR    lxc_start - start.c:lxc_fini:546 - Failed to run lxc.hook.post-stop for container "c1".
            lxc 20170428125240.409 WARN     lxc_commands - commands.c:lxc_cmd_rsp_recv:172 - Command get_cgroup failed to receive response: Connection reset by peer.
            lxc 20170428125240.409 WARN     lxc_commands - commands.c:lxc_cmd_rsp_recv:172 - Command get_cgroup failed to receive response: Connection reset by peer.

ubuntu@scw-ubuntu-arm64:~$

Two hints here, some issue with process_label_set, and get_cgroup.

Let’s allow this for now, and start the container,

ubuntu@scw-ubuntu-arm64:~$ lxc config set c1 raw.lxc 'lxc.aa_allow_incomplete=1'
ubuntu@scw-ubuntu-arm64:~$ time lxc start c1

real    0m0.577s
user    0m0.016s
sys    0m0.012s
ubuntu@scw-ubuntu-arm64:~$ lxc list
+------+---------+------+------+------------+-----------+
| NAME |  STATE  | IPV4 | IPV6 |    TYPE    | SNAPSHOTS |
+------+---------+------+------+------------+-----------+
| c1   | RUNNING |      |      | PERSISTENT | 0         |
+------+---------+------+------+------------+-----------+
ubuntu@scw-ubuntu-arm64:~$ lxc list
+------+---------+-----------------------+------+------------+-----------+
| NAME |  STATE  |         IPV4          | IPV6 |    TYPE    | SNAPSHOTS |
+------+---------+-----------------------+------+------------+-----------+
| c1   | RUNNING | 10.237.125.217 (eth0) |      | PERSISTENT | 0         |
+------+---------+-----------------------+------+------------+-----------+
ubuntu@scw-ubuntu-arm64:~$

Let’s run nginx in the container.

ubuntu@scw-ubuntu-arm64:~$ lxc exec c1 -- sudo --login --user ubuntu
To run a command as administrator (user "root"), use "sudo <command>".
See "man sudo_root" for details.

ubuntu@c1:~$ sudo apt update
Hit:1 http://ports.ubuntu.com/ubuntu-ports xenial InRelease
...
37 packages can be upgraded. Run 'apt list --upgradable' to see them.
ubuntu@c1:~$ sudo apt install nginx
...
ubuntu@c1:~$ exit
ubuntu@scw-ubuntu-arm64:~$ curl http://10.237.125.217/
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
...
ubuntu@scw-ubuntu-arm64:~$

That’s it! We are running LXD on Scaleway and their new ARM64 servers. The issues should be fixed in order to have a nicer user experience.


How to install neo4j in an LXD container on Ubuntu

Neo4j is a different type of database: a graph database. It is quite cool and worth spending the time to learn how it works.

The main benefit of a graph database is that the information is interconnected as a graph, which allows you to execute complex queries very quickly.

One of the sample databases in Neo4j is (a big part of) the content of IMDb.com (the movie database). Here is a description of some possible queries:

  • Find actors who worked with Gene Hackman, but not when he was also working with Robin Williams in the same movie.
  • Who are the five busiest actors?
  • Return the count of movies in which an actor and director have jointly worked

In this post

  1. we install Neo4j in an LXD container on Ubuntu (or any other GNU/Linux distribution that has installation packages)
  2. set it up so we can access Neo4j from our Ubuntu desktop browser
  3. start the cool online tutorial for Neo4j, which you can complete on your own
  4. remove the container (if you really wish!) in order to clean up the space

Creating an LXD container

See Trying out LXD containers on our Ubuntu in order to make the initial (one-time) configuration of LXD on your Ubuntu  desktop.

Then, let’s start with creating a container for neo4j:

$ lxc launch ubuntu:x neo4j
Creating neo4j
Starting neo4j
$ _

Here we launched a container named neo4j, that runs Ubuntu 16.04 (Xenial, hence ubuntu:x).

Let’s see the container details:

$ lxc list
+----------+---------+----------------------+------+------------+-----------+
| NAME     | STATE   | IPV4                 | IPV6 | TYPE       | SNAPSHOTS |
+----------+---------+----------------------+------+------------+-----------+
| neo4j    | RUNNING | 10.60.117.91 (eth0)  |      | PERSISTENT | 0         |
+----------+---------+----------------------+------+------------+-----------+
$ _

It takes a few seconds for a new container to launch. Here, the container is in the RUNNING state, and also has a private IP address. It’s good to go!

Connecting to the LXD container

Let’s get a shell in the new neo4j LXD container.

$ lxc exec neo4j -- sudo --login --user ubuntu
To run a command as administrator (user "root"), use "sudo <command>".
See "man sudo_root" for details.

ubuntu@neo4j:~$ sudo apt update
Hit:1 http://archive.ubuntu.com/ubuntu xenial InRelease
Get:2 http://security.ubuntu.com/ubuntu xenial-security InRelease [102 kB]
Get:3 http://archive.ubuntu.com/ubuntu xenial-updates InRelease [102 kB] 
Get:4 http://archive.ubuntu.com/ubuntu xenial-backports InRelease [102 kB] 
Get:5 http://archive.ubuntu.com/ubuntu xenial/main Sources [868 kB] 
Get:6 http://security.ubuntu.com/ubuntu xenial-security/main Sources [61.1 kB]
Get:7 http://security.ubuntu.com/ubuntu xenial-security/restricted Sources [2,288 B]
Get:8 http://archive.ubuntu.com/ubuntu xenial/restricted Sources [4,808 B] 
Get:9 http://archive.ubuntu.com/ubuntu xenial/universe Sources [7,728 kB] 
Get:10 http://security.ubuntu.com/ubuntu xenial-security/universe Sources [20.9 kB]
Get:11 http://security.ubuntu.com/ubuntu xenial-security/multiverse Sources [1,148 B]
Get:12 http://security.ubuntu.com/ubuntu xenial-security/main amd64 Packages [219 kB]
Get:13 http://security.ubuntu.com/ubuntu xenial-security/main Translation-en [92.0 kB]
Get:14 http://security.ubuntu.com/ubuntu xenial-security/universe amd64 Packages [79.1 kB]
Get:15 http://security.ubuntu.com/ubuntu xenial-security/universe Translation-en [43.9 kB]
Get:16 http://archive.ubuntu.com/ubuntu xenial/multiverse Sources [179 kB] 
Get:17 http://archive.ubuntu.com/ubuntu xenial-updates/main Sources [234 kB] 
Get:18 http://archive.ubuntu.com/ubuntu xenial-updates/restricted Sources [2,688 B] 
Get:19 http://archive.ubuntu.com/ubuntu xenial-updates/universe Sources [134 kB] 
Get:20 http://archive.ubuntu.com/ubuntu xenial-updates/multiverse Sources [4,556 B] 
Get:21 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 Packages [485 kB] 
Get:22 http://archive.ubuntu.com/ubuntu xenial-updates/main Translation-en [193 kB] 
Get:23 http://archive.ubuntu.com/ubuntu xenial-updates/universe amd64 Packages [411 kB] 
Get:24 http://archive.ubuntu.com/ubuntu xenial-updates/universe Translation-en [155 kB] 
Get:25 http://archive.ubuntu.com/ubuntu xenial-backports/main Sources [3,200 B] 
Get:26 http://archive.ubuntu.com/ubuntu xenial-backports/universe Sources [1,868 B] 
Get:27 http://archive.ubuntu.com/ubuntu xenial-backports/main amd64 Packages [4,672 B] 
Get:28 http://archive.ubuntu.com/ubuntu xenial-backports/main Translation-en [3,200 B] 
Get:29 http://archive.ubuntu.com/ubuntu xenial-backports/universe amd64 Packages [2,512 B] 
Get:30 http://archive.ubuntu.com/ubuntu xenial-backports/universe Translation-en [1,216 B] 
Fetched 11.2 MB in 8s (1,270 kB/s) 
Reading package lists... Done
Building dependency tree 
Reading state information... Done
2 packages can be upgraded. Run 'apt list --upgradable' to see them.
ubuntu@neo4j:~$ exit
logout
$ _

The command we used to get a shell is this: sudo --login --user ubuntu

We instructed LXD to execute (exec) in the neo4j container the command that appears after the -- separator.

The images for the LXD containers have both a root account and a non-root user account; in the Ubuntu images, the latter is called ubuntu. Both accounts are locked (no default password is set). lxc exec runs commands in the container as root, therefore the sudo --login --user ubuntu command obviously runs without asking for a password. This sudo command creates a login shell for the specified user, user ubuntu.

Once we are connected to the container as user ubuntu, we can run commands as root simply by prefixing them with sudo. Since user ubuntu is allowed passwordless sudo in these images, no password is asked. That is the reason why sudo apt update ran earlier without asking for a password.
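As a quick sanity check of the above, the following commands illustrate the behaviour; a minimal sketch, assuming the neo4j container from above is running.

# lxc exec runs its command as root inside the container:
lxc exec neo4j -- whoami
# Inspect the pre-created non-root account (expect membership in the sudo group):
lxc exec neo4j -- id ubuntu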

The Ubuntu LXD containers auto-update themselves by running unattended-upgrades, which means that we do not need to run sudo apt upgrade. We do run sudo apt update to get an updated list of available packages and avoid errors when installing software whose entries in the package index changed recently.

After we updated the package list, we exit the container with exit.

Installing neo4j

This is the download page for Neo4j, https://neo4j.com/download/ and we click to get the community edition.

We download Neo4j (the Linux tar.gz archive) on our desktop. When we tried this, version 3.1.1 was available.

We downloaded the file and it can be found in ~/Downloads/ (or the localized name). Let’s copy it to the container,

$ cd ~/Downloads/
$ ls -l neo4j-community-3.1.1-unix.tar.gz 
-rw-rw-r-- 1 user user 77401077 Mar 1 16:04 neo4j-community-3.1.1-unix.tar.gz
$ lxc file push neo4j-community-3.1.1-unix.tar.gz neo4j/home/ubuntu/
$ _

The tarball is about 80MB and we use lxc file push to copy it inside the neo4j container, in the directory /home/ubuntu/. Note that neo4j/home/ubuntu/ ends with a / character to specify that it is a directory. If you omit this, you get an error.
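For reference, lxc file can also take an explicit destination filename, and it works in the other direction too; a small sketch with the same tarball:

# Push with an explicit destination filename (equivalent to the trailing-slash form):
lxc file push neo4j-community-3.1.1-unix.tar.gz neo4j/home/ubuntu/neo4j-community-3.1.1-unix.tar.gz
# Pull a file from the container back into the current directory on the host:
lxc file pull neo4j/home/ubuntu/neo4j-community-3.1.1-unix.tar.gz .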

Let’s deal with the tarball inside the container,

$ lxc exec neo4j -- sudo --login --user ubuntu
ubuntu@neo4j:~$ ls -l
total 151401
-rw-rw-r-- 1 ubuntu ubuntu 77401077 Mar  1 12:00 neo4j-community-3.1.1-unix.tar.gz
ubuntu@neo4j:~$ tar xfvz neo4j-community-3.1.1-unix.tar.gz 
neo4j-community-3.1.1/
neo4j-community-3.1.1/bin/
neo4j-community-3.1.1/data/
neo4j-community-3.1.1/data/databases/

[...]
neo4j-community-3.1.1/lib/mimepull-1.9.3.jar
ubuntu@neo4j:~$

The files are now in the container, let’s run this thing!

Running Neo4j

The commands to manage Neo4j are in the bin/ subdirectory,

ubuntu@neo4j:~$ ls -l neo4j-community-3.1.1/bin/
total 24
-rwxr-xr-x 1 ubuntu ubuntu 1624 Jan  5 12:03 cypher-shell
-rwxr-xr-x 1 ubuntu ubuntu 7454 Jan 17 17:52 neo4j
-rwxr-xr-x 1 ubuntu ubuntu 1180 Jan 17 17:52 neo4j-admin
-rwxr-xr-x 1 ubuntu ubuntu 1159 Jan 17 17:52 neo4j-import
-rwxr-xr-x 1 ubuntu ubuntu 5120 Jan 17 17:52 neo4j-shared.sh
-rwxr-xr-x 1 ubuntu ubuntu 1093 Jan 17 17:52 neo4j-shell
drwxr-xr-x 2 ubuntu ubuntu    4 Mar  1 12:02 tools
ubuntu@neo4j:~$

According to Running Neo4j, we need to run “neo4j start“. Let’s do it.

ubuntu@neo4j:~$ neo4j-community-3.1.1/bin/neo4j start
ERROR: Unable to find Java executable.
* Please use Oracle(R) Java(TM) 8, OpenJDK(TM) or IBM J9 to run Neo4j Server.
* Please see http://docs.neo4j.org/ for Neo4j Server installation instructions.
ubuntu@neo4j:~$

We need Java, and the documentation actually said so. We just need the headless JDK, since we are accessing the UI from our browser.

ubuntu@neo4j:~$ sudo apt install default-jdk-headless
Reading package lists... Done
Building dependency tree 
Reading state information... Done
The following additional packages will be installed:
 ca-certificates-java default-jre-headless fontconfig-config fonts-dejavu-core java-common libavahi-client3 libavahi-common-data libavahi-common3 libcups2 libfontconfig1 libfreetype6 libjpeg-turbo8 libjpeg8
 liblcms2-2 libnspr4 libnss3 libnss3-nssdb libpcsclite1 libxi6 libxrender1 libxtst6 openjdk-8-jdk-headless openjdk-8-jre-headless x11-common
[...]
Setting up default-jdk-headless (2:1.8-56ubuntu2) ...
Processing triggers for libc-bin (2.23-0ubuntu5) ...
Processing triggers for systemd (229-4ubuntu16) ...
Processing triggers for ureadahead (0.100.0-19) ...
ubuntu@neo4j:~$

Now, we are ready to start Neo4j!

ubuntu@neo4j:~$ neo4j-community-3.1.1/bin/neo4j start
Starting Neo4j.
Started neo4j (pid 4123). By default, it is available at http://localhost:7474/
There may be a short delay until the server is ready.
See /home/ubuntu/neo4j-community-3.1.1/logs/neo4j.log for current status.
ubuntu@neo4j:~$ tail /home/ubuntu/neo4j-community-3.1.1/logs/neo4j.log
nohup: ignoring input
2017-03-01 12:22:41.060+0000 INFO  No SSL certificate found, generating a self-signed certificate..
2017-03-01 12:22:41.517+0000 INFO  Starting...
2017-03-01 12:22:41.914+0000 INFO  Bolt enabled on localhost:7687.
2017-03-01 12:22:43.622+0000 INFO  Started.
2017-03-01 12:22:44.375+0000 INFO  Remote interface available at http://localhost:7474/
ubuntu@neo4j:~$

So, Neo4j is running just fine, but it is bound only to localhost, which makes it inaccessible to our desktop browser. Let’s verify,

ubuntu@neo4j:~$ sudo lsof -i
COMMAND   PID   USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
dhclient  225   root    6u  IPv4 110811      0t0  UDP *:bootpc 
sshd      328   root    3u  IPv4 111079      0t0  TCP *:ssh (LISTEN)
sshd      328   root    4u  IPv6 111081      0t0  TCP *:ssh (LISTEN)
java     4123 ubuntu  210u  IPv6 121981      0t0  TCP localhost:7687 (LISTEN)
java     4123 ubuntu  212u  IPv6 121991      0t0  TCP localhost:7474 (LISTEN)
java     4123 ubuntu  220u  IPv6 121072      0t0  TCP localhost:7473 (LISTEN)
ubuntu@neo4j:~$

What we actually need is for Neo4j to bind to all network interfaces so that it becomes accessible to our desktop browser. Inside the container, “all network interfaces” really means the only other interface, eth0, which is on the private network that LXD created for us:

ubuntu@neo4j:~$ ifconfig eth0
eth0      Link encap:Ethernet  HWaddr 00:16:3e:48:b7:85  
          inet addr:10.60.117.21  Bcast:10.60.117.255  Mask:255.255.255.0
          inet6 addr: fe80::216:3eff:fe48:b785/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:30029 errors:0 dropped:0 overruns:0 frame:0
          TX packets:17553 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:51824701 (51.8 MB)  TX bytes:1193503 (1.1 MB)
ubuntu@neo4j:~$

Where do we look in the configuration files of Neo4j to get it to bind to all network interfaces?

We look at the Neo4j documentation on Configuring the connectors, and we see that we need to edit the configuration file neo4j-community-3.1.1/conf/neo4j.conf

We can see that there is an overall configuration parameter for the connectors, default_listen_address, and we can set it to 0.0.0.0. In networking terms, 0.0.0.0 means that we want the process to bind to all network interfaces. Remember that in our case the LXD container resides on a private network, so this is OK. We change

# With default configuration Neo4j only accepts local connections.
# To accept non-local connections, uncomment this line:
#dbms.connectors.default_listen_address=0.0.0.0

to

# With default configuration Neo4j only accepts local connections.
# To accept non-local connections, uncomment this line:
dbms.connectors.default_listen_address=0.0.0.0
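If you prefer not to open an editor, the same edit can be done non-interactively; a sketch with sed, assuming the commented-out default line is still present (run inside the container, as user ubuntu):

# Uncomment dbms.connectors.default_listen_address=0.0.0.0 in place:
sed -i 's/^#dbms.connectors.default_listen_address=0.0.0.0/dbms.connectors.default_listen_address=0.0.0.0/' \
    ~/neo4j-community-3.1.1/conf/neo4j.conf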

Let’s restart Neo4j and check whether it looks OK:

ubuntu@neo4j:~$ neo4j-community-3.1.1/bin/neo4j restart
Stopping Neo4j.. stopped
Starting Neo4j.
Started neo4j (pid 4711). By default, it is available at http://localhost:7474/
There may be a short delay until the server is ready.
See /home/ubuntu/neo4j-community-3.1.1/logs/neo4j.log for current status.
ubuntu@neo4j:~$ tail /home/ubuntu/neo4j-community-3.1.1/logs/neo4j.log
2017-03-01 12:57:52.839+0000 INFO  Started.
2017-03-01 12:57:53.624+0000 INFO  Remote interface available at http://localhost:7474/
2017-03-01 13:01:58.566+0000 INFO  Neo4j Server shutdown initiated by request
2017-03-01 13:01:58.575+0000 INFO  Stopping...
2017-03-01 13:01:58.620+0000 INFO  Stopped.
nohup: ignoring input
2017-03-01 13:02:00.088+0000 INFO  Starting...
2017-03-01 13:02:01.310+0000 INFO  Bolt enabled on 0.0.0.0:7687.
2017-03-01 13:02:02.928+0000 INFO  Started.
2017-03-01 13:02:03.599+0000 INFO  Remote interface available at http://localhost:7474/
ubuntu@neo4j:~$ sudo lsof -n -i
COMMAND   PID   USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
dhclient  225   root    6u  IPv4 110811      0t0  UDP *:bootpc 
sshd      328   root    3u  IPv4 111079      0t0  TCP *:ssh (LISTEN)
sshd      328   root    4u  IPv6 111081      0t0  TCP *:ssh (LISTEN)
java     4711 ubuntu  210u  IPv6 153406      0t0  TCP *:7687 (LISTEN)
java     4711 ubuntu  212u  IPv6 153415      0t0  TCP *:7474 (LISTEN)
java     4711 ubuntu  220u  IPv6 153419      0t0  TCP *:7473 (LISTEN)
ubuntu@neo4j:~$

The log messages still say that Neo4j is accessible at http://localhost:7474/, which is factually correct. However, lsof shows us that it is bound to all network interfaces (the * means all).
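Before switching to the browser, we can quickly check from the LXD host that the port answers on the container’s private IP address; a sketch, using the address of our container:

# The HTTP interface should now respond on the container's private IP:
curl -I http://10.60.117.21:7474/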

Loading up Neo4j in the browser

We know already that in our case, the private IP address of the neo4j LXD container is 10.60.117.21. Let’s visit http://10.60.117.21:7474/ on our desktop Web browser!

It works! It asks us to log in using the default username neo4j with the default password neo4j. Then, it will ask us to change the password to something else and we are presented with the initial page of Neo4j,

The $ ▊ prompt is there for you to type instructions. According to the online tutorial at https://neo4j.com/graphacademy/online-training/introduction-graph-databases/ you can start the tutorial by typing :play movie graph and pressing the Run button. Therefore, load the tutorial in one browser tab and, in another tab, run the commands on the Neo4j server in the LXD container!

Once you are done

Once you have completed the tutorial, you can keep the container in order to try out more tutorials and learn more about neo4j.

However, if you want to remove this LXD container, it can be done by running:

$ lxc stop neo4j
$ lxc delete neo4j
$ lxc list
+----------+---------+----------------------+------+------------+-----------+
|   NAME   |  STATE  |         IPV4         | IPV6 |    TYPE    | SNAPSHOTS |
+----------+---------+----------------------+------+------------+-----------+
+----------+---------+----------------------+------+------------+-----------+
$ _

That’s it. The container is gone and LXD is ready for you to follow more LXD tutorials and create more containers!

post image

How to set up multiple secure (SSL/TLS, Qualys SSL Labs A+) websites using LXD containers

In previous posts we saw how to set up LXD on a DigitalOcean VPS, how to set up LXD on a Scaleway VPS, and what the lifecycle of an LXD container looks like.

In this post, we are going to

  1. Create multiple websites, each in a separate LXD container
  2. Install HAProxy as a TLS Termination Proxy, in an LXD container
  3. Configure HAProxy so that each website is only accessible through TLS
  4. Perform the SSL Server Test so that our websites really get the A+!

In this post, we are not going to install WordPress (or other CMS) on the websites. We keep this post simple as that is material for our next post.

The requirements are a VPS running Ubuntu 16.04 and a domain name whose DNS records we can edit.

Set up a VPS

We are using DigitalOcean in this example.

(screenshot: creating the droplet with Ubuntu 16.04.1)

Ubuntu 16.04.1 LTS was released a few days ago and DigitalOcean changed the Ubuntu default to 16.04.1. This is nice.

We are trying out the smallest droplet in order to figure out how many websites we can squeeze into containers. That is, 512MB RAM, a single virtual CPU core, and only 20GB of disk space!

In this example we are not using the new DigitalOcean block storage as at the moment it is available in only two datacentres.

Let’s click on the Create droplet button and the VPS is created!

Initial configuration

We are using DigitalOcean in this HowTo, and we have covered the initial configuration in this previous post.

https://blog.simos.info/trying-out-lxd-containers-on-ubuntu-on-digitalocean/

Go through the post and perform the tasks described in section «Set up LXD on DigitalOcean».

Creating the containers

We create three containers for three websites, plus one container for HAProxy.

ubuntu@ubuntu-512mb-ams3-01:~$ lxc init ubuntu:x web1
Creating web1
Retrieving image: 100%
ubuntu@ubuntu-512mb-ams3-01:~$ time lxc init ubuntu:x web2
Creating web2

real    0m6.620s
user    0m0.016s
sys    0m0.004s
ubuntu@ubuntu-512mb-ams3-01:~$ time lxc init ubuntu:x web3
Creating web3

real    1m15.723s
user    0m0.012s
sys    0m0.020s
ubuntu@ubuntu-512mb-ams3-01:~$ time lxc init ubuntu:x haproxy
Creating haproxy

real    0m48.747s
user    0m0.012s
sys    0m0.012s
ubuntu@ubuntu-512mb-ams3-01:~$

Normally it takes a few seconds for a new container to initialize. Remember that we are squeezing here, it’s a 512MB VPS, and the ZFS pool is stored on a file (not a block device)! We are looking into the kernel messages of the VPS for lines similar to «Out of memory: Kill process 3829 (unsquashfs) score 524 or sacrifice child», which indicate that we reached the memory limit. While preparing this blog post, there were a couple of Out of memory kills, so I made sure that nothing critical was dying. If this is too much for you, you can select a 1GB RAM (or more) VPS and start over.

Let’s start the containers up!

ubuntu@ubuntu-512mb-ams3-01:~$ lxc start web1 web2 web3 haproxy
ubuntu@ubuntu-512mb-ams3-01:~$ lxc list
+---------+---------+-----------------------+------+------------+-----------+
|  NAME   |  STATE  |         IPV4          | IPV6 |    TYPE    | SNAPSHOTS |
+---------+---------+-----------------------+------+------------+-----------+
| haproxy | RUNNING | 10.234.150.39 (eth0)  |      | PERSISTENT | 0         |
+---------+---------+-----------------------+------+------------+-----------+
| web1    | RUNNING | 10.234.150.169 (eth0) |      | PERSISTENT | 0         |
+---------+---------+-----------------------+------+------------+-----------+
| web2    | RUNNING | 10.234.150.119 (eth0) |      | PERSISTENT | 0         |
+---------+---------+-----------------------+------+------------+-----------+
| web3    | RUNNING | 10.234.150.51 (eth0)  |      | PERSISTENT | 0         |
+---------+---------+-----------------------+------+------------+-----------+
ubuntu@ubuntu-512mb-ams3-01:~$

You may need to run lxc list a few times until you make sure that all containers got an IP address. That means that they all completed their startup.
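If you do not want to re-run the command by hand, something like the following refreshes the listing every couple of seconds until all containers show an IPv4 address (stop it with Ctrl-C):

# Refresh the container listing every 2 seconds:
watch -n 2 lxc list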

DNS configuration

The public IP address of this specific VPS is 188.166.10.229. For this test, I am using the domain ubuntugreece.xyz as follows:

  1. Container web1: ubuntugreece.xyz and www.ubuntugreece.xyz have IP 188.166.10.229
  2. Container web2: web2.ubuntugreece.xyz has IP 188.166.10.229
  3. Container web3: web3.ubuntugreece.xyz has IP 188.166.10.229

Here is how it looks when configured on a DNS management console,

(screenshot: the DNS records in the Namecheap management console)

From here on, it is a waiting game until these DNS records propagate to the rest of the Internet. We need to wait until those hostnames resolve to their IP address.

ubuntu@ubuntu-512mb-ams3-01:~$ host ubuntugreece.xyz
ubuntugreece.xyz has address 188.166.10.229
ubuntu@ubuntu-512mb-ams3-01:~$ host web2.ubuntugreece.xyz
Host web2.ubuntugreece.xyz not found: 3(NXDOMAIN)
ubuntu@ubuntu-512mb-ams3-01:~$ host web3.ubuntugreece.xyz
web3.ubuntugreece.xyz has address 188.166.10.229
ubuntu@ubuntu-512mb-ams3-01:~$

These are the results after ten minutes. ubuntugreece.xyz and web3.ubuntugreece.xyz are resolving fine, while web2.ubuntugreece.xyz needs a bit more time.

We can continue! (and ignore for now web2)

Web server configuration

Let’s see the configuration for web1. You must repeat the following for web2 and web3 (a loop that automates this is sketched at the end of this section).

We install the nginx web server,

ubuntu@ubuntu-512mb-ams3-01:~$ lxc exec web1 -- /bin/bash
root@web1:~# apt update
Get:1 http://security.ubuntu.com/ubuntu xenial-security InRelease [94.5 kB]
...
3 packages can be upgraded. Run 'apt list --upgradable' to see them.
root@web1:~# apt upgrade
Reading package lists... Done
...
Processing triggers for initramfs-tools (0.122ubuntu8.1) ...
root@web1:~# apt install nginx
Reading package lists... Done
...
Processing triggers for ufw (0.35-0ubuntu2) ...
root@web1:~#

nginx needs to be configured so that it understands the domain name for web1. Here is the diff,

diff --git a/etc/nginx/sites-available/default b/etc/nginx/sites-available/default
index a761605..b2cea8f 100644
--- a/etc/nginx/sites-available/default
+++ b/etc/nginx/sites-available/default
@@ -38,7 +38,7 @@ server {
        # Add index.php to the list if you are using PHP
        index index.html index.htm index.nginx-debian.html;
 
-       server_name _;
+       server_name ubuntugreece.xyz www.ubuntugreece.xyz;
 
        location / {
                # First attempt to serve request as file, then

and finally we restart nginx and exit the web1 container,

root@web1:/etc/nginx/sites-enabled# systemctl restart nginx
root@web1:/etc/nginx/sites-enabled# exit
exit
ubuntu@ubuntu-512mb-ams3-01:~$
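As promised above, instead of repeating those steps by hand for web2 and web3, a small loop can do the installation and sed can set the per-site server_name. This is only a sketch, using the hostnames we configured in DNS above:

# Install nginx in the remaining containers:
for c in web2 web3; do
    lxc exec "$c" -- apt update
    lxc exec "$c" -- apt install -y nginx
done
# Set the server_name of each site (the default line is 'server_name _;'):
lxc exec web2 -- sed -i 's/server_name _;/server_name web2.ubuntugreece.xyz;/' /etc/nginx/sites-available/default
lxc exec web3 -- sed -i 's/server_name _;/server_name web3.ubuntugreece.xyz;/' /etc/nginx/sites-available/default
# Restart nginx so the change takes effect:
lxc exec web2 -- systemctl restart nginx
lxc exec web3 -- systemctl restart nginx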

Forwarding connections to the HAProxy container

We are about to set up the HAProxy container. Let’s add iptables rules to forward connections to ports 80 and 443 on the VPS to the HAProxy container.

ubuntu@ubuntu-512mb-ams3-01:~$ ifconfig eth0
eth0      Link encap:Ethernet  HWaddr 04:01:36:50:00:01  
          inet addr:188.166.10.229  Bcast:188.166.63.255  Mask:255.255.192.0
          inet6 addr: fe80::601:36ff:fe50:1/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:40513 errors:0 dropped:0 overruns:0 frame:0
          TX packets:26362 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:360767509 (360.7 MB)  TX bytes:3863846 (3.8 MB)

ubuntu@ubuntu-512mb-ams3-01:~$ lxc list
+---------+---------+-----------------------+------+------------+-----------+
|  NAME   |  STATE  |         IPV4          | IPV6 |    TYPE    | SNAPSHOTS |
+---------+---------+-----------------------+------+------------+-----------+
| haproxy | RUNNING | 10.234.150.39 (eth0)  |      | PERSISTENT | 0         |
+---------+---------+-----------------------+------+------------+-----------+
| web1    | RUNNING | 10.234.150.169 (eth0) |      | PERSISTENT | 0         |
+---------+---------+-----------------------+------+------------+-----------+
| web2    | RUNNING | 10.234.150.119 (eth0) |      | PERSISTENT | 0         |
+---------+---------+-----------------------+------+------------+-----------+
| web3    | RUNNING | 10.234.150.51 (eth0)  |      | PERSISTENT | 0         |
+---------+---------+-----------------------+------+------------+-----------+
ubuntu@ubuntu-512mb-ams3-01:~$ sudo iptables -t nat -I PREROUTING -i eth0 -p TCP -d 188.166.10.229/32 --dport 80 -j DNAT --to-destination 10.234.150.39:80
[sudo] password for ubuntu: 
ubuntu@ubuntu-512mb-ams3-01:~$ sudo iptables -t nat -I PREROUTING -i eth0 -p TCP -d 188.166.10.229/32 --dport 443 -j DNAT --to-destination 10.234.150.39:443
ubuntu@ubuntu-512mb-ams3-01:~$

If you want to make those changes permanent, see Saving Iptables Firewall Rules Permanently (the part about the package iptables-persistent).
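For reference, with the iptables-persistent package the rules we just added can be saved so that they survive a reboot; a minimal sketch:

# Save the current iptables rules (IPv4 and IPv6) so that they are restored at boot:
sudo apt install iptables-persistent
sudo netfilter-persistent save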

HAProxy initial configuration

Let’s see how to configure HAProxy in container haproxy. We enter the container, update the software and install the haproxy package.

ubuntu@ubuntu-512mb-ams3-01:~$ lxc exec haproxy -- /bin/bash
root@haproxy:~# apt update
Hit:1 http://archive.ubuntu.com/ubuntu xenial InRelease
...
3 packages can be upgraded. Run 'apt list --upgradable' to see them.
root@haproxy:~# apt upgrade
Reading package lists... Done
...
Processing triggers for initramfs-tools (0.122ubuntu8.1) ...
root@haproxy:~# apt install haproxy
Reading package lists... Done
...
Processing triggers for ureadahead (0.100.0-19) ...
root@haproxy:~#

We add the following configuration to /etc/haproxy/haproxy.cfg. Initially, we do not have any certificates for TLS, but we need the Web servers to work over plain HTTP so that Let’s Encrypt can verify that we own the websites. Therefore, here is the complete configuration, with two lines commented out (they start with ###) so that plain HTTP can work. As soon as we deal with Let’s Encrypt, we go full TLS (by uncommenting the two lines that start with ###) and never look back. We mention later in the post when to uncomment them.

diff --git a/etc/haproxy/haproxy.cfg b/etc/haproxy/haproxy.cfg
index 86da67d..f6f2577 100644
--- a/etc/haproxy/haproxy.cfg
+++ b/etc/haproxy/haproxy.cfg
@@ -18,11 +18,17 @@ global
     ssl-default-bind-ciphers ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:ECDH+3DES:DH+3DES:RSA+AESGCM:RSA+AES:RSA+3DES:!aNULL:!MD5:!DSS
     ssl-default-bind-options no-sslv3
 
+        # Minimum DH ephemeral key size. Otherwise, this size would drop to 1024.
+        # @link: https://cbonte.github.io/haproxy-dconv/configuration-1.5.html#3.2-tune.ssl.default-dh-param
+        tune.ssl.default-dh-param 2048
+
 defaults
     log    global
     mode    http
     option    httplog
     option    dontlognull
+        option  forwardfor
+        option  http-server-close
         timeout connect 5000
         timeout client  50000
         timeout server  50000
@@ -33,3 +39,56 @@ defaults
     errorfile 502 /etc/haproxy/errors/502.http
     errorfile 503 /etc/haproxy/errors/503.http
     errorfile 504 /etc/haproxy/errors/504.http
+
+# Configuration of the frontend (HAProxy as a TLS Termination Proxy)
+frontend www_frontend
+    # We bind on port 80 (http) but (see below) get HAProxy to force-switch to HTTPS.
+    bind *:80
+    # We bind on port 443 (https) and specify a directory with the certificates.
+####    bind *:443 ssl crt /etc/haproxy/certs/
+    # We get HAProxy to force-switch to HTTPS, if the connection was just HTTP.
+####    redirect scheme https if !{ ssl_fc }
+    # TLS terminates at HAProxy, the container runs in plain HTTP. Here, HAProxy informs nginx
+    # that there was a TLS Termination Proxy. Required for WordPress and other CMS.
+    reqadd X-Forwarded-Proto:\ https
+
+    # Distinguish between secure and insecure requests (used in the next two lines)
+    acl secure dst_port eq 443
+
+    # Mark all cookies as secure if sent over SSL
+    rsprep ^Set-Cookie:\ (.*) Set-Cookie:\ \1;\ Secure if secure
+
+    # Add the HSTS header with a 1 year max-age
+    rspadd Strict-Transport-Security:\ max-age=31536000 if secure
+
+    # Configuration for each virtual host (uses Server Name Indication, SNI)
+    acl host_ubuntugreece_xyz hdr(host) -i ubuntugreece.xyz www.ubuntugreece.xyz
+    acl host_web2_ubuntugreece_xyz hdr(host) -i web2.ubuntugreece.xyz
+    acl host_web3_ubuntugreece_xyz hdr(host) -i web3.ubuntugreece.xyz
+
+    # Directing the connection to the correct LXD container
+    use_backend web1_cluster if host_ubuntugreece_xyz
+    use_backend web2_cluster if host_web2_ubuntugreece_xyz
+    use_backend web3_cluster if host_web3_ubuntugreece_xyz
+
+# Configuration of the backend (HAProxy as a TLS Termination Proxy)
+backend web1_cluster
+    balance leastconn
+    # We set the X-Client-IP HTTP header. This is useful if we want the web server to know the real client IP.
+    http-request set-header X-Client-IP %[src]
+    # This backend, named here "web1", directs to container "web1.lxd" (hostname).
+    server web1 web1.lxd:80 check
+
+backend web2_cluster
+    balance leastconn
+    # We set the X-Client-IP HTTP header. This is useful if we want the web server to know the real client IP.
+    http-request set-header X-Client-IP %[src]
+    # This backend, named here "web2", directs to container "web2.lxd" (hostname).
+    server web2 web2.lxd:80 check
+
+backend web3_cluster
+    balance leastconn
+    # We set the X-Client-IP HTTP header. This is useful if we want the web server to know the real client IP.
+    http-request set-header X-Client-IP %[src]
+    # This backend, named here "web3", directs to container "web3.lxd" (hostname).
+    server web3 web3.lxd:80 check

Let’s restart HAProxy. If you get any errors, run systemctl status haproxy and try to figure out what went wrong.

root@haproxy:~# systemctl restart haproxy
root@haproxy:~# exit
ubuntu@ubuntu-512mb-ams3-01:~$
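If the restart fails, apart from systemctl status haproxy, the configuration can also be checked for syntax errors inside the haproxy container; a quick sketch:

# Parse the configuration and report errors without starting the service:
haproxy -c -f /etc/haproxy/haproxy.cfg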

Does it work? Let’s visit the website,

(screenshot: the default nginx page served at http://ubuntugreece.xyz/)

It is working! Let’s Encrypt will be able to access and verify that we own the domain in the next step.

Get certificates from Let’s Encrypt

We exit out to the VPS and install letsencrypt.

ubuntu@ubuntu-512mb-ams3-01:~$ sudo apt install letsencrypt
[sudo] password for ubuntu: 
Reading package lists... Done
...
Setting up python-pyicu (1.9.2-2build1) ...
ubuntu@ubuntu-512mb-ams3-01:~$

We run letsencrypt three times, once for each website. Update: it is also possible to simplify the following by using a single certificate that covers multiple domains (Subject Alternative Names, SAN). Thanks to @jack who mentioned this in the comments.

ubuntu@ubuntu-512mb-ams3-01:~$ sudo letsencrypt certonly --authenticator webroot --webroot-path=/var/lib/lxd/containers/web1/rootfs/var/www/html -d ubuntugreece.xyz -d www.ubuntugreece.xyz
... they ask for a contact e-mail address and whether we accept the Terms of Service...

IMPORTANT NOTES:
 - If you lose your account credentials, you can recover through
   e-mails sent to xxxxx@gmail.com.
 - Congratulations! Your certificate and chain have been saved at
   /etc/letsencrypt/live/ubuntugreece.xyz/fullchain.pem. Your cert
   will expire on 2016-10-21. To obtain a new version of the
   certificate in the future, simply run Let's Encrypt again.
 - Your account credentials have been saved in your Let's Encrypt
   configuration directory at /etc/letsencrypt. You should make a
   secure backup of this folder now. This configuration directory will
   also contain certificates and private keys obtained by Let's
   Encrypt so making regular backups of this folder is ideal.
 - If you like Let's Encrypt, please consider supporting our work by:

   Donating to ISRG / Let's Encrypt:   https://letsencrypt.org/donate
   Donating to EFF:                    https://eff.org/donate-le

ubuntu@ubuntu-512mb-ams3-01:~$

For completeness, here are the command lines for the other two websites,

ubuntu@ubuntu-512mb-ams3-01:~$ sudo letsencrypt certonly --authenticator webroot --webroot-path=/var/lib/lxd/containers/web2/rootfs/var/www/html -d web2.ubuntugreece.xyz

IMPORTANT NOTES:
 - Congratulations! Your certificate and chain have been saved at
   /etc/letsencrypt/live/web2.ubuntugreece.xyz/fullchain.pem. Your
   cert will expire on 2016-10-21. To obtain a new version of the
   certificate in the future, simply run Let's Encrypt again.
 - If you like Let's Encrypt, please consider supporting our work by:

   Donating to ISRG / Let's Encrypt:   https://letsencrypt.org/donate
   Donating to EFF:                    https://eff.org/donate-le

ubuntu@ubuntu-512mb-ams3-01:~$ time sudo letsencrypt certonly --authenticator webroot --webroot-path=/var/lib/lxd/containers/web3/rootfs/var/www/html -d web3.ubuntugreece.xyz

IMPORTANT NOTES:
 - Congratulations! Your certificate and chain have been saved at
   /etc/letsencrypt/live/web3.ubuntugreece.xyz/fullchain.pem. Your
   cert will expire on 2016-10-21. To obtain a new version of the
   certificate in the future, simply run Let's Encrypt again.
 - If you like Let's Encrypt, please consider supporting our work by:

   Donating to ISRG / Let's Encrypt:   https://letsencrypt.org/donate
   Donating to EFF:                    https://eff.org/donate-le


real    0m18.458s
user    0m0.852s
sys    0m0.172s
ubuntu@ubuntu-512mb-ams3-01:~$

Yeah, it takes only around twenty seconds to get your Let’s Encrypt certificate!

We got the certificates, now we need to prepare them so that HAProxy (our TLS Termination Proxy) can make use of them. We just need to join together the certificate chain and the private key for each certificate, and place them in the haproxy container at the appropriate directory.

ubuntu@ubuntu-512mb-ams3-01:~$ sudo mkdir /var/lib/lxd/containers/haproxy/rootfs/etc/haproxy/certs/
ubuntu@ubuntu-512mb-ams3-01:~$ DOMAIN='ubuntugreece.xyz' sudo -E bash -c 'cat /etc/letsencrypt/live/$DOMAIN/fullchain.pem /etc/letsencrypt/live/$DOMAIN/privkey.pem > /var/lib/lxd/containers/haproxy/rootfs/etc/haproxy/certs/$DOMAIN.pem'
ubuntu@ubuntu-512mb-ams3-01:~$ DOMAIN='web2.ubuntugreece.xyz' sudo -E bash -c 'cat /etc/letsencrypt/live/$DOMAIN/fullchain.pem /etc/letsencrypt/live/$DOMAIN/privkey.pem > /var/lib/lxd/containers/haproxy/rootfs/etc/haproxy/certs/$DOMAIN.pem'
ubuntu@ubuntu-512mb-ams3-01:~$ DOMAIN='web3.ubuntugreece.xyz' sudo -E bash -c 'cat /etc/letsencrypt/live/$DOMAIN/fullchain.pem /etc/letsencrypt/live/$DOMAIN/privkey.pem > /var/lib/lxd/containers/haproxy/rootfs/etc/haproxy/certs/$DOMAIN.pem'
ubuntu@ubuntu-512mb-ams3-01:~$
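The certificates above expire after about three months. When the time comes, they can be renewed and the joined files recreated; a sketch with the same client and paths, shown for the first domain only:

# Renew any certificates that are close to expiry:
sudo letsencrypt renew
# Re-create the joined certificate+key file that HAProxy reads:
DOMAIN='ubuntugreece.xyz' sudo -E bash -c 'cat /etc/letsencrypt/live/$DOMAIN/fullchain.pem /etc/letsencrypt/live/$DOMAIN/privkey.pem > /var/lib/lxd/containers/haproxy/rootfs/etc/haproxy/certs/$DOMAIN.pem'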

HAProxy final configuration

We are almost there. We need to enter the haproxy container and uncomment those two lines (those that started with ###) that will enable HAProxy to work as a TLS Termination Proxy. Then, restart the haproxy service.

ubuntu@ubuntu-512mb-ams3-01:~$ lxc exec haproxy bash
root@haproxy:~# vi /etc/haproxy/haproxy.cfg 

(screenshot: /etc/haproxy/haproxy.cfg with the two ### lines uncommented)
root@haproxy:/etc/haproxy# systemctl restart haproxy
root@haproxy:/etc/haproxy# exit
ubuntu@ubuntu-512mb-ams3-01:~$

Let’s test them!

Here are the three websites, notice the padlocks on all three of them,

The SSL Server Report (Qualys)

Here are the SSL Server Reports for each website,

You can check the cached reports for LXD container web1, LXD container web2 and LXD container web3.

Results

The disk space requirements for those four containers (three static websites plus haproxy) are

ubuntu@ubuntu-512mb-ams3-01:~$ sudo zpool list
[sudo] password for ubuntu: 
NAME         SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
mypool-lxd  14.9G  1.13G  13.7G         -     4%     7%  1.00x  ONLINE  -
ubuntu@ubuntu-512mb-ams3-01:~$

The four containers required a bit over 1GB of disk space.

The biggest concern has been the limited RAM of 512MB. The Out Of Memory (OOM) handler was invoked a few times during the first steps of container creation, but not afterwards, during the launching of the nginx instances.

ubuntu@ubuntu-512mb-ams3-01:~$ dmesg | grep "Out of memory"
[  181.976117] Out of memory: Kill process 3829 (unsquashfs) score 524 or sacrifice child
[  183.792372] Out of memory: Kill process 3834 (unsquashfs) score 525 or sacrifice child
[  190.332834] Out of memory: Kill process 3831 (unsquashfs) score 525 or sacrifice child
[  848.834570] Out of memory: Kill process 6378 (localedef) score 134 or sacrifice child
[  860.833991] Out of memory: Kill process 6400 (localedef) score 143 or sacrifice child
[  878.837410] Out of memory: Kill process 6436 (localedef) score 151 or sacrifice child
ubuntu@ubuntu-512mb-ams3-01:~$

There was an error while creating one of the containers in the beginning. I repeated the creation command and it completed successfully. That error was probably related to this unsquashfs kill.

Summary

We set up a $5 VPS (512MB RAM, 1CPU core and 20GB SSD disk) with Ubuntu 16.04.1 LTS, then configured LXD to handle containers.

We created three containers for three static websites, and an additional container for HAProxy to work as a TLS Termination Proxy.

We got certificates for those three websites, and verified that they all pass with A+ at the Qualys SSL Server Report.

The 512MB RAM VPS should be OK for a few low traffic websites, especially those generated by static site generators.

 

post image

How to install LXD containers on Ubuntu on Scaleway

Scaleway, a subsidiary of Online.net, offers affordable VPSes and baremetal ARM servers. They became rather well-known when they first introduced those ARM servers.

When you install Ubuntu 16.04 on a Scaleway VPS, some specific configuration (compiling ZFS as a DKMS module) is required in order to get LXD working with ZFS storage. In this post, we go through those additional steps to get LXD up and running on a Scaleway VPS.

An issue with Scaleway is that they heavily modify the configuration of the Linux kernel, so you do not get the stock Ubuntu kernel when you install Ubuntu 16.04. There is a feature request to get ZFS compiled into their kernel, at https://community.online.net/t/feature-request-zfs-support/2709/3 Most probably it will take some time to get added.

In this post I do not cover the baremetal ARM or the newer x86 dedicated servers; there is an additional error there when trying to use LXD, about not being able to create a sparse file.

Creating a VPS on Scaleway

Once we create an account on Scaleway (we also add our SSH public key), we click to create a VC1 server with the default settings.

(screenshot: selecting the VC1 server type)

There are several types of VPS, we select the VC1 which comes with 2 x86 64-bit cores, 2GB memory and 50GB disk space.

(screenshot: the default security policy that blocks SMTP)

Under Security, there is a default policy to disable «SMTP». These are firewall rules that drop packets destined to ports 25, 465 and 587. If you intend to use SMTP at a later date, it makes sense to disable this security policy now. Otherwise, once you get your VPS running, it takes about 30+30 minutes of downtime to archive and restart your VPS in order for this change to take effect.

(screenshot: the server being provisioned)

Once you click Create, it takes a couple of minutes for the provisioning, for the kernel to start and then booting of the VPS.

After the creation, the administrative page shows the IP address that we need to connect to the VPS.

Initial package updates and upgrades

$ ssh root@163.172.132.19
The authenticity of host '163.172.132.19 (163.172.132.19)' can't be established.
ECDSA key fingerprint is SHA256:Z4LMCnXUyuvwO16HI763r4h5+mURBd8/4u2bFPLETes.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '163.172.132.19' (ECDSA) to the list of known hosts.
 (Scaleway ASCII-art banner)

Welcome on Ubuntu Xenial (16.04 LTS) (GNU/Linux 4.5.7-std-3 x86_64 )

System information as of: Wed Jul 13 19:46:53 UTC 2016

System load: 0.02 Int IP Address: 10.2.46.19 
Memory usage: 0.0% Pub IP Address: 163.172.132.19
Usage on /: 3% Swap usage: 0.0%
Local Users: 0 Processes: 83
Image build: 2016-05-20 System uptime: 3 min
Disk nbd0: l_ssd 50G

Documentation: https://scaleway.com/docs
Community: https://community.scaleway.com
Image source: https://github.com/scaleway/image-ubuntu


The programs included with the Ubuntu system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Ubuntu comes with ABSOLUTELY NO WARRANTY, to the extent permitted by
applicable law.

root@scw-test:~# apt update
Hit:1 http://archive.ubuntu.com/ubuntu xenial InRelease
Get:2 http://archive.ubuntu.com/ubuntu xenial-updates InRelease [95.7 kB]
Get:3 http://archive.ubuntu.com/ubuntu xenial-security InRelease [94.5 kB]
Get:4 http://archive.ubuntu.com/ubuntu xenial/main Translation-en [568 kB]
...
Reading package lists... Done
Building dependency tree 
Reading state information... Done
51 packages can be upgraded. Run 'apt list --upgradable' to see them.
root@scw-test:~# apt upgrade
Reading package lists... Done
Building dependency tree 
Reading state information... Done
Calculating upgrade... Done
The following NEW packages will be installed:
 libpython3.5
The following packages will be upgraded:
 apt apt-utils base-files bash bash-completion bsdutils dh-python gcc-5-base
 grep init init-system-helpers libapt-inst2.0 libapt-pkg5.0 libblkid1
 libboost-iostreams1.58.0 libboost-random1.58.0 libboost-system1.58.0
 libboost-thread1.58.0 libexpat1 libfdisk1 libgnutls-openssl27 libgnutls30
 libldap-2.4-2 libmount1 libnspr4 libnss3 libnss3-nssdb libpython2.7-minimal
 libpython2.7-stdlib librados2 librbd1 libsmartcols1 libstdc++6 libsystemd0
 libudev1 libuuid1 lsb-base lsb-release mount python2.7 python2.7-minimal
 systemd systemd-sysv tzdata udev util-linux uuid-runtime vim vim-common
 vim-runtime wget
51 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
Need to get 27.6 MB of archives.
After this operation, 5,069 kB of additional disk space will be used.
Do you want to continue? [Y/n] Y
Get:1 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 base-files amd64 9.4ubuntu4.1 [68.4 kB]
Get:2 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 bash amd64 4.3-14ubuntu1.1 [583 kB]
...
Setting up librados2 (10.2.0-0ubuntu0.16.04.2) ...
Setting up librbd1 (10.2.0-0ubuntu0.16.04.2) ...
Processing triggers for libc-bin (2.23-0ubuntu3) ...
root@scw-test:~#

Installing ZFS as a DKMS module

There are instructions on how to install ZFS as a DKMS module at https://github.com/scaleway/kernel-tools#how-to-build-a-custom-kernel-module

First, we install the build-essential package,

root@scw-test:~# apt install build-essential

Second, we run the script that is provided on the same page (https://github.com/scaleway/kernel-tools#how-to-build-a-custom-kernel-module). It takes about a minute for this script to run; it downloads the kernel source and prepares the modules for compilation.

Third, we install the zfsutils-linux package as usual. In this case, it takes more time to install, as it needs to recompile the ZFS modules.

root@scw-test:~# apt install zfsutils-linux

This step takes lots of time. Eight and a half minutes!

Installing the LXD package

The final step is to install the LXD package

root@scw-test:~# apt install lxd

Initial configuration of LXD

A VPS at Scaleway does not have access to a separate block device (the dedicated servers do). Therefore, we create the ZFS pool on a loop device, that is, inside a file on the root filesystem.

root@scw-test:~# df -h /
Filesystem Size Used Avail Use% Mounted on
/dev/vda 46G 2.1G 42G 5% /

We have 42GB of free space, therefore let’s allocate 36GB for the ZFS filesystem.

root@scw-test:~# lxd init
Name of the storage backend to use (dir or zfs): zfs
Create a new ZFS pool (yes/no)? yes
Name of the new ZFS pool: mylxd-pool
Would you like to use an existing block device (yes/no)? no
Size in GB of the new loop device (1GB minimum): 36
Would you like LXD to be available over the network (yes/no)? no
Do you want to configure the LXD bridge (yes/no)? yes
...we accept the defaults in creating the LXD bridge...
Warning: Stopping lxd.service, but it can still be activated by:
 lxd.socket
LXD has been successfully configured.
root@scw-test:~#

 

Create a user to manage LXD

We create a non-root user to manage LXD. It is advised to create such a user and refrain from using root for such tasks.

root@scw-test:~# adduser ubuntu
Adding user `ubuntu' ...
Adding new group `ubuntu' (1000) ...
Adding new user `ubuntu' (1000) with group `ubuntu' ...
Creating home directory `/home/ubuntu' ...
Copying files from `/etc/skel' ...
Enter new UNIX password: *******
Retype new UNIX password: *******
passwd: password updated successfully
Changing the user information for ubuntu
Enter the new value, or press ENTER for the default
 Full Name []: 
 Room Number []: 
 Work Phone []: 
 Home Phone []: 
 Other []: 
Is the information correct? [Y/n] Y
root@scw-test:~#

Then, let’s add this user ubuntu to the sudo (ability to run sudo) and lxd (manage LXD containers) groups,

root@scw-test:~# adduser ubuntu sudo         # For scaleway. For others, the name might be 'admin'.
root@scw-test:~# adduser ubuntu lxd

Finally, let’s restart the VPS. Although it is not necessary, it is a good practice in order to make sure that lxd starts automatically even with ZFS being compiled through DKMS. A shutdown -r now would suffice to restart the VPS. After about 20 seconds, we can ssh again, as the new user ubuntu.
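In commands, that last step looks roughly like this (the IP address is the one of our VPS from above):

# As root on the VPS:
shutdown -r now
# ...after about 20 seconds, from our desktop:
ssh ubuntu@163.172.132.19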

Let’s start up a container

We log in as this new user ubuntu (or, sudo su - ubuntu).

ubuntu@scw-test:~$ lxc launch ubuntu:x mycontainer
Creating mycontainer
Retrieving image: 100%
Starting mycontainer
ubuntu@scw-test:~$ lxc list
+-------------+---------+------+------+------------+-----------+
| NAME        | STATE   | IPV4 | IPV6 | TYPE       | SNAPSHOTS |
+-------------+---------+------+------+------------+-----------+
| mycontainer | RUNNING |      |      | PERSISTENT |         0 |
+-------------+---------+------+------+------------+-----------+
ubuntu@scw-test:~$ lxc list
+-------------+---------+----------------------+------+------------+-----------+
| NAME        | STATE   | IPV4                 | IPV6 | TYPE       | SNAPSHOTS |
+-------------+---------+----------------------+------+------------+-----------+
| mycontainer | RUNNING | 10.181.132.19 (eth0) |      | PERSISTENT | 0         |
+-------------+---------+----------------------+------+------------+-----------+
ubuntu@scw-test:~$

We launched an Ubuntu 16.04 LTS (Xenial: “x”) container, and then we listed the details. It takes a few moments for the container to boot up. On the second attempt, the container had completed booting up and also got an IP address.

That’s it! LXD is up and running, and we successfully created a container. See these instructions on how to test the container with a Web server.

post image

Trying out LXD containers on Ubuntu on DigitalOcean

You can have LXD containers on your home computer, and you can also have them on your Virtual Private Server (VPS). If you have any further questions on LXD, see https://www.stgraber.org/2016/03/11/lxd-2-0-blog-post-series-012/

Here we see how to configure LXD on a VPS at DigitalOcean (yeah, referral link). We go cheap and select the 512MB RAM and 20GB disk VPS for $5/month. Containers are quite lightweight, so it’s interesting to see how many we can squeeze in. We are going to use ZFS for the storage of the containers, stored on a file and not a block device. Here is what we are doing today,

  1. Set up LXD on a 512MB RAM/20GB diskspace VPS
  2. Create a container with a web server
  3. Expose the container service to the Internet
  4. Visit the webserver from our browser

Set up LXD on DigitalOcean

(screenshot: creating the droplet)

When creating the VPS, it is important to change these two options; we need 16.04 (default is 14.04) so that it has ZFS pre-installed as a kernel module, and we try out the cheapest VPS offering with 512MB RAM.

Once we create the VPS, we connect with

$ ssh root@128.199.41.205    # change with the IP address you get from the DigitalOcean panel
The authenticity of host '128.199.41.205 (128.199.41.205)' can't be established.
ECDSA key fingerprint is SHA256:7I094lF8aeLFQ4WPLr/iIX4bMs91jNiKhlIJw3wuMd4.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '128.199.41.205' (ECDSA) to the list of known hosts.
Welcome to Ubuntu 16.04 LTS (GNU/Linux 4.4.0-24-generic x86_64)

* Documentation: https://help.ubuntu.com/

0 packages can be updated.
0 updates are security updates.

root@ubuntu-512mb-ams3-01:~# apt update
Get:1 http://security.ubuntu.com/ubuntu xenial-security InRelease [94.5 kB]
Hit:2 http://ams2.mirrors.digitalocean.com/ubuntu xenial InRelease 
Get:3 http://security.ubuntu.com/ubuntu xenial-security/main Sources [24.9 kB]
...
Fetched 10.2 MB in 4s (2,492 kB/s)
Reading package lists... Done
Building dependency tree 
Reading state information... Done
13 packages can be upgraded. Run 'apt list --upgradable' to see them.
root@ubuntu-512mb-ams3-01:~# apt upgrade
Reading package lists... Done
Building dependency tree 
Reading state information... Done
Calculating upgrade... Done
The following packages will be upgraded:
 dnsmasq-base initramfs-tools initramfs-tools-bin initramfs-tools-core
 libexpat1 libglib2.0-0 libglib2.0-data lshw python3-software-properties
 shared-mime-info snapd software-properties-common wget
13 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
Need to get 6,979 kB of archives.
After this operation, 78.8 kB of additional disk space will be used.
Do you want to continue? [Y/n] Y
...
Processing triggers for initramfs-tools (0.122ubuntu8.1) ...
update-initramfs: Generating /boot/initrd.img-4.4.0-24-generic
W: mdadm: /etc/mdadm/mdadm.conf defines no arrays.
Processing triggers for libc-bin (2.23-0ubuntu3) ...

We update the package list and then upgrade any packages that need upgrading.

root@ubuntu-512mb-ams3-01:~# apt policy lxd
lxd:
 Installed: 2.0.2-0ubuntu1~16.04.1
 Candidate: 2.0.2-0ubuntu1~16.04.1
 Version table:
 *** 2.0.2-0ubuntu1~16.04.1 500
 500 http://mirrors.digitalocean.com/ubuntu xenial-updates/main amd64 Packages
 500 http://security.ubuntu.com/ubuntu xenial-security/main amd64 Packages
 100 /var/lib/dpkg/status
 2.0.0-0ubuntu4 500
 500 http://mirrors.digitalocean.com/ubuntu xenial/main amd64 Packages

The lxd package is already installed, all the better. Nice touch 🙂

root@ubuntu-512mb-ams3-01:~# apt install zfsutils-linux
Reading package lists... Done
Building dependency tree 
Reading state information... Done
The following additional packages will be installed:
 libnvpair1linux libuutil1linux libzfs2linux libzpool2linux zfs-doc zfs-zed
Suggested packages:
 default-mta | mail-transport-agent samba-common-bin nfs-kernel-server
 zfs-initramfs
The following NEW packages will be installed:
 libnvpair1linux libuutil1linux libzfs2linux libzpool2linux zfs-doc zfs-zed
 zfsutils-linux
0 upgraded, 7 newly installed, 0 to remove and 0 not upgraded.
Need to get 881 kB of archives.
After this operation, 2,820 kB of additional disk space will be used.
Do you want to continue? [Y/n] Y
...
zed.service is a disabled or a static unit, not starting it.
Processing triggers for libc-bin (2.23-0ubuntu3) ...
Processing triggers for systemd (229-4ubuntu6) ...
Processing triggers for ureadahead (0.100.0-19) ...
root@ubuntu-512mb-ams3-01:~# _

We installed zfsutils-linux in order to be able to use ZFS as storage for our containers. In this tutorial we are going to use a file as storage (still, ZFS filesystem) instead of a block device. If you subscribe to the DO Beta for block storage volumes, you can get a proper block device for the storage of the containers. Currently free to beta members, available only on the NYC1 datacenter.

root@ubuntu-512mb-ams3-01:~# df -h /
Filesystem Size Used Avail Use% Mounted on
/dev/vda1  20G  1.1G 18G     6% /
root@ubuntu-512mb-ams3-01:~# _

We got 18GB free diskspace, so let’s allocate 15GB for LXD.

root@ubuntu-512mb-ams3-01:~# lxd init
Name of the storage backend to use (dir or zfs): zfs
Create a new ZFS pool (yes/no)? yes
Name of the new ZFS pool: lxd-pool
Would you like to use an existing block device (yes/no)? no
Size in GB of the new loop device (1GB minimum): 15
Would you like LXD to be available over the network (yes/no)? no
Do you want to configure the LXD bridge (yes/no)? yes
we accept the default settings for the bridge configuration
Warning: Stopping lxd.service, but it can still be activated by:
 lxd.socket
LXD has been successfully configured.
root@ubuntu-512mb-ams3-01:~# _

What we did,

  • we initialized LXD with the ZFS storage backend,
  • we created a new pool and gave a name (here, lxd-pool),
  • we do not have a block device, so we get a (sparse) image file that contains the ZFS filesystem
  • we do not want now to make LXD available over the network
  • we want to configure the LXD bridge for the inter-networking of the containters

Let’s create a new user and add them to the lxd group,

root@ubuntu-512mb-ams3-01:~# adduser ubuntu
Adding user `ubuntu' ...
Adding new group `ubuntu' (1000) ...
Adding new user `ubuntu' (1000) with group `ubuntu' ...
Creating home directory `/home/ubuntu' ...
Copying files from `/etc/skel' ...
Enter new UNIX password: ********
Retype new UNIX password: ********
passwd: password updated successfully
Changing the user information for ubuntu
Enter the new value, or press ENTER for the default
 Full Name []: <ENTER>
 Room Number []: <ENTER>
 Work Phone []: <ENTER>
 Home Phone []: <ENTER>
 Other []: <ENTER>
Is the information correct? [Y/n] Y
root@ubuntu-512mb-ams3-01:~# _

The username is ubuntu. Make sure you add a good password, since we do not deal in this tutorial with best security practices. Many people use scripts on these VPSs that try common usernames and passwords. When you create a VPS, it is nice to have a look at /var/log/auth.log for those failed attempts to get into your VPS. Here are a few lines from this VPS,

Jun 26 18:36:15 digitalocean sshd[16318]: Failed password for root from 121.18.238.29 port 45863 ssh2
Jun 26 18:36:15 digitalocean sshd[16320]: Connection closed by 123.59.134.76 port 49378 [preauth]
Jun 26 18:36:17 digitalocean sshd[16318]: Failed password for root from 121.18.238.29 port 45863 ssh2
Jun 26 18:36:20 digitalocean sshd[16318]: Failed password for root from 121.18.238.29 port 45863 ssh2

We add the ubuntu user into the lxd group in order to be able to run commands as a non-root user.

root@ubuntu-512mb-ams3-01:~# adduser ubuntu lxd
Adding user `ubuntu' to group `lxd' ...
Adding user ubuntu to group lxd
Done.
root@ubuntu-512mb-ams3-01:~# _

We are now good to go. Log in as user ubuntu and run an LXD command to list images.

(screenshot: listing images with lxc)
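For reference, an LXD command that lists the available Ubuntu images looks like this (a sketch; run it as the user ubuntu):

# List the images available in the official ubuntu: remote:
lxc image list ubuntu: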

Create a Web server in a container

We launch (init and start) a container named c1.

(screenshot: launching the c1 container with lxc)
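In command form, launching and then listing the container goes roughly like this:

# Launch (init + start) an Ubuntu 16.04 container named c1:
lxc launch ubuntu:x c1
# List the containers; repeat after a few seconds to see the IP address appear:
lxc list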

The ubuntu:x in the screenshot is an alias for Ubuntu 16.04 (Xenial), that resides in the ubuntu: repository of images. You can find other distributions in the images: repository.

As soon as the launch action completed, I ran the list action. Then, after a few seconds, I ran it again. You can notice that it took a few seconds before the container actually booted and got an IP address.

Let’s enter into the container by executing a shell. We update and then upgrade the container.

ubuntu@ubuntu-512mb-ams3-01:~$ lxc exec c1 -- /bin/bash
root@c1:~# apt update
Get:1 http://security.ubuntu.com/ubuntu xenial-security InRelease [94.5 kB]
Hit:2 http://archive.ubuntu.com/ubuntu xenial InRelease
Get:3 http://archive.ubuntu.com/ubuntu xenial-updates InRelease [94.5 kB]
...
Fetched 9819 kB in 2s (3645 kB/s) 
Reading package lists... Done
Building dependency tree 
Reading state information... Done
13 packages can be upgraded. Run 'apt list --upgradable' to see them.
root@c1:~# apt upgrade
Reading package lists... Done
Building dependency tree 
Reading state information... Done
Calculating upgrade... Done
The following packages will be upgraded:
 dnsmasq-base initramfs-tools initramfs-tools-bin initramfs-tools-core libexpat1 libglib2.0-0 libglib2.0-data lshw python3-software-properties shared-mime-info snapd
 software-properties-common wget
13 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
Need to get 6979 kB of archives.
After this operation, 3339 kB of additional disk space will be used.
Do you want to continue? [Y/n] Y
Get:1 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 initramfs-tools all 0.122ubuntu8.1 [8602 B]
...
Processing triggers for initramfs-tools (0.122ubuntu8.1) ...
Processing triggers for libc-bin (2.23-0ubuntu3) ...
root@c1:~#

Let’s install nginx, our Web server.

root@c1:~# apt install nginx
Reading package lists... Done
Building dependency tree 
Reading state information... Done
The following additional packages will be installed:
 fontconfig-config fonts-dejavu-core libfontconfig1 libfreetype6 libgd3 libjbig0 libjpeg-turbo8 libjpeg8 libtiff5 libvpx3 libxpm4 libxslt1.1 nginx-common nginx-core
Suggested packages:
 libgd-tools fcgiwrap nginx-doc ssl-cert
The following NEW packages will be installed:
 fontconfig-config fonts-dejavu-core libfontconfig1 libfreetype6 libgd3 libjbig0 libjpeg-turbo8 libjpeg8 libtiff5 libvpx3 libxpm4 libxslt1.1 nginx nginx-common nginx-core
0 upgraded, 15 newly installed, 0 to remove and 0 not upgraded.
Need to get 3309 kB of archives.
After this operation, 10.7 MB of additional disk space will be used.
Do you want to continue? [Y/n] Y
Get:1 http://archive.ubuntu.com/ubuntu xenial/main amd64 libjpeg-turbo8 amd64 1.4.2-0ubuntu3 [111 kB]
...
Processing triggers for ufw (0.35-0ubuntu2) ...
root@c1:~#

Is the Web server running? Let’s check with the ss command (preinstalled, from package iproute2)

root@c1:~# ss -tula 
Netid State Recv-Q Send-Q Local Address:Port Peer Address:Port 
udp UNCONN 0 0 *:bootpc *:* 
tcp LISTEN 0 128 *:http *:* 
tcp LISTEN 0 128 *:ssh *:* 
tcp LISTEN 0 128 :::http :::* 
tcp LISTEN 0 128 :::ssh :::*
root@c1:~#

The parameters mean

  • -t: Show only TCP sockets
  • -u: Show only UDP sockets
  • -l: Show listening sockets
  • -a: Show all sockets (makes no difference here because of the previous options; it just makes an easier word to remember, tula)

Of course, there is also lsof with the parameter -i (IPv4/IPv6).

root@c1:~# lsof -i
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
dhclient 240 root 6u IPv4 45606 0t0 UDP *:bootpc 
sshd 306 root 3u IPv4 47073 0t0 TCP *:ssh (LISTEN)
sshd 306 root 4u IPv6 47081 0t0 TCP *:ssh (LISTEN)
nginx 2034 root 6u IPv4 51636 0t0 TCP *:http (LISTEN)
nginx 2034 root 7u IPv6 51637 0t0 TCP *:http (LISTEN)
nginx 2035 www-data 6u IPv4 51636 0t0 TCP *:http (LISTEN)
nginx 2035 www-data 7u IPv6 51637 0t0 TCP *:http (LISTEN)
root@c1:~#

From both commands we verify that the Web server is indeed running inside the VPS, along with a SSHD server.

Let’s change the default Web page a bit,

root@c1:~# nano /var/www/html/index.nginx-debian.html

(screenshot: editing /var/www/html/index.nginx-debian.html in the container)

Expose the container service to the Internet

Now, if we try to visit the public IP of our VPS at http://128.199.41.205/ we obviously notice that there is no Web server there. We need to expose the container to the world, since the container only has a private IP address.

The following iptables line exposes the container service at port 80. Note that we run this as root on the VPS (root@ubuntu-512mb-ams3-01:~#), NOT inside the container (root@c1:~#).

iptables -t nat -I PREROUTING -i eth0 -p TCP -d 128.199.41.205/32 --dport 80 -j DNAT --to-destination 10.160.152.184:80

Adapt the public IP of your VPS and the private IP of your container (10.x.x.x) accordingly. Since we have a web server, the port is 80.
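To avoid copying the private IP address by hand, it can be read from lxc list; a small sketch:

# Show only the name and IPv4 columns for container c1:
lxc list c1 -c n4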

We have not made this firewall rule persistent, as that is outside the scope of this post; see the iptables-persistent package on how to make it persistent.
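
As a rough sketch of that approach (not something we do in this post): verify that the rule is in place, install the iptables-persistent package, and save the current rules.

iptables -t nat -L PREROUTING -n --line-numbers   # verify that the DNAT rule is present
apt install iptables-persistent                   # offers to save the current rules during installation
netfilter-persistent save                         # re-save the rules after any later change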

Visit our Web server

Here is the URL, http://128.199.41.205/, so let’s visit it.

do-lxd-welcome-nginx

That’s it! We created an LXD container with the nginx Web server, then exposed it to the Internet.

 

post image

Trying out LXD containers on our Ubuntu

This post is about containers, a construct similar to virtual machines (VMs) but so lightweight that you can easily create a dozen on your desktop Ubuntu!

A VM virtualizes a whole computer, and you then install the guest operating system in it. In contrast, a container reuses the host Linux kernel and contains just the root filesystem (i.e. the runtime) of our choice. The Linux kernel has several features (such as namespaces and cgroups) that rigidly separate the running Linux container from our host computer (i.e. our desktop Ubuntu).

By themselves, Linux containers would need a fair amount of manual work to be managed directly. Fortunately, there is LXD (pronounced Lex-deeh), a service that manages Linux containers for us.

We will see how to

  1. setup our Ubuntu desktop for containers,
  2. create a container,
  3. install a Web server,
  4. test it a bit, and
  5. clear everything up.

Set up your Ubuntu for containers

If you have Ubuntu 16.04, then you are ready to go. Just install a couple of extra packages that we see below. If you have Ubuntu 14.04.x or Ubuntu 15.10, see LXD 2.0: Installing and configuring LXD [2/12] for some extra steps, then come back.

Make sure the package list is up-to-date:

sudo apt update
sudo apt upgrade

Install the lxd package:

sudo apt install lxd

If you have Ubuntu 16.04, you can enable the feature to store your container files in a ZFS filesystem. The Linux kernel in Ubuntu 16.04 includes the necessary kernel modules for ZFS; for LXD to use ZFS for storage, we just need to install a package with the ZFS utilities. Without ZFS, the containers would be stored as plain directories of files on the host filesystem. With ZFS, we get features like copy-on-write, which makes tasks such as copying and snapshotting containers much faster.

Install the zfsutils-linux package (if you have Ubuntu 16.04.x):

sudo apt install zfsutils-linux
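
As an optional sanity check (the module is normally loaded automatically when LXD creates the pool), we can confirm that the ZFS kernel module is available:

sudo modprobe zfs     # load the module if it is not loaded already
lsmod | grep zfs      # it should now appear in the list of loaded modules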

Once you have installed the LXD package on the desktop Ubuntu, the package installation scripts should have added you to the lxd group. If your desktop account is a member of that group, then it can manage containers with LXD and you can avoid prefixing every command with sudo. The way Linux works, you need to log out of the desktop session and log in again for the lxd group membership to take effect. (If you are an advanced user, you can avoid the re-login by running newgrp lxd in your current shell.)
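
To check whether the lxd membership is already active in your current shell, something like this should do:

id -nG     # the list of groups should include lxd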

Before use, LXD should be initialized with our storage choice and networking choice.

Initialize lxd for storage and networking by running the following command:

$ sudo lxd init
Name of the storage backend to use (dir or zfs): zfs
Create a new ZFS pool (yes/no)? yes
Name of the new ZFS pool: lxd-pool
Would you like to use an existing block device (yes/no)? no
Size in GB of the new loop device (1GB minimum): 30
Would you like LXD to be available over the network (yes/no)? no
Do you want to configure the LXD bridge (yes/no)? yes 
> You will be asked about the network bridge configuration. Accept all defaults and continue.
Warning: Stopping lxd.service, but it can still be activated by:
 lxd.socket
 LXD has been successfully configured.
$ _

We created the ZFS pool as a filesystem inside a (single) file, not on a block device (i.e. in a partition), so there is no need for extra partitioning. In the example I specified 30GB, and this space will come from the root (/) filesystem. If you want to look at this file, it is at /var/lib/lxd/zfs.img.
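
If you are curious, you can have a look at the loop-backed pool; this assumes the pool name lxd-pool that we typed in the dialog above:

sudo zpool list lxd-pool        # shows the size and usage of the pool
ls -lh /var/lib/lxd/zfs.img     # the file that backs the pool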

 

That’s it! The initial configuration has been completed. For troubleshooting or background information, see https://www.stgraber.org/2016/03/15/lxd-2-0-installing-and-configuring-lxd-212/

Create your first container

All management commands for LXD are available through the lxc command. We manage containers by running lxc with the appropriate subcommand and parameters.
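
For example, to see the client version and the list of available subcommands:

lxc --version     # version of the LXD client
lxc help          # overview of the available subcommands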

lxc list

to get a list of installed containers. Obviously, the list will be empty, but it verifies that everything is working.

lxc image list

shows the list of (cached) images that we can use to launch a container. Again, the list will initially be empty, but it verifies that image handling works.

lxc image list ubuntu:

shows the list of available remote images that we can use to download and launch as containers. This specific list shows Ubuntu images.

lxc image list images:

shows the list of available remote images for various distributions that we can use to download and launch as containers. This specific list shows all sorts of distributions, like Alpine, Debian, Gentoo, openSUSE and Fedora.
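
These remote lists can be long; as far as I know, you can narrow them down by appending a filter term, for example:

lxc image list ubuntu: 16.04     # only the Ubuntu 16.04 images
lxc image list images: debian    # only the Debian images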

Let’s launch a container with Ubuntu 16.04 and call it c1:

$ lxc launch ubuntu:x c1
Creating c1
Starting c1
$ _

We used the launch action, then selected the image ubuntu:x (x is an alias for the Xenial/16.04 image), and lastly we used the name c1 for our container.
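
For reference, ubuntu:16.04 should point to the same image as ubuntu:x, so the following command would be equivalent:

$ lxc launch ubuntu:16.04 c1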

Let’s view our first installed container,

$ lxc list

+------+---------+----------------------+------+------------+-----------+
| NAME | STATE   | IPV4                 | IPV6 | TYPE       | SNAPSHOTS |
+------+---------+----------------------+------+------------+-----------+
| c1   | RUNNING | 10.173.82.158 (eth0) |      | PERSISTENT | 0         |
+------+---------+----------------------+------+------------+-----------+

Our first container c1 is running and it has an IP address (accessible locally). It is ready to be used!
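
To see more details about the container, such as its processes and resource usage, there is also:

$ lxc info c1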

Install a Web server

We can run commands in our container. The action for running commands is exec.

$ lxc exec c1 -- uptime
 11:47:25 up 2 min, 0 users, load average: 0.07, 0.05, 0.04
$ _

After the action exec, we specify the container and finally the command to run inside it. The uptime is just 2 minutes; it’s a fresh container :-).

The -- on the command line tells lxc to stop processing its own options, so that whatever follows is passed verbatim to the command inside the container. If our command does not have any parameters of its own, we can safely omit the --.

$ lxc exec c1 -- df -h

This is an example that requires the --, because our command uses the parameter -h. If you omit the --, you get an error.
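
For example, a command that takes no parameters of its own runs fine without it:

$ lxc exec c1 hostname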

Let’s get a shell in the container, and update the package list already.

$ lxc exec c1 bash
root@c1:~# apt update
Ign http://archive.ubuntu.com trusty InRelease
Get:1 http://archive.ubuntu.com trusty-updates InRelease [65.9 kB]
Get:2 http://security.ubuntu.com trusty-security InRelease [65.9 kB]
...
Hit http://archive.ubuntu.com trusty/universe Translation-en 
Fetched 11.2 MB in 9s (1228 kB/s) 
Reading package lists... Done
root@c1:~# apt upgrade
Reading package lists... Done
Building dependency tree 
...
Processing triggers for man-db (2.6.7.1-1ubuntu1) ...
Setting up dpkg (1.17.5ubuntu5.7) ...
root@c1:~# _

We are going to install nginx as our Web server. nginx is somewhat cooler than the Apache Web server.

root@c1:~# apt install nginx
Reading package lists... Done
Building dependency tree
...
Setting up nginx-core (1.4.6-1ubuntu3.5) ...
Setting up nginx (1.4.6-1ubuntu3.5) ...
Processing triggers for libc-bin (2.19-0ubuntu6.9) ...
root@c1:~# _

Let’s view our Web server with our browser. Remember the IP address the container got (10.173.82.158 in this example) and enter it into your browser.

lxd-nginx
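
We can also check from the host with curl, if it is installed, using the container IP address shown by lxc list (10.173.82.158 in this example):

$ curl http://10.173.82.158/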

Let’s make a small change in the text of that page. Back inside our container, we enter the directory with the default HTML page.

root@c1:~# cd /var/www/html/
root@c1:/var/www/html# ls -l
total 2
-rw-r--r-- 1 root root 612 Jun 25 12:15 index.nginx-debian.html
root@c1:/var/www/html#

We can edit the file with nano, then save it.

lxd-nginx-nano

Finally, let’s check the page again,

lxd-nginx-modified

Clearing up

Let’s clear up the container by deleting it. We can easily create new ones when we need them.

$ lxc list
+------+---------+----------------------+------+------------+-----------+
| NAME | STATE   | IPV4                 | IPV6 | TYPE       | SNAPSHOTS |
+------+---------+----------------------+------+------------+-----------+
| c1   | RUNNING | 10.173.82.169 (eth0) |      | PERSISTENT | 0         |
+------+---------+----------------------+------+------------+-----------+
$ lxc stop c1
$ lxc delete c1
$ lxc list
+------+---------+----------------------+------+------------+-----------+
| NAME | STATE   | IPV4                 | IPV6 | TYPE       | SNAPSHOTS |
+------+---------+----------------------+------+------------+-----------+
+------+---------+----------------------+------+------------+-----------+

We stopped (shut down) the container, and then we deleted it.
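
As a side note, lxc delete refuses to remove a container that is still running; adding --force should stop and delete it in one step:

$ lxc delete --force c1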

That’s all. There are many more ideas on what to do with containers; these were the first steps of setting up our Ubuntu desktop and trying out one such container.