

How to set up LXD on Packet.net (baremetal servers)

Packet.net has premium baremetal servers that start at $36.50 per month for a quad-core Atom C2550 with 8GB RAM and 80GB SSD, on a 1Gbps Internet connection. On the other end of the scale, there is an option for a 24-core (two Intel CPUs) system with 256GB RAM and a total of 2.8TB SSD disk space at around $1000 per month.

In this post we are trying out the most affordable baremetal server (type 0 from the list) with Ubuntu and LXD.

Starting the server is quite uneventful. Being baremetal, it takes more time to provision than a VPS. Once it has started, we SSH into it.

$ ssh root@ip.ip.ip.ip
Welcome to Ubuntu 16.04.2 LTS (GNU/Linux 4.10.0-24-generic x86_64)

* Documentation: https://help.ubuntu.com
 * Management: https://landscape.canonical.com
 * Support: https://ubuntu.com/advantage

The programs included with the Ubuntu system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Ubuntu comes with ABSOLUTELY NO WARRANTY, to the extent permitted by
applicable law.

root@lxd:~#

Here is some information about the booted system,

root@lxd:~# lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 16.04.2 LTS
Release: 16.04
Codename: xenial
root@lxd:~#

And the CPU details,

root@lxd:~# cat /proc/cpuinfo 
processor : 0
vendor_id : GenuineIntel
cpu family : 6
model : 77
model name : Intel(R) Atom(TM) CPU C2550 @ 2.40GHz
stepping : 8
microcode : 0x122
cpu MHz : 1200.000
cache size : 1024 KB
physical id : 0
siblings : 4
core id : 0
cpu cores : 4
apicid : 0
initial apicid : 0
fpu : yes
fpu_exception : yes
cpuid level : 11
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm sse4_1 sse4_2 movbe popcnt tsc_deadline_timer aes rdrand lahf_lm 3dnowprefetch epb tpr_shadow vnmi flexpriority ept vpid tsc_adjust smep erms dtherm ida arat
bugs :
bogomips : 4800.19
clflush size : 64
cache_alignment : 64
address sizes : 36 bits physical, 48 bits virtual
power management:

... omitting the other three cores ...

Let’s update the package list,

root@lxd:~# apt update
Hit:1 http://archive.ubuntu.com/ubuntu xenial InRelease
Get:2 http://archive.ubuntu.com/ubuntu xenial-updates InRelease [102 kB]
Get:3 http://archive.ubuntu.com/ubuntu xenial-security InRelease [102 kB]
...

Packet.net uses the official Ubuntu repositories instead of caching packages on local mirrors. In retrospect, this is not an issue, because the Internet connectivity is 1Gbps, bonded from two identical interfaces.

Let's upgrade the packages and deal with any issues. Upgraded packages tend to complain when the local configuration files differ from the ones they expect.

root@lxd:~# apt upgrade
Reading package lists... Done
Building dependency tree 
Reading state information... Done
Calculating upgrade... Done
The following packages will be upgraded:
 apt apt-utils base-files cloud-init gcc-5-base grub-common grub-pc grub-pc-bin grub2-common
 initramfs-tools initramfs-tools-bin initramfs-tools-core kmod libapparmor1 libapt-inst2.0
 libapt-pkg5.0 libasn1-8-heimdal libcryptsetup4 libcups2 libdns-export162 libexpat1 libgdk-pixbuf2.0-0
 libgdk-pixbuf2.0-common libgnutls-openssl27 libgnutls30 libgraphite2-3 libgssapi3-heimdal libgtk2.0-0
 libgtk2.0-bin libgtk2.0-common libhcrypto4-heimdal libheimbase1-heimdal libheimntlm0-heimdal
 libhx509-5-heimdal libisc-export160 libkmod2 libkrb5-26-heimdal libpython3.5 libpython3.5-minimal
 libpython3.5-stdlib libroken18-heimdal libstdc++6 libsystemd0 libudev1 libwind0-heimdal libxml2
 logrotate mdadm ntp ntpdate open-iscsi python3-jwt python3.5 python3.5-minimal systemd systemd-sysv
 tcpdump udev unattended-upgrades
59 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
Need to get 24.3 MB of archives.
After this operation, 77.8 kB of additional disk space will be used.
Do you want to continue? [Y/n] 
...

First comes grub, and the diff (not shown here) shows that it is a minor issue. The new version of grub.cfg makes the system identify itself as Debian instead of Ubuntu. We did not investigate this further.

We are then asked where to install grub. We select /dev/sda and hope that the server can successfully reboot. We also note that instead of an 80GB SSD, as written in the description, we got a 160GB SSD. Not bad.

Setting up cloud-init (0.7.9-233-ge586fe35-0ubuntu1~16.04.2) ...

Configuration file '/etc/cloud/cloud.cfg'
 ==> Modified (by you or by a script) since installation.
 ==> Package distributor has shipped an updated version.
 What would you like to do about it ? Your options are:
 Y or I : install the package maintainer's version
 N or O : keep your currently-installed version
 D : show the differences between the versions
 Z : start a shell to examine the situation
 The default action is to keep your current version.
*** cloud.cfg (Y/I/N/O/D/Z) [default=N] ? N
Progress: [ 98%] [##################################################################################.]

Still during apt upgrade, it complains about /etc/cloud/cloud.cfg. There is a diff between the installed and the packaged version; we keep the existing file and do not install the new generic packaged version (the server would not boot with it).

At the end, it complains about

W: Possible missing firmware /lib/firmware/ast_dp501_fw.bin for module ast

Time to reboot the server and check if we messed it up.

root@lxd:~# shutdown -r now

$ ssh root@ip.ip.ip.ip
Welcome to Ubuntu 16.04.3 LTS (GNU/Linux 4.10.0-24-generic x86_64)

* Documentation: https://help.ubuntu.com
 * Management: https://landscape.canonical.com
 * Support: https://ubuntu.com/advantage
Last login: Tue Sep 26 15:29:58 2017 from 1.2.3.4
root@lxd:~#

We are good! Note that now it says Ubuntu 16.04.3 while before it was Ubuntu 16.04.2.

LXD is not installed by default,

root@lxd:~# apt policy lxd
lxd:
      Installed: (none)
      Candidate: 2.0.10-0ubuntu1~16.04.1
      Version table:
              2.0.10-0ubuntu1~16.04.1 500
                      500 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 Packages
              2.0.0-0ubuntu4 500
                      500 http://archive.ubuntu.com/ubuntu xenial/main amd64 Packages

There are two versions: 2.0.0, the stock version initially released with Ubuntu 16.04, and 2.0.10, currently the latest stable version for Ubuntu 16.04. Let's install it.

root@lxd:~# apt install lxd
...

We are now ready to add the non-root user account.

root@lxd:~# adduser myusername
Adding user `myusername' ...
Adding new group `myusername' (1000) ...
Adding new user `myusername' (1000) with group `myusername' ...
Creating home directory `/home/myusername' ...
Copying files from `/etc/skel' ...
Enter new UNIX password: 
Retype new UNIX password: 
passwd: password updated successfully
Changing the user information for myusername
Enter the new value, or press ENTER for the default
 Full Name []: 
 Room Number []: 
 Work Phone []: 
 Home Phone []: 
 Other []: 
Is the information correct? [Y/n] Y

root@lxd:~# ssh myusername@localhost
Permission denied (publickey).
root@lxd:~# cp -R ~/.ssh/ ~myusername/
root@lxd:~# chown -R myusername:myusername ~myusername/

We added the new username, then verified that password authentication is indeed disabled. Finally, we copied the ~/.ssh directory (with the authorized_keys file) from root to the new non-root account and adjusted its ownership.
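
Since we are about to run sudo and lxc commands as this new account, it also needs to be a member of the sudo and lxd groups (we do the same on the Alibaba Cloud server further below); a minimal sketch:

root@lxd:~# usermod -a -G sudo,lxd myusername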

Let’s log out from the server and log in again as the new non-root account.

$ ssh myusername@ip.ip.ip.ip
Welcome to Ubuntu 16.04.3 LTS (GNU/Linux 4.10.0-24-generic x86_64)

* Documentation: https://help.ubuntu.com
 * Management: https://landscape.canonical.com
 * Support: https://ubuntu.com/advantage

The programs included with the Ubuntu system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Ubuntu comes with ABSOLUTELY NO WARRANTY, to the extent permitted by
applicable law.

**************************************************************************
# This system is using the EC2 Metadata Service, but does not appear to #
# be running on Amazon EC2 or one of cloud-init's known platforms that #
# provide a EC2 Metadata service. In the future, cloud-init may stop #
# reading metadata from the EC2 Metadata Service unless the platform can #
# be identified. #
# #
# If you are seeing this message, please file a bug against #
# cloud-init at #
# https://bugs.launchpad.net/cloud-init/+filebug?field.tags=dsid #
# Make sure to include the cloud provider your instance is #
# running on. #
# #
# For more information see #
# https://bugs.launchpad.net/bugs/1660385 #
# #
# After you have filed a bug, you can disable this warning by #
# launching your instance with the cloud-config below, or #
# putting that content into #
# /etc/cloud/cloud.cfg.d/99-ec2-datasource.cfg #
# #
# #cloud-config #
# datasource: #
# Ec2: #
# strict_id: false #
**************************************************************************

Disable the warnings above by:
 touch /home/myusername/.cloud-warnings.skip
or
 touch /var/lib/cloud/instance/warnings/.skip
myusername@lxd:~$

This warning is related to our decision to keep the existing cloud.cfg when we upgraded the cloud-init package. It is something that Packet.net (the provider) should deal with.
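
If the warning is a nuisance, the simplest of the two workarounds suggested in the message above is to create the skip file for our user:

myusername@lxd:~$ touch /home/myusername/.cloud-warnings.skip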

We are ready to try out LXD on packet.net.

Configuring LXD

Let’s configure LXD. First, how much free space do we have?

myusername@lxd:~$ df -h /
Filesystem Size Used Avail Use% Mounted on
/dev/sda3 136G 1.1G 128G 1% /
myusername@lxd:~$

There is plenty of space; we will allocate 100GB of it to LXD.

We will use ZFS as the LXD storage backend, so we first install the ZFS utilities,

myusername@lxd:~$ sudo apt install zfsutils-linux

Now, we set up LXD.

myusername@lxd:~$ sudo lxd init
Name of the storage backend to use (dir or zfs) [default=zfs]: zfs 
Create a new ZFS pool (yes/no) [default=yes]? yes
Name of the new ZFS pool [default=lxd]: lxd 
Would you like to use an existing block device (yes/no) [default=no]? no
Size in GB of the new loop device (1GB minimum) [default=27]: 100
Would you like LXD to be available over the network (yes/no) [default=no]? no
Do you want to configure the LXD bridge (yes/no) [default=yes]? yes

LXD has been successfully configured.
myusername@lxd:~$ lxc list
+------+-------+------+------+------+-----------+
| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
+------+-------+------+------+------+-----------+
myusername@lxd:~$

Trying out LXD

Let’s create a container, install nginx and then make the web server accessible through the Internet.

myusername@lxd:~$ lxc launch ubuntu:16.04 web
Creating web
Retrieving image: rootfs: 100% (47.99MB/s) 
Starting web 
myusername@lxd:~$

Let’s see the details of the container, called web.

myusername@lxd:~$ lxc list --columns ns4tS
+------+---------+---------------------+------------+-----------+
| NAME | STATE   | IPV4                | TYPE       | SNAPSHOTS |
+------+---------+---------------------+------------+-----------+
| web  | RUNNING | 10.253.67.97 (eth0) | PERSISTENT | 0         |
+------+---------+---------------------+------------+-----------+
myusername@lxd:~$

We can see the container IP address. The column specification ns4tS simply omits the IPv6 column ('6') so that the table looks nice in this blog post.
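
For reference, the letters stand for name, state, IPv4, IPv6, type and snapshots; adding the '6' back would include the IPv6 column again:

myusername@lxd:~$ lxc list --columns ns46tS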

Let’s enter the container and install nginx.

myusername@lxd:~$ lxc exec web -- sudo --login --user ubuntu
To run a command as administrator (user "root"), use "sudo <command>".
See "man sudo_root" for details.

ubuntu@web:~$

We execute, inside the web container, the command sudo --login --user ubuntu, which gives us a login shell in the container. All Ubuntu containers have a default non-root account called ubuntu.

ubuntu@web:~$ sudo apt update
Hit:1 http://archive.ubuntu.com/ubuntu xenial InRelease

3 packages can be upgraded. Run 'apt list --upgradable' to see them.
ubuntu@web:~$ sudo apt install nginx
Reading package lists... Done
...
Processing triggers for ufw (0.35-0ubuntu2) ...
ubuntu@web:~$ sudo vi /var/www/html/index.nginx-debian.html
ubuntu@web:~$ logout

Before installing a package, we must update the package list. We updated, then installed nginx. Subsequently, we touched up the default HTML file a bit to mention Packet.net and LXD. Finally, we logged out of the container.

Let’s test that the web server in the container is working.

myusername@lxd:~$ curl 10.253.67.97
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx on Packet.net in an LXD container!</title>
<style>
 body {
 width: 35em;
 margin: 0 auto;
 font-family: Tahoma, Verdana, Arial, sans-serif;
 }
</style>
</head>
<body>
<h1>Welcome to nginx on Packet.net in an LXD container!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
myusername@lxd:~$

The last step is to get Ubuntu to forward incoming Internet connections on port 80 to port 80 of the container. For this, we need the public IP of the server and the private IP of the container (it is 10.253.67.97).

myusername@lxd:~$ ifconfig 
bond0 Link encap:Ethernet HWaddr 0c:c4:7a:de:51:a8 
      inet addr:147.75.82.251 Bcast:255.255.255.255 Mask:255.255.255.254
      inet6 addr: 2604:1380:2000:600::1/127 Scope:Global
      inet6 addr: fe80::ec4:7aff:fee5:4462/64 Scope:Link
      UP BROADCAST RUNNING MASTER MULTICAST MTU:1500 Metric:1
      RX packets:144216 errors:0 dropped:0 overruns:0 frame:0
      TX packets:14181 errors:0 dropped:0 overruns:0 carrier:0
      collisions:0 txqueuelen:1000 
      RX bytes:211518302 (211.5 MB) TX bytes:1443508 (1.4 MB)

The interface is a bond, bond0. Two 1Gbps connections are bonded together.

myusername@lxd:~$ PORT=80 PUBLIC_IP=147.75.82.251 CONTAINER_IP=10.253.67.97 sudo -E bash -c 'iptables -t nat -I PREROUTING -i bond0 -p TCP -d $PUBLIC_IP --dport $PORT -j DNAT --to-destination $CONTAINER_IP:$PORT -m comment --comment "forward to the Nginx container"'
myusername@lxd:~$
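
To double-check that the DNAT rule is in place (and get its rule number, should we ever want to remove it with iptables -t nat -D PREROUTING), we can list the PREROUTING chain:

myusername@lxd:~$ sudo iptables -t nat -L PREROUTING -n --line-numbers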

Let’s test it out!

That’s it!


How to use Ubuntu and LXD on Alibaba Cloud

Alibaba Cloud is similar to Amazon Web Services in that they offer comparable cloud services. They are part of the Alibaba Group, a huge Chinese conglomerate; for example, the retail arm of the Alibaba Group is now bigger than Walmart. Here, we try out their cloud services.

The main reason to select Alibaba Cloud is to get a server running inside China. They also have several data centers outside China, but inside China it is mostly Alibaba Cloud. To get a server running inside mainland China, though, you need to go through a registration process where you submit photos of your passport. We do not have time for that, therefore we select the closest data center to China, Hong Kong.

Creating an account on Alibaba Cloud

Create an account on Alibaba Cloud (update: the referral link has been removed). You get $300 of credit to use within two months, and up to $50 of that credit can go towards launching virtual private servers. Create the account now, before continuing with the rest of this section.

When creating the account, you can verify either your email address or your phone number. Let's do the email verification.

Let's check our mail. Where is that email from Alibaba Cloud? Nothing arrived!?!

The usability disaster is almost self-evident. When you get to the Verification page, the text says We need to verify your email. Please input the number you receive. But Alibaba Cloud has not sent that email yet; we first need to click on Send to get it to send the email. The text should instead have said something like To use email verification, click Send below, then input the code you have received.

You can pay Alibaba Cloud using either a bank card or PayPal. Let's try PayPal! Actually, no: to make use of the $300 credit, it has to be a bank card instead.

We have added a bank card. The card has to go through a verification step: Alibaba Cloud makes a small debit (to be refunded later), and you then input either the transaction amount or the transaction code (see screenshot below) in order to verify that you really have access to the card.

After a couple of days, you get worried because there is no transaction with the description INTL*?????.ALIYUN.COM in your online banking. What went wrong? And what is this weird transaction with a different description on the bank statement?

Description: INTL*175 LUXEM LU ,44

Debit amount: 0.37€

What is LUXEM, a municipality in Germany, doing on my bank statement? Let’s hope that the processor for Alibaba in Europe is LUXEM, not ALIYUN.

Let’s try as transaction code the number 175. Did not work. Four more tries remaining.

Let's try the transaction amount, 0.37€. Of course, it did not work. It wants USD, not euros! Three tries remaining.

Let's google a bit. The Add a payment method documentation on Alibaba Cloud talks only about dollars. A forum post about non-dollar currencies says:

I did not get an authorization charge, therefore there is no X.

Let’s do something really crazy:

We type 0.44 as the transaction amount. IT WORKED!

In retrospect, the ',44' at the end of the description was the hint; who would have thought that this undocumented detail refers to the dollar amount.

After a week, the 0.37€ micro-transaction had not been reimbursed. What's more, I was also charged a 2.50€ commission, which I am not getting back either.

We are now ready to use the $300 Free Credit!

Creating a server on Alibaba Cloud

When trying to create a server, you may end up on a page with the hostname YUNDUN.console.aliyun.com. If you get that, you are in the wrong place: you cannot add your SSH key there, nor can you create a server.

Instead, it should say ECS, Elastic Compute Service.

Here is the full menu for ECS,

Under Networks & Security, there is Key Pairs. Let's add our SSH public key there, not the whole key pair.

First of all, we need to select the appropriate data center. Ok, we change to Hong Kong which is listed in the middle.

But how do we add our own SSH key? There is only an option to Create Key Pair!?! Well, let’s create a pair.

Ah, okay. Although the page is called Create Key Pair, we can actually Import an Existing Key Pair.

Now, click back to Elastic Compute S…/Overview, which shows each data center.

If we were to try to create a server in Mainland China, we get

In that case, we would need to send first a photo of our passport or our driver’s license.

Let’s go back, and select Hong Kong.

We are ready to configure our server.

There is the option of either a Starter Package or an Advanced Purchase. The Starter Package is really cool: you can get a server for only $4.50. But the fine print for the $300 credit says that you cannot use the Starter Package with it.

So, Advanced Purchase it will be.

There are two pricing models, Subscription and Pay As You Go. Subscription means that you pay monthly, Pay As You Go means that you pay hourly. We go for Subscription.

We select the 1-core, 1GB instance, and the price shows as $12.29. We also pay separately for Internet traffic. The cost is shown on an overlay; we still have more options to select before we create the server.

We change the default Security Group to the one shown above: we want our server to be accessible from the outside on ports 80 and 443. Port 22 is added by default, along with port 3389 (Remote Desktop on Windows).

We select Ubuntu 16.04. The ordering of the operating systems is a bit weird; ideally, it should reflect popularity.

There is an option for Server Guard. Let's try it since it is free (it requires installing a closed-source package on our Linux; in the end I did not try it).

The Ultra Cloud Disk is network storage and is included in the price above. The other option would be to select an SSD. It is nice that we can add up to 16 disks to our server.

We are ready to place the order. It correctly shows $0 and mentions the $50 credit. We select not to auto renew.

Now we pay the $0.

And that’s how we start a server. We have received an email with the IP address but can also find the public IP address from the ECS settings.

Let’s have a look at the IP block for this IP address.

ffs.

How to set up LXD on an Alibaba server

First, we SSH to the server. The command looks like ssh root@_public_ip_address_

It looks like real Ubuntu, with a real Ubuntu Linux kernel. Let's update.

root@iZj6c66d14k19wi7139z9eZ:~# apt update
Get:1 http://mirrors.cloud.aliyuncs.com/ubuntu xenial InRelease [247 kB]
Hit:2 http://mirrors.aliyun.com/ubuntu xenial InRelease

...
Get:45 http://mirrors.aliyun.com/ubuntu xenial-security/universe i386 Packages [147 kB] 
Get:46 http://mirrors.aliyun.com/ubuntu xenial-security/universe Translation-en [89.8 kB] 
Fetched 40.8 MB in 24s (1682 kB/s) 
Reading package lists... Done
Building dependency tree 
Reading state information... Done
105 packages can be upgraded. Run 'apt list --upgradable' to see them.
root@iZj6c66d14k19wi7139z9eZ:~#

We upgraded (apt upgrade) and there was a kernel update. We restarted (shutdown -r now) and the newly booted Ubuntu runs the updated kernel. Nice!

Let’s check /proc/cpuinfo,

root@iZj6c66d14k19wi7139z9eZ:~# cat /proc/cpuinfo 
processor : 0
vendor_id : GenuineIntel
cpu family : 6
model : 63
model name : Intel(R) Xeon(R) CPU E5-2680 v3 @ 2.50GHz
stepping : 2
microcode : 0x1
cpu MHz : 2494.224
cache size : 30720 KB
physical id : 0
siblings : 1
core id : 0
cpu cores : 1
apicid : 0
initial apicid : 0
fpu : yes
fpu_exception : yes
cpuid level : 13
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl eagerfpu pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm fsgsbase bmi1 avx2 smep bmi2 erms invpcid xsaveopt
bugs :
bogomips : 4988.44
clflush size : 64
cache_alignment : 64
address sizes : 46 bits physical, 48 bits virtual
power management:

root@iZj6c66d14k19wi7139z9eZ:/proc#

How much free space from the 40GB disk?

root@iZj6c66d14k19wi7139z9eZ:~# df -h /
Filesystem Size Used Avail Use% Mounted on
/dev/vda1   40G 2,2G 36G 6% /
root@iZj6c66d14k19wi7139z9eZ:~#

Let’s add a non-root user.

root@iZj6c66d14k19wi7139z9eZ:~# adduser myusername
Adding user `myusername' ...
Adding new group `myusername' (1000) ...
Adding new user `myusername' (1000) with group `myusername' ...
Creating home directory `/home/myusername' ...
Copying files from `/etc/skel' ...
Enter new UNIX password: 
Retype new UNIX password: 
passwd: password updated successfully
Changing the user information for myusername
Enter the new value, or press ENTER for the default
 Full Name []: 
 Room Number []: 
 Work Phone []: 
 Home Phone []: 
 Other []: 
Is the information correct? [Y/n] 
root@iZj6c66d14k19wi7139z9eZ:~#

Is LXD already installed?

root@iZj6c66d14k19wi7139z9eZ:~# apt policy lxd
lxd:
 Installed: (none)
 Candidate: 2.0.10-0ubuntu1~16.04.2
 Version table:
     2.0.10-0ubuntu1~16.04.2 500
         500 http://mirrors.cloud.aliyuncs.com/ubuntu xenial-updates/main amd64 Packages
         500 http://mirrors.aliyun.com/ubuntu xenial-updates/main amd64 Packages
         100 /var/lib/dpkg/status
     2.0.2-0ubuntu1~16.04.1 500
         500 http://mirrors.cloud.aliyuncs.com/ubuntu xenial-security/main amd64 Packages
         500 http://mirrors.aliyun.com/ubuntu xenial-security/main amd64 Packages
     2.0.0-0ubuntu4 500
         500 http://mirrors.cloud.aliyuncs.com/ubuntu xenial/main amd64 Packages
         500 http://mirrors.aliyun.com/ubuntu xenial/main amd64 Packages
root@iZj6c66d14k19wi7139z9eZ:~#

Let’s install LXD.

root@iZj6c66d14k19wi7139z9eZ:~# apt install lxd

Now, we can add our user account myusername to the groups sudo, lxd.

root@iZj6c66d14k19wi7139z9eZ:~# usermod -a -G lxd,sudo myusername
root@iZj6c66d14k19wi7139z9eZ:~#

Let’s copy the SSH public key from root to the new non-root account.

root@iZj6c66d14k19wi7139z9eZ:~# cp -R /root/.ssh ~myusername/
root@iZj6c66d14k19wi7139z9eZ:~# chown -R myusername:myusername ~myusername/.ssh/
root@iZj6c66d14k19wi7139z9eZ:~#

Now, log out and log in as the new non-root account.

$ ssh myusername@IP.IP.IP.IP
Welcome to Ubuntu 16.04.3 LTS (GNU/Linux 4.4.0-96-generic x86_64)

* Documentation: https://help.ubuntu.com
 * Management: https://landscape.canonical.com
 * Support: https://ubuntu.com/advantage

The programs included with the Ubuntu system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Ubuntu comes with ABSOLUTELY NO WARRANTY, to the extent permitted by
applicable law.

Welcome to Alibaba Cloud Elastic Compute Service !

myusername@iZj6c66d14k19wi7139z9eZ:~$

We are going to install the ZFS utilities so that LXD can use ZFS as a storage backend.

myusername@iZj6c66d14k19wi7139z9eZ:~$ sudo apt install zfsutils-linux
...
myusername@iZj6c66d14k19wi7139z9eZ:~$

Now, we can configure LXD. As shown above, the server has about 36GB free; we allocate 20GB of that to LXD.

myusername@iZj6c66d14k19wi7139z9eZ:~$ sudo lxd init
sudo: unable to resolve host iZj6c66d14k19wi7139z9eZ
[sudo] password for myusername:  ********
Name of the storage backend to use (dir or zfs) [default=zfs]: zfs
Create a new ZFS pool (yes/no) [default=yes]? yes
Name of the new ZFS pool [default=lxd]: lxd
Would you like to use an existing block device (yes/no) [default=no]? no
Size in GB of the new loop device (1GB minimum) [default=15]: 20
Would you like LXD to be available over the network (yes/no) [default=no]? no
Do you want to configure the LXD bridge (yes/no) [default=yes]? yes
Warning: Stopping lxd.service, but it can still be activated by:
lxd.socket

LXD has been successfully configured.
myusername@iZj6c66d14k19wi7139z9eZ:~$ lxc list
Generating a client certificate. This may take a minute...
If this is your first time using LXD, you should also run: sudo lxd init
To start your first container, try: lxc launch ubuntu:16.04

+------+-------+------+------+------+-----------+
| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
+------+-------+------+------+------+-----------+
myusername@iZj6c66d14k19wi7139z9eZ:~$

Okay, we can now create our first LXD container. We are creating just a web server.

myusername@iZj6c66d14k19wi7139z9eZ:~$ lxc launch ubuntu:16.04 web
Creating web
Retrieving image: rootfs: 100% (6.70MB/s) 
Starting web 
myusername@iZj6c66d14k19wi7139z9eZ:~$

Let’s see the container,

myusername@iZj6c66d14k19wi7139z9eZ:~$ lxc list
+------+---------+---------------------+------+------------+-----------+
| NAME | STATE   | IPV4                | IPV6 | TYPE       | SNAPSHOTS |
+------+---------+---------------------+------+------------+-----------+
| web  | RUNNING | 10.35.87.141 (eth0) |      | PERSISTENT | 0         |
+------+---------+---------------------+------+------------+-----------+
myusername@iZj6c66d14k19wi7139z9eZ:~$

Nice. We get into the container and install a web server.

myusername@iZj6c66d14k19wi7139z9eZ:~$ lxc exec web -- sudo --login --user ubuntu

To run a command as administrator (user "root"), use "sudo <command>".
See "man sudo_root" for details.

ubuntu@web:~$

We executed, inside the web container, the command sudo --login --user ubuntu. The container has a default non-root account called ubuntu.

ubuntu@web:~$ sudo apt update
Get:1 http://security.ubuntu.com/ubuntu xenial-security InRelease [102 kB]
Hit:2 http://archive.ubuntu.com/ubuntu xenial InRelease 
...
Reading state information... Done
3 packages can be upgraded. Run 'apt list --upgradable' to see them.
ubuntu@web:~$ sudo apt install nginx
Reading package lists... Done
...
Processing triggers for ufw (0.35-0ubuntu2) ...
ubuntu@web:~$ sudo vi /var/www/html/index.nginx-debian.html 
ubuntu@web:~$ logout
myusername@iZj6c66d14k19wi7139z9eZ:~$ curl 10.35.87.141
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx running in an LXD container on Alibaba Cloud!</title>
<style>
 body {
 width: 35em;
 margin: 0 auto;
 font-family: Tahoma, Verdana, Arial, sans-serif;
 }
</style>
</head>
<body>
<h1>Welcome to nginx running in an LXD container on Alibaba Cloud!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
myusername@iZj6c66d14k19wi7139z9eZ:~$

Obviously, the web server in the container is not yet accessible from the Internet. We need to add an iptables rule that forwards the connection appropriately.

Alibaba Cloud gives two IP addresses per server. One is the public IP address and the other is a private IP address (172.[16-31].*.*). The eth0 interface of the server carries that private IP address. This detail is important for the iptables rule below.

myusername@iZj6c66d14k19wi7139z9eZ:~$ PORT=80 PUBLIC_IP=my172.IPAddress CONTAINER_IP=10.35.87.141 sudo -E bash -c 'iptables -t nat -I PREROUTING -i eth0 -p TCP -d $PUBLIC_IP --dport $PORT -j DNAT --to-destination $CONTAINER_IP:$PORT -m comment --comment "forward to the Nginx container"'
myusername@iZj6c66d14k19wi7139z9eZ:~$
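
Before relying on the rule, it is worth confirming that eth0 really carries the private 172.x address that we put into PUBLIC_IP; for example:

myusername@iZj6c66d14k19wi7139z9eZ:~$ ip -4 addr show eth0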

Let’s load up our site using the public IP address from our own computer:

And that’s it!

Conclusion

Alibaba Cloud is yet another cloud services provider. They are big in China, actually the biggest in China, and they are trying to expand to the rest of the world. There are several teething problems, probably arising from the fact that the main website is in Mandarin and there is no infrastructure for immediate translation into English.

On HN there was a sort of relaunch a few months ago. It appears they are interested in attracting international users. What they need is people who attend immediately to issues as they are discovered.

If you want to learn more about LXD, see https://stgraber.org/2016/03/11/lxd-2-0-blog-post-series-012/

 

Update #1

After a day of running a VPS on Alibaba Cloud, I received this email.

From: Alibaba Cloud
Subject: 【Immediate Attention Needed】Alibaba Cloud Fraud Prevention

We have detected a security risk with the card you are using to make purchases. In order to protect your account, please provide your account ID and the following information within one working day via your registered Alibaba Cloud email to compliance_support@aliyun.com for further investigation. 

If you are using a credit card as your payment method, please provide the following information directly. Please provide clear copies of: 

1. Any ONE of the following three forms of government-issued photo identification for the credit card holder or payment account holder of this Alibaba Cloud account: (i) National identification card; (ii) Passport; (iii) Driver's License. 
2. A clear copy of the front side of your credit card in connection with this Alibaba Account; (Note: For security reasons, we advise you to conceal the middle digits of your card number. Please make sure that the card holder's name, card issuing bank and the last four digits of the card number are clearly visible). 
3. A clear copy of your card's bank statement. We will process your case within 3 working days of receiving the information listed above. NOTE: Please do not provide information in this ticket. All the information needed should be sent to this email compliance_support@aliyun.com.

If you fail to provide all the above information within one working day , your instances will be shut down. 

Best regards, 

Alibaba Cloud Customer Service Center

What this means is that Update #2 has to happen now.

 

Update #2

Newer versions of LXD have a utility called lxd-benchmark. This utility spawns, starts and stops containers, and can be used to get an idea of how efficient a server is. I suppose it is primarily used to figure out whether there is a regression in the LXD code. Let's see it in action here anyway; the clock is ticking.

The new LXD is in a PPA at https://launchpad.net/~ubuntu-lxc/+archive/ubuntu/lxd-stable. Let's install it on Alibaba Cloud.

sudo apt-get install software-properties-common
sudo add-apt-repository ppa:ubuntu-lxc/lxd-stable
sudo apt update
sudo apt upgrade                   # Now LXD will be upgraded.
sudo apt install lxd-tools         # Now lxd-benchmark will be installed.

Let’s see the options for lxd-benchmark.

Usage: lxd-benchmark spawn [--count=COUNT] [--image=IMAGE] [--privileged=BOOL] [--start=BOOL] [--freeze=BOOL] [--parallel=COUNT]
 lxd-benchmark start [--parallel=COUNT]
 lxd-benchmark stop [--parallel=COUNT]
 lxd-benchmark delete [--parallel=COUNT]

--count (= 100)
 Number of containers to create
 --freeze (= false)
 Freeze the container right after start
 --image (= "ubuntu:")
 Image to use for the test
 --parallel (= -1)
 Number of threads to use
 --privileged (= false)
 Use privileged containers
 --report-file (= "")
 A CSV file to write test file to. If the file is present, it will be appended to.
 --report-label (= "")
 A label for the report entry. By default, the action is used.
 --start (= true)
 Start the container after creation

First, we need to spawn new containers that we can later start, stop or delete. Ideally, I would expect the terminology to be launch instead of spawn, to keep in sync with the existing container management commands.

Second, there are defaults for each command, as shown above. There is no indication yet as to how much RAM you need to spawn the default 100 containers; obviously it would be more than the 1GB of RAM we have on this server. Disk space would be fine, thanks to copy-on-write with ZFS: a newly created LXD container uses almost no additional space, since all containers are cloned from the same image. Perhaps after a day, when unattended-upgrades kicks in, each container would use up some space for any security updates that get applied automatically.
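
As a side note, the --report-file and --report-label options listed in the help above can append each run to a CSV file, which makes it easier to compare storage backends later; a hypothetical example:

$ lxd-benchmark spawn --count=3 --report-file=results.csv --report-label=zfs-spawn-3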

Let's try it out with 3 containers. We have stopped and deleted the original web container that we created earlier in this tutorial (lxc stop web ; lxc delete web).

$ lxd-benchmark spawn --count 3
Test environment:
 Server backend: lxd
 Server version: 2.18
 Kernel: Linux
 Kernel architecture: x86_64
 Kernel version: 4.4.0-96-generic
 Storage backend: zfs
 Storage version: 0.6.5.6-0ubuntu16
 Container backend: lxc
 Container version: 2.1.0

Test variables:
 Container count: 3
 Container mode: unprivileged
 Startup mode: normal startup
 Image: ubuntu:
 Batches: 3
 Batch size: 1
 Remainder: 0

[Sep 27 17:31:41.074] Importing image into local store: 03c2fa6716b5f41684457ca5e1b7316df520715b7fea0378f9306d16fdc646ee
[Sep 27 17:32:12.825] Found image in local store: 03c2fa6716b5f41684457ca5e1b7316df520715b7fea0378f9306d16fdc646ee
[Sep 27 17:32:12.825] Batch processing start
[Sep 27 17:32:37.614] Processed 1 containers in 24.790s (0.040/s)
[Sep 27 17:32:42.611] Processed 2 containers in 29.786s (0.067/s)
[Sep 27 17:32:49.110] Batch processing completed in 36.285s
$ lxc list --columns ns4tS
+-------------+---------+---------------------+------------+-----------+
| NAME        | STATE   | IPV4                | TYPE       | SNAPSHOTS |
+-------------+---------+---------------------+------------+-----------+
| benchmark-1 | RUNNING | 10.35.87.252 (eth0) | PERSISTENT | 0         |
+-------------+---------+---------------------+------------+-----------+
| benchmark-2 | RUNNING | 10.35.87.115 (eth0) | PERSISTENT | 0         |
+-------------+---------+---------------------+------------+-----------+
| benchmark-3 | RUNNING | 10.35.87.72 (eth0)  | PERSISTENT | 0         |
+-------------+---------+---------------------+------------+-----------+
| web         | RUNNING | 10.35.87.141 (eth0) | PERSISTENT | 0         |
+-------------+---------+---------------------+------------+-----------+
$

We created three extra containers, named benchmark-?, and got them started. They were launched in three batches, which means that one was started after another, not in parallel.

The total time on this server, when the storage backend is zfs, was 36.2 seconds. The numbers in parentheses, as in Processed 1 containers in 24.790s (0.040/s), appear to be the cumulative rate, in containers per second.

The total time on this server, when the storage backend was dir, was 68.6 seconds instead.

Let’s stop them!

$ lxd-benchmark stop
Test environment:
 Server backend: lxd
 Server version: 2.18
 Kernel: Linux
 Kernel architecture: x86_64
 Kernel version: 4.4.0-96-generic
 Storage backend: zfs
 Storage version: 0.6.5.6-0ubuntu16
 Container backend: lxc
 Container version: 2.1.0

[Sep 27 18:06:08.822] Stopping 3 containers
[Sep 27 18:06:08.822] Batch processing start
[Sep 27 18:06:09.680] Processed 1 containers in 0.858s (1.165/s)
[Sep 27 18:06:10.543] Processed 2 containers in 1.722s (1.162/s)
[Sep 27 18:06:11.406] Batch processing completed in 2.584s
$

With dir, it was around 2.4 seconds.

And then delete them!

$ lxd-benchmark delete
Test environment:
 Server backend: lxd
 Server version: 2.18
 Kernel: Linux
 Kernel architecture: x86_64
 Kernel version: 4.4.0-96-generic
 Storage backend: zfs
 Storage version: 0.6.5.6-0ubuntu16
 Container backend: lxc
 Container version: 2.1.0

[Sep 27 18:07:12.020] Deleting 3 containers
[Sep 27 18:07:12.020] Batch processing start
[Sep 27 18:07:12.130] Processed 1 containers in 0.110s (9.116/s)
[Sep 27 18:07:12.224] Processed 2 containers in 0.204s (9.814/s)
[Sep 27 18:07:12.317] Batch processing completed in 0.297s
$

With dir, it was 2.5 seconds.

Let’s create three containers in parallel.

$ lxd-benchmark spawn --count=3 --parallel=3
Test environment:
 Server backend: lxd
 Server version: 2.18
 Kernel: Linux
 Kernel architecture: x86_64
 Kernel version: 4.4.0-96-generic
 Storage backend: zfs
 Storage version: 0.6.5.6-0ubuntu16
 Container backend: lxc
 Container version: 2.1.0

Test variables:
 Container count: 3
 Container mode: unprivileged
 Startup mode: normal startup
 Image: ubuntu:
 Batches: 1
 Batch size: 3
 Remainder: 0

[Sep 27 18:11:01.570] Found image in local store: 03c2fa6716b5f41684457ca5e1b7316df520715b7fea0378f9306d16fdc646ee
[Sep 27 18:11:01.570] Batch processing start
[Sep 27 18:11:11.574] Processed 3 containers in 10.004s (0.300/s)
[Sep 27 18:11:11.574] Batch processing completed in 10.004s
$

With dir, it was 58.7 seconds.

Let's push it further and try to hit some memory limits! First, we delete them all, then launch 5 in parallel.

$ lxd-benchmark spawn --count=5 --parallel=5
Test environment:
 Server backend: lxd
 Server version: 2.18
 Kernel: Linux
 Kernel architecture: x86_64
 Kernel version: 4.4.0-96-generic
 Storage backend: zfs
 Storage version: 0.6.5.6-0ubuntu16
 Container backend: lxc
 Container version: 2.1.0

Test variables:
 Container count: 5
 Container mode: unprivileged
 Startup mode: normal startup
 Image: ubuntu:
 Batches: 1
 Batch size: 5
 Remainder: 0

[Sep 27 18:13:11.171] Found image in local store: 03c2fa6716b5f41684457ca5e1b7316df520715b7fea0378f9306d16fdc646ee
[Sep 27 18:13:11.172] Batch processing start
[Sep 27 18:13:33.461] Processed 5 containers in 22.290s (0.224/s)
[Sep 27 18:13:33.461] Batch processing completed in 22.290s
$

So, 5 containers can start in 1GB of RAM, in just 22 seconds.

We also tried the same with the dir storage backend, and got

[Sep 27 17:24:16.409] Batch processing start
[Sep 27 17:24:54.508] Failed to spawn container 'benchmark-5': Unpack failed, Failed to run: unsquashfs -f -d /var/lib/lxd/storage-pools/default/containers/benchmark-5/rootfs -n -da 99 -fr 99 -p 1 /var/lib/lxd/images/03c2fa6716b5f41684457ca5e1b7316df520715b7fea0378f9306d16fdc646ee.rootfs: . 
[Sep 27 17:25:11.129] Failed to spawn container 'benchmark-3': Unpack failed, Failed to run: unsquashfs -f -d /var/lib/lxd/storage-pools/default/containers/benchmark-3/rootfs -n -da 99 -fr 99 -p 1 /var/lib/lxd/images/03c2fa6716b5f41684457ca5e1b7316df520715b7fea0378f9306d16fdc646ee.rootfs: . 
[Sep 27 17:25:35.906] Processed 5 containers in 79.496s (0.063/s)
[Sep 27 17:25:35.906] Batch processing completed in 79.496s

Out of the five containers, it managed to create three (Nos. 1, 2 and 4). The reason is that unsquashfs has to run to unpack the image, and that process uses a lot of memory. With zfs, the image is unpacked only once and each new container is a copy-on-write clone, so the same memory pressure probably does not occur for every container.

Let’s delete the five containers (storage backend: zfs):

[Sep 27 18:18:37.432] Batch processing completed in 5.006s

Let’s close the post with

$ lxd-benchmark spawn --count=10 --parallel=5
Test environment:
 Server backend: lxd
 Server version: 2.18
 Kernel: Linux
 Kernel architecture: x86_64
 Kernel version: 4.4.0-96-generic
 Storage backend: zfs
 Storage version: 0.6.5.6-0ubuntu16
 Container backend: lxc
 Container version: 2.1.0

Test variables:
 Container count: 10
 Container mode: unprivileged
 Startup mode: normal startup
 Image: ubuntu:
 Batches: 2
 Batch size: 5
 Remainder: 0

[Sep 27 18:19:44.706] Found image in local store: 03c2fa6716b5f41684457ca5e1b7316df520715b7fea0378f9306d16fdc646ee
[Sep 27 18:19:44.706] Batch processing start
[Sep 27 18:20:07.705] Processed 5 containers in 22.998s (0.217/s)
[Sep 27 18:20:57.114] Processed 10 containers in 72.408s (0.138/s)
[Sep 27 18:20:57.114] Batch processing completed in 72.408s

We launched 10 containers in two batches of five containers each. The lxd-benchmark command completed successfully, in just 72 seconds. However, after the command completed, each container would start up, get an IP and get to work. We hit the memory limit while the second batch of five containers was starting up. The monitoring page on the Alibaba Cloud management console shows 100% CPU utilization, and it is not possible to access the server over SSH. Let's delete the server from the management console and wind down this trial of Alibaba Cloud.

lxd-benchmark is quite useful and can be used to get a practical understanding of how many containers a server can handle, and much more.

Update #3

I just restarted the server from the management console and connected using SSH.

Here are the ten containers from Update #2,

$ lxc list --columns ns4
+--------------+---------+------+
| NAME         | STATE   | IPV4 |
+--------------+---------+------+
| benchmark-01 | STOPPED |      |
+--------------+---------+------+
| benchmark-02 | STOPPED |      |
+--------------+---------+------+
| benchmark-03 | STOPPED |      |
+--------------+---------+------+
| benchmark-04 | STOPPED |      |
+--------------+---------+------+
| benchmark-05 | STOPPED |      |
+--------------+---------+------+
| benchmark-06 | STOPPED |      |
+--------------+---------+------+
| benchmark-07 | STOPPED |      |
+--------------+---------+------+
| benchmark-08 | STOPPED |      |
+--------------+---------+------+
| benchmark-09 | STOPPED |      |
+--------------+---------+------+
| benchmark-10 | STOPPED |      |
+--------------+---------+------+

The containers are in the stopped state. That is, they do not consume memory. How much free memory is there?

$ free
              total        used        free      shared  buff/cache   available
Mem:        1016020       56192      791752        2928      168076      805428
Swap:             0           0           0

About 792MB free memory.

There is not enough memory to get them all running at the same time. It is good that they come back in the stopped state after a reboot, so that we can fix things.
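
For example, we could start just a couple of them, and delete the rest to reclaim the disk space (container names as listed above):

$ lxc start benchmark-01 benchmark-02
$ lxc delete benchmark-03 benchmark-04 benchmark-05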


How to use LXD containers on Ubuntu and other distributions

We know about virtual machines such as Virtualbox and VMWare, but there are also containers, such as Docker and LXD (pronounced lex-dee).

Here we will look at LXD containers; support for them is already available to anyone running Ubuntu 16.04 or newer. On other distributions, you need to install the LXD package.

Specifically, today we will see:

  1. What is LXD and what does it offer?
  2. How do we do the initial configuration of LXD on Ubuntu Desktop (or Ubuntu Server)?
  3. How do we create our first container?
  4. How do we install nginx inside a container?
  5. What are some of the other practical uses of LXD containers?

For what follows, we assume Ubuntu 16.04 or newer. Either Ubuntu Desktop or Ubuntu Server is fine.

What is LXD and what does it offer?

The term Linux containers (LXC) describes the relatively new capability of the Linux kernel to confine the execution of a child process (through namespaces and cgroups) so that it is only allowed to do what we have declared. With Docker, we can (typically) run a single process under such confinement (a process container). With LXD, however, we can run a whole distribution under confinement (a machine container).

LXD is hypervisor software that gives full control over the life cycle of containers. Specifically,

  • it allows initializing the settings and the storage space where the containers are kept. After this initialization, we do not need to deal with those details again.
  • it provides repositories with ready-made images of a range of distributions. There is Ubuntu (from 12.04 to 17.04, plus Ubuntu Core), Alpine, Debian (stretch, wheezy), Fedora (22, 23, 24, 25), Gentoo, OpenSUSE, Oracle, Plamo and Sabayon. These are available for the amd64, i386, armhf, armel, powerpc, ppc64el and s390x architectures.
  • it allows launching an image within a few seconds. An image that has been launched is a container.
  • we can take a backup of a container, transfer it over the network to another LXD installation, and so on.

The typical use of LXD containers is to run Internet services such as WordPress, keeping each different website in its own container. This way, we isolate the services and can manage them better. Compared to virtual machines, LXD containers require far fewer resources. For example, on a computer with Ubuntu Desktop and 4GB RAM, we can comfortably run as many as ten LXD containers.

Initial configuration of LXD

Now we will configure LXD on our computer. If for some reason you do not want to do that, you can also try LXD online through the free LXD demo service.

We will run the command lxd init as the administrator in order to do the initial configuration of LXD.

$ sudo lxd init
Name of the storage backend to use (dir or zfs): dir
Would you like LXD to be available over the network (yes/no)? no
Do you want to configure the LXD bridge (yes/no)? yes 
> It will ask about the network settings. We accept whatever is proposed and continue.
Warning: Stopping lxd.service, but it can still be activated by:
 lxd.socket
 LXD has been successfully configured.
$ _

It asked us about the storage backend and we chose dir. This is the simplest option, and the files of each container are placed in a subdirectory under /var/lib/lxd/. For more serious use, we would choose zfs. If you want to try zfs, accept whatever is proposed and allocate at least 15GB of space.

Configuring the LXD bridge sets up networking for the containers. What happens is that LXD provides a DHCP server (dnsmasq) for the containers, handing them IP addresses of the form 10.x.x.x and giving them access to the Internet.
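
To have a look at the bridge that LXD created (assuming the default bridge name lxdbr0), we can run:

$ ip addr show lxdbr0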

We are now almost ready to run lxc commands to manage LXD images and containers. Let's verify that our user account can run LXD commands. For this, our user needs to belong to the group named lxd. That is,

$ groups myusername
myusername : myusername adm cdrom sudo vboxusers lxd

If we were not a member of the lxd group, we would need to run

$ sudo usermod --append --groups lxd myusername
$ _

and then log out and log in again.

How to create a container

First, let's run the command that shows what containers exist. It will show an empty list.

$ lxc list
If this is your first time using LXD, you should also run: lxd init
To start your first container, try: lxc launch ubuntu:16.04
+---------+---------+-----------------+-------------------+------------+-----------+
|  NAME   |  STATE  |      IPV4       |       IPV6        |    TYPE    | SNAPSHOTS |
+---------+---------+-----------------+-------------------+------------+-----------+
+---------+---------+-----------------+-------------------+------------+-----------+

All LXD container management commands start with lxc, followed by a verb. lxc list (verb: list) shows the available containers.

We can already see that the verb to launch our first container is launch. It is followed by the name of the repository, ubuntu:, and finally the identifier of the image (16.04).

There are two available image repositories, ubuntu: and images:. To see the available images in ubuntu:, we run

$ lxc image list ubuntu:
+--------------------+--------------+--------+-------------------------------------------------+---------+----------+-------------------------------+
|       ALIAS        | FINGERPRINT  | PUBLIC |                   DESCRIPTION                   |  ARCH   |   SIZE   |          UPLOAD DATE          |
+--------------------+--------------+--------+-------------------------------------------------+---------+----------+-------------------------------+
...
+--------------------+--------------+--------+-------------------------------------------------+---------+----------+-------------------------------+
| x (9 more)         | 8fa08537ae51 | yes    | ubuntu 16.04 LTS amd64 (release) (20170516)     | x86_64  | 153.70MB | May 16, 2017 at 12:00am (UTC) |
+--------------------+--------------+--------+-------------------------------------------------+---------+----------+-------------------------------+
...
$ _

We see that Ubuntu 16.04 has quite a few aliases; besides "16.04", there is also "x" (from xenial).

Let's use the Ubuntu 16.04 image (ubuntu:x) to create and start a container.

$ lxc launch ubuntu:x mycontainer
Creating mycontainer
Starting mycontainer
$ _

Here we used the image ubuntu:x as a template to create and start a container named mycontainer, running Ubuntu 16.04.

$ lxc list
+-------------+---------+---------------------+-----------------------------------------------+------------+-----------+
|    NAME     |  STATE  |        IPV4         |                     IPV6                      |    TYPE    | SNAPSHOTS |
+-------------+---------+---------------------+-----------------------------------------------+------------+-----------+
| mycontainer | RUNNING | 10.0.180.12 (eth0)  | fd42:accb:3958:4ca6:216:57ff:f0ff:1afa (eth0) | PERSISTENT | 0         |
+-------------+---------+---------------------+-----------------------------------------------+------------+-----------+
$ _

And this is our first container! It is running, and it even has an IP address. Let's try it:

$ ping 10.0.180.12
PING 10.0.180.12 (10.0.180.12) 56(84) bytes of data.
64 bytes from 10.0.180.12: icmp_seq=1 ttl=64 time=0.036 ms
64 bytes from 10.0.180.12: icmp_seq=2 ttl=64 time=0.035 ms
64 bytes from 10.0.180.12: icmp_seq=3 ttl=64 time=0.035 ms
^C
--- 10.0.180.12 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2028ms
rtt min/avg/max/mdev = 0.035/0.035/0.036/0.004 ms
$ _

Let's execute a command inside the container!

$ lxc exec mycontainer -- uname -a
Linux mycontainer 4.8.0-53-generic #56~16.04.1-Ubuntu SMP Tue May 16 01:18:56 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
$ _

Here we used the verb exec, which takes the name of the container as a parameter, followed by the command to run inside the container. The -- tells our shell to stop looking for options. If we did not put the --, the bash shell would treat -a as an option of the lxc command and there would be a problem.

We see that the containers run our system's kernel. When we start a container, only the user-space software of a distribution image starts executing; no new kernel is booted. As a result, all containers share the same kernel, even if they belong to different distributions.
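
A quick way to see this kernel sharing in practice is to compare the kernel release on the host and inside the container; both commands should print the same version:

$ uname -r
$ lxc exec mycontainer -- uname -r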

Let's open a shell in the container so we can run more commands!

$ lxc exec mycontainer -- /bin/bash
root@mycontainer:~# exit
$

That was it! We can run whatever we want in the container as the administrator. If we delete something, it is deleted inside the container and does not affect our system.

The Ubuntu images come with a plain non-root account named ubuntu, so we can also log in with that account. Here is how,

$ lxc exec mycontainer -- sudo --user ubuntu --login
To run a command as administrator (user "root"), use "sudo <command>".
See "man sudo_root" for details.

ubuntu@mycontainer:~$ exit
$

What we did was run the sudo command in order to become the ubuntu user and get a login shell.

How to install a network service in an LXD container

Let's install a Web server in the container.

$ lxc exec mycontainer -- sudo --user ubuntu --login
To run a command as administrator (user "root"), use "sudo <command>".
See "man sudo_root" for details.

ubuntu@mycontainer:~$ sudo apt update
...
ubuntu@mycontainer:~$ sudo apt install nginx
...
ubuntu@mycontainer:~$ sudo lsof -i
COMMAND   PID     USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
dhclient  231     root    6u  IPv4 141913      0t0  UDP *:bootpc 
sshd      323     root    3u  IPv4 142683      0t0  TCP *:ssh (LISTEN)
sshd      323     root    4u  IPv6 142692      0t0  TCP *:ssh (LISTEN)
nginx    1183     root    6u  IPv4 151536      0t0  TCP *:http (LISTEN)
nginx    1183     root    7u  IPv6 151537      0t0  TCP *:http (LISTEN)
nginx    1184 www-data    6u  IPv4 151536      0t0  TCP *:http (LISTEN)
nginx    1184 www-data    7u  IPv6 151537      0t0  TCP *:http (LISTEN)
nginx    1185 www-data    6u  IPv4 151536      0t0  TCP *:http (LISTEN)
nginx    1185 www-data    7u  IPv6 151537      0t0  TCP *:http (LISTEN)
nginx    1186 www-data    6u  IPv4 151536      0t0  TCP *:http (LISTEN)
nginx    1186 www-data    7u  IPv6 151537      0t0  TCP *:http (LISTEN)
nginx    1187 www-data    6u  IPv4 151536      0t0  TCP *:http (LISTEN)
nginx    1187 www-data    7u  IPv6 151537      0t0  TCP *:http (LISTEN)
ubuntu@mycontainer:~$

We updated the package list inside the container and installed the nginx package (another option: apache2). Then we ran the command lsof -i to confirm that the service is up.

We see that sshd runs by default. However, we need to add entries to ~/.ssh/authorized_keys ourselves in order to be able to connect; the root and ubuntu accounts are locked by default.
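
For example, a minimal way to enable SSH access as the ubuntu user (assuming a public key at ~/.ssh/id_rsa.pub on the host, and the container IP 10.0.180.12 from earlier):

$ lxc exec mycontainer -- sudo --login --user ubuntu mkdir -p .ssh
$ lxc file push ~/.ssh/id_rsa.pub mycontainer/home/ubuntu/.ssh/authorized_keys
$ lxc exec mycontainer -- chown ubuntu:ubuntu /home/ubuntu/.ssh/authorized_keys
$ ssh ubuntu@10.0.180.12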

We also see that the nginx Web server is up and running.

And indeed, it is accessible from our browser.

At this point we can do whatever other experiments we want. For completeness, let's see how to stop the container and delete it.

ubuntu@mycontainer:~$ exit
logout
$ lxc stop mycontainer
$ lxc delete mycontainer
$

Αυτό ήταν! Σταματήσαμε τον περιέκτη mycontainer και έπειτα τον σβήσαμε.

Πρακτικές χρήσεις περιεκτών LXD

Ας δούμε μερικές πρακτικές χρήσεις περιεκτών LXD,

  1. We want to install a network service on our laptop, but do not want the installed packages to linger afterwards. We install it in a container and then stop (or delete) the container.
  2. We want to try out an old PHP application that for some reason does not run on PHP 7 (Ubuntu 16.04). We install Ubuntu 14.04 (ubuntu:t) in a container, which comes with PHP 5.x.
  3. We want to install an application in Wine but DO NOT WANT all those packages that Wine pulls in to be installed on our system. We install Wine in an LXD container.
  4. We want to install a GUI application with hardware-accelerated graphics, without it getting entangled with our system. We install the GUI application in an LXD container.
  5. We have two Steam accounts. How? We install Steam twice, in two separate containers.
  6. We want to host several websites on our VPS, with separation between them. We install each website in its own container.

If you have questions or need support, ask here or through the other community channels of Ubuntu Greece.

post image

How to run graphics-accelerated GUI apps in LXD containers on your Ubuntu desktop

In How to run Wine (graphics-accelerated) in an LXD container on Ubuntu we had a quick look into how to run GUI programs in an LXD (Lex-Dee) container, and have the output appear on the local X11 server (your Ubuntu desktop).

In this post, we are going to see how to

  1. generalize the instructions in order to run most GUI apps in an LXD container and have them appear on your desktop
  2. have accelerated graphics support and audio
  3. test with Firefox, Chromium and Chrome
  4. create shortcuts to easily launch those apps

The benefits of running GUI apps in an LXD container are

  • clear separation of the installation data and settings, from what we have on our desktop
  • ability to create a snapshot of this container, save, rollback, delete, recreate; all these in a few seconds or less
  • does not mess up your installed package list (for example, all those i386 packages for Wine, Google Earth)
  • ability to create an image of such a perfect container, publish it, and have others launch it in a few clicks (see the sketch right after this list)
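
Here is roughly what those snapshot and publish operations look like (a minimal sketch; the snapshot and alias names are just examples, and lxc publish wants the container stopped):

$ lxc snapshot guiapps clean-install          # take a snapshot of the container
$ lxc restore guiapps clean-install           # roll back to that snapshot
$ lxc stop guiapps
$ lxc publish guiapps --alias guiapps-image   # turn the stopped container into a reusable image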

What we are doing today is similar to having a Virtualbox/VMWare VM and running a Linux distribution in it. Let’s compare,

  • It is similar to the Virtualbox Seamless Mode or the VMWare Unity mode
  • A VM virtualizes a whole machine and has to do a lot of work in order to provide somewhat good graphics acceleration
  • With a container, we directly reuse the graphics card and get graphics acceleration
  • The specific setup we show today can potentially allow a container app to interact with the desktop apps (TODO: show desktop isolation in a future post)

Browsers have started offering containers, specifically in-browser containers. This shows a general trend towards containers, although that feature is browser-specific and driven by usability (passwords, form and search data are shared between those containers).

In the following, our desktop computer will be called the host, and the LXD container simply the container.

Setting up LXD

LXD is supported in Ubuntu and derivatives, as well as other distributions. When you initially set up LXD, you select where to store the containers. See LXD 2.0: Installing and configuring LXD [2/12] about your options. Ideally, if you select to pre-allocate disk space or use a partition, select at least 15GB but preferably more.

If you plan to play games, increase the space by the size of that game. For best results, select ZFS as the storage backend, and place the space on an SSD disk. The post Trying out LXD containers on our Ubuntu may also help.

Creating the LXD container

Let’s create the new container for LXD. We are going to call it guiapps, and install Ubuntu 16.04 in it. There are options for other Ubuntu versions, and even other distributions.

$ lxc launch ubuntu:x guiapps
Creating guiapps
Starting guiapps
$ lxc list
+---------------+---------+--------------------+--------+------------+-----------+
|     NAME      |  STATE  |        IPV4        |  IPV6  |    TYPE    | SNAPSHOTS |
+---------------+---------+--------------------+--------+------------+-----------+
| guiapps       | RUNNING | 10.0.185.204(eth0) |        | PERSISTENT | 0         |
+---------------+---------+--------------------+--------+------------+-----------+
$

We created and started an Ubuntu 16.04 (ubuntu:x) container, called guiapps.

Let’s also install our initial testing applications. The first one is xclock, the simplest X11 GUI app. The second is glxinfo, which shows details about graphics acceleration. The third is glxgears, a minimal graphics-accelerated application. The fourth is speaker-test, to test the audio. We will know that our setup works if all four of xclock, glxinfo, glxgears and speaker-test work in the container!

$ lxc exec guiapps -- sudo --login --user ubuntu
ubuntu@guiapps:~$ sudo apt update
ubuntu@guiapps:~$ sudo apt install x11-apps
ubuntu@guiapps:~$ sudo apt install mesa-utils
ubuntu@guiapps:~$ sudo apt install alsa-utils
ubuntu@guiapps:~$ exit
$

We execute a login shell in the guiapps container as user ubuntu, the default non-root user account in all Ubuntu LXD images. Other distribution images probably have another default non-root user account.

Then, we run apt update in order to update the package list and be able to install the subsequent three packages that provide xclock, glxinfo and glxgears, and speaker-test (or aplay). Finally, we exit the container.

Mapping the user ID of the host to the container (PREREQUISITE)

In the following steps we will be sharing files from the host (our desktop) to the container. There is the issue of what user ID will appear in the container for those shared files.

First, we run on the host (only once) the following command (source),

$ echo "root:$UID:1" | sudo tee -a /etc/subuid /etc/subgid
[sudo] password for myusername: 
root:1000:1
$

The command appends a new entry to both the /etc/subuid and /etc/subgid subordinate UID/GID files. It allows the LXD service (which runs as root) to remap our user’s ID ($UID, from the host) as requested.

Then, we specify that we want this feature in our guiapps LXD container, and restart the container for the change to take effect.

$ lxc config set guiapps raw.idmap "both $UID 1000"
$ lxc restart guiapps
$

This “both $UID 1000” syntax is a shortcut that means: map the $UID/$GID of our user on the host to the default non-root user in the container (whose UID/GID should be 1000/1000, at least in Ubuntu images).
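
We can verify that the mapping got recorded (the value shown below assumes your host UID is 1000):

$ lxc config get guiapps raw.idmap
both 1000 1000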

Configuring graphics and graphics acceleration

For graphics acceleration, we are going to use the host graphics card and graphics acceleration. By default, the applications that run in a container do not have access to the host system and cannot start GUI apps.

We need two things: to let the container access the GPU devices of the host, and to make sure that there are no restrictions because of different user IDs.

Let’s attempt to run xclock in the container.

$ lxc exec guiapps -- sudo --login --user ubuntu
ubuntu@guiapps:~$ xclock
Error: Can't open display: 
ubuntu@guiapps:~$ export DISPLAY=:0
ubuntu@guiapps:~$ xclock
Error: Can't open display: :0
ubuntu@guiapps:~$ exit
$

We run xclock in the container and, as expected, it does not run because we did not indicate where to send the display. We set the DISPLAY environment variable to the default :0 (send to either a Unix socket or port 6000), which does not work either because we have not fully set things up yet. Let’s do that.

$ lxc config device add guiapps X0 disk path=/tmp/.X11-unix/X0 source=/tmp/.X11-unix/X0 
$ lxc config device add guiapps Xauthority disk path=${XAUTHORITY} source=/home/${USER}/.Xauthority

We give the container access to the Unix socket of the X server (/tmp/.X11-unix/X0), and make it available at the exact same path inside the container. In this way, DISPLAY=:0 allows the apps in the container to access our host’s X server through that Unix socket.

Then, we repeat this task with the ~/.Xauthority file that resides in our home directory. This file is for access control, and it simply makes our host X server accept connections from applications inside the container. On the host, this path can be found in the variable $XAUTHORITY and is typically either ~/.Xauthority or something like /run/user/1000/gdm/Xauthority. We can set the path= part accordingly; what matters is that the distribution in the container can find the .Xauthority file at the given location.
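
A quick way to check where this file lives on your host (the exact path varies between display managers):

$ echo $XAUTHORITY
/home/myusername/.Xauthority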

How do we get hardware acceleration for the GPU to the container apps? There is a special device for that, and it’s gpu. The hardware acceleration for the graphics card is collectively enabled by running the following,

$ lxc config device add guiapps mygpu gpu
$ lxc config device set guiapps mygpu uid 1000
$ lxc config device set guiapps mygpu gid 1000

We add the gpu device, and we happen to name it mygpu (any name would suffice). In addition to adding the gpu device, we also set the permissions accordingly so that the device is fully accessible inside the container. The gpu device was introduced in LXD 2.7, therefore if it is not found, you may have to upgrade your LXD according to https://launchpad.net/~ubuntu-lxc/+archive/ubuntu/lxd-stable. Please leave a comment below if this was your case (mention what LXD version you have been running). Note that for Intel GPUs (my case), you may not need to add this device at all.
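
To check which LXD version you are currently running, before deciding whether you need that upgrade:

$ lxd --version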

Let’s see what we got now.

$ lxc exec guiapps -- sudo --login --user ubuntu
ubuntu@guiapps:~$ export DISPLAY=:0
ubuntu@guiapps:~$ xclock

ubuntu@guiapps:~$ glxinfo -B
name of display: :0
display: :0  screen: 0
direct rendering: Yes
Extended renderer info (GLX_MESA_query_renderer):
    Vendor: Intel Open Source Technology Center (0x8086)
...
ubuntu@guiapps:~$ glxgears 

Running synchronized to the vertical refresh.  The framerate should be
approximately the same as the monitor refresh rate.
345 frames in 5.0 seconds = 68.783 FPS
309 frames in 5.0 seconds = 61.699 FPS
300 frames in 5.0 seconds = 60.000 FPS
^C
ubuntu@guiapps:~$ echo "export DISPLAY=:0" >> ~/.profile 
ubuntu@guiapps:~$ exit
$

Looks good, we are good to go! Note that we edited the ~/.profile file in order to set the $DISPLAY variable automatically whenever we connect to the container.

Configuring audio

The audio server on the Ubuntu desktop is PulseAudio, and PulseAudio has a feature that allows authenticated access over the network, just like the X11 server we dealt with earlier. Let’s set this up.

We install the paprefs (PulseAudio Preferences) package on the host.

$ sudo apt install paprefs
...
$ paprefs

This is the only option we need to enable (by default all other options are not checked and can remain unchecked).

That is, under the Network Server tab, we tick Enable network access to local sound devices.

Then, just like with the X11 configuration, we need to deal with two things: access to the host’s PulseAudio server (either through a Unix socket or an IP address), and some way to get authorization to use it. The Unix socket of the PulseAudio server turned out to be a bit hit-and-miss (we could not figure out how to use it reliably), so we are going to use the IP address of the host (the lxdbr0 interface).

First, the IP address of the host (where PulseAudio runs) is the IP of the lxdbr0 interface, which is also the container’s default gateway (ip route show 0/0). Second, the authorization is provided through the cookie found on the host at /home/${USER}/.config/pulse/cookie. Let’s make these two available inside the container.

$ lxc exec guiapps -- sudo --login --user ubuntu
ubuntu@guiapps:~$ echo export PULSE_SERVER="tcp:`ip route show 0/0 | awk '{print $3}'`" >> ~/.profile

This command appends a line to ~/.profile that sets the variable PULSE_SERVER to a value like tcp:10.0.185.1, which is the IP address of the host on the lxdbr0 interface. The next time we log in to the container, PULSE_SERVER will be configured properly.

ubuntu@guiapps:~$ mkdir -p ~/.config/pulse/
ubuntu@guiapps:~$ echo export PULSE_COOKIE=/home/ubuntu/.config/pulse/cookie >> ~/.profile
ubuntu@guiapps:~$ exit
$ lxc config device add guiapps PACookie disk path=/home/ubuntu/.config/pulse/cookie source=/home/${USER}/.config/pulse/cookie

Now, this is a tough cookie. By default, the PulseAudio cookie is found at ~/.config/pulse/cookie. The directory tree ~/.config/pulse/ does not exist in the container, and if we do not create it ourselves, LXD will autocreate it with the wrong ownership when we add the disk device. So, we create it (mkdir -p), then add the correct PULSE_COOKIE line to the configuration file ~/.profile. Finally, we exit the container and mount-bind the cookie from the host into the container. When we log in to the container again, the cookie variable will be correctly set!

Let’s test the audio!

$ lxc exec guiapps -- sudo --login --user ubuntu
ubuntu@pulseaudio:~$ speaker-test -c6 -twav

speaker-test 1.1.0

Playback device is default
Stream parameters are 48000Hz, S16_LE, 6 channels
WAV file(s)
Rate set to 48000Hz (requested 48000Hz)
Buffer size range from 32 to 349525
Period size range from 10 to 116509
Using max buffer size 349524
Periods = 4
was set period_size = 87381
was set buffer_size = 349524
 0 - Front Left
 4 - Center
 1 - Front Right
 3 - Rear Right
 2 - Rear Left
 5 - LFE
Time per period = 8.687798 ^C
ubuntu@pulseaudio:~$

If you do not have 6-channel audio output, you will hear audio on some of the channels only.
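
If you only have stereo speakers, you can limit the test to two channels (same tool, just fewer channels):

ubuntu@guiapps:~$ speaker-test -c2 -twav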

Let’s also test with an MP3 file, like that one from https://archive.org/details/testmp3testfile

ubuntu@pulseaudio:~$ sudo apt install mplayer
...
ubuntu@pulseaudio:~$ wget https://archive.org/download/testmp3testfile/mpthreetest.mp3
...
ubuntu@pulseaudio:~$ mplayer mpthreetest.mp3 
MPlayer 1.2.1 (Debian), built with gcc-5.3.1 (C) 2000-2016 MPlayer Team
...
AO: [pulse] 44100Hz 2ch s16le (2 bytes per sample)
Video: no video
Starting playback...
A:   3.7 (03.7) of 12.0 (12.0)  0.2% 

Exiting... (Quit)
ubuntu@pulseaudio:~$

All nice and loud!

Troubleshooting sound issues

AO: [pulse] Init failed: Connection refused

An application tries to connect to a PulseAudio server, but no PulseAudio server is found (either none autodetected, or the one we specified is not really there).

AO: [pulse] Init failed: Access denied

We specified a PulseAudio server, but we do not have access to connect to it. We need a valid cookie.

AO: [pulse] Init failed: Protocol error

You were probably also trying to make the Unix socket work, but something went wrong. If you manage to make it work, write a comment below.
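
In all three cases it helps to verify, from inside the container, that the two PulseAudio variables we added to ~/.profile are actually set (the server IP should match your lxdbr0 address):

ubuntu@guiapps:~$ env | grep PULSE
PULSE_SERVER=tcp:10.0.185.1
PULSE_COOKIE=/home/ubuntu/.config/pulse/cookie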

Testing with Firefox

Let’s test with Firefox!

ubuntu@guiapps:~$ sudo apt install firefox
...
ubuntu@guiapps:~$ firefox 
Gtk-Message: Failed to load module "canberra-gtk-module"

We get a message that the GTK+ module is missing. Let’s close Firefox, install the module and start Firefox again.

ubuntu@guiapps:~$ sudo apt-get install libcanberra-gtk3-module
ubuntu@guiapps:~$ firefox

Here we are playing a Youtube music video at 1080p. It works as expected. The Firefox session is separated from the host’s Firefox.

Note that the theming is not exactly what you get with Ubuntu. This is due to the container being so lightweight that it does not have any theming support.

The screenshot may look a bit grainy; this is due to some plugin I use in WordPress that does too much compression.

You may notice that no menubar is showing. Just like with Windows, simply press the Alt key for a second, and the menu bar will appear.

Testing with Chromium

Let’s test with Chromium!

ubuntu@guiapps:~$ sudo apt install chromium-browser
ubuntu@guiapps:~$ chromium-browser
Gtk-Message: Failed to load module "canberra-gtk-module"

So, chromium-browser also needs a libcanberra package; this time it is the GTK+ 2 version.

ubuntu@guiapps:~$ sudo apt install libcanberra-gtk-module
ubuntu@guiapps:~$ chromium-browser

There is no menubar and there is no easy way to get to it. The menu on the top-right is available though.

Testing with Chrome

Let’s download Chrome, install it and launch it.

ubuntu@guiapps:~$ wget https://dl.google.com/linux/direct/google-chrome-stable_current_amd64.deb
...
ubuntu@guiapps:~$ sudo dpkg -i google-chrome-stable_current_amd64.deb
...
Errors were encountered while processing:
 google-chrome-stable
ubuntu@guiapps:~$ sudo apt install -f
...
ubuntu@guiapps:~$ google-chrome
[11180:11945:0503/222317.923975:ERROR:object_proxy.cc(583)] Failed to call method: org.freedesktop.UPower.GetDisplayDevice: object_path= /org/freedesktop/UPower: org.freedesktop.DBus.Error.ServiceUnknown: The name org.freedesktop.UPower was not provided by any .service files
[11180:11945:0503/222317.924441:ERROR:object_proxy.cc(583)] Failed to call method: org.freedesktop.UPower.EnumerateDevices: object_path= /org/freedesktop/UPower: org.freedesktop.DBus.Error.ServiceUnknown: The name org.freedesktop.UPower was not provided by any .service files
^C
ubuntu@guiapps:~$ sudo apt install upower
ubuntu@guiapps:~$ google-chrome

There are these two errors regarding UPower, and they go away once we install the upower package.

Creating shortcuts to the container apps

If we want to run Firefox from the container, we can simply run

$ lxc exec guiapps -- sudo --login --user ubuntu firefox

and that’s it.

To make a shortcut, we create the following file on the host,

$ cat > ~/.local/share/applications/lxd-firefox.desktop
[Desktop Entry]
Version=1.0
Name=Firefox in LXD
Comment=Access the Internet through an LXD container
Exec=/usr/bin/lxc exec guiapps -- sudo --login --user ubuntu firefox %U
Icon=/usr/share/icons/HighContrast/scalable/apps-extra/firefox-icon.svg
Type=Application
Categories=Network;WebBrowser;
^D
$ chmod +x ~/.local/share/applications/lxd-firefox.desktop

We need to make it executable so that it gets picked up and we can then run it by double-clicking.

If it does not appear immediately in the Dash, use your File Manager to locate the directory ~/.local/share/applications/
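
If the shortcut still does not show up, you can optionally check the file for syntax errors with desktop-file-validate (from the desktop-file-utils package); no output means the file parses fine:

$ desktop-file-validate ~/.local/share/applications/lxd-firefox.desktop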

This is how the icon looks in a File Manager. The icon comes from the high-contrast set, which, I now remember, means it has just two colors 🙁

Here is the app on the Launcher. Simply drag from the File Manager and drop to the Launcher in order to get the app at your fingertips.

I hope the tutorial was useful; we explained each command in detail. In a future tutorial, we are going to try to figure out how to automate all this!

post image

How to run Wine (graphics-accelerated) in an LXD container on Ubuntu

Update #1: Added info about adding the gpu configuration device to the container, for hardware acceleration to work (required for some users).

Update #2: Added info about setting the permissions for the gpu device.

Wine lets you run Windows programs on your GNU/Linux distribution.

When you install Wine, it adds all sorts of packages, including 32-bit packages. It looks quite messy; could there be a way to place all those Wine files in a container and keep them there?

This is what we are going to see today. Specifically,

  1. We are going to create an LXD container, called wine-games
  2. We are going to set it up so that it runs graphics-accelerated programs. glxinfo will show the host GPU details.
  3. We are going to install the latest Wine package.
  4. We are going to install and play one of those Windows games.

Creating the LXD container

Let’s create the new container for LXD. If this is the first time you use LXD, have a look at Trying out LXD containers on our Ubuntu.

$ lxc launch ubuntu:x wine-games
Creating wine-games
Starting wine-games
$ lxc list
+---------------+---------+--------------------+--------+------------+-----------+
|     NAME      |  STATE  |        IPV4        |  IPV6  |    TYPE    | SNAPSHOTS |
+---------------+---------+--------------------+--------+------------+-----------+
| wine-games    | RUNNING | 10.0.185.63 (eth0) |        | PERSISTENT | 0         |
+---------------+---------+--------------------+--------+------------+-----------+
$

We created and started an Ubuntu 16.04 (ubuntu:x) container, called wine-games.

Let’s also install our initial testing applications. The first one is xclock, the simplest X11 GUI app. The second is glxinfo, which shows details about graphics acceleration. We will know that our setup for Wine works if both xclock and glxinfo work in the container!

$ lxc exec wine-games -- sudo --login --user ubuntu
ubuntu@wine-games:~$ sudo apt update
ubuntu@wine-games:~$ sudo apt install x11-apps
ubuntu@wine-games:~$ sudo apt install mesa-utils
ubuntu@wine-games:~$ exit
$

We execute a login shell in the wine-games container as user ubuntu, the default non-root username in Ubuntu LXD images.

Then, we run apt update in order to update the package list and be able to install the subsequent two packages that provide xclock and glxinfo respectively. Finally, we exit the container.

Setting up for graphics acceleration

For graphics acceleration, we are going to use the host graphics card and graphics acceleration. By default, the applications that run in a container do not have access to the host system and cannot start GUI apps.

We need two things: to let the container access the GPU devices of the host, and to make sure that there are no restrictions because of different user IDs.

First, we run (only once) the following command (source),

$ echo "root:$UID:1" | sudo tee -a /etc/subuid /etc/subgid
[sudo] password for myusername: 
root:1000:1
$

The command adds a new entry to both the /etc/subuid and /etc/subgid subordinate UID/GID files. It allows the LXD service (which runs as root) to remap our user’s ID ($UID, from the host) as requested.

Then, we specify that we want this feature in our wine-games LXD container, and restart the container for the change to take effect.

$ lxc config set wine-games raw.idmap "both $UID 1000"
$ lxc restart wine-games
$

This “both $UID 1000” syntax is a shortcut that means: map the $UID/$GID of our user on the host to the default non-root user in the container (whose UID/GID should be 1000/1000, at least in Ubuntu images).

Let’s attempt to run xclock in the container.

$ lxc exec wine-games -- sudo --login --user ubuntu
ubuntu@wine-games:~$ xclock
Error: Can't open display: 
ubuntu@wine-games:~$ export DISPLAY=:0
ubuntu@wine-games:~$ xclock
Error: Can't open display: :0
ubuntu@wine-games:~$ exit
$

We run xclock in the container and, as expected, it does not run because we did not indicate where to send the display. We set the DISPLAY environment variable to the default :0 (send to either a Unix socket or port 6000), which does not work either because we have not fully set things up yet. Let’s do that.

$ lxc config device add wine-games X0 disk path=/tmp/.X11-unix/X0 source=/tmp/.X11-unix/X0 
$ lxc config device add wine-games Xauthority disk path=/home/ubuntu/.Xauthority source=/home/MYUSERNAME/.Xauthority

We give the container access to the Unix socket of the X server (/tmp/.X11-unix/X0), and make it available at the exact same path inside the container. In this way, DISPLAY=:0 allows the apps in the container to access our host’s X server through that Unix socket.

Then, we repeat this task with the ~/.Xauthority file that resides in our home directory. This file is for access control, and it simply makes our host X server accept connections from applications inside the container.

How do we get hardware acceleration for the GPU to the container apps? There is a special device for that, and it’s gpu. The hardware acceleration for the graphics card is collectively enabled by running the following,

$ lxc config device add wine-games mygpu gpu
$ lxc config device set wine-games mygpu uid 1000
$ lxc config device set wine-games mygpu gid 1000

We add the gpu device, and we happen to name it mygpu (any name would suffice). [UPDATED] In addition, we set the uid/gid of the gpu device to 1000 (the default uid/gid of the first non-root account on Ubuntu; adapt accordingly on other distributions). The gpu device was introduced in LXD 2.7, therefore if it is not found, you may have to upgrade your LXD according to https://launchpad.net/~ubuntu-lxc/+archive/ubuntu/lxd-stable. Please leave a comment below if this was your case (mention what LXD version you have been running).

Let’s see what we got now.

$ lxc exec wine-games -- sudo --login --user ubuntu
ubuntu@wine-games:~$ export DISPLAY=:0
ubuntu@wine-games:~$ xclock

ubuntu@wine-games:~$ glxinfo 
name of display: :0
display: :0  screen: 0
direct rendering: Yes
server glx vendor string: SGI
server glx version string: 1.4
...
ubuntu@wine-games:~$ echo "export DISPLAY=:0" >> ~/.profile 
ubuntu@wine-games:~$ exit
$

Looks good, we are good to go! Note that we edited the ~/.profile file in order to set the $DISPLAY variable automatically whenever we connect to the container.

Installing Wine

We install Wine in the container according to the instructions at https://wiki.winehq.org/Ubuntu.

$ lxc exec wine-games -- sudo --login --user ubuntu
ubuntu@wine-games:~$ sudo dpkg --add-architecture i386 
ubuntu@wine-games:~$ wget https://dl.winehq.org/wine-builds/Release.key
--2017-05-01 21:30:14--  https://dl.winehq.org/wine-builds/Release.key
Resolving dl.winehq.org (dl.winehq.org)... 151.101.112.69
Connecting to dl.winehq.org (dl.winehq.org)|151.101.112.69|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 3122 (3.0K) [application/pgp-keys]
Saving to: ‘Release.key’

Release.key                100%[=====================================>]   3.05K  --.-KB/s    in 0s      

2017-05-01 21:30:15 (24.9 MB/s) - ‘Release.key’ saved [3122/3122]

ubuntu@wine-games:~$ sudo apt-key add Release.key
OK
ubuntu@wine-games:~$ sudo apt-add-repository https://dl.winehq.org/wine-builds/ubuntu/
ubuntu@wine-games:~$ sudo apt-get update
...
Reading package lists... Done
ubuntu@wine-games:~$ sudo apt-get install --install-recommends winehq-devel
...
Need to get 115 MB of archives.
After this operation, 715 MB of additional disk space will be used.
Do you want to continue? [Y/n] Y
...
ubuntu@wine-games:~$

715MB?!? Sure, bring it on. Whatever is installed in the container, stays in the container! 🙂

Let’s run a game in the container

Here is a game that looks good for our test, Season Match 4. Let’s play it.

ubuntu@wine-games:~$ wget http://cdn.gametop.com/free-games-download/Season-Match4.exe
ubuntu@wine-games:~$ wine Season-Match4.exe 
...
ubuntu@wine-games:~$ cd .wine/drive_c/Program\ Files\ \(x86\)/GameTop.com/Season\ Match\ 4/
ubuntu@wine-games:~/.wine/drive_c/Program Files (x86)/GameTop.com/Season Match 4$ wine SeasonMatch4.exe

Here is the game, and it works. It runs full screen and it is a bit awkward to navigate between windows, but the animations are smooth.

We did not set up sound either in this post, nor did we make nice shortcuts so that we can run these apps with a single click. That’s material for a future tutorial!

post image

A closer look at the new ARM64 Scaleway servers and LXD

Update #1: I posted at the Scaleway Linux kernel discussion thread to add support for the Ubuntu Linux kernel and Add new bootscript with stock Ubuntu Linux kernel #349.

Scaleway has been offering ARM (armv7) cloud servers (baremetal) since 2015 and now they have ARM64 (armv8, from Cavium) cloud servers (through KVM, not baremetal).

But can you run LXD on them? Let’s see.

Launching a new server

We go through the management panel and select to create a new server. At the moment, only the Paris datacenter has availability of ARM64 servers and we select ARM64-2GB.

They use Cavium ThunderX hardware, and those boards have up to 48 cores. You can allocate either 2, 4, or 8 cores, for 2GB, 4GB, and 8GB RAM respectively. KVM is the virtualization platform.

There is an option of either Ubuntu 16.04 or Debian Jessie. We try Ubuntu.

It takes under a minute to provision and boot the server.

Connecting to the server

It runs Linux 4.9.23. Also, the disk is vda, specifically, /dev/vda. That is, there is no partitioning and the filesystem takes over the whole device.

Here is /proc/cpuinfo and uname -a. These are the two cores (out of the 48) that KVM provides. The BogoMIPS are really Bogo on these platforms, so do not take them at face value.

Currently, Scaleway does not have their own mirror of the distribution packages but use ports.ubuntu.com. It’s 16ms away (ping time).

Depending on where you are, the ping times for google.com and www.google.com tend to differ. google.com redirects to www.google.com, so it somewhat makes sense that google.com responds faster. At other locations (a different country), it could be the other way round.

This is /var/log/auth.log, and already there are some hackers trying to brute-force SSH. They have been trying with username ubnt. Note to self: do not use ubnt as the username for the non-root account.

The default configuration for the SSH server on Scaleway is to allow password authentication. You need to change this at /etc/ssh/sshd_config to look like

# Change to no to disable tunnelled clear text passwords
PasswordAuthentication no

Originally, it was commented out, and had a default yes.

Finally, run

sudo systemctl reload sshd

This will not break your existing SSH session (even restarting sshd will not break your existing SSH session, how cool is that?). Now, you can create your non-root account. To get that user to sudo as root, you need to run usermod -a -G sudo myusername.
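
As a minimal sketch, run the following as root on the server (replace myusername with your own choice):

adduser myusername
usermod -a -G sudo myusername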

There is a recovery console, accessible through the Web management screen. For this to work, the page says that “You must first login and set a password via SSH to use this serial console”. In reality, the root account already has a password set, and this password is stored in /root/.pw. It is not known how strong this password is; therefore, when you boot a cloud server on Scaleway,

  1. Disable PasswordAuthentication for SSH as shown above and reload the sshd configuration. You are supposed to have already added your SSH public key in the Scaleway Web management screen BEFORE starting the cloud server.
  2. Change the root password so that it is not the one found at /root/.pw. Store that password somewhere safe, because it is needed if you want to connect through the recovery console.
  3. Create a non-root user that can sudo and uses PubkeyAuthentication, preferably with a username other than ubnt.

Setting up ZFS support

The Ubuntu Linux kernels at Scaleway do not have ZFS support, and you need to compile ZFS as a kernel module according to the instructions at https://github.com/scaleway/kernel-tools.

Actually, those instructions are now apparently obsolete for newer versions of the Linux kernel, and you need to compile and install both spl and zfs manually.

Naturally, when you compile spl and zfs, you can create .deb packages that can be installed and removed cleanly. However, spl and zfs first create .rpm packages and then call alien to convert them to .deb packages. There, we hit an alien bug (no pun intended) which gives the error: zfs-0.6.5.9-1.aarch64.rpm is for architecture aarch64 ; the package cannot be built on this system, which is weird since we are working on aarch64 in the first place.

The important files for the Linux kernel running on these Scaleway ARM64 SoCs can be found at http://mirror.scaleway.com/kernel/aarch64/4.9.23-std-1/

Therefore, run as root the following:

# Determine versions
arch="$(uname -m)"
release="$(uname -r)"
upstream="${release%%-*}"
local="${release#*-}"

# Get kernel sources
mkdir -p /usr/src
wget -O "/usr/src/linux-${upstream}.tar.xz" "https://cdn.kernel.org/pub/linux/kernel/v4.x/linux-${upstream}.tar.xz"
tar xf "/usr/src/linux-${upstream}.tar.xz" -C /usr/src/
ln -fns "/usr/src/linux-${upstream}" /usr/src/linux
ln -fns "/usr/src/linux-${upstream}" "/lib/modules/${release}/build"

# Get the kernel's .config and Module.symvers files
wget -O "/usr/src/linux/.config" "http://mirror.scaleway.com/kernel/${arch}/${release}/config"
wget -O /usr/src/linux/Module.symvers "http://mirror.scaleway.com/kernel/${arch}/${release}/Module.symvers"

# Set the LOCALVERSION to the locally running local version (or edit the file manually)
printf 'CONFIG_LOCALVERSION="%s"\n' "${local:+-$local}" >> /usr/src/linux/.config

# Let's get ready to compile. The following are essential for the kernel module compilation.
apt install -y build-essential
apt install -y libssl-dev
make -C /usr/src/linux prepare modules_prepare

# Now, let's grab the latest spl and zfs (see http://zfsonlinux.org/).
cd /usr/src/
wget https://github.com/zfsonlinux/zfs/releases/download/zfs-0.6.5.9/spl-0.6.5.9.tar.gz
wget https://github.com/zfsonlinux/zfs/releases/download/zfs-0.6.5.9/zfs-0.6.5.9.tar.gz

# Install some dev packages that are needed for spl and zfs,
apt install -y uuid-dev
apt install -y dh-autoreconf
# Let's do spl first
tar xvfa spl-0.6.5.9.tar.gz
cd spl-0.6.5.9/
./autogen.sh
./configure      # Takes about 2 minutes
make             # Takes about 1:10 minutes
make install
cd ..

# Let's do zfs next
tar xvfa zfs-0.6.5.9.tar.gz
cd zfs-0.6.5.9/
./autogen.sh
./configure      # Takes about 6:10 minutes
make             # Takes about 13:20 minutes
make install

# Let's get ZFS loaded
depmod -a
ldconfig
modprobe zfs
zfs list
zpool list

And that’s it! The last two commands will show that there are no datasets or pools available (yet), meaning that it all works.

Setting up LXD

We are going to use a loop file (with ZFS) as the storage. Let’s check what space we have left for this (out of the 50GB disk),

root@scw-ubuntu-arm64:~# df -h /
Filesystem      Size  Used Avail Use% Mounted on
/dev/vda         46G  2.0G   42G   5% /

Initially, it was only 800MB used, now it is 2GB used. Let’s allocate 30GB for LXD.

LXD is not preinstalled on the Scaleway image (other VPS providers have LXD already installed). Therefore,

apt install lxd

Then, we can run lxd init. There is a weird situation when you run lxd init for the first time: it takes quite some time before the first questions appear (choose storage backend, etc.). In fact, it takes 1:42 minutes before you are prompted for the first question. When you subsequently run lxd init, the first question appears at once. lxd init clearly does quite some work the first time, and I did not look into what it is.

root@scw-ubuntu-arm64:~# lxd init
Name of the storage backend to use (dir or zfs) [default=zfs]: 
Create a new ZFS pool (yes/no) [default=yes]? 
Name of the new ZFS pool [default=lxd]: 
Would you like to use an existing block device (yes/no) [default=no]? 
Size in GB of the new loop device (1GB minimum) [default=15]: 30
Would you like LXD to be available over the network (yes/no) [default=no]? 
Do you want to configure the LXD bridge (yes/no) [default=yes]? 
Warning: Stopping lxd.service, but it can still be activated by:
  lxd.socket

LXD has been successfully configured.
root@scw-ubuntu-arm64:~#

Now, let’s run lxc list. This will first create the client certificate. There is quite a bit of cryptography going on, and it takes a lot of time.

ubuntu@scw-ubuntu-arm64:~$ time lxc list
Generating a client certificate. This may take a minute...
If this is your first time using LXD, you should also run: sudo lxd init
To start your first container, try: lxc launch ubuntu:16.04

+------+-------+------+------+------+-----------+
| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
+------+-------+------+------+------+-----------+

real    5m25.717s
user    5m25.460s
sys    0m0.372s
ubuntu@scw-ubuntu-arm64:~$

It is weird and warrants closer examination. In any case,

ubuntu@scw-ubuntu-arm64:~$ cat /proc/sys/kernel/random/entropy_avail
2446
ubuntu@scw-ubuntu-arm64:~$

Creating containers

Let’s create a container. We are going to do each step at a time, in order to measure the time it takes to complete.

ubuntu@scw-ubuntu-arm64:~$ time lxc image copy ubuntu:x local:
Image copied successfully!         

real    1m5.151s
user    0m1.244s
sys    0m0.200s
ubuntu@scw-ubuntu-arm64:~$

Out of the 65 seconds, 25 seconds was the time to download the image and the rest (40 seconds) was for initialization before the prompt was returned.

Let’s see how long it takes to launch a container.

ubuntu@scw-ubuntu-arm64:~$ time lxc launch ubuntu:x c1
Creating c1
Starting c1
error: Error calling 'lxd forkstart c1 /var/lib/lxd/containers /var/log/lxd/c1/lxc.conf': err='exit status 1'
  lxc 20170428125239.730 ERROR lxc_apparmor - lsm/apparmor.c:apparmor_process_label_set:220 - If you really want to start this container, set
  lxc 20170428125239.730 ERROR lxc_apparmor - lsm/apparmor.c:apparmor_process_label_set:221 - lxc.aa_allow_incomplete = 1
  lxc 20170428125239.730 ERROR lxc_apparmor - lsm/apparmor.c:apparmor_process_label_set:222 - in your container configuration file
  lxc 20170428125239.730 ERROR lxc_sync - sync.c:__sync_wait:57 - An error occurred in another process (expected sequence number 5)
  lxc 20170428125239.730 ERROR lxc_start - start.c:__lxc_start:1346 - Failed to spawn container "c1".
  lxc 20170428125240.408 ERROR lxc_conf - conf.c:run_buffer:405 - Script exited with status 1.
  lxc 20170428125240.408 ERROR lxc_start - start.c:lxc_fini:546 - Failed to run lxc.hook.post-stop for container "c1".

Try `lxc info --show-log local:c1` for more info

real    0m21.347s
user    0m0.040s
sys    0m0.048s
ubuntu@scw-ubuntu-arm64:~$

What this means is that the Scaleway Linux kernel does not have all the AppArmor (“aa”) features that LXD requires. If we want to continue, we must explicitly configure the container to state that we are OK with this situation.

What features are missing?

ubuntu@scw-ubuntu-arm64:~$ lxc info --show-log local:c1
Name: c1
Remote: unix:/var/lib/lxd/unix.socket
Architecture: aarch64
Created: 2017/04/28 12:52 UTC
Status: Stopped
Type: persistent
Profiles: default

Log:

            lxc 20170428125239.730 WARN     lxc_apparmor - lsm/apparmor.c:apparmor_process_label_set:218 - Incomplete AppArmor support in your kernel
            lxc 20170428125239.730 ERROR    lxc_apparmor - lsm/apparmor.c:apparmor_process_label_set:220 - If you really want to start this container, set
            lxc 20170428125239.730 ERROR    lxc_apparmor - lsm/apparmor.c:apparmor_process_label_set:221 - lxc.aa_allow_incomplete = 1
            lxc 20170428125239.730 ERROR    lxc_apparmor - lsm/apparmor.c:apparmor_process_label_set:222 - in your container configuration file
            lxc 20170428125239.730 ERROR    lxc_sync - sync.c:__sync_wait:57 - An error occurred in another process (expected sequence number 5)
            lxc 20170428125239.730 ERROR    lxc_start - start.c:__lxc_start:1346 - Failed to spawn container "c1".
            lxc 20170428125240.408 ERROR    lxc_conf - conf.c:run_buffer:405 - Script exited with status 1.
            lxc 20170428125240.408 ERROR    lxc_start - start.c:lxc_fini:546 - Failed to run lxc.hook.post-stop for container "c1".
            lxc 20170428125240.409 WARN     lxc_commands - commands.c:lxc_cmd_rsp_recv:172 - Command get_cgroup failed to receive response: Connection reset by peer.
            lxc 20170428125240.409 WARN     lxc_commands - commands.c:lxc_cmd_rsp_recv:172 - Command get_cgroup failed to receive response: Connection reset by peer.

ubuntu@scw-ubuntu-arm64:~$

Two hints here, some issue with process_label_set, and get_cgroup.

Let’s allow for now, and start the container,

ubuntu@scw-ubuntu-arm64:~$ lxc config set c1 raw.lxc 'lxc.aa_allow_incomplete=1'
ubuntu@scw-ubuntu-arm64:~$ time lxc start c1

real    0m0.577s
user    0m0.016s
sys    0m0.012s
ubuntu@scw-ubuntu-arm64:~$ lxc list
+------+---------+------+------+------------+-----------+
| NAME |  STATE  | IPV4 | IPV6 |    TYPE    | SNAPSHOTS |
+------+---------+------+------+------------+-----------+
| c1   | RUNNING |      |      | PERSISTENT | 0         |
+------+---------+------+------+------------+-----------+
ubuntu@scw-ubuntu-arm64:~$ lxc list
+------+---------+-----------------------+------+------------+-----------+
| NAME |  STATE  |         IPV4          | IPV6 |    TYPE    | SNAPSHOTS |
+------+---------+-----------------------+------+------------+-----------+
| c1   | RUNNING | 10.237.125.217 (eth0) |      | PERSISTENT | 0         |
+------+---------+-----------------------+------+------------+-----------+
ubuntu@scw-ubuntu-arm64:~$

Let’s run nginx in the container.

ubuntu@scw-ubuntu-arm64:~$ lxc exec c1 -- sudo --login --user ubuntu
To run a command as administrator (user "root"), use "sudo <command>".
See "man sudo_root" for details.

ubuntu@c1:~$ sudo apt update
Hit:1 http://ports.ubuntu.com/ubuntu-ports xenial InRelease
...
37 packages can be upgraded. Run 'apt list --upgradable' to see them.
ubuntu@c1:~$ sudo apt install nginx
...
ubuntu@c1:~$ exit
ubuntu@scw-ubuntu-arm64:~$ curl http://10.237.125.217/
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
...
ubuntu@scw-ubuntu-arm64:~$

That’s it! We are running LXD on Scaleway and their new ARM64 servers. These issues should be fixed in order to provide a nicer user experience.

post image

How to initialize LXD again

LXD is the pure-container hypervisor that is pre-installed in Ubuntu 16.04 (or newer) and also available in other GNU/Linux distributions.

When you first configure LXD, you need to make important decisions. Decisions that relate to where you are storing the containers, how big that space will be and also how to set up networking.

In this post we are going to see how to properly clean up LXD with the aim to initialize it again (lxd init).

If you haven’t used LXD at all, have a look at how to set up LXD on your desktop and come back in order to reinitialize together.

Before initializing again, let’s have a look as to what is going on on our system.

What LXD packages have we got installed?

LXD comes in two packages, the lxd package for the hypervisor and the lxd-client for the client utility. There is an extra package, lxd-tools, however this one is not essential at all.

Let’s check which versions we have installed.

$ apt policy lxd lxd-client
lxd:
  Installed: 2.0.9-0ubuntu1~16.04.2
  Candidate: 2.0.9-0ubuntu1~16.04.2
  Version table:
 *** 2.0.9-0ubuntu1~16.04.2 500
        500 http://gb.archive.ubuntu.com/ubuntu xenial-updates/main amd64 Packages
        100 /var/lib/dpkg/status
     2.0.2-0ubuntu1~16.04.1 500
        500 http://security.ubuntu.com/ubuntu xenial-security/main amd64 Packages
     2.0.0-0ubuntu4 500
        500 http://gb.archive.ubuntu.com/ubuntu xenial/main amd64 Packages
lxd-client:
  Installed: 2.0.9-0ubuntu1~16.04.2
  Candidate: 2.0.9-0ubuntu1~16.04.2
  Version table:
 *** 2.0.9-0ubuntu1~16.04.2 500
        500 http://gb.archive.ubuntu.com/ubuntu xenial-updates/main amd64 Packages
        100 /var/lib/dpkg/status
     2.0.2-0ubuntu1~16.04.1 500
        500 http://security.ubuntu.com/ubuntu xenial-security/main amd64 Packages
     2.0.0-0ubuntu4 500
        500 http://gb.archive.ubuntu.com/ubuntu xenial/main amd64 Packages
$ _

I am running Ubuntu 16.04 LTS, currently updated to 16.04.2. The current version of the LXD package is 2.0.9-0ubuntu1~16.04.2. You can see that there is an older version, which was a security update. And an even older version, version 2.0.0, which was the initial version that Ubuntu 16.04 was released with.

There is a PPA that has even more recent versions of LXD (currently at version 2.11), however as it is shown above, we do not have that one enabled here.

We will be uninstalling those two packages in a bit. There is an option to simply uninstall them, but also to uninstall with --purge. We need to figure out what LXD means in terms of installed files, in order to decide whether to purge or not.
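
For reference, the two variants look like this:

$ sudo apt remove lxd lxd-client            # keeps the configuration files around
$ sudo apt remove --purge lxd lxd-client    # also removes the configuration files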

How are the containers stored and where are they located?

The containers can be stored either

  1. in subdirectories on the root (/) filesystem, located at /var/lib/lxd/containers/. You get this when you configure LXD to use the dir storage backend.
  2. in a loop file that is formatted internally with the ZFS filesystem, located at /var/lib/lxd/zfs.img (or under /var/lib/lxd/disks/ in newer versions). You get this when you configure LXD to use the zfs storage backend (on a loop file and not a block device).
  3. in a block device (partition) that is formatted with ZFS (or btrfs). You get this when you configure LXD to use the zfs storage backend (not on a loop file but on a block device).

Let’s see all three cases!

In the following we assume we have a container called mytest, which is running.

$ lxc list
+--------+---------+----------------------+------+------------+-----------+
|  NAME  |  STATE  |         IPV4         | IPV6 |    TYPE    | SNAPSHOTS |
+--------+---------+----------------------+------+------------+-----------+
| mytest | RUNNING | 10.177.65.166 (eth0) |      | PERSISTENT | 0         |
+--------+---------+----------------------+------+------------+-----------+

Let’s see how it looks depending on the type of the storage backend.

Storage backend: dir

Let’s see the config!

$ lxc config show
config: {}
$ _

We are looking for configuration that refers to storage. We do not see any, therefore, this installation uses the dir storage backend.

Where are the files for the mytest container stored?

$ sudo ls -l /var/lib/lxd/containers/
total 8
drwxr-xr-x+ 4 165536 165536 4096 Μάρ  15 23:28 mytest
$ sudo ls -l /var/lib/lxd/containers/mytest/
total 12
-rw-r--r--  1 root   root   1566 Μάρ   8 05:16 metadata.yaml
drwxr-xr-x 22 165536 165536 4096 Μάρ  15 23:28 rootfs
drwxr-xr-x  2 root   root   4096 Μάρ   8 05:16 templates
$ _

Each container can be found in /var/lib/lxd/containers/, in a subdirectory with the same name as the container.

Inside there, in the rootfs/ directory we can find the filesystem of the container.

Storage backend: zfs

Let’s see how the config looks!

$ lxc config show
config:
  storage.zfs_pool_name: lxd
$

Okay, we are using ZFS for the storage backend. It is not clear yet whether we are using a loop file or a block device. How do we find that? With zpool status.

$ sudo zpool status
  pool: lxd
 state: ONLINE
  scan: none requested
config:

    NAME                    STATE     READ WRITE CKSUM
    lxd                     ONLINE       0     0     0
      /var/lib/lxd/zfs.img  ONLINE       0     0     0

errors: No known data errors

In the above example, the ZFS filesystem is stored in a loop file, located at /var/lib/lxd/zfs.img

However, in the following example,

$ sudo zpool status
  pool: lxd
 state: ONLINE
  scan: none requested
config:

    NAME        STATE     READ WRITE CKSUM
    lxd         ONLINE       0     0     0
      sda8      ONLINE       0     0     0

errors: No known data errors

the ZFS filesystem is located in a block device, in /dev/sda8.

Here is how the container files look with ZFS (either on a loop file or on a block device),

$ sudo ls -l /var/lib/lxd/containers/
total 5
lrwxrwxrwx 1 root   root     34 Mar 15 23:43 mytest -> /var/lib/lxd/containers/mytest.zfs
drwxr-xr-x 4 165536 165536    5 Mar 15 23:43 mytest.zfs
$ sudo ls -l /var/lib/lxd/containers/mytest/
total 4
-rw-r--r--  1 root   root   1566 Mar  8 05:16 metadata.yaml
drwxr-xr-x 22 165536 165536   22 Mar 15 23:43 rootfs
drwxr-xr-x  2 root   root      8 Mar  8 05:16 templates
$ mount | grep mytest.zfs
lxd/containers/mytest on /var/lib/lxd/containers/mytest.zfs type zfs (rw,relatime,xattr,noacl)
$ _

How to clean up the storage backend

When we try to run lxd init without cleaning up our storage, we get the following error,

$ lxd init
LXD init cannot be used at this time.
However if all you want to do is reconfigure the network,
you can still do so by running "sudo dpkg-reconfigure -p medium lxd"

error: You have existing containers or images. lxd init requires an empty LXD.
$ _

Yep, we need to clean up both the containers and any cached images.

Cleaning up the containers

We are going to list the containers, then stop them, and finally delete them. Until the list is empty.

$ lxc list
+--------+---------+----------------------+------+------------+-----------+
|  NAME  |  STATE  |         IPV4         | IPV6 |    TYPE    | SNAPSHOTS |
+--------+---------+----------------------+------+------------+-----------+
| mytest | RUNNING | 10.177.65.205 (eth0) |      | PERSISTENT | 0         |
+--------+---------+----------------------+------+------------+-----------+
$ lxc stop mytest
$ lxc delete mytest
$ lxc list
+------+-------+------+------+------+-----------+
| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
+------+-------+------+------+------+-----------+
$ _

It’s empty now!
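
If you have many containers, the same stop-and-delete cycle can be scripted (a sketch; it assumes your LXD supports the csv output format and that you really do want everything gone):

$ for c in $(lxc list --format csv -c n); do lxc stop "$c" --force; lxc delete "$c"; done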

Cleaning up the images

We are going to list the cached images, then delete them. Until the list is empty!

$ lxc image list
+-------+--------------+--------+---------------------------------------------+--------+----------+-------------------------------+
| ALIAS | FINGERPRINT  | PUBLIC |                 DESCRIPTION                 |  ARCH  |   SIZE   |          UPLOAD DATE          |
+-------+--------------+--------+---------------------------------------------+--------+----------+-------------------------------+
|       | 2cab90c0c342 | no     | ubuntu 16.04 LTS amd64 (release) (20170307) | x86_64 | 146.32MB | Mar 15, 2017 at 10:02pm (UTC) |
+-------+--------------+--------+---------------------------------------------+--------+----------+-------------------------------+
$ lxc image delete 2cab90c0c342
$ lxc image list
+-------+-------------+--------+-------------+------+------+-------------+
| ALIAS | FINGERPRINT | PUBLIC | DESCRIPTION | ARCH | SIZE | UPLOAD DATE |
+-------+-------------+--------+-------------+------+------+-------------+
$ _

Clearing up the ZFS storage

If we are using ZFS, here is how we clear up the ZFS pool.

First, we need to remove any reference of the ZFS pool from LXD. We just need to unset the configuration directive storage.zfs_pool_name.

$ lxc config show
config:
  storage.zfs_pool_name: lxd
$ lxc config unset storage.zfs_pool_name
$ lxc config show
config: {}
$ _

Then, we can destroy the ZFS pool.

$ sudo zpool list
NAME   SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
lxd   2,78G   664K  2,78G         -     7%     0%  1.00x  ONLINE  -
$ sudo zfs list
NAME             USED  AVAIL  REFER  MOUNTPOINT
lxd              544K  2,69G    19K  none
lxd/containers    19K  2,69G    19K  none
lxd/images        19K  2,69G    19K  none
$ sudo zpool destroy lxd
$ sudo zpool list
no pools available
$ sudo zfs list
no datasets available
$ _

Running “lxd init” again

At this point we are able to run lxd init again in order to initialize LXD again.

Common errors

Here is a collection of errors that I encountered when running lxd init. These errors should appear if we did not clean up properly as described earlier in this post.

I had been trying lots of variations, including different versions of LXD. You probably need to try hard to get these errors.

error: Provided ZFS pool (or dataset) isn’t empty

Here is how it looks:

$ sudo lxd init
Name of the storage backend to use (dir or zfs) [default=zfs]: zfs
Create a new ZFS pool (yes/no) [default=yes]? no
Name of the existing ZFS pool or dataset: lxd
Would you like LXD to be available over the network (yes/no) [default=no]? no
Do you want to configure the LXD bridge (yes/no) [default=yes]? no
error: Provided ZFS pool (or dataset) isn't empty
Exit 1

Whaaaat??? Something is wrong. The ZFS pool is not empty? What’s inside the ZFS pool?

$ sudo zfs list
NAME             USED  AVAIL  REFER  MOUNTPOINT
lxd              642K  14,4G    19K  none
lxd/containers    19K  14,4G    19K  none
lxd/images        19K  14,4G    19K  none

Okay, it’s just the two datasets that are left over. Let’s erase them!

$ sudo zfs destroy lxd/containers
$ sudo zfs destroy lxd/images
$ sudo zfs list
NAME   USED  AVAIL  REFER  MOUNTPOINT
lxd    349K  14,4G    19K  none
$ _

Nice! Let’s run now lxd init.

$ sudo lxd init
Name of the storage backend to use (dir or zfs) [default=zfs]: zfs
Create a new ZFS pool (yes/no) [default=yes]? no
Name of the existing ZFS pool or dataset: lxd
Would you like LXD to be available over the network (yes/no) [default=no]? no
Do you want to configure the LXD bridge (yes/no) [default=yes]? yes
Warning: Stopping lxd.service, but it can still be activated by:
  lxd.socket
LXD has been successfully configured.
$ _

That’s it! LXD is freshly configured!

error: Failed to create the ZFS pool: cannot create ‘lxd’: pool already exists

Here is how it looks,

$ sudo lxd init
Name of the storage backend to use (dir or zfs) [default=zfs]: 
Create a new ZFS pool (yes/no) [default=yes]? 
Name of the new ZFS pool [default=lxd]: 
Would you like to use an existing block device (yes/no) [default=no]? yes
Path to the existing block device: /dev/sdb9 
Would you like LXD to be available over the network (yes/no) [default=no]? 
Do you want to configure the LXD bridge (yes/no) [default=yes]? 
error: Failed to create the ZFS pool: cannot create 'lxd': pool already exists
$ _

Here we forgot to destroy the ZFS pool called lxd. See earlier in this post on how to destroy the pool so that lxd init can recreate it.

Permission denied, are you in the lxd group?

This is a common error when you first install the lxd package because your non-root account needs to log out and log in again in order to enable the membership to the lxd Unix group.

However, we got this error when we were casually uninstalling and reinstalling the lxd package, and doing nasty tests. Let’s see more details.

$ lxc list
Permission denied, are you in the lxd group?
Exit 1
$ groups myusername
myusername : myusername adm cdrom sudo plugdev lpadmin lxd
$ newgrp lxd
$ lxc list
Permission denied, are you in the lxd group?
Exit 1
$ _

Whaaat!?! Permission denied and we are asked whether we are in the lxd group? We are members of the lxd group!

Well, the problem is whether the Unix socket that allows non-root users (members of the lxd Unix group) to access LXD has proper ownership.

$ ls -l /var/lib/lxd/unix.socket 
srw-rw---- 1 root root 0 Mar 15 23:20 /var/lib/lxd/unix.socket
$ sudo chown :lxd /var/lib/lxd/unix.socket 
$ ls -l /var/lib/lxd/unix.socket 
srw-rw---- 1 root lxd 0 Mar 15 23:20 /var/lib/lxd/unix.socket
$ lxc list
+------+-------+------+------+------+-----------+
| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
+------+-------+------+------+------+-----------+
$ _

The group of the Unix socket /var/lib/lxd/unix.socket was not set to the proper value lxd, therefore we set it ourselves. And then the LXD commands work just fine with our non-root user account!

error: Error checking if a pool is already in use: Failed to list ZFS filesystems: cannot open ‘lxd’: dataset does not exist

Here is a tricky error.

$ sudo lxd init
Name of the storage backend to use (dir or zfs) [default=zfs]: 
Create a new ZFS pool (yes/no) [default=yes]? 
Name of the new ZFS pool [default=lxd]: lxd2
Would you like to use an existing block device (yes/no) [default=no]? 
Size in GB of the new loop device (1GB minimum) [default=15]: 
Would you like LXD to be available over the network (yes/no) [default=no]? 
Do you want to configure the LXD bridge (yes/no) [default=yes]? 
error: Error checking if a pool is already in use: Failed to list ZFS filesystems: cannot open 'lxd': dataset does not exist
$ _

We cleaned up the ZFS pool just fine and we are running lxd init. But we got an error relating to the lxd pool that is already gone. Whaat?!?

What happened is that, in this case, we forgot to FIRST unset the configuration option in LXD regarding the ZFS pool. We simply forgot to run lxc config unset storage.zfs_pool_name.

It’s fine then, let’s unset it now and go on with life.

$ lxc config unset storage.zfs_pool_name
error: Error checking if a pool is already in use: Failed to list ZFS filesystems: cannot open 'lxd': dataset does not exist
Exit 1
$ _

Alright, we really messed up!

There are two ways to move forward. One is to rm -fr /var/lib/lxd/ and start over.

The other way is to edit the /var/lib/lxd/lxd.db Sqlite3 file and change the configuration setting from there. Here is how it works,

First, install the sqlitebrowser package and run sudo sqlitebrowser /var/lib/lxd/lxd.db

Second, get to the config table in sqlitebrowser as shown below.

Third, double-click on the value field (which as shown, says lxd) and clear it so it is shown as empty.

Fourth, click on File→Close Database and select to save the database. Let’s see now!

$ lxc config show
config:
  storage.zfs_pool_name: lxd

What?

Fifth, we need to restart the LXD service so that LXD reads the configuration again.

$ sudo systemctl restart lxd.service
$ lxc config show
config: {}
$ _
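
If you prefer the command line over sqlitebrowser, the same change can be made with the sqlite3 tool (a sketch; it assumes the config table stores the setting in key/value columns, as seen in the sqlitebrowser screenshot):

$ sudo apt install sqlite3
$ sudo sqlite3 /var/lib/lxd/lxd.db "UPDATE config SET value='' WHERE key='storage.zfs_pool_name';"
$ sudo systemctl restart lxd.service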

That’s it! We are good to go!

post image

How to install neo4j in an LXD container on Ubuntu

Neo4j is a different type of database: it is a graph database. It is quite cool and it is worth spending the time to learn how it works.

The main benefit of a graph database is that the information is interconnected as a graph, which allows complex queries to be executed very quickly.

One of the sample databases in Neo4j is (a big part of) the content of IMDb.com (the movie database). Here is a description of some possible queries:

  • Find actors who worked with Gene Hackman, but not when he was also working with Robin Williams in the same movie.
  • Who are the five busiest actors?
  • Return the count of movies in which an actor and director have jointly worked

In this post

  1. we install Neo4j in an LXD container on Ubuntu (or any other GNU/Linux distribution that has installation packages)
  2. set it up so we can access Neo4j from our Ubuntu desktop browser
  3. start the cool online tutorial for Neo4j, which you can complete on your own
  4. remove the container (if you really wish!) in order to clean up the space

Creating an LXD container

See Trying out LXD containers on our Ubuntu in order to make the initial (one-time) configuration of LXD on your Ubuntu desktop.

Then, let’s start with creating a container for neo4j:

$ lxc launch ubuntu:x neo4j
Creating neo4j
Starting neo4j
$ _

Here we launched a container named neo4j, that runs Ubuntu 16.04 (Xenial, hence ubuntu:x).

Let’s see the container details:

$ lxc list
+----------+---------+----------------------+------+------------+-----------+
| NAME     | STATE   | IPV4                 | IPV6 | TYPE       | SNAPSHOTS |
+----------+---------+----------------------+------+------------+-----------+
| neo4j    | RUNNING | 10.60.117.91 (eth0)  |      | PERSISTENT | 0         |
+----------+---------+----------------------+------+------------+-----------+
$ _

It takes a few seconds for a new container to launch. Here, the container is in the RUNNING state, and also has a private IP address. It’s good to go!

Connecting to the LXD container

Let’s get a shell in the new neo4j LXD container.

$ lxc exec neo4j -- sudo --login --user ubuntu
To run a command as administrator (user "root"), use "sudo <command>".
See "man sudo_root" for details.

ubuntu@neo4j:~$ sudo apt update
Hit:1 http://archive.ubuntu.com/ubuntu xenial InRelease
Get:2 http://security.ubuntu.com/ubuntu xenial-security InRelease [102 kB]
Get:3 http://archive.ubuntu.com/ubuntu xenial-updates InRelease [102 kB] 
Get:4 http://archive.ubuntu.com/ubuntu xenial-backports InRelease [102 kB] 
Get:5 http://archive.ubuntu.com/ubuntu xenial/main Sources [868 kB] 
Get:6 http://security.ubuntu.com/ubuntu xenial-security/main Sources [61.1 kB]
Get:7 http://security.ubuntu.com/ubuntu xenial-security/restricted Sources [2,288 B]
Get:8 http://archive.ubuntu.com/ubuntu xenial/restricted Sources [4,808 B] 
Get:9 http://archive.ubuntu.com/ubuntu xenial/universe Sources [7,728 kB] 
Get:10 http://security.ubuntu.com/ubuntu xenial-security/universe Sources [20.9 kB]
Get:11 http://security.ubuntu.com/ubuntu xenial-security/multiverse Sources [1,148 B]
Get:12 http://security.ubuntu.com/ubuntu xenial-security/main amd64 Packages [219 kB]
Get:13 http://security.ubuntu.com/ubuntu xenial-security/main Translation-en [92.0 kB]
Get:14 http://security.ubuntu.com/ubuntu xenial-security/universe amd64 Packages [79.1 kB]
Get:15 http://security.ubuntu.com/ubuntu xenial-security/universe Translation-en [43.9 kB]
Get:16 http://archive.ubuntu.com/ubuntu xenial/multiverse Sources [179 kB] 
Get:17 http://archive.ubuntu.com/ubuntu xenial-updates/main Sources [234 kB] 
Get:18 http://archive.ubuntu.com/ubuntu xenial-updates/restricted Sources [2,688 B] 
Get:19 http://archive.ubuntu.com/ubuntu xenial-updates/universe Sources [134 kB] 
Get:20 http://archive.ubuntu.com/ubuntu xenial-updates/multiverse Sources [4,556 B] 
Get:21 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 Packages [485 kB] 
Get:22 http://archive.ubuntu.com/ubuntu xenial-updates/main Translation-en [193 kB] 
Get:23 http://archive.ubuntu.com/ubuntu xenial-updates/universe amd64 Packages [411 kB] 
Get:24 http://archive.ubuntu.com/ubuntu xenial-updates/universe Translation-en [155 kB] 
Get:25 http://archive.ubuntu.com/ubuntu xenial-backports/main Sources [3,200 B] 
Get:26 http://archive.ubuntu.com/ubuntu xenial-backports/universe Sources [1,868 B] 
Get:27 http://archive.ubuntu.com/ubuntu xenial-backports/main amd64 Packages [4,672 B] 
Get:28 http://archive.ubuntu.com/ubuntu xenial-backports/main Translation-en [3,200 B] 
Get:29 http://archive.ubuntu.com/ubuntu xenial-backports/universe amd64 Packages [2,512 B] 
Get:30 http://archive.ubuntu.com/ubuntu xenial-backports/universe Translation-en [1,216 B] 
Fetched 11.2 MB in 8s (1,270 kB/s) 
Reading package lists... Done
Building dependency tree 
Reading state information... Done
2 packages can be upgraded. Run 'apt list --upgradable' to see them.
ubuntu@neo4j:~$ exit
logout
$ _

The command we used to get a shell inside the container is sudo --login --user ubuntu

We instructed LXD to execute (exec), in the neo4j container, the command that appears after the -- separator.

The images for the LXD containers have both a root account and a user account; in the Ubuntu images the user account is called ubuntu. Both accounts are locked (no default password is available). lxc exec runs commands in the container as root, therefore the sudo --login --user ubuntu command runs without asking for a password. This sudo command creates a login shell for the specified user, user ubuntu.

Once we are connected to the container as user ubuntu, we can run commands as root simply by putting sudo in front of them. Since user ubuntu is in /etc/sudoers, no password is asked for. That is the reason why sudo apt update ran earlier without asking for a password.

The Ubuntu LXD containers auto-update themselves by running unattended-upgrades, which means that we do not need to run sudo apt upgrade. We do run sudo apt update to get an up-to-date list of available packages, so that package installations do not fail because of a stale package list.

After we updated the package list, we exit the container with exit.
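
If you are later curious whether those automatic upgrades have actually run, you can peek at the unattended-upgrades log from the host; this is a sketch that assumes the default log location of the unattended-upgrades package:

$ lxc exec neo4j -- cat /var/log/unattended-upgrades/unattended-upgrades.log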

Installing neo4j

This is the download page for Neo4j, https://neo4j.com/download/ and we click to get the community edition.

We download Neo4j (the Linux (tar) version) to our desktop. When we tried this, version 3.1.1 was available.

We downloaded the file and it can be found in ~/Downloads/ (or the localized name). Let’s copy it to the container,

$ cd ~/Downloads/
$ ls -l neo4j-community-3.1.1-unix.tar.gz 
-rw-rw-r-- 1 user user 77401077 Mar 1 16:04 neo4j-community-3.1.1-unix.tar.gz
$ lxc file push neo4j-community-3.1.1-unix.tar.gz neo4j/home/ubuntu/
$ _

The tarball is about 80MB and we use lxc file push to copy it inside the neo4j container, in the directory /home/ubuntu/. Note that neo4j/home/ubuntu/ ends with a / character to specify that it is a directory. If you omit this, you get an error.

Let’s deal with the tarball inside the container,

$ lxc exec neo4j -- sudo --login --user ubuntu
ubuntu@neo4j:~$ ls -l
total 151401
-rw-rw-r-- 1 ubuntu ubuntu 77401077 Mar  1 12:00 neo4j-community-3.1.1-unix.tar.gz
ubuntu@neo4j:~$ tar xfvz neo4j-community-3.1.1-unix.tar.gz 
neo4j-community-3.1.1/
neo4j-community-3.1.1/bin/
neo4j-community-3.1.1/data/
neo4j-community-3.1.1/data/databases/

[...]
neo4j-community-3.1.1/lib/mimepull-1.9.3.jar
ubuntu@neo4j:~$

The files are now in the container, let’s run this thing!

Running Neo4j

The commands to manage Neo4j are in the bin/ subdirectory,

ubuntu@neo4j:~$ ls -l neo4j-community-3.1.1/bin/
total 24
-rwxr-xr-x 1 ubuntu ubuntu 1624 Jan  5 12:03 cypher-shell
-rwxr-xr-x 1 ubuntu ubuntu 7454 Jan 17 17:52 neo4j
-rwxr-xr-x 1 ubuntu ubuntu 1180 Jan 17 17:52 neo4j-admin
-rwxr-xr-x 1 ubuntu ubuntu 1159 Jan 17 17:52 neo4j-import
-rwxr-xr-x 1 ubuntu ubuntu 5120 Jan 17 17:52 neo4j-shared.sh
-rwxr-xr-x 1 ubuntu ubuntu 1093 Jan 17 17:52 neo4j-shell
drwxr-xr-x 2 ubuntu ubuntu    4 Mar  1 12:02 tools
ubuntu@neo4j:~$

According to Running Neo4j, we need to run “neo4j start”. Let’s do it.

ubuntu@neo4j:~$ neo4j-community-3.1.1/bin/neo4j start
ERROR: Unable to find Java executable.
* Please use Oracle(R) Java(TM) 8, OpenJDK(TM) or IBM J9 to run Neo4j Server.
* Please see http://docs.neo4j.org/ for Neo4j Server installation instructions.
ubuntu@neo4j:~$

We need Java, and the documentation actually said so. We just need the headless JDK, since we are accessing the UI from our browser.

ubuntu@neo4j:~$ sudo apt install default-jdk-headless
Reading package lists... Done
Building dependency tree 
Reading state information... Done
The following additional packages will be installed:
 ca-certificates-java default-jre-headless fontconfig-config fonts-dejavu-core java-common libavahi-client3 libavahi-common-data libavahi-common3 libcups2 libfontconfig1 libfreetype6 libjpeg-turbo8 libjpeg8
 liblcms2-2 libnspr4 libnss3 libnss3-nssdb libpcsclite1 libxi6 libxrender1 libxtst6 openjdk-8-jdk-headless openjdk-8-jre-headless x11-common
[...]
Setting up default-jdk-headless (2:1.8-56ubuntu2) ...
Processing triggers for libc-bin (2.23-0ubuntu5) ...
Processing triggers for systemd (229-4ubuntu16) ...
Processing triggers for ureadahead (0.100.0-19) ...
ubuntu@neo4j:~$

Now, we are ready to start Neo4j!

ubuntu@neo4j:~$ neo4j-community-3.1.1/bin/neo4j start
Starting Neo4j.
Started neo4j (pid 4123). By default, it is available at http://localhost:7474/
There may be a short delay until the server is ready.
See /home/ubuntu/neo4j-community-3.1.1/logs/neo4j.log for current status.
ubuntu@neo4j:~$ tail /home/ubuntu/neo4j-community-3.1.1/logs/neo4j.log
nohup: ignoring input
2017-03-01 12:22:41.060+0000 INFO  No SSL certificate found, generating a self-signed certificate..
2017-03-01 12:22:41.517+0000 INFO  Starting...
2017-03-01 12:22:41.914+0000 INFO  Bolt enabled on localhost:7687.
2017-03-01 12:22:43.622+0000 INFO  Started.
2017-03-01 12:22:44.375+0000 INFO  Remote interface available at http://localhost:7474/
ubuntu@neo4j:~$

So, Neo4j is running just fine, but it is bound to localhost, which makes it inaccessible from our desktop browser. Let’s verify,

ubuntu@neo4j:~$ sudo lsof -i
COMMAND   PID   USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
dhclient  225   root    6u  IPv4 110811      0t0  UDP *:bootpc 
sshd      328   root    3u  IPv4 111079      0t0  TCP *:ssh (LISTEN)
sshd      328   root    4u  IPv6 111081      0t0  TCP *:ssh (LISTEN)
java     4123 ubuntu  210u  IPv6 121981      0t0  TCP localhost:7687 (LISTEN)
java     4123 ubuntu  212u  IPv6 121991      0t0  TCP localhost:7474 (LISTEN)
java     4123 ubuntu  220u  IPv6 121072      0t0  TCP localhost:7473 (LISTEN)
ubuntu@neo4j:~$

What we actually need is for Neo4j to bind to all network interfaces so that it becomes accessible from our desktop browser. Since we are inside a container, all network interfaces really means the only other interface, the private network interface that LXD created for us:

ubuntu@neo4j:~$ ifconfig eth0
eth0      Link encap:Ethernet  HWaddr 00:16:3e:48:b7:85  
          inet addr:10.60.117.21  Bcast:10.60.117.255  Mask:255.255.255.0
          inet6 addr: fe80::216:3eff:fe48:b785/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:30029 errors:0 dropped:0 overruns:0 frame:0
          TX packets:17553 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:51824701 (51.8 MB)  TX bytes:1193503 (1.1 MB)
ubuntu@neo4j:~$

Where do we look in the configuration files of Neo4j to get it to bind to all network interfaces?

We look at the Neo4j documentation on Configuring the connectors, and we see that we need to edit the configuration file neo4j-community-3.1.1/conf/neo4j.conf

We can see that there is an overall configuration parameter for the connectors, and we can set the default_listen_address to 0.0.0.0. In networking terms, 0.0.0.0 means that we want the process to bind to all network interfaces. Let us remind ourselves that in our case, where the LXD containers reside on a private network, this is OK.

# With default configuration Neo4j only accepts local connections.
# To accept non-local connections, uncomment this line:
#dbms.connectors.default_listen_address=0.0.0.0

to

# With default configuration Neo4j only accepts local connections.
# To accept non-local connections, uncomment this line:
dbms.connectors.default_listen_address=0.0.0.0

Let’s restart Neo4j and check whether it looks OK:

ubuntu@neo4j:~$ neo4j-community-3.1.1/bin/neo4j restart
Stopping Neo4j.. stopped
Starting Neo4j.
Started neo4j (pid 4711). By default, it is available at http://localhost:7474/
There may be a short delay until the server is ready.
See /home/ubuntu/neo4j-community-3.1.1/logs/neo4j.log for current status.
ubuntu@neo4j:~$ tail /home/ubuntu/neo4j-community-3.1.1/logs/neo4j.log
2017-03-01 12:57:52.839+0000 INFO  Started.
2017-03-01 12:57:53.624+0000 INFO  Remote interface available at http://localhost:7474/
2017-03-01 13:01:58.566+0000 INFO  Neo4j Server shutdown initiated by request
2017-03-01 13:01:58.575+0000 INFO  Stopping...
2017-03-01 13:01:58.620+0000 INFO  Stopped.
nohup: ignoring input
2017-03-01 13:02:00.088+0000 INFO  Starting...
2017-03-01 13:02:01.310+0000 INFO  Bolt enabled on 0.0.0.0:7687.
2017-03-01 13:02:02.928+0000 INFO  Started.
2017-03-01 13:02:03.599+0000 INFO  Remote interface available at http://localhost:7474/
ubuntu@neo4j:~$ sudo lsof -n -i
COMMAND   PID   USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
dhclient  225   root    6u  IPv4 110811      0t0  UDP *:bootpc 
sshd      328   root    3u  IPv4 111079      0t0  TCP *:ssh (LISTEN)
sshd      328   root    4u  IPv6 111081      0t0  TCP *:ssh (LISTEN)
java     4711 ubuntu  210u  IPv6 153406      0t0  TCP *:7687 (LISTEN)
java     4711 ubuntu  212u  IPv6 153415      0t0  TCP *:7474 (LISTEN)
java     4711 ubuntu  220u  IPv6 153419      0t0  TCP *:7473 (LISTEN)
ubuntu@neo4j:~$

The log messages still say that Neo4j is accessible at http://localhost:7474/, which is factually correct. However, lsof shows us that it is bound to all network interfaces (the * means all).

Loading up Neo4j in the browser

We know already that in our case, the private IP address of the neo4j LXD container is 10.60.117.21. Let’s visit http://10.60.117.21:7474/ on our desktop Web browser!

It works! It asks us to log in using the default username neo4j with the default password neo4j. Then, it will ask us to change the password to something else and we are presented with the initial page of Neo4j,

The $ ▊ prompt is there for you to type instructions. According to the online tutorial at https://neo4j.com/graphacademy/online-training/introduction-graph-databases/ you can start the tutorial by typing :play movie graph and pressing the Run button. Therefore, load the tutorial in one browser tab and in another tab run the commands against the Neo4j server in the LXD container!

Once you are done

Once you have completed the tutorial, you can keep the container in order to try out more tutorials and learn more about neo4j.

However, if you want to remove this LXD container, it can be done by running:

$ lxc stop neo4j
$ lxc delete neo4j
$ lxc list
+----------+---------+----------------------+------+------------+-----------+
|   NAME   |  STATE  |         IPV4         | IPV6 |    TYPE    | SNAPSHOTS |
+----------+---------+----------------------+------+------------+-----------+
+----------+---------+----------------------+------+------------+-----------+
$ _

That’s it. The container is gone and LXD is ready for you to follow more LXD tutorials and create more containers!

post image

How to install LXD/LXC containers on Ubuntu on cloudscale.ch

In previous posts, we saw how to configure LXD/LXC containers on a VPS on DigitalOcean and Scaleway. There are many more VPS companies.

cloudscale.ch is one more company that provides Virtual Private Servers (VPS). They are based in Switzerland.

In this post we are going to see how to create a VPS on cloudscale.ch and configure it to use LXD/LXC containers.

We now use the term LXD/LXC containers (instead of LXC containers in previous articles) in order to show that LXD is a management service for LXC containers; LXD works on top of LXC. It is somewhat similar to GNU/Linux, where GNU software runs on top of the Linux kernel.

Set up the VPS

cloudscale1

We are creating a VPS called myubuntuserver, using the Flex-2 Compute Flavor. This is the most affordable, at 2GB RAM with 1 vCPU core. It costs 1 CHF per day, which is about 0.92€ (or US$1).

The default disk capacity is 10GB, which is included in the 1 CHF per day. If you want more capacity, there is an extra charge.

cloudscale2

We are installing Ubuntu 16.04 and accept the rest of the default settings. Currently, there is only one server location, at Rümlang near Zurich (the largest city in Switzerland).

cloudscale4

Here is the summary of the freshly launched VPS server. The IP address is shown as well.

Connect and update the VPS

In order to connect, we need to SSH to that IP address using the fixed username ubuntu. There is an option for either password authentication or public-key authentication. Let’s connect.

myusername@mycomputer:~$ ssh ubuntu@5.102.145.245
Welcome to Ubuntu 16.04 LTS (GNU/Linux 4.4.0-24-generic x86_64)

* Documentation: https://help.ubuntu.com/

Get cloud support with Ubuntu Advantage Cloud Guest:
 http://www.ubuntu.com/business/services/cloud

0 packages can be updated.
0 updates are security updates.

The programs included with the Ubuntu system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Ubuntu comes with ABSOLUTELY NO WARRANTY, to the extent permitted by
applicable law.

To run a command as administrator (user "root"), use "sudo <command>".
See "man sudo_root" for details.

ubuntu@myubuntuserver:~$

Let’s update the package list,

ubuntu@myubuntuserver:~$ sudo apt update
Hit:1 http://ch.archive.ubuntu.com/ubuntu xenial InRelease
Get:2 http://ch.archive.ubuntu.com/ubuntu xenial-updates InRelease [95.7 kB]
...
Get:31 http://security.ubuntu.com/ubuntu xenial-security/multiverse amd64 Packages [1176 B]
Fetched 10.5 MB in 2s (4707 kB/s) 
Reading package lists... Done
Building dependency tree 
Reading state information... Done
67 packages can be upgraded. Run 'apt list --upgradable' to see them.
ubuntu@myubuntuserver:~$ sudo apt upgrade
Reading package lists... Done
Building dependency tree 
...
Processing triggers for libc-bin (2.23-0ubuntu3) ...
ubuntu@myubuntuserver:~$

In this case, we upgraded 67 packages, among which was lxd. It was important to perform this package upgrade.

Configure LXD/LXC

Let’s see how much free disk space is there,

ubuntu@myubuntuserver:~$ df -h /
Filesystem Size Used Avail Use% Mounted on
/dev/vda1  9.7G 1.2G  8.6G 12%  /
ubuntu@myubuntuserver:~$

There is 8.6GB of free space; let’s allocate 5GB of that for the ZFS pool. First, we need to install the package zfsutils-linux. Then, we initialize LXD.

ubuntu@myubuntuserver:~$ sudo apt install zfsutils-linux
Reading package lists... Done
...
Processing triggers for ureadahead (0.100.0-19) ...
ubuntu@myubuntuserver:~$ sudo lxd init
Name of the storage backend to use (dir or zfs): zfs
Create a new ZFS pool (yes/no)? yes
Name of the new ZFS pool: myzfspool
Would you like to use an existing block device (yes/no)? no
Size in GB of the new loop device (1GB minimum): 5
Would you like LXD to be available over the network (yes/no)? no
Do you want to configure the LXD bridge (yes/no)? yes
...accept the network autoconfiguration settings that you will be asked...
LXD has been successfully configured.
ubuntu@myubuntuserver:~$

That’s it! We are good to go and configure our first LXD/LXC container.

Testing a container as a Web server

Let’s test LXD/LXC by creating a container, installing nginx and accessing it remotely.

ubuntu@myubuntuserver:~$ lxc launch ubuntu:x web
Creating web
Retrieving image: 100%
Starting web
ubuntu@myubuntuserver:~$

We launched a container called web.

Let’s connect to the container, update the package list and upgrade any available packages.

ubuntu@myubuntuserver:~$ lxc exec web -- /bin/bash
root@web:~# apt update
Hit:1 http://archive.ubuntu.com/ubuntu xenial InRelease
...
9 packages can be upgraded. Run 'apt list --upgradable' to see them.
root@web:~# apt upgrade
Reading package lists... Done
...
Processing triggers for initramfs-tools (0.122ubuntu8.1) ...
root@web:~#

Still inside the container, we install nginx.

root@web:~# apt install nginx
Reading package lists... Done
...
Processing triggers for ufw (0.35-0ubuntu2) ...
root@web:~#

Let’s make a small change in the default index.html,

root@web:/var/www/html# diff -u /var/www/html/index.nginx-debian.html.ORIGINAL /var/www/html/index.nginx-debian.html
--- /var/www/html/index.nginx-debian.html.ORIGINAL 2016-08-09 17:08:16.450844570 +0000
+++ /var/www/html/index.nginx-debian.html 2016-08-09 17:08:45.543247231 +0000
@@ -1,7 +1,7 @@
 <!DOCTYPE html>
 <html>
 <head>
-<title>Welcome to nginx!</title>
+<title>Welcome to nginx on an LXD/LXC container on Ubuntu at cloudscale.ch!</title>
 <style>
 body {
 width: 35em;
@@ -11,7 +11,7 @@
 </style>
 </head>
 <body>
-<h1>Welcome to nginx!</h1>
+<h1>Welcome to nginx on an LXD/LXC container on Ubuntu at cloudscale.ch!</h1>
 <p>If you see this page, the nginx web server is successfully installed and
 working. Further configuration is required.</p>
 
root@web:/var/www/html#

Finally, let’s add a quick and dirty iptables rule to make the container accessible from the Internet.

root@web:/var/www/html# exit
ubuntu@myubuntuserver:~$ lxc list
+------+---------+---------------------+------+------------+-----------+
| NAME | STATE   | IPV4                | IPV6 | TYPE       | SNAPSHOTS |
+------+---------+---------------------+------+------------+-----------+
| web  | RUNNING | 10.5.242.156 (eth0) |      | PERSISTENT | 0         |
+------+---------+---------------------+------+------------+-----------+
ubuntu@myubuntuserver:~$ ifconfig ens3
ens3 Link encap:Ethernet HWaddr fa:16:3e:ad:dc:2c 
 inet addr:5.102.145.245 Bcast:5.102.145.255 Mask:255.255.255.0
 inet6 addr: fe80::f816:3eff:fead:dc2c/64 Scope:Link
 UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
 RX packets:102934 errors:0 dropped:0 overruns:0 frame:0
 TX packets:35613 errors:0 dropped:0 overruns:0 carrier:0
 collisions:0 txqueuelen:1000 
 RX bytes:291995591 (291.9 MB) TX bytes:3265570 (3.2 MB)

ubuntu@myubuntuserver:~$

Therefore, the iptables command that will allow access to the container is,

ubuntu@myubuntuserver:~$ sudo iptables -t nat -I PREROUTING -i ens3 -p TCP -d 5.102.145.245/32 --dport 80 -j DNAT --to-destination 10.5.242.156:80
ubuntu@myubuntuserver:~$
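
Note that this rule is not persistent across reboots. To verify that it is in place, or to remove it later, the standard iptables invocations are (a sketch):

ubuntu@myubuntuserver:~$ sudo iptables -t nat -L PREROUTING -n --line-numbers
ubuntu@myubuntuserver:~$ sudo iptables -t nat -D PREROUTING 1    # delete rule number 1, if you want to undo it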

Here is the result when we visit the new Web server from our computer,

cloudscale-nginx

Benchmarks

We are benchmarking the CPU, the memory and the disk. Note that our VPS has a single vCPU.

CPU

We are benchmarking the CPU using sysbench with the following parameters.

ubuntu@myubuntuserver:~$ sysbench --num-threads=1 --test=cpu run
sysbench 0.4.12: multi-threaded system evaluation benchmark

Running the test with following options:
Number of threads: 1

Doing CPU performance benchmark

Threads started!
Done.

Maximum prime number checked in CPU test: 10000


Test execution summary:
 total time: 10.9448s
 total number of events: 10000
 total time taken by event execution: 10.9429
 per-request statistics:
 min: 0.96ms
 avg: 1.09ms
 max: 2.79ms
 approx. 95 percentile: 1.27ms

Threads fairness:
 events (avg/stddev): 10000.0000/0.00
 execution time (avg/stddev): 10.9429/0.00

ubuntu@myubuntuserver:~$

The total time for the CPU benchmark with one thread was 10.94s. With two threads, it was 10.23s. With four threads, it was 10.07s.
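
For reference, the corresponding invocations for two and four threads only change the --num-threads parameter:

ubuntu@myubuntuserver:~$ sysbench --num-threads=2 --test=cpu run
ubuntu@myubuntuserver:~$ sysbench --num-threads=4 --test=cpu run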

Memory

We are benchmarking the memory using sysbench with the following parameters.

ubuntu@myubuntuserver:~$ sysbench --num-threads=1 --test=memory run
sysbench 0.4.12: multi-threaded system evaluation benchmark

Running the test with following options:
Number of threads: 1

Doing memory operations speed test
Memory block size: 1K

Memory transfer size: 102400M

Memory operations type: write
Memory scope type: global
Threads started!
Done.

Operations performed: 104857600 (1768217.45 ops/sec)

102400.00 MB transferred (1726.77 MB/sec)


Test execution summary:
 total time: 59.3013s
 total number of events: 104857600
 total time taken by event execution: 47.2179
 per-request statistics:
 min: 0.00ms
 avg: 0.00ms
 max: 0.80ms
 approx. 95 percentile: 0.00ms

Threads fairness:
 events (avg/stddev): 104857600.0000/0.00
 execution time (avg/stddev): 47.2179/0.00

ubuntu@myubuntuserver:~$

The total time for the memory benchmark with one thread was 59.30s. With two threads, it was 62.17s. With four threads, it was 62.57s.

Disk

We are benchmarking the disk using dd with the following parameters.

ubuntu@myubuntuserver:~$ dd if=/dev/zero of=testfile bs=1M count=1024 oflag=dsync
1024+0 records in
1024+0 records out
1073741824 bytes (1,1 GB, 1,0 GiB) copied, 21,1995 s, 50,6 MB/s
ubuntu@myubuntuserver:~$


It took about 21 seconds to write a 1GB file as 1024 blocks of 1MB each, with the dsync flag (each block is flushed to disk before the next one is written). The throughput was 50.6MB/s. Subsequent invocations were around 50MB/s as well.
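
If you want a figure where the data is synced only once at the end of the write (instead of after every block, as oflag=dsync does), a variant worth trying is dd's conv=fdatasync; this is just a sketch, not a benchmark that was run for this post:

ubuntu@myubuntuserver:~$ dd if=/dev/zero of=testfile bs=1M count=1024 conv=fdatasync
ubuntu@myubuntuserver:~$ rm testfile    # clean up the 1GB test file afterwards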

ZFS pool free space

Here is the free space in the ZFS pool after one container, the one with nginx installed and its packages upgraded,

ubuntu@myubuntuserver:~$ sudo zpool list
NAME       SIZE ALLOC  FREE EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
myzfspool 4,97G  811M 4,18G        -  11% 15% 1.00x ONLINE       -
ubuntu@myubuntuserver:~$

And again, right after a second (new and empty) container was created,

ubuntu@myubuntuserver:~$ sudo zpool list
NAME       SIZE ALLOC  FREE EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
myzfspool 4,97G  822M 4,17G        -  11% 16% 1.00x ONLINE       -
ubuntu@myubuntuserver:~$

Thanks to copy-on-write with ZFS, new containers do not take up much space. Only the files that are added or updated contribute additional space.

Conclusion

We saw how to launch an Ubuntu 16.04 VPS on cloudscale.ch, then configure LXD.

We created a container with nginx, and configured iptables so that the Web server is accessible from the Internet.

Finally, we saw some benchmarks for the vCPU, the memory and the disk.

post image

How to set up multiple secure (SSL/TLS, Qualys SSL Labs A+) websites using LXD containers

In previous posts we saw how to set up LXD on a DigitalOcean VPS, how to set up LXD on a Scaleway VPS, and what the lifecycle of an LXD container looks like.

In this post, we are going to

  1. Create multiple websites, each in a separate LXD container
  2. Install HAProxy as a TLS Termination Proxy, in an LXD container
  3. Configure HAProxy so that each website is only accessible through TLS
  4. Perform the SSL Server Test so that our websites really get the A+!

In this post, we are not going to install WordPress (or other CMS) on the websites. We keep this post simple as that is material for our next post.

The requirements are

Set up a VPS

We are using DigitalOcean in this example.

do-create-droplet-16041

Ubuntu 16.04.1 LTS was released a few days ago and DigitalOcean changed the Ubuntu default to 16.04.1. This is nice.

We are trying out the smallest droplet in order to figure out how many websites we can squeeze into containers. That is, 512MB RAM, a single virtual CPU core, and only 20GB of disk space!

In this example we are not using the new DigitalOcean block storage as at the moment it is available in only two datacentres.

Let’s click on the Create droplet button and the VPS is created!

Initial configuration

We are using DigitalOcean in this HowTo, and we have covered the initial configuration in this previous post.

https://blog.simos.info/trying-out-lxd-containers-on-ubuntu-on-digitalocean/

Go through the post and perform the tasks described in section «Set up LXD on DigitalOcean».

Creating the containers

We create three containers for three websites, plus one container for HAProxy.

ubuntu@ubuntu-512mb-ams3-01:~$ lxc init ubuntu:x web1
Creating web1
Retrieving image: 100%
ubuntu@ubuntu-512mb-ams3-01:~$ time lxc init ubuntu:x web2
Creating web2

real    0m6.620s
user    0m0.016s
sys    0m0.004s
ubuntu@ubuntu-512mb-ams3-01:~$ time lxc init ubuntu:x web3
Creating web3

real    1m15.723s
user    0m0.012s
sys    0m0.020s
ubuntu@ubuntu-512mb-ams3-01:~$ time lxc init ubuntu:x haproxy
Creating haproxy

real    0m48.747s
user    0m0.012s
sys    0m0.012s
ubuntu@ubuntu-512mb-ams3-01:~$

Normally it takes a few seconds for a new container to initialize. Remember that we are squeezing here: it’s a 512MB VPS, and the ZFS pool is stored in a file (not a block device)! We look in the kernel messages of the VPS for lines similar to «Out of memory: Kill process 3829 (unsquashfs) score 524 or sacrifice child», which indicate that we reached the memory limit. While preparing this blog post, there were a couple of Out of memory kills, so I made sure that nothing critical was dying. If this is too much for you, you can select a 1GB RAM (or more) VPS and start over.
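
To check whether the OOM killer fired on your own VPS, you can grep the kernel log; the same check appears again in the Results section at the end of this post:

ubuntu@ubuntu-512mb-ams3-01:~$ dmesg | grep "Out of memory"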

Let’s start the containers up!

ubuntu@ubuntu-512mb-ams3-01:~$ lxc start web1 web2 web3 haproxy
ubuntu@ubuntu-512mb-ams3-01:~$ lxc list
+---------+---------+-----------------------+------+------------+-----------+
|  NAME   |  STATE  |         IPV4          | IPV6 |    TYPE    | SNAPSHOTS |
+---------+---------+-----------------------+------+------------+-----------+
| haproxy | RUNNING | 10.234.150.39 (eth0)  |      | PERSISTENT | 0         |
+---------+---------+-----------------------+------+------------+-----------+
| web1    | RUNNING | 10.234.150.169 (eth0) |      | PERSISTENT | 0         |
+---------+---------+-----------------------+------+------------+-----------+
| web2    | RUNNING | 10.234.150.119 (eth0) |      | PERSISTENT | 0         |
+---------+---------+-----------------------+------+------------+-----------+
| web3    | RUNNING | 10.234.150.51 (eth0)  |      | PERSISTENT | 0         |
+---------+---------+-----------------------+------+------------+-----------+
ubuntu@ubuntu-512mb-ams3-01:~$

You may need to run lxc list a few times until you make sure that all containers got an IP address. That means that they all completed their startup.

DNS configuration

The public IP address of this specific VPS is 188.166.10.229. For this test, I am using the domain ubuntugreece.xyz as follows:

  1. Container web1: ubuntugreece.xyz and www.ubuntugreece.xyz have IP 188.166.10.229
  2. Container web2: web2.ubuntugreece.xyz has IP 188.166.10.229
  3. Container web3: web3.ubuntugreece.xyz has IP 188.166.10.229

Here is how it looks when configured on a DNS management console,

namecheap-configuration-containers

From here on, it is a waiting game until these DNS configurations propagate to the rest of the Internet. We need to wait until those hostnames resolve to the IP address.

ubuntu@ubuntu-512mb-ams3-01:~$ host ubuntugreece.xyz
ubuntugreece.xyz has address 188.166.10.229
ubuntu@ubuntu-512mb-ams3-01:~$ host web2.ubuntugreece.xyz
Host web2.ubuntugreece.xyz not found: 3(NXDOMAIN)
ubuntu@ubuntu-512mb-ams3-01:~$ host web3.ubuntugreece.xyz
web3.ubuntugreece.xyz has address 188.166.10.229
ubuntu@ubuntu-512mb-ams3-01:~$

These are the results after ten minutes. ubuntugreece.xyz and web3.ubuntugreece.xyz are resolving fine, while web2.ubuntugreece.xyz needs a bit more time.

We can continue! (and ignore for now web2)

Web server configuration

Let’s see the configuration for web1. You must repeat the following for web2 and web3.

We install the nginx web server,

ubuntu@ubuntu-512mb-ams3-01:~$ lxc exec web1 -- /bin/bash
root@web1:~# apt update
Get:1 http://security.ubuntu.com/ubuntu xenial-security InRelease [94.5 kB]
...
3 packages can be upgraded. Run 'apt list --upgradable' to see them.
root@web1:~# apt upgrade
Reading package lists... Done
...
Processing triggers for initramfs-tools (0.122ubuntu8.1) ...
root@web1:~# apt install nginx
Reading package lists... Done
...
Processing triggers for ufw (0.35-0ubuntu2) ...
root@web1:~#

nginx needs to be configured so that it understands the domain name for web1. Here is the diff,

diff --git a/etc/nginx/sites-available/default b/etc/nginx/sites-available/default
index a761605..b2cea8f 100644
--- a/etc/nginx/sites-available/default
+++ b/etc/nginx/sites-available/default
@@ -38,7 +38,7 @@ server {
        # Add index.php to the list if you are using PHP
        index index.html index.htm index.nginx-debian.html;
 
-       server_name _;
+       server_name ubuntugreece.xyz www.ubuntugreece.xyz;
 
        location / {
                # First attempt to serve request as file, then

and finally we restart nginx and exit the web1 container,

root@web1:/etc/nginx/sites-enabled# systemctl restart nginx
root@web1:/etc/nginx/sites-enabled# exit
exit
ubuntu@ubuntu-512mb-ams3-01:~$
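
For web2 and web3 the change is the same; only the server_name line differs. A sketch, matching the DNS entries configured above:

server_name web2.ubuntugreece.xyz;    # in /etc/nginx/sites-available/default inside container web2
server_name web3.ubuntugreece.xyz;    # in /etc/nginx/sites-available/default inside container web3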

Forwarding connections to the HAProxy container

We are about to set up the HAProxy container. Let’s add iptables rules to forward connections to ports 80 and 443 on the VPS to the HAProxy container.

ubuntu@ubuntu-512mb-ams3-01:~$ ifconfig eth0
eth0      Link encap:Ethernet  HWaddr 04:01:36:50:00:01  
          inet addr:188.166.10.229  Bcast:188.166.63.255  Mask:255.255.192.0
          inet6 addr: fe80::601:36ff:fe50:1/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:40513 errors:0 dropped:0 overruns:0 frame:0
          TX packets:26362 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:360767509 (360.7 MB)  TX bytes:3863846 (3.8 MB)

ubuntu@ubuntu-512mb-ams3-01:~$ lxc list
+---------+---------+-----------------------+------+------------+-----------+
|  NAME   |  STATE  |         IPV4          | IPV6 |    TYPE    | SNAPSHOTS |
+---------+---------+-----------------------+------+------------+-----------+
| haproxy | RUNNING | 10.234.150.39 (eth0)  |      | PERSISTENT | 0         |
+---------+---------+-----------------------+------+------------+-----------+
| web1    | RUNNING | 10.234.150.169 (eth0) |      | PERSISTENT | 0         |
+---------+---------+-----------------------+------+------------+-----------+
| web2    | RUNNING | 10.234.150.119 (eth0) |      | PERSISTENT | 0         |
+---------+---------+-----------------------+------+------------+-----------+
| web3    | RUNNING | 10.234.150.51 (eth0)  |      | PERSISTENT | 0         |
+---------+---------+-----------------------+------+------------+-----------+
ubuntu@ubuntu-512mb-ams3-01:~$ sudo iptables -t nat -I PREROUTING -i eth0 -p TCP -d 188.166.10.229/32 --dport 80 -j DNAT --to-destination 10.234.150.39:80
[sudo] password for ubuntu: 
ubuntu@ubuntu-512mb-ams3-01:~$ sudo iptables -t nat -I PREROUTING -i eth0 -p TCP -d 188.166.10.229/32 --dport 443 -j DNAT --to-destination 10.234.150.39:443
ubuntu@ubuntu-512mb-ams3-01:~$

If you want to make those changes permanent, see Saving Iptables Firewall Rules Permanently (the part about the package iptables-persistent).
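
A minimal sketch of that, using the iptables-persistent package on Ubuntu 16.04 (it prompts during installation whether to save the current rules, and you can save again later):

ubuntu@ubuntu-512mb-ams3-01:~$ sudo apt install iptables-persistent
ubuntu@ubuntu-512mb-ams3-01:~$ sudo netfilter-persistent save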

HAProxy initial configuration

Let’s see how to configure HAProxy in container haproxy. We enter the container, update the software and install the haproxy package.

ubuntu@ubuntu-512mb-ams3-01:~$ lxc exec haproxy -- /bin/bash
root@haproxy:~# apt update
Hit:1 http://archive.ubuntu.com/ubuntu xenial InRelease
...
3 packages can be upgraded. Run 'apt list --upgradable' to see them.
root@haproxy:~# apt upgrade
Reading package lists... Done
...
Processing triggers for initramfs-tools (0.122ubuntu8.1) ...
root@haproxy:~# apt install haproxy
Reading package lists... Done
...
Processing triggers for ureadahead (0.100.0-19) ...
root@haproxy:~#

We add the following configuration to /etc/haproxy/haproxy.cfg. Initially, we do not have any certificates for TLS, but we need the Web servers to work with plain HTTP so that Let’s Encrypt can verify that we own the websites. Therefore, here is the complete configuration, with two lines commented out (they start with ###) so that plain HTTP can work. As soon as we deal with Let’s Encrypt, we go full TLS (by uncommenting the two lines that start with ###) and never look back. We mention later in the post when to uncomment them.

diff --git a/etc/haproxy/haproxy.cfg b/etc/haproxy/haproxy.cfg
index 86da67d..f6f2577 100644
--- a/etc/haproxy/haproxy.cfg
+++ b/etc/haproxy/haproxy.cfg
@@ -18,11 +18,17 @@ global
     ssl-default-bind-ciphers ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:ECDH+3DES:DH+3DES:RSA+AESGCM:RSA+AES:RSA+3DES:!aNULL:!MD5:!DSS
     ssl-default-bind-options no-sslv3
 
+        # Minimum DH ephemeral key size. Otherwise, this size would drop to 1024.
+        # @link: https://cbonte.github.io/haproxy-dconv/configuration-1.5.html#3.2-tune.ssl.default-dh-param
+        tune.ssl.default-dh-param 2048
+
 defaults
     log    global
     mode    http
     option    httplog
     option    dontlognull
+        option  forwardfor
+        option  http-server-close
         timeout connect 5000
         timeout client  50000
         timeout server  50000
@@ -33,3 +39,56 @@ defaults
     errorfile 502 /etc/haproxy/errors/502.http
     errorfile 503 /etc/haproxy/errors/503.http
     errorfile 504 /etc/haproxy/errors/504.http
+
+# Configuration of the frontend (HAProxy as a TLS Termination Proxy)
+frontend www_frontend
+    # We bind on port 80 (http) but (see below) get HAProxy to force-switch to HTTPS.
+    bind *:80
+    # We bind on port 443 (https) and specify a directory with the certificates.
+####    bind *:443 ssl crt /etc/haproxy/certs/
+    # We get HAProxy to force-switch to HTTPS, if the connection was just HTTP.
+####    redirect scheme https if !{ ssl_fc }
+    # TLS terminates at HAProxy, the container runs in plain HTTP. Here, HAProxy informs nginx
+    # that there was a TLS Termination Proxy. Required for WordPress and other CMS.
+    reqadd X-Forwarded-Proto:\ https
+
+    # Distinguish between secure and insecure requests (used in next two lines)
+    acl secure dst_port eq 443
+
+    # Mark all cookies as secure if sent over SSL
+    rsprep ^Set-Cookie:\ (.*) Set-Cookie:\ \1;\ Secure if secure
+
+    # Add the HSTS header with a 1 year max-age
+    rspadd Strict-Transport-Security:\ max-age=31536000 if secure
+
+    # Configuration for each virtual host (uses Server Name Indication, SNI)
+    acl host_ubuntugreece_xyz hdr(host) -i ubuntugreece.xyz www.ubuntugreece.xyz
+    acl host_web2_ubuntugreece_xyz hdr(host) -i web2.ubuntugreece.xyz
+    acl host_web3_ubuntugreece_xyz hdr(host) -i web3.ubuntugreece.xyz
+
+    # Directing the connection to the correct LXD container
+    use_backend web1_cluster if host_ubuntugreece_xyz
+    use_backend web2_cluster if host_web2_ubuntugreece_xyz
+    use_backend web3_cluster if host_web3_ubuntugreece_xyz
+
+# Configuration of the backend (HAProxy as a TLS Termination Proxy)
+backend web1_cluster
+    balance leastconn
+    # We set the X-Client-IP HTTP header. This is useful if we want the web server to know the real client IP.
+    http-request set-header X-Client-IP %[src]
+    # This backend, named here "web1", directs to container "web1.lxd" (hostname).
+    server web1 web1.lxd:80 check
+
+backend web2_cluster
+    balance leastconn
+    # We set the X-Client-IP HTTP header. This is useful if we want the web server to know the real client IP.
+    http-request set-header X-Client-IP %[src]
+    # This backend, named here "web2", directs to container "web2.lxd" (hostname).
+    server web2 web2.lxd:80 check
+
+backend web3_cluster
+    balance leastconn
+    # We set the X-Client-IP HTTP header. This is useful if we want the web server to know the real client IP.
+    http-request set-header X-Client-IP %[src]
+    # This backend, named here "web3", directs to container "web3.lxd" (hostname).
+    server web3 web3.lxd:80 check

Let’s restart HAProxy. If you get any errors, run systemctl status haproxy and try to figure out what went wrong.

root@haproxy:~# systemctl restart haproxy
root@haproxy:~# exit
ubuntu@ubuntu-512mb-ams3-01:~$
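
If the restart fails, besides systemctl status haproxy you can also ask haproxy itself to check the configuration file for syntax errors, for example from the host:

ubuntu@ubuntu-512mb-ams3-01:~$ lxc exec haproxy -- haproxy -c -f /etc/haproxy/haproxy.cfg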

Does it work? Let’s visit the website,

do-ubuntugreece

It is working! Let’s Encrypt will be able to access the site and verify that we own the domain in the next step.

Get certificates from Let’s Encrypt

We exit out to the VPS and install letsencrypt.

ubuntu@ubuntu-512mb-ams3-01:~$ sudo apt install letsencrypt
[sudo] password for ubuntu: 
Reading package lists... Done
...
Setting up python-pyicu (1.9.2-2build1) ...
ubuntu@ubuntu-512mb-ams3-01:~$

We run letsencrypt three times, once for each website. Update: it is also possible to simplify the following by using a multiple-domain (Subject Alternative Names, SAN) certificate. Thanks to @jack who mentioned this in the comments.

ubuntu@ubuntu-512mb-ams3-01:~$ sudo letsencrypt certonly --authenticator webroot --webroot-path=/var/lib/lxd/containers/web1/rootfs/var/www/html -d ubuntugreece.xyz -d www.ubuntugreece.xyz
... they ask for a contact e-mail address and whether we accept the Terms of Service...

IMPORTANT NOTES:
 - If you lose your account credentials, you can recover through
   e-mails sent to xxxxx@gmail.com.
 - Congratulations! Your certificate and chain have been saved at
   /etc/letsencrypt/live/ubuntugreece.xyz/fullchain.pem. Your cert
   will expire on 2016-10-21. To obtain a new version of the
   certificate in the future, simply run Let's Encrypt again.
 - Your account credentials have been saved in your Let's Encrypt
   configuration directory at /etc/letsencrypt. You should make a
   secure backup of this folder now. This configuration directory will
   also contain certificates and private keys obtained by Let's
   Encrypt so making regular backups of this folder is ideal.
 - If you like Let's Encrypt, please consider supporting our work by:

   Donating to ISRG / Let's Encrypt:   https://letsencrypt.org/donate
   Donating to EFF:                    https://eff.org/donate-le

ubuntu@ubuntu-512mb-ams3-01:~$

For completeness, here are the command lines for the other two websites,

ubuntu@ubuntu-512mb-ams3-01:~$ sudo letsencrypt certonly --authenticator webroot --webroot-path=/var/lib/lxd/containers/web2/rootfs/var/www/html -d web2.ubuntugreece.xyz

IMPORTANT NOTES:
 - Congratulations! Your certificate and chain have been saved at
   /etc/letsencrypt/live/web2.ubuntugreece.xyz/fullchain.pem. Your
   cert will expire on 2016-10-21. To obtain a new version of the
   certificate in the future, simply run Let's Encrypt again.
 - If you like Let's Encrypt, please consider supporting our work by:

   Donating to ISRG / Let's Encrypt:   https://letsencrypt.org/donate
   Donating to EFF:                    https://eff.org/donate-le

ubuntu@ubuntu-512mb-ams3-01:~$ time sudo letsencrypt certonly --authenticator webroot --webroot-path=/var/lib/lxd/containers/web3/rootfs/var/www/html -d web3.ubuntugreece.xyz

IMPORTANT NOTES:
 - Congratulations! Your certificate and chain have been saved at
   /etc/letsencrypt/live/web3.ubuntugreece.xyz/fullchain.pem. Your
   cert will expire on 2016-10-21. To obtain a new version of the
   certificate in the future, simply run Let's Encrypt again.
 - If you like Let's Encrypt, please consider supporting our work by:

   Donating to ISRG / Let's Encrypt:   https://letsencrypt.org/donate
   Donating to EFF:                    https://eff.org/donate-le


real    0m18.458s
user    0m0.852s
sys    0m0.172s
ubuntu@ubuntu-512mb-ams3-01:~$

Yeah, it takes only around twenty seconds to get your Let’s Encrypt certificate!

We got the certificates, now we need to prepare them so that HAProxy (our TLS Termination Proxy) can make use of them. We just need to join together the certificate chain and the private key for each certificate, and place them in the haproxy container at the appropriate directory.

ubuntu@ubuntu-512mb-ams3-01:~$ sudo mkdir /var/lib/lxd/containers/haproxy/rootfs/etc/haproxy/certs/
ubuntu@ubuntu-512mb-ams3-01:~$ DOMAIN='ubuntugreece.xyz' sudo -E bash -c 'cat /etc/letsencrypt/live/$DOMAIN/fullchain.pem /etc/letsencrypt/live/$DOMAIN/privkey.pem > /var/lib/lxd/containers/haproxy/rootfs/etc/haproxy/certs/$DOMAIN.pem'
ubuntu@ubuntu-512mb-ams3-01:~$ DOMAIN='web2.ubuntugreece.xyz' sudo -E bash -c 'cat /etc/letsencrypt/live/$DOMAIN/fullchain.pem /etc/letsencrypt/live/$DOMAIN/privkey.pem > /var/lib/lxd/containers/haproxy/rootfs/etc/haproxy/certs/$DOMAIN.pem'
ubuntu@ubuntu-512mb-ams3-01:~$ DOMAIN='web3.ubuntugreece.xyz' sudo -E bash -c 'cat /etc/letsencrypt/live/$DOMAIN/fullchain.pem /etc/letsencrypt/live/$DOMAIN/privkey.pem > /var/lib/lxd/containers/haproxy/rootfs/etc/haproxy/certs/$DOMAIN.pem'
ubuntu@ubuntu-512mb-ams3-01:~$

HAProxy final configuration

We are almost there. We need to enter the haproxy container and uncomment those two lines (those that started with ###) that will enable HAProxy to work as a TLS Termination Proxy. Then, restart the haproxy service.

ubuntu@ubuntu-512mb-ams3-01:~$ lxc exec haproxy bash
root@haproxy:~# vi /etc/haproxy/haproxy.cfg 

haproxy-config-ok
root@haproxy:/etc/haproxy# systemctl restart haproxy
root@haproxy:/etc/haproxy# exit
ubuntu@ubuntu-512mb-ams3-01:~$

Let’s test them!

Here are the three websites, notice the padlocks on all three of them,

The SSL Server Report (Qualys)

Here are the SSL Server Reports for each website,

You can check the cached reports for LXD container web1, LXD container web2 and LXD container web3.

Results

The disk space requirements for those four containers (three static websites plus haproxy) are

ubuntu@ubuntu-512mb-ams3-01:~$ sudo zpool list
[sudo] password for ubuntu: 
NAME         SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
mypool-lxd  14.9G  1.13G  13.7G         -     4%     7%  1.00x  ONLINE  -
ubuntu@ubuntu-512mb-ams3-01:~$

The four containers required a bit over 1GB of disk space.

The biggest concern has been the limited RAM of 512MB. The Out Of Memory (OOM) handler was invoked a few times during the first steps of container creation, but not afterwards during the launching of the nginx instances.

ubuntu@ubuntu-512mb-ams3-01:~$ dmesg | grep "Out of memory"
[  181.976117] Out of memory: Kill process 3829 (unsquashfs) score 524 or sacrifice child
[  183.792372] Out of memory: Kill process 3834 (unsquashfs) score 525 or sacrifice child
[  190.332834] Out of memory: Kill process 3831 (unsquashfs) score 525 or sacrifice child
[  848.834570] Out of memory: Kill process 6378 (localedef) score 134 or sacrifice child
[  860.833991] Out of memory: Kill process 6400 (localedef) score 143 or sacrifice child
[  878.837410] Out of memory: Kill process 6436 (localedef) score 151 or sacrifice child
ubuntu@ubuntu-512mb-ams3-01:~$

There was an error while creating one of the containers in the beginning. I repeated the creation command and it completed successfully. That error was probably related to this unsquashfs kill.

Summary

We set up a $5 VPS (512MB RAM, 1CPU core and 20GB SSD disk) with Ubuntu 16.04.1 LTS, then configured LXD to handle containers.

We created three containers for three static websites, and an additional container for HAProxy to work as a TLS Termination Proxy.

We got certificates for those three websites, and verified that they all pass with A+ at the Qualys SSL Server Report.

The 512MB RAM VPS should be OK for a few low traffic websites, especially those generated by static site generators.


post image

Playing around with LXD containers (LXC) on Ubuntu

We have set up LXD on either our personal computer or on the cloud (like DigitalOcean and Scaleway). Actually, we can even try LXD online for free at https://linuxcontainers.org/lxd/try-it/

What shall we do next?

Commands through “lxc”

Below we see a series of commands that start with lxc; then we add an action and finally any parameters. Here, lxc is the program that communicates with the LXD service and performs the actions that we request. That is,

lxc action parameters

There is also a series of commands that are specific to a type of object. In that case, we add in the object type and continue with the action and the parameters.

lxc object action parameters
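
For example, both forms appear later in this post:

lxc list                   # action "list", no object, no parameters
lxc image list ubuntu:     # object "image", action "list", parameter "ubuntu:"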

List the available containers

Let’s use the list action, which lists the available containers.

ubuntu@myvps:~$ lxc list
Generating a client certificate. This may take a minute...
If this is your first time using LXD, you should also run: sudo lxd init
To start your first container, try: lxc launch ubuntu:16.04
+------+-------+------+------+------+-----------+
| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
+------+-------+------+------+------+-----------+
ubuntu@myvps:~$

  • The first time you run lxc list, it creates a client certificate (and installs it in ~/.config/lxc/). This takes a few seconds and happens only once.
  • The command also advises us to run sudo lxd init (note: lxd) if we have not done so before. Consult the configuration posts if in doubt here.
  • In addition, the command suggests how to start (launch) our first container.
  • Finally, it shows the list of available containers on this computer, which is empty because we have not created any yet.

List the locally available images for containers

Let’s use the image object, and then the list action, which lists the available (probably cached) images that are hosted by our LXD service.

ubuntu@myvps:~$ lxc image list
+-------+-------------+--------+-------------+------+------+-------------+
| ALIAS | FINGERPRINT | PUBLIC | DESCRIPTION | ARCH | SIZE | UPLOAD DATE |
+-------+-------------+--------+-------------+------+------+-------------+
ubuntu@myvps:~$

There are no locally available images yet, so the list is empty.

List the remotely available images for containers

Let’s use the image object, and then the list action, and finally a remote repository specifier (ubuntu:) in order to list some publicly available images that we can use to create containers.

ubuntu@myvps:~$ lxc image list ubuntu:
+--------------------+--------------+--------+-------------------------------------------------+---------+----------+-------------------------------+
|       ALIAS        | FINGERPRINT  | PUBLIC |                   DESCRIPTION                   |  ARCH   |   SIZE   |          UPLOAD DATE          |        
+--------------------+--------------+--------+-------------------------------------------------+---------+----------+-------------------------------+        
| p (5 more)         | 6b6fa83dacb0 | yes    | ubuntu 12.04 LTS amd64 (release) (20160627)     | x86_64  | 155.43MB | Jun 27, 2016 at 12:00am (UTC) |        
+--------------------+--------------+--------+-------------------------------------------------+---------+----------+-------------------------------+        
| p/armhf (2 more)   | 06604b173b99 | yes    | ubuntu 12.04 LTS armhf (release) (20160627)     | armv7l  | 135.90MB | Jun 27, 2016 at 12:00am (UTC) |        
+--------------------+--------------+--------+-------------------------------------------------+---------+----------+-------------------------------+    
...
+--------------------+--------------+--------+-------------------------------------------------+---------+----------+-------------------------------+        
| x (5 more)         | f452cda3bccb | yes    | ubuntu 16.04 LTS amd64 (release) (20160627)     | x86_64  | 138.23MB | Jun 27, 2016 at 12:00am (UTC) |        
+--------------------+--------------+--------+-------------------------------------------------+---------+----------+-------------------------------+        
| x/arm64 (2 more)   | 46b365e258a0 | yes    | ubuntu 16.04 LTS arm64 (release) (20160627)     | aarch64 | 146.72MB | Jun 27, 2016 at 12:00am (UTC) |        
+--------------------+--------------+--------+-------------------------------------------------+---------+----------+-------------------------------+        
| x/armhf (2 more)   | 22f668affe3d | yes    | ubuntu 16.04 LTS armhf (release) (20160627)     | armv7l  | 148.18MB | Jun 27, 2016 at 12:00am (UTC) |        
+--------------------+--------------+--------+-------------------------------------------------+---------+----------+-------------------------------+        
...
+--------------------+--------------+--------+-------------------------------------------------+---------+----------+-------------------------------+        
|                    | 4c6f7b94e46a | yes    | ubuntu 16.04 LTS s390x (release) (20160516.1)   | s390x   | 131.07MB | May 16, 2016 at 12:00am (UTC) |        
+--------------------+--------------+--------+-------------------------------------------------+---------+----------+-------------------------------+        
|                    | ddfa8f2d4cfb | yes    | ubuntu 16.04 LTS s390x (release) (20160610)     | s390x   | 131.41MB | Jun 10, 2016 at 12:00am (UTC) |        
+--------------------+--------------+--------+-------------------------------------------------+---------+----------+-------------------------------+        
ubuntu@myvps:~$

  • The repository ubuntu: is a curated list of images from Canonical, and has all sorts of Ubuntu versions (12.04 or newer) and architectures (like x86_64, ARM and even s390x).
  • The first column is the nickname, or alias. Ubuntu 16.04 LTS for x86_64 has the alias x, so we can use that, or we can specify the fingerprint (here: f452cda3bccb).

Show information for a remotely available image for containers

Let’s use the image object, and then the info action, and finally a remote image specifier (ubuntu:x) in order to get information about a specific publicly available image that we can use to create containers.

ubuntu@myvps:~$ lxc image info ubuntu:x
    Uploaded: 2016/06/27 00:00 UTC                                                                                                                           
    Expires: 2021/04/21 00:00 UTC                                                                                                                            

Properties:                                                                                                                                                  
    aliases: 16.04,x,xenial                                                                                                                                  
    os: ubuntu                                                                                                                                               
    release: xenial                                                                                                                                          
    version: 16.04                                                                                                                                           
    architecture: amd64                                                                                                                                      
    label: release                                                                                                                                           
    serial: 20160627                                                                                                                                         
    description: ubuntu 16.04 LTS amd64 (release) (20160627)                                                                                                 

Aliases:                                                                                                                                                     
    - 16.04                                                                                                                                                  
    - 16.04/amd64                                                                                                                                            
    - x                                                                                                                                                      
    - x/amd64                                                                                                                                                
    - xenial                                                                                                                                                 
    - xenial/amd64                                                                                                                                           

Auto update: disabled           
ubuntu@myvps:~$

Here we can see the full list of aliases for the 16.04 image (x86_64). The simplest of all is x.

Life cycle of a container

Here is the life cycle of a container. First you initialize an image, thus creating the (stopped) container. Then you can start and stop it. Finally, while it is in the stopped state, you may delete it.

LifecycleLXD

  • We initialise a container with Ubuntu 16.04 (ubuntu:x) and give it the name mycontainer. Since we do not yet have any locally cached images, this one is downloaded and cached for us. If we need another container with Ubuntu 16.04, it will be created instantly since the image is already cached locally.
  • When we initialise a container from an image, it gets the STOPPED state. When we start it, it gets into the RUNNING state.
  • When we start a container, the runtime (or rootfs) is booted up and it may take a few seconds until the network is up and running. Below we can see that it took a few seconds until the container managed to get an IPv4 address through DHCP from LXD.
  • We can install web servers and other services into the container. Here, we just execute a BASH shell in order to get shell access inside the container and run the uname command.
  • We promptly exit from the container and stop it.
  • Then, we delete the container and verify that it has been deleted (it is no longer shown in lxc list).
  • Finally, we also verify that the image is still cached locally by LXD, waiting for the next creation of a container.

Here are the commands,

ubuntu@myvps:~$ lxc init ubuntu:x mycontainer
Creating mycontainer                                                                                                                                         
Retrieving image: 100%                                                                                                                                       
ubuntu@myvps:~$ lxc image list
+-------+--------------+--------+---------------------------------------------+--------+----------+------------------------------+                           
| ALIAS | FINGERPRINT  | PUBLIC |                 DESCRIPTION                 |  ARCH  |   SIZE   |         UPLOAD DATE          |                           
+-------+--------------+--------+---------------------------------------------+--------+----------+------------------------------+                           
|       | f452cda3bccb | no     | ubuntu 16.04 LTS amd64 (release) (20160627) | x86_64 | 138.23MB | Jul 22, 2016 at 2:10pm (UTC) |                           
+-------+--------------+--------+---------------------------------------------+--------+----------+------------------------------+
ubuntu@myvps:~$ lxc list
+-------------+---------+------+------+------------+-----------+                                                                                             
|    NAME     |  STATE  | IPV4 | IPV6 |    TYPE    | SNAPSHOTS |                                                                                             
+-------------+---------+------+------+------------+-----------+                                                                                             
| mycontainer | STOPPED |      |      | PERSISTENT | 0         |                                                                                             
+-------------+---------+------+------+------------+-----------+                                                                                             
ubuntu@myvps:~$ lxc start mycontainer
ubuntu@myvps:~$ lxc list     
+-------------+---------+------+-----------------------------------------------+------------+-----------+                                                    
|    NAME     |  STATE  | IPV4 |                     IPV6                      |    TYPE    | SNAPSHOTS |                                                    
+-------------+---------+------+-----------------------------------------------+------------+-----------+                                                    
| mycontainer | RUNNING |      | 2607:f2c0:f00f:2770:216:3eff:fe4a:ccfd (eth0) | PERSISTENT | 0         |                                                    
+-------------+---------+------+-----------------------------------------------+------------+-----------+
ubuntu@myvps:~$ lxc list
+-------------+---------+-----------------------+-----------------------------------------------+------------+-----------+                                   
|    NAME     |  STATE  |         IPV4          |                     IPV6                      |    TYPE    | SNAPSHOTS |                                   
+-------------+---------+-----------------------+-----------------------------------------------+------------+-----------+                                   
| mycontainer | RUNNING | 10.200.214.147 (eth0) | 2607:f2c0:f00f:2770:216:3eff:fe4a:ccfd (eth0) | PERSISTENT | 0         |                                   
+-------------+---------+-----------------------+-----------------------------------------------+------------+-----------+                                   
ubuntu@myvps:~$ lxc exec mycontainer -- /bin/bash       
root@mycontainer:~# uname -a
Linux mycontainer 4.4.0-31-generic #50~14.04.1-Ubuntu SMP Wed Jul 13 01:07:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux                                        
root@mycontainer:~# exit
exit
ubuntu@myvps:~$ lxc list
+-------------+---------+-----------------------+-----------------------------------------------+------------+-----------+                                   
|    NAME     |  STATE  |         IPV4          |                     IPV6                      |    TYPE    | SNAPSHOTS |                                   
+-------------+---------+-----------------------+-----------------------------------------------+------------+-----------+                                   
| mycontainer | RUNNING | 10.200.214.147 (eth0) | 2607:f2c0:f00f:2770:216:3eff:fe4a:ccfd (eth0) | PERSISTENT | 0         |                                   
+-------------+---------+-----------------------+-----------------------------------------------+------------+-----------+                                   
ubuntu@myvps:~$ lxc stop mycontainer
ubuntu@myvps:~$ lxc list
+-------------+---------+------+------+------------+-----------+                                                                                             
|    NAME     |  STATE  | IPV4 | IPV6 |    TYPE    | SNAPSHOTS |                                                                                             
+-------------+---------+------+------+------------+-----------+                                                                                             
| mycontainer | STOPPED |      |      | PERSISTENT | 0         |                                                                                             
+-------------+---------+------+------+------------+-----------+       
ubuntu@myvps:~$ lxc delete mycontainer
ubuntu@myvps:~$ lxc list
+------+-------+------+------+------+-----------+                                                                                                            
| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |                                                                                                            
+------+-------+------+------+------+-----------+                                                                                                            
ubuntu@myvps:~$ lxc image list
+-------+--------------+--------+---------------------------------------------+--------+----------+------------------------------+                           
| ALIAS | FINGERPRINT  | PUBLIC |                 DESCRIPTION                 |  ARCH  |   SIZE   |         UPLOAD DATE          |                           
+-------+--------------+--------+---------------------------------------------+--------+----------+------------------------------+                           
|       | f452cda3bccb | no     | ubuntu 16.04 LTS amd64 (release) (20160627) | x86_64 | 138.23MB | Jul 22, 2016 at 2:10pm (UTC) |                           
+-------+--------------+--------+---------------------------------------------+--------+----------+------------------------------+                           
ubuntu@myvps:~$

Some tutorials mention the launch action, which performs both init and start in one step. Here is how the command would look,

lxc launch ubuntu:x mycontainer

We are nearing the point where we can start doing interesting things with containers. See you in the next blog post!

post image

How to install LXD containers on Ubuntu on Scaleway

Scaleway, a subsidiary of Online.net, offers affordable VPSes and baremetal ARM servers. They became rather well known when they first introduced those ARM servers.

When you install Ubuntu 16.04 on a Scaleway VPS, some extra configuration is needed (compiling ZFS as a DKMS module) in order to get LXD running with the ZFS storage backend. In this post, we go through those additional steps to get LXD up and running on a Scaleway VPS.

An issue with Scaleway is that they heavily modify the configuration of the Linux kernel, and you do not get the stock Ubuntu kernel when you install Ubuntu 16.04. There is a feature request to get ZFS compiled into their kernel at https://community.online.net/t/feature-request-zfs-support/2709/3 but it will most probably take some time before it gets added.
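
Before going the DKMS route, you can quickly check whether the running kernel already provides a ZFS module (a small sanity check; the second command only applies if the kernel exposes its configuration at /proc/config.gz),

# Prints module details if a zfs module is available for the running kernel,
# or an error if it is not.
modinfo zfs
# If the kernel exposes its configuration, this shows whether ZFS was built in.
zcat /proc/config.gz 2>/dev/null | grep -i zfs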

In this post I do not cover the baremetal ARM servers or the newer x86 dedicated servers; on those, LXD runs into an additional error about not being able to create a sparse file.

Creating a VPS on Scaleway

Once we create an account on Scaleway (we also add our SSH public key), we click to create a VC1 server with the default settings.

scaleway-vc1

There are several types of VPS; we select the VC1, which comes with two x86 64-bit cores, 2GB of memory and 50GB of disk space.

scaleway-do-no-block-SMTP

Under Security, there is a default policy that disables «SMTP». These are firewall rules that drop packets destined for ports 25, 465 and 587. If you intend to use SMTP at a later date, it makes sense to disable this security policy now. Otherwise, once your VPS is up and running, it takes about 30 minutes to archive it and another 30 minutes to restart it in order for this change to take effect.
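
If you are unsure whether this policy affects you, you can test outbound connectivity to the SMTP ports from the VPS once it is up (a rough check with netcat; smtp.example.com is a placeholder for any mail server you control),

# These should fail or time out if the provider drops outbound SMTP traffic.
nc -vz -w 5 smtp.example.com 25
nc -vz -w 5 smtp.example.com 465
nc -vz -w 5 smtp.example.com 587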

scaleway-provisioning

Once you click Create, it takes a couple of minutes for the provisioning, for the kernel to start, and then for the VPS to boot.

After the creation, the administration page shows the IP address that we need in order to connect to the VPS.

Initial package updates and upgrades

$ ssh root@163.172.132.19
The authenticity of host '163.172.132.19 (163.172.132.19)' can't be established.
ECDSA key fingerprint is SHA256:Z4LMCnXUyuvwO16HI763r4h5+mURBd8/4u2bFPLETes.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '163.172.132.19' (ECDSA) to the list of known hosts.
 _
 ___ ___ __ _| | _____ ____ _ _ _
/ __|/ __/ _` | |/ _ \ \ /\ / / _` | | | |
\__ \ (_| (_| | | __/\ V V / (_| | |_| |
|___/\___\__,_|_|\___| \_/\_/ \__,_|\__, |
 |___/

Welcome on Ubuntu Xenial (16.04 LTS) (GNU/Linux 4.5.7-std-3 x86_64 )

System information as of: Wed Jul 13 19:46:53 UTC 2016

System load: 0.02 Int IP Address: 10.2.46.19 
Memory usage: 0.0% Pub IP Address: 163.172.132.19
Usage on /: 3% Swap usage: 0.0%
Local Users: 0 Processes: 83
Image build: 2016-05-20 System uptime: 3 min
Disk nbd0: l_ssd 50G

Documentation: https://scaleway.com/docs
Community: https://community.scaleway.com
Image source: https://github.com/scaleway/image-ubuntu


The programs included with the Ubuntu system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Ubuntu comes with ABSOLUTELY NO WARRANTY, to the extent permitted by
applicable law.

root@scw-test:~# apt update
Hit:1 http://archive.ubuntu.com/ubuntu xenial InRelease
Get:2 http://archive.ubuntu.com/ubuntu xenial-updates InRelease [95.7 kB]
Get:3 http://archive.ubuntu.com/ubuntu xenial-security InRelease [94.5 kB]
Get:4 http://archive.ubuntu.com/ubuntu xenial/main Translation-en [568 kB]
...
Reading package lists... Done
Building dependency tree 
Reading state information... Done
51 packages can be upgraded. Run 'apt list --upgradable' to see them.
root@scw-test:~# apt upgrade
Reading package lists... Done
Building dependency tree 
Reading state information... Done
Calculating upgrade... Done
The following NEW packages will be installed:
 libpython3.5
The following packages will be upgraded:
 apt apt-utils base-files bash bash-completion bsdutils dh-python gcc-5-base
 grep init init-system-helpers libapt-inst2.0 libapt-pkg5.0 libblkid1
 libboost-iostreams1.58.0 libboost-random1.58.0 libboost-system1.58.0
 libboost-thread1.58.0 libexpat1 libfdisk1 libgnutls-openssl27 libgnutls30
 libldap-2.4-2 libmount1 libnspr4 libnss3 libnss3-nssdb libpython2.7-minimal
 libpython2.7-stdlib librados2 librbd1 libsmartcols1 libstdc++6 libsystemd0
 libudev1 libuuid1 lsb-base lsb-release mount python2.7 python2.7-minimal
 systemd systemd-sysv tzdata udev util-linux uuid-runtime vim vim-common
 vim-runtime wget
51 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
Need to get 27.6 MB of archives.
After this operation, 5,069 kB of additional disk space will be used.
Do you want to continue? [Y/n] Y
Get:1 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 base-files amd64 9.4ubuntu4.1 [68.4 kB]
Get:2 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 bash amd64 4.3-14ubuntu1.1 [583 kB]
...
Setting up librados2 (10.2.0-0ubuntu0.16.04.2) ...
Setting up librbd1 (10.2.0-0ubuntu0.16.04.2) ...
Processing triggers for libc-bin (2.23-0ubuntu3) ...
root@scw-test:~#

Installing ZFS as a DKMS module

There are instructions on how to install ZFS as a DKMS module at https://github.com/scaleway/kernel-tools#how-to-build-a-custom-kernel-module

First, we install the build-essential package,

root@scw-test:~# apt install build-essential

Second, we run the script that is provided at https://github.com/scaleway/kernel-tools#how-to-build-a-custom-kernel-module It takes about a minute for this script to run; it downloads the kernel source and prepares the tree so that modules can be compiled against it.

Third, we install the zfsutils-linux package as usual. In this case, it takes more time to install, as it needs to compile the ZFS modules through DKMS.

root@scw-test:~# apt install zfsutils-linux

This step takes lots of time. Eight and a half minutes!
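
Once the build is done, it is worth verifying that the modules were actually built and can be loaded (a quick sanity check; on this version of ZFS on Linux the spl module is built alongside zfs),

# Lists the DKMS modules that were built for the running kernel.
dkms status
# Loads the ZFS module and confirms it is present.
modprobe zfs
lsmod | grep zfs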

Installing the LXD package

The final step is to install the LXD package,

root@scw-test:~# apt install lxd
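
To confirm what got installed (a quick check; on Ubuntu 16.04 this should be the LXD 2.0.x series from the Ubuntu archive),

# Print the client and daemon version numbers, respectively.
lxc --version
lxd --version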

Initial configuration of LXD

A VPS at Scaleway does not have access to a separate block device (the dedicated servers do). Therefore, we create the ZFS pool on a loop device (a sparse file on the root filesystem).

root@scw-test:~# df -h /
Filesystem Size Used Avail Use% Mounted on
/dev/vda 46G 2.1G 42G 5% /

We have 42GB of free space, so let's allocate 36GB for the ZFS pool.

root@scw-test:~# lxd init
Name of the storage backend to use (dir or zfs): zfs
Create a new ZFS pool (yes/no)? yes
Name of the new ZFS pool: mylxd-pool
Would you like to use an existing block device (yes/no)? no
Size in GB of the new loop device (1GB minimum): 36
Would you like LXD to be available over the network (yes/no)? no
Do you want to configure the LXD bridge (yes/no)? yes
...we accept the defaults in creating the LXD bridge...
Warning: Stopping lxd.service, but it can still be activated by:
 lxd.socket
LXD has been successfully configured.
root@scw-test:~#
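
It is worth confirming that the loop-backed pool was actually created (a small check, using the pool name mylxd-pool chosen above; storage.zfs_pool_name is the server configuration key that LXD 2.0 uses for the ZFS backend),

# Shows the ZFS pool that was created on the loop device.
zpool list mylxd-pool
# Shows which pool LXD was configured to use.
lxc config get storage.zfs_pool_name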

 

Create a user to manage LXD

We create a non-root user to manage LXD. It is advised to create such a user and refrain from using root for such tasks.

root@scw-test:~# adduser ubuntu
Adding user `ubuntu' ...
Adding new group `ubuntu' (1000) ...
Adding new user `ubuntu' (1000) with group `ubuntu' ...
Creating home directory `/home/ubuntu' ...
Copying files from `/etc/skel' ...
Enter new UNIX password: *******
Retype new UNIX password: *******
passwd: password updated successfully
Changing the user information for ubuntu
Enter the new value, or press ENTER for the default
 Full Name []: 
 Room Number []: 
 Work Phone []: 
 Home Phone []: 
 Other []: 
Is the information correct? [Y/n] Y
root@scw-test:~#

Then, let’s add this user ubuntu to the sudo (ability to run sudo) and lxd (manage LXD containers) groups,

root@scw-test:~# adduser ubuntu sudo         # For scaleway. For others, the name might be 'admin'.
root@scw-test:~# adduser ubuntu lxd

Finally, let's restart the VPS. Although it is not strictly necessary, it is good practice in order to make sure that LXD starts automatically even with ZFS being compiled through DKMS. A shutdown -r now is enough to restart the VPS. After about 20 seconds, we can SSH in again, this time as the new user ubuntu.
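
After the reboot, a couple of quick checks confirm that the group memberships took effect and that everything came back up (a small sketch, run as the new user ubuntu),

# The user should be a member of both the sudo and lxd groups.
groups
# The zfs module should have been loaded again automatically.
lsmod | grep zfs
# An empty container list (rather than a permission error) means LXD is reachable.
lxc list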

Let’s start up a container

We log in as this new user ubuntu (or, sudo su - ubuntu).

ubuntu@scw-test:~$ lxc launch ubuntu:x mycontainer
Creating mycontainer
Retrieving image: 100%
Starting mycontainer
ubuntu@scw-test:~$ lxc list
+-------------+---------+------+------+------------+-----------+
| NAME        | STATE   | IPV4 | IPV6 | TYPE       | SNAPSHOTS |
+-------------+---------+------+------+------------+-----------+
| mycontainer | RUNNING |      |      | PERSISTENT |         0 |
+-------------+---------+------+------+------------+-----------+
ubuntu@scw-test:~$ lxc list
+-------------+---------+----------------------+------+------------+-----------+
| NAME        | STATE   | IPV4                 | IPV6 | TYPE       | SNAPSHOTS |
+-------------+---------+----------------------+------+------------+-----------+
| mycontainer | RUNNING | 10.181.132.19 (eth0) |      | PERSISTENT | 0         |
+-------------+---------+----------------------+------+------------+-----------+
ubuntu@scw-test:~$

We launched an Ubuntu 16.04 LTS (Xenial, “x”) container and then listed its details. It takes a few moments for the container to boot up; by the second lxc list, the container had completed booting and had also obtained its IP address.

That’s it! LXD is up and running, and we successfully created a container. See these instructions on how to test the container with a Web server.
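
As a teaser, here is roughly what such a test looks like (a minimal sketch rather than the full instructions; the IP address is the one shown in the listing above, and curl is assumed to be installed on the host),

# Install a web server inside the container.
lxc exec mycontainer -- apt update
lxc exec mycontainer -- apt install -y nginx
# From the host, fetch the default page over the container's IP address.
curl http://10.181.132.19/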