Nov 15 2017

New Firefox stats, how will they change?

Firefox 57 has been released and it has quite a few features that make it a very compelling browser to use. Most importantly, Firefox 57 has a new browser engine that makes much better use of multi-core/multi-threaded CPUs.

Therefore, let’s write down the browser market share stats of today, and compare again in November 2018.

Source: NetMarketShare, Desktop Browser Market Share, October 2017

 

Source: StatCounter, Desktop Browser Market Share Worldwide, Oct 2016 – Oct 2017

Chrome is at 63.6%, Firefox is at 13.04%.

Source: Wikipedia, Usage share of web browsers Summary Table

 

Source: Statista, Market share held by leading desktop internet browsers in the United States from January 2015 to November 2017

Permanent link to this article: https://blog.simos.info/new-firefox-stats-how-will-they-change/

Nov 07 2017

How to use Sysdig and Falco with LXD containers

Sysdig (.org) is an open-source container troubleshooting tool and it works by capturing system calls and events directly from the Linux kernel.

When you install Sysdig, it adds a new kernel module that it uses to collect all those system calls and events. That is, compared to other tools like strace, lsof and htop, it gets the data directly from the kernel and not from /proc. In terms of functionality, it is a single tool that can do what strace + tcpdump + htop + iftop + lsof + wireshark do together.

An added benefit of Sysdig is that it understands Linux Containers (since 2015). Therefore, it is quite useful when we want to figure out what is going on in our LXD containers.

Once we get used to Sysdig, we can venture to the companion tool called Falco, a tool for container security. Both are GPL v2 licensed, though you need to sign a CLA in order to contribute to the projects (hosted on Github).

Installing Sysdig

We are installing Sysdig on the host (where LXD is running) and not in any of our containers. In this way, we have full visibility of the whole system.

You can get the installation instructions at https://www.sysdig.org/install/ which essentially amount to a single curl | sh command:

curl -s https://s3.amazonaws.com/download.draios.com/stable/install-sysdig | sudo bash

Ubuntu and Debian already package a version of Sysdig; however, it is a bit older than the one you get from the command above. Currently, the version in the universe repository is 0.8, while the command above gets you 0.19.
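To confirm which version you actually got, you can ask sysdig directly:

sysdig --version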

Running Sysdig

If we run sysdig without any filters, it will show us all system calls and events. It's a never-ending waterfall, and you need to press Ctrl+C to stop it.
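For reference, that plain, unfiltered run is simply:

sudo sysdig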

Instead, we can run the curses-based version, called csysdig:
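sudo csysdig

It presents an interactive, htop-like view of the same data.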

You can go through the examples at https://github.com/draios/sysdig/wiki/Sysdig-Examples. In this post, we look into the section that relates to containers.

Specifically,

View the list of containers running on the machine and their resource usage

sudo csysdig -vcontainers

There are six LXD containers, though we do not see their names. The LXD container names are shown under a column further to the right, therefore we would need to use the arrow keys to move to the right. Let’s select the container we are interested in, and press Enter.

We selected the container guiapps and here are the processes inside this container.

View the list of processes with container context

sudo csysdig -pc

This command shows the processes of all containers together; that is, each process is shown with its container context.

View the CPU usage of the processes running inside the guiapps container

sudo sysdig -pc -c topprocs_cpu container.name=guiapps

Here we switch from csysdig to sysdig. An issue is that these two tools do not have the same parameters.

We have a container called guiapps and we asked sysdig to show the CPU usage of the processes, sorted. The container is idle, therefore all are 0%.

View the network bandwidth usage of the processes running inside the guiapps container

sudo sysdig -pc -c topprocs_net container.name=guiapps

Here it shows the current network traffic inside the container, sorted by traffic. If there is no traffic, the list is empty; therefore, it is mostly good for giving you an indication of what is happening at that moment.

 

View the top files in terms of I/O bytes inside the guiapps container

sudo sysdig -pc -c topfiles_bytes container.name=guiapps

View the top network connections inside the guiapps container

sudo sysdig -pc -c topconns container.name=guiapps

The output is similar to tcpdump, showing the IP addresses of source and destination.

Show all the interactive commands executed inside the guiapps container

sudo sysdig -pc -c spy_users container.name=guiapps

The output looks like this,

29756 17:10:57 root@guiapps) groups 
29756 17:10:57 root@guiapps) /bin/sh /usr/bin/lesspipe
 29756 17:10:57 root@guiapps) basename /usr/bin/lesspipe
29756 17:10:57 root@guiapps) dirname /usr/bin/lesspipe
29756 17:10:57 root@guiapps) dircolors -b
29756 17:11:07 root@guiapps) ls --color=auto
29756 17:11:24 root@guiapps) ping 8.8.8.8
29756 17:11:38 root@guiapps) ifconfig

The first five commands (groups up to dircolors -b) are the commands that were recorded when running lxc exec. The rest are the commands I typed in the container (ls, ping 8.8.8.8, ifconfig and finally exit, which does not get shown). Commands that are shell built-ins (like pwd, exit) are not visible since they do not execv some command.
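As a side note, the same activity can also be recorded to a trace file with -w and replayed later with -r (standard sysdig options); the filename below is just an example:

sudo sysdig -w guiapps.scap container.name=guiapps
sudo sysdig -r guiapps.scap -pc -c spy_users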

Installing Falco

We have already installed Sysdig using the curl | sh method that added their repository. Therefore, to install Falco, we just need to

sudo apt-get install falco

Falco needs its own kernel module,

$ sudo dkms status
falco, 0.8.1, 4.10.0-38-generic, x86_64: installed
sysdig, 0.19.1, 4.10.0-38-generic, x86_64: installed

Upon installation, it adds some default rules in /etc/falco. These rules describe application behaviour that Falco will be inspecting and reporting on. Therefore, Falco is ready to go. If we need something specific, we would add our own rules in /etc/falco/falco_rules.local.yaml.
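As an illustration only (the rule name and condition below are made up for this post, not taken from the default rule set), a custom rule in /etc/falco/falco_rules.local.yaml could look roughly like this:

- rule: Shell spawned in guiapps
  desc: Alert when a shell is started inside the guiapps container
  condition: evt.type = execve and evt.dir = < and container.name = guiapps and proc.name = bash
  output: "Shell started in guiapps (user=%user.name command=%proc.cmdline)"
  priority: WARNING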

Running Falco

Let’s run falco.

$ sudo falco container.name = guiapps
Tue Nov 7 17:41:41 2017: Falco initialized with configuration file /etc/falco/falco.yaml
Tue Nov 7 17:41:41 2017: Parsed rules from file /etc/falco/falco_rules.yaml
Tue Nov 7 17:41:41 2017: Parsed rules from file /etc/falco/falco_rules.local.yaml
17:41:52.933145895: Notice Unexpected setuid call by non-sudo, non-root program (user=nobody parent=<NA> command=lxd forkexec guiapps /var/lib/lxd/containers /var/log/lxd/guiapps/lxc.conf -- env USER=root HOME=/root TERM=xterm-256color PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin LANG=C.UTF-8 -- cmd sudo --user ubuntu --login uid=root)
17:41:52.938956110: Notice A shell was spawned in a container with an attached terminal (user=user guiapps (id=guiapps) shell=bash parent=sudo cmdline=bash terminal=34842)
17:41:58.583366422: Notice A shell was spawned in a container with an attached terminal (user=root guiapps (id=guiapps) shell=bash parent=su cmdline=bash terminal=34842)

We specified that we want to focus only on the container with the name guiapps.

The first three lines are the startup lines. The two lines at 17:41:52 are the result of lxc exec guiapps -- sudo --user ubuntu --login. The next line is the result of sudo su.

Conclusion

Both Sysdig (troubleshooting) and Falco (monitoring) are useful tools that are aware of containers. Their default use is quite handy for troubleshooting and monitoring containers. These tools have many more features, including the ability to add scripts to them (called chisels) to do even more advanced stuff.

For more resources, check their respective home pages.

Permanent link to this article: https://blog.simos.info/how-to-use-sysdig-and-falco-with-lxd-containers/

Nov 06 2017

How to install a Node.js app in a LXD container

Update #1: Added working screenshot and some instructions.

Update #2: Added instructions on how to get the app to autostart through systemd.

Installing a Node.js app on your desktop Linux computer is a messy affair, since you need to add a new repository and install lots of additional packages.

The alternative to messing up your desktop Linux is to create a new LXD (pronounced "LexDee") container and install the app into that. Once you are done with the app, you can simply delete the container and that's it. No trace whatsoever, and you keep your clean Linux installation.

First, see how to setup LXD on your Ubuntu desktop. There are extra resources if you want to install LXD on other distributions.

Second, for this post we are installing Chalktalk, a Node.js app that turns your browser into an interactive blackboard. There is nothing particular about Chalktalk; it just appeared on HN and it looks interesting.

Here is what we are going to see today,

  1. Create a LXD container
  2. Install Node.js in the LXD container
  3. Install Chalktalk
  4. Testing Chalktalk

Creating a LXD container

Let’s create a new LXD container with Ubuntu 16.04 (Xenial, therefore ubuntu:x), called mynodejs. Feel free to use something more descriptive, like chalktalk.

$ lxc launch ubuntu:x mynodejs
Creating mynodejs
Starting mynodejs
$ lxc list -c ns4
+----------+---------+----------------------+
| NAME     | STATE   | IPV4                 |
+----------+---------+----------------------+
| mynodejs | RUNNING | 10.52.252.246 (eth0) |
+----------+---------+----------------------+

Note down the IP address of the container. We need it when we test Chalktalk at the end of this howto.

Then, we get a shell into the LXD container.

$ lxc exec mynodejs -- sudo --login --user ubuntu
To run a command as administrator (user "root"), use "sudo <command>".
See "man sudo_root" for details.

ubuntu@mynodejs:~$

We executed in the mynodejs container the command sudo --login --user ubuntu. It gives us a login shell for the non-root default user ubuntu, which is always found in Ubuntu container images.

Installing Node.js in the LXD container

Here are the instructions to install Node.js 8 on Ubuntu.

ubuntu@mynodejs:~$ curl -sL https://deb.nodesource.com/setup_8.x | sudo -E bash -
## Installing the NodeSource Node.js v8.x repo...
...
## Run `apt-get install nodejs` (as root) to install Node.js v8.x and npm

ubuntu@mynodejs:~$ sudo apt-get install -y nodejs
...
Setting up python (2.7.11-1) ...
Setting up nodejs (8.9.0-1nodesource1) ...
ubuntu@mynodejs:~$ sudo apt-get install -y build-essential
...
ubuntu@mynodejs:~$

The curl | sh command makes you want to install in a container rather than on your desktop. Just saying. We also install the build-essential meta-package because it is needed when you install packages on top of Node.js.

Installing Chalktalk

We follow the installation instructions for Chalktalk to clone the repository. We use depth=1 to get a shallow copy (18MB) instead of the full repository (100MB).

ubuntu@mynodejs:~$ git clone https://github.com/kenperlin/chalktalk.git --depth=1
Cloning into 'chalktalk'...
remote: Counting objects: 195, done.
remote: Compressing objects: 100% (190/190), done.
remote: Total 195 (delta 5), reused 51 (delta 2), pack-reused 0
Receiving objects: 100% (195/195), 8.46 MiB | 8.34 MiB/s, done.
Resolving deltas: 100% (5/5), done.
Checking connectivity... done.
ubuntu@mynodejs:~$ cd chalktalk/server/
ubuntu@mynodejs:~/chalktalk/server$ npm install
> bufferutil@1.2.1 install /home/ubuntu/chalktalk/server/node_modules/bufferutil
> node-gyp rebuild

make: Entering directory '/home/ubuntu/chalktalk/server/node_modules/bufferutil/build'
 CXX(target) Release/obj.target/bufferutil/src/bufferutil.o
 SOLINK_MODULE(target) Release/obj.target/bufferutil.node
 COPY Release/bufferutil.node
make: Leaving directory '/home/ubuntu/chalktalk/server/node_modules/bufferutil/build'

> utf-8-validate@1.2.2 install /home/ubuntu/chalktalk/server/node_modules/utf-8-validate
> node-gyp rebuild

make: Entering directory '/home/ubuntu/chalktalk/server/node_modules/utf-8-validate/build'
 CXX(target) Release/obj.target/validation/src/validation.o
 SOLINK_MODULE(target) Release/obj.target/validation.node
 COPY Release/validation.node
make: Leaving directory '/home/ubuntu/chalktalk/server/node_modules/utf-8-validate/build'
npm WARN chalktalk@0.0.1 No description
npm WARN chalktalk@0.0.1 No repository field.
npm WARN chalktalk@0.0.1 No license field.

added 5 packages in 3.091s
ubuntu@mynodejs:~/chalktalk/server$ cd ..
ubuntu@mynodejs:~/chalktalk$

Trying out Chalktalk

Let’s run the app, using the following command (you need to be in the chalktalk directory):

ubuntu@mynodejs:~/chalktalk$ node server/main.js 
HTTP server listening on port 11235

Now, we are ready to try out Chalktalk! Use your favorite browser and visit http://10.52.252.246:11235 (replace with the IP address of your own container).

You are presented with a blackboard! You use the mouse to sketch objects, then click on your sketch to get Chalktalk to try to identify it and turn it into an actual responsive object.

It makes more sense if you watch the following video,

And this is how you can cleanly install Node.js into a LXD container. Once you are done testing, you can delete the container and it’s gone.

 

Update #1:

Here is an actual example. The pendulum responds to the mouse and we can nudge it.

The number can be incremented or decremented using the mouse; do an UP gesture to increment, and a DOWN gesture to decrement. You can also multiply/divide by 10 if you do a LEFT/RIGHT gesture.

Each type of object has a corresponding sketch in Chalktalk. In the source there is a directory with the sketches, with the ability to add new sketches.

 

Update #2:

Let’s see how to get this Node.js app to autostart when the LXD container is started. We are going to use systemd to control the autostart feature.

First, let’s create a script, called chalktalk-service.sh, that starts the Node.js app:

ubuntu@mynodejs:~$ pwd
/home/ubuntu
ubuntu@mynodejs:~$ cat chalktalk-service.sh 
#!/bin/sh
cd /home/ubuntu/chalktalk
/usr/bin/node /home/ubuntu/chalktalk/server/main.js
ubuntu@mynodejs:~$

We have created a script instead of running the command directly. The reason is that Chalktalk uses relative paths, so we need to change into the appropriate directory first for it to work. You may want to contact the author to attend to this.

Then, we create a service file for Chalktalk.

ubuntu@mynodejs:~$ cat /lib/systemd/system/chalktalk.service 
[Unit]
Description=Chalktalk - your live blackboard
Documentation=https://github.com/kenperlin/chalktalk/wiki
After=network.target
After=network-online.target

[Service]
Type=simple
User=ubuntu
Group=ubuntu
ExecStart=/home/ubuntu/chalktalk-service.sh
Restart=on-failure
StandardOutput=syslog
StandardError=syslog
SyslogIdentifier=chalktalk

[Install]
WantedBy=multi-user.target

ubuntu@mynodejs:~$

We have configured Chalktalk to autostart once the network is up and online. The service runs the command as the user ubuntu. Any output or error goes to syslog, using the chalktalk syslog identifier.
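Alternatively, we could skip the wrapper script entirely and let systemd change the directory itself, by using a WorkingDirectory directive in the [Service] section (keeping the rest of the unit as above):

[Service]
WorkingDirectory=/home/ubuntu/chalktalk
ExecStart=/usr/bin/node /home/ubuntu/chalktalk/server/main.js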

Let’s get Systemd to learn about this new service file.

ubuntu@mynodejs:~$ sudo systemctl daemon-reload
ubuntu@mynodejs:~$

Let’s start the Chalktalk service.

ubuntu@mynodejs:~$ sudo systemctl start chalktalk
ubuntu@mynodejs:~$
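Since the unit is WantedBy=multi-user.target, we should also enable the service so that systemd starts it automatically on boot (an extra step, not shown in the original instructions):

ubuntu@mynodejs:~$ sudo systemctl enable chalktalk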

Now we verify whether it works.

Let’s restart the container and test whether Chalktalk actually autostarted!

ubuntu@mynodejs:~$ logout

myusername@mycomputer /home/myusername:~$ lxc restart mynodejs
myusername@mycomputer /home/myusername:~$
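We can then verify from the host that Chalktalk came back up on its own, using the container IP address we noted earlier (it normally stays the same across restarts):

myusername@mycomputer /home/myusername:~$ curl http://10.52.252.246:11235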

That’s it!

Permanent link to this article: https://blog.simos.info/how-to-install-a-node-js-app-in-a-lxd-container/

Nov 01 2017

How to run Docker in a LXD container

We are running Ubuntu 16.04 and LXD 2.18 (from the lxd-stable PPA). Let’s get Docker (docker-ce) to run in a container!

General instructions on running Docker (docker.io, from the Ubuntu repositories) in an LXD container can be found at LXD 2.0: Docker in LXD [7/12].

 

First, let’s launch a LXD container in a way that will make it suitable to run Docker in it.

$ lxc launch ubuntu:x docker -c security.nesting=true
Creating docker
Starting docker
$

Here, docker is just the name of the LXD container. The security.nesting setting is needed because our Docker installation will be running containers inside the LXD container.
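If the LXD container already exists, the same setting can be applied to it afterwards (standard lxc config usage), followed by a restart:

$ lxc config set docker security.nesting true
$ lxc restart docker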

 

Second, let’s install Docker CE on Ubuntu.

ubuntu@docker:~$ sudo apt-get update
Get:1 http://security.ubuntu.com/ubuntu xenial-security InRelease [102 kB]
...
Reading package lists... Done
ubuntu@docker:~$ sudo apt-get install \
> apt-transport-https \
> ca-certificates \
> curl \
> software-properties-common
Reading package lists... Done
...
The following additional packages will be installed:
 libcurl3-gnutls
The following packages will be upgraded:
 curl libcurl3-gnutls
2 upgraded, 0 newly installed, 0 to remove and 20 not upgraded.
...
ubuntu@docker:~$ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
OK
ubuntu@docker:~$ sudo apt-key fingerprint 0EBFCD88
pub 4096R/0EBFCD88 2017-02-22
 Key fingerprint = 9DC8 5822 9FC7 DD38 854A E2D8 8D81 803C 0EBF CD88
uid Docker Release (CE deb) <docker@docker.com>
sub 4096R/F273FCD8 2017-02-22

ubuntu@docker:~$ sudo add-apt-repository \
> "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
> $(lsb_release -cs) \
> stable"
ubuntu@docker:~$ sudo apt-get update
...
ubuntu@docker:~$ sudo apt-get install docker-ce
...
The following NEW packages will be installed:
 aufs-tools cgroupfs-mount docker-ce libltdl7
0 upgraded, 4 newly installed, 0 to remove and 20 not upgraded.
Need to get 21.2 MB of archives.
After this operation, 100 MB of additional disk space will be used.
...
ubuntu@docker:~$

 

Third, let’s test that Docker is working, by running the hello-world image.

ubuntu@docker:~$ sudo docker run hello-world
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
5b0f327be733: Pull complete 
Digest: sha256:07d5f7800dfe37b8c2196c7b1c524c33808ce2e0f74e7aa00e603295ca9a0972
Status: Downloaded newer image for hello-world:latest

Hello from Docker!
This message shows that your installation appears to be working correctly.

To generate this message, Docker took the following steps:
 1. The Docker client contacted the Docker daemon.
 2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
 3. The Docker daemon created a new container from that image which runs the
 executable that produces the output you are currently reading.
 4. The Docker daemon streamed that output to the Docker client, which sent it
 to your terminal.

To try something more ambitious, you can run an Ubuntu container with:
 $ docker run -it ubuntu bash

Share images, automate workflows, and more with a free Docker ID:
 https://cloud.docker.com/

For more examples and ideas, visit:
 https://docs.docker.com/engine/userguide/

ubuntu@docker:~$

 

Finally, let’s go full inception by running Ubuntu in a Docker container inside an Ubuntu LXD container on Ubuntu 16.04.

ubuntu@docker:~$ sudo docker run -it ubuntu bash
Unable to find image 'ubuntu:latest' locally
latest: Pulling from library/ubuntu
ae79f2514705: Pull complete 
5ad56d5fc149: Pull complete 
170e558760e8: Pull complete 
395460e233f5: Pull complete 
6f01dc62e444: Pull complete 
Digest: sha256:506e2d5852de1d7c90d538c5332bd3cc33b9cbd26f6ca653875899c505c82687
Status: Downloaded newer image for ubuntu:latest
root@cc7fe5598b2e:/# exit

 

Before we finish, let’s check what docker info says.

ubuntu@docker:~$ sudo docker info
Containers: 2
 Running: 0
 Paused: 0
 Stopped: 2
Images: 2
Server Version: 17.09.0-ce
Storage Driver: vfs
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: bridge host macvlan null overlay
 Log: awslogs fluentd gcplogs gelf journald json-file logentries splunk syslog
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 06b9cb35161009dcb7123345749fef02f7cea8e0
runc version: 3f2f8b84a77f73d38244dd690525642a72156c64
init version: 949e6fa
Security Options:
 apparmor
 seccomp
 Profile: default
Kernel Version: 4.10.0-37-generic
Operating System: Ubuntu 16.04.3 LTS
OSType: linux
Architecture: x86_64
CPUs: 4
Total Memory: 4.053 GiB
Name: docker
ID: KEA5:KU1N:MM6U:2ROA:UHJ5:3VRR:3B6C:ZWF4:KTOB:V6OT:ROLI:CAT3
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Experimental: false
Insecure Registries:
 127.0.0.0/8
Live Restore Enabled: false

WARNING: No swap limit support
WARNING: bridge-nf-call-iptables is disabled
WARNING: bridge-nf-call-ip6tables is disabled
ubuntu@docker:~$

An issue here is that the Docker storage driver is vfs, instead of aufs or overlay2. The Docker package in the Ubuntu repositories (name: docker.io) has modifications to make it work better with LXD. There are a few open questions here on how to get a different storage driver, and a few things are still unclear to me.
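One thing to experiment with (not verified in this post) is requesting a specific storage driver through /etc/docker/daemon.json inside the LXD container and restarting the daemon; whether overlay2 actually works here depends on the kernel and the LXD configuration:

ubuntu@docker:~$ echo '{ "storage-driver": "overlay2" }' | sudo tee /etc/docker/daemon.json
ubuntu@docker:~$ sudo systemctl restart docker
ubuntu@docker:~$ sudo docker info | grep 'Storage Driver'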

Permanent link to this article: https://blog.simos.info/how-to-run-docker-in-a-lxd-container/

Oct 31 2017

Online course about LXD containers

If you want to learn about LXD, there is a good online course at LinuxAcademy (warning: referral).

It is the LXC/LXD Deep Dive online course by Chad Miller.

Some of the course details of LXC/LXD Deep Dive course on LinuxAcademy (total time: 3h)


Here is a review of this LXC/LXD online course by bmullan.

There is a 7-day unlimited trial on LinuxAcademy, then $29 per month (bank card or PayPal). Since the course lasts about three hours, you can fit it within the trial and take it for free!

Permanent link to this article: https://blog.simos.info/online-course-about-lxd-containers/

Sep 26 2017

How to set up LXD on Packet.net (baremetal servers)

Packet.net has premium baremetal servers that start at $36.50 per month for a quad-core Atom C2550 with 8GB RAM and 80GB SSD, on a 1Gbps Internet connection. On the other end of the scale, there is an option for a 24-core (two Intel CPUs) system with 256GB RAM and a total of 2.8TB SSD disk space at around $1000 per month.

In this post we are trying out the most affordable baremetal server (type 0 from the list) with Ubuntu and LXD.

Starting the server is quite uneventful. Being baremetal, it takes more time to boot than a VPS. Once it started, we SSH into it.

$ ssh root@ip.ip.ip.ip
Welcome to Ubuntu 16.04.2 LTS (GNU/Linux 4.10.0-24-generic x86_64)

* Documentation: https://help.ubuntu.com
 * Management: https://landscape.canonical.com
 * Support: https://ubuntu.com/advantage

The programs included with the Ubuntu system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Ubuntu comes with ABSOLUTELY NO WARRANTY, to the extent permitted by
applicable law.

root@lxd:~#

Here is some information about the booted system,

root@lxd:~# lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 16.04.2 LTS
Release: 16.04
Codename: xenial
root@lxd:~#

And the CPU details,

root@lxd:~# cat /proc/cpuinfo 
processor : 0
vendor_id : GenuineIntel
cpu family : 6
model : 77
model name : Intel(R) Atom(TM) CPU C2550 @ 2.40GHz
stepping : 8
microcode : 0x122
cpu MHz : 1200.000
cache size : 1024 KB
physical id : 0
siblings : 4
core id : 0
cpu cores : 4
apicid : 0
initial apicid : 0
fpu : yes
fpu_exception : yes
cpuid level : 11
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm sse4_1 sse4_2 movbe popcnt tsc_deadline_timer aes rdrand lahf_lm 3dnowprefetch epb tpr_shadow vnmi flexpriority ept vpid tsc_adjust smep erms dtherm ida arat
bugs :
bogomips : 4800.19
clflush size : 64
cache_alignment : 64
address sizes : 36 bits physical, 48 bits virtual
power management:

... omitting the other three cores ...

Let’s update the package list,

root@lxd:~# apt update
Hit:1 http://archive.ubuntu.com/ubuntu xenial InRelease
Get:2 http://archive.ubuntu.com/ubuntu xenial-updates InRelease [102 kB]
Get:3 http://archive.ubuntu.com/ubuntu xenial-security InRelease [102 kB]
...

They are using the official Ubuntu repositories instead of caching the packages with local mirrors. In retrospect, not an issue because the Internet connectivity is 1Gbps, bonded from two identical interfaces.

Let’s upgrade the packages and deal with any issues. Upgraded packages tend to complain when the local configuration files differ from what they expect.

root@lxd:~# apt upgrade
Reading package lists... Done
Building dependency tree 
Reading state information... Done
Calculating upgrade... Done
The following packages will be upgraded:
 apt apt-utils base-files cloud-init gcc-5-base grub-common grub-pc grub-pc-bin grub2-common
 initramfs-tools initramfs-tools-bin initramfs-tools-core kmod libapparmor1 libapt-inst2.0
 libapt-pkg5.0 libasn1-8-heimdal libcryptsetup4 libcups2 libdns-export162 libexpat1 libgdk-pixbuf2.0-0
 libgdk-pixbuf2.0-common libgnutls-openssl27 libgnutls30 libgraphite2-3 libgssapi3-heimdal libgtk2.0-0
 libgtk2.0-bin libgtk2.0-common libhcrypto4-heimdal libheimbase1-heimdal libheimntlm0-heimdal
 libhx509-5-heimdal libisc-export160 libkmod2 libkrb5-26-heimdal libpython3.5 libpython3.5-minimal
 libpython3.5-stdlib libroken18-heimdal libstdc++6 libsystemd0 libudev1 libwind0-heimdal libxml2
 logrotate mdadm ntp ntpdate open-iscsi python3-jwt python3.5 python3.5-minimal systemd systemd-sysv
 tcpdump udev unattended-upgrades
59 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
Need to get 24.3 MB of archives.
After this operation, 77.8 kB of additional disk space will be used.
Do you want to continue? [Y/n] 
...

First is grub, and the diff (not shown here) shows that it is a minor issue. The new version of grub.cfg changes the system to appear as Debian instead of Ubuntu. I did not investigate this further.

We are then asked where to install grub. We set it to /dev/sda and hope that the server can successfully reboot. We note that instead of an 80GB SSD disk as written in the description, we got a 160GB SSD. Not bad.

Setting up cloud-init (0.7.9-233-ge586fe35-0ubuntu1~16.04.2) ...

Configuration file '/etc/cloud/cloud.cfg'
 ==> Modified (by you or by a script) since installation.
 ==> Package distributor has shipped an updated version.
 What would you like to do about it ? Your options are:
 Y or I : install the package maintainer's version
 N or O : keep your currently-installed version
 D : show the differences between the versions
 Z : start a shell to examine the situation
 The default action is to keep your current version.
*** cloud.cfg (Y/I/N/O/D/Z) [default=N] ? N
Progress: [ 98%] [##################################################################################.]

Still during apt upgrade, it complains about /etc/cloud/cloud.cfg. Here is the diff between the installed and packaged versions. We keep the existing file and do not install the new packaged generic version (with that one, the server will not boot).

At the end, it complains about

W: Possible missing firmware /lib/firmware/ast_dp501_fw.bin for module ast

Time to reboot the server and check if we messed it up.

root@lxd:~# shutdown -r now

$ ssh root@ip.ip.ip.ip
Welcome to Ubuntu 16.04.3 LTS (GNU/Linux 4.10.0-24-generic x86_64)

* Documentation: https://help.ubuntu.com
 * Management: https://landscape.canonical.com
 * Support: https://ubuntu.com/advantage
Last login: Tue Sep 26 15:29:58 2017 from 1.2.3.4
root@lxd:~#

We are good! Note that now it says Ubuntu 16.04.3 while before it was Ubuntu 16.04.2.

LXD is not installed by default,

root@lxd:~# apt policy lxd
lxd:
      Installed: (none)
      Candidate: 2.0.10-0ubuntu1~16.04.1
      Version table:
              2.0.10-0ubuntu1~16.04.1 500
                      500 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 Packages
              2.0.0-0ubuntu4 500
                      500 http://archive.ubuntu.com/ubuntu xenial/main amd64 Packages

There are two versions: 2.0.0, which is the stock version released initially with Ubuntu 16.04, and 2.0.10, which is currently the latest stable version for Ubuntu 16.04. Let’s install.

root@lxd:~# apt install lxd
...

We are now ready to add the non-root user account.

root@lxd:~# adduser myusername
Adding user `myusername' ...
Adding new group `myusername' (1000) ...
Adding new user `myusername' (1000) with group `myusername' ...
Creating home directory `/home/myusername' ...
Copying files from `/etc/skel' ...
Enter new UNIX password: 
Retype new UNIX password: 
passwd: password updated successfully
Changing the user information for myusername
Enter the new value, or press ENTER for the default
 Full Name []: 
 Room Number []: 
 Work Phone []: 
 Home Phone []: 
 Other []: 
Is the information correct? [Y/n] Y

root@lxd:~# ssh myusername@localhost
Permission denied (publickey).
root@lxd:~# cp -R ~/.ssh/ ~myusername/
root@lxd:~# chown -R myusername:myusername ~myusername/

We added the new username, then tested that password authentication is indeed disabled. Finally, we copied the authorized_keys file from ~root/ to the new non-root account, and adjusted the ownership of those files.
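We also need to add the new account to the sudo and lxd groups, so that the sudo and lxc commands used below as the non-root user actually work (this step is implied in the original post):

root@lxd:~# usermod -a -G sudo,lxd myusername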

Let’s log out from the server and log in again as the new non-root account.

$ ssh myusername@ip.ip.ip.ip
Welcome to Ubuntu 16.04.3 LTS (GNU/Linux 4.10.0-24-generic x86_64)

* Documentation: https://help.ubuntu.com
 * Management: https://landscape.canonical.com
 * Support: https://ubuntu.com/advantage

The programs included with the Ubuntu system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Ubuntu comes with ABSOLUTELY NO WARRANTY, to the extent permitted by
applicable law.

**************************************************************************
# This system is using the EC2 Metadata Service, but does not appear to #
# be running on Amazon EC2 or one of cloud-init's known platforms that #
# provide a EC2 Metadata service. In the future, cloud-init may stop #
# reading metadata from the EC2 Metadata Service unless the platform can #
# be identified. #
# #
# If you are seeing this message, please file a bug against #
# cloud-init at #
# https://bugs.launchpad.net/cloud-init/+filebug?field.tags=dsid #
# Make sure to include the cloud provider your instance is #
# running on. #
# #
# For more information see #
# https://bugs.launchpad.net/bugs/1660385 #
# #
# After you have filed a bug, you can disable this warning by #
# launching your instance with the cloud-config below, or #
# putting that content into #
# /etc/cloud/cloud.cfg.d/99-ec2-datasource.cfg #
# #
# #cloud-config #
# datasource: #
# Ec2: #
# strict_id: false #
**************************************************************************

Disable the warnings above by:
 touch /home/myusername/.cloud-warnings.skip
or
 touch /var/lib/cloud/instance/warnings/.skip
myusername@lxd:~$

This issue is related to our action to keep the existing cloud.cfg after we upgraded the cloud-init package. It is something that packet.net (the provider) should deal with.

We are ready to try out LXD on packet.net.

Configuring LXD

Let’s configure LXD. First, how much free space do we have?

myusername@lxd:~$ df -h /
Filesystem Size Used Avail Use% Mounted on
/dev/sda3 136G 1.1G 128G 1% /
myusername@lxd:~$

There is plenty of space, we are using 100GB for LXD.

We are using ZFS as the LXD storage backend, therefore,

myusername@lxd:~$ sudo apt install zfsutils-linux

Now, we set up LXD.

myusername@lxd:~$ sudo lxd init
Name of the storage backend to use (dir or zfs) [default=zfs]: zfs 
Create a new ZFS pool (yes/no) [default=yes]? yes
Name of the new ZFS pool [default=lxd]: lxd 
Would you like to use an existing block device (yes/no) [default=no]? no
Size in GB of the new loop device (1GB minimum) [default=27]: 100
Would you like LXD to be available over the network (yes/no) [default=no]? no
Do you want to configure the LXD bridge (yes/no) [default=yes]? yes

LXD has been successfully configured.
myusername@lxd:~$ lxc list
+------+-------+------+------+------+-----------+
| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
+------+-------+------+------+------+-----------+
myusername@lxd:~$

Trying out LXD

Let’s create a container, install nginx and then make the web server accessible through the Internet.

myusername@lxd:~$ lxc launch ubuntu:16.04 web
Creating web
Retrieving image: rootfs: 100% (47.99MB/s) 
Starting web 
myusername@lxd:~$

Let’s see the details of the container, called web.

myusername@lxd:~$ lxc list --columns ns4tS
+------+---------+---------------------+------------+-----------+
| NAME | STATE   | IPV4                | TYPE       | SNAPSHOTS |
+------+---------+---------------------+------------+-----------+
| web  | RUNNING | 10.253.67.97 (eth0) | PERSISTENT | 0         |
+------+---------+---------------------+------------+-----------+
myusername@lxd:~$

We can see the container IP address. The parameter ns4tS simply omits the IPv6 address (‘6’) so that the table will look nice on the blog post.

Let’s enter the container and install nginx.

myusername@lxd:~$ lxc exec web -- sudo --login --user ubuntu
To run a command as administrator (user "root"), use "sudo <command>".
See "man sudo_root" for details.

ubuntu@web:~$

We execute in the web container the whole command sudo --login --user ubuntu, which gives us a login shell in the container. All Ubuntu containers have a default non-root account called ubuntu.

ubuntu@web:~$ sudo apt update
Hit:1 http://archive.ubuntu.com/ubuntu xenial InRelease

3 packages can be upgraded. Run 'apt list --upgradable' to see them.
ubuntu@web:~$ sudo apt install nginx
Reading package lists... Done

Processing triggers for ufw (0.35-0ubuntu2) ...
ubuntu@web:~$ sudo vi /var/www/html/index.nginx-debian.html
ubuntu@web:~$ logout

Before installing a package, we must update the package list. We updated and then installed nginx. Subsequently, we touched up the default HTML file a bit to mention Packet.net and LXD. Finally, we logged out from the container.

Let’s test that the web server in the container is working.

myusername@lxd:~$ curl 10.253.67.97
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx on Packet.net in an LXD container!</title>
<style>
 body {
 width: 35em;
 margin: 0 auto;
 font-family: Tahoma, Verdana, Arial, sans-serif;
 }
</style>
</head>
<body>
<h1>Welcome to nginx on Packet.net in an LXD container!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
myusername@lxd:~$

The last step is to get Ubuntu to forward any Internet connections from port 80 to the container at port 80. For this, we need the public IP of the server and the private IP of the container (it’s 10.253.67.97).

myusername@lxd:~$ ifconfig 
bond0 Link encap:Ethernet HWaddr 0c:c4:7a:de:51:a8 
      inet addr:147.75.82.251 Bcast:255.255.255.255 Mask:255.255.255.254
      inet6 addr: 2604:1380:2000:600::1/127 Scope:Global
      inet6 addr: fe80::ec4:7aff:fee5:4462/64 Scope:Link
      UP BROADCAST RUNNING MASTER MULTICAST MTU:1500 Metric:1
      RX packets:144216 errors:0 dropped:0 overruns:0 frame:0
      TX packets:14181 errors:0 dropped:0 overruns:0 carrier:0
      collisions:0 txqueuelen:1000 
      RX bytes:211518302 (211.5 MB) TX bytes:1443508 (1.4 MB)

The interface is a bond, bond0. Two 1Gbps connections are bonded together.

myusername@lxd:~$ PORT=80 PUBLIC_IP=147.75.82.251 CONTAINER_IP=10.253.67.97 sudo -E bash -c 'iptables -t nat -I PREROUTING -i bond0 -p TCP -d $PUBLIC_IP --dport $PORT -j DNAT --to-destination $CONTAINER_IP:$PORT -m comment --comment "forward to the Nginx container"'
myusername@lxd:~$
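We can double-check that the rule is in place by listing the NAT PREROUTING chain. Note that iptables rules added in this way do not survive a reboot; a package such as iptables-persistent can save them permanently.

myusername@lxd:~$ sudo iptables -t nat -L PREROUTING -n --line-numbers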

Let’s test it out!

That’s it!

Permanent link to this article: https://blog.simos.info/how-to-set-up-lxd-on-packet-net-baremetal-servers/

Sep 26 2017

How to use Ubuntu and LXD on Alibaba Cloud

Alibaba Cloud is similar to Amazon Web Services, in that they offer quite similar cloud services. They are part of the Alibaba Group, a huge Chinese conglomerate; for example, the retailer component of the Alibaba Group is now bigger than Walmart. Here, we try out the cloud services.

The main reason to select Alibaba Cloud is to get a server running inside China. They also have several data centers outside China, but inside China it is mostly Alibaba Cloud. To get a server running inside mainland China, though, you need to go through a registration process where you submit photos of your passport. We do not have time for that, therefore we select the closest data center to China, Hong Kong.

Creating an account on Alibaba Cloud

Click to create an account on Alibaba Cloud (update: no referral link). You get $300 of credit to use within two months, and up to $50 of that credit can go towards launching virtual private servers. Create that account now, before continuing with the rest of this section.

When creating the account, there is either the option to verify your email or phone number. Let’s do the email verification.

Let’s check our mails. Where is that email from Alibaba Cloud? Nothing arrived!?!

The usability disaster is almost evident. When you get to this page about the verification, the text says We need to verify your email. Please input the number you receive. Alibaba Cloud has not actually sent that email to us yet; we need to first click on Send to get it to send that email. The text should have said instead something like To use email verification, click Send below, then input the code you have received.

You can pay Alibaba Cloud using either a bank card or Paypal. Let’s try Paypal! Actually, to make use of the $300 credit, it has to be a bank card instead.

We have added a bank card. This bank card has to go through a verification step. Alibaba Cloud will make a small debit (to be refunded later) and you can input either the transaction amount or the transaction code (see screenshot below) in order to verify that you do have access to your bank card.

After a couple of days, you get worried because there is no transaction with the description INTL*?????.ALIYUN.COM at your online banking. What went wrong? And what is this weird transaction with a different description in my bank statement?

Description: INTL*175 LUXEM LU ,44

Debit amount: 0.37€

What is LUXEM, a municipality in Germany, doing on my bank statement? Let’s hope that the processor for Alibaba in Europe is LUXEM, not ALIYUN.

Let’s try as transaction code the number 175. Did not work. Four more tries remaining.

Let’s try the transaction amount, 0.37€. Of course, it did not work. It wants USD, not euros! Three tries remaining.

Let’s google a bit, Add a payment method documentation on Alibaba Cloud talks only about dollars. A forum post about non-dollar currencies says:

I did not get an authorization charge, therefore there is no X.

Let’s do something really crazy:

We type 0.44 as the transaction amount. IT WORKED!

In retrospect, there is a reference to ',44' in the description; who would have thought that this undocumented detail refers to the dollar amount.

After a week, the micro transaction of 0.37€ was not reimbursed. What’s more, I was also charged with a 2.5€ commission which I am not getting back either.

We are now ready to use the $300 Free Credit!

Creating a server on Alibaba Cloud

When trying to create a server, you may encounter this website, with a hostname YUNDUN.console.aliyun.com. If you get that, you are in the wrong place. You cannot add your SSH key here, nor do you create a server.

Instead, it should say ECS, Elastic Compute Service.

Here is the full menu for ECS,

Under Networks & Security, there is Key Pairs. Let’s add there the SSH public key, not the whole key pair.

First of all, we need to select the appropriate data center. Ok, we change to Hong Kong which is listed in the middle.

But how do we add our own SSH key? There is only an option to Create Key Pair!?! Well, let’s create a pair.

Ah, okay. Although the page is called Create Key Pair, we can actually Import an Existing Key Pair.

Now, click back to Elastic Compute S…/Overview, which shows each data center.

If we were to try to create a server in Mainland China, we get

In that case, we would need to send first a photo of our passport or our driver’s license.

Let’s go back, and select Hong Kong.

We are ready to configure our server.

There is an option of either a Starter Package or an Advanced Purchase. The Starter Package is really cool; you can get a server for only $4.50. But the fine print for the $300 credit says that you cannot use the Starter Package here.

So, Advanced Purchase it will be.

There are two pricing models, Subscription and Pay As You Go. Subscription means that you pay monthly, Pay As You Go means that you pay hourly. We go for Subscription.

We select the 1-core, 1GB instance, and we can see the price at $12.29. We also pay separately for the Internet traffic. The cost is shown on an overlay; we still have more options to select before we create the server.

We change the default Security Group to the one shown above. We want our server to be accessible from outside on ports 80 and 443. Also port 22 is added by default, along with the port 3389 (Remote Desktop in Windows).

We select Ubuntu 16.04.  The order of the operating systems is a bit weird. Ideally, the order should reflect the popularity.

There is an option for Server Guard. Let’s try it since it is free (it requires installing some closed-source package in our Linux; eventually I did not try it).

The Ultra Cloud Disk is a network share and it is included in the earlier price. The other option would be to select an SSD. It is nice that we can add up to 16 disks to our server.

We are ready to place the order. It correctly shows $0 and mentions the $50 credit. We select not to auto renew.

Now we pay the $0.

And that’s how we start a server. We have received an email with the IP address but can also find the public IP address from the ECS settings.

Let’s have a look at the IP block for this IP address.

ffs.

How to set up LXD on an Alibaba server

First, we SSH to the server. The command looks like ssh root@_public_ip_address_

It looks like real Ubuntu, with a real Ubuntu Linux kernel. Let’s update.

root@iZj6c66d14k19wi7139z9eZ:~# apt update
Get:1 http://mirrors.cloud.aliyuncs.com/ubuntu xenial InRelease [247 kB]
Hit:2 http://mirrors.aliyun.com/ubuntu xenial InRelease

...
Get:45 http://mirrors.aliyun.com/ubuntu xenial-security/universe i386 Packages [147 kB] 
Get:46 http://mirrors.aliyun.com/ubuntu xenial-security/universe Translation-en [89.8 kB] 
Fetched 40.8 MB in 24s (1682 kB/s) 
Reading package lists... Done
Building dependency tree 
Reading state information... Done
105 packages can be upgraded. Run 'apt list --upgradable' to see them.
root@iZj6c66d14k19wi7139z9eZ:~#

We upgraded (apt upgrade) and there was a kernel update. We restarted (shutdown -r now) and the newly updated Ubuntu has the updated kernel. Nice!

Let’s check /proc/cpuinfo,

root@iZj6c66d14k19wi7139z9eZ:~# cat /proc/cpuinfo 
processor : 0
vendor_id : GenuineIntel
cpu family : 6
model : 63
model name : Intel(R) Xeon(R) CPU E5-2680 v3 @ 2.50GHz
stepping : 2
microcode : 0x1
cpu MHz : 2494.224
cache size : 30720 KB
physical id : 0
siblings : 1
core id : 0
cpu cores : 1
apicid : 0
initial apicid : 0
fpu : yes
fpu_exception : yes
cpuid level : 13
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl eagerfpu pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm fsgsbase bmi1 avx2 smep bmi2 erms invpcid xsaveopt
bugs :
bogomips : 4988.44
clflush size : 64
cache_alignment : 64
address sizes : 46 bits physical, 48 bits virtual
power management:

root@iZj6c66d14k19wi7139z9eZ:/proc#

How much free space from the 40GB disk?

root@iZj6c66d14k19wi7139z9eZ:~# df -h /
Filesystem Size Used Avail Use% Mounted on
/dev/vda1   40G 2,2G 36G 6% /
root@iZj6c66d14k19wi7139z9eZ:~#

Let’s add a non-root user.

root@iZj6c66d14k19wi7139z9eZ:~# adduser myusername
Adding user `myusername' ...
Adding new group `myusername' (1000) ...
Adding new user `myusername' (1000) with group `myusername' ...
Creating home directory `/home/myusername' ...
Copying files from `/etc/skel' ...
Enter new UNIX password: 
Retype new UNIX password: 
passwd: password updated successfully
Changing the user information for myusername
Enter the new value, or press ENTER for the default
 Full Name []: 
 Room Number []: 
 Work Phone []: 
 Home Phone []: 
 Other []: 
Is the information correct? [Y/n] 
root@iZj6c66d14k19wi7139z9eZ:~#

Is LXD already installed?

root@iZj6c66d14k19wi7139z9eZ:~# apt policy lxd
lxd:
 Installed: (none)
 Candidate: 2.0.10-0ubuntu1~16.04.2
 Version table:
     2.0.10-0ubuntu1~16.04.2 500
         500 http://mirrors.cloud.aliyuncs.com/ubuntu xenial-updates/main amd64 Packages
         500 http://mirrors.aliyun.com/ubuntu xenial-updates/main amd64 Packages
         100 /var/lib/dpkg/status
     2.0.2-0ubuntu1~16.04.1 500
         500 http://mirrors.cloud.aliyuncs.com/ubuntu xenial-security/main amd64 Packages
         500 http://mirrors.aliyun.com/ubuntu xenial-security/main amd64 Packages
     2.0.0-0ubuntu4 500
         500 http://mirrors.cloud.aliyuncs.com/ubuntu xenial/main amd64 Packages
         500 http://mirrors.aliyun.com/ubuntu xenial/main amd64 Packages
root@iZj6c66d14k19wi7139z9eZ:~#

Let’s install LXD.

root@iZj6c66d14k19wi7139z9eZ:~# apt install lxd

Now, we can add our user account myusername to the groups sudo, lxd.

root@iZj6c66d14k19wi7139z9eZ:~# usermod -a -G lxd,sudo myusername
root@iZj6c66d14k19wi7139z9eZ:~#

Let’s copy the SSH public key from root to the new non-root account.

root@iZj6c66d14k19wi7139z9eZ:~# cp -R /root/.ssh ~myusername/
root@iZj6c66d14k19wi7139z9eZ:~# chown -R myusername:myusername ~myusername/.ssh/
root@iZj6c66d14k19wi7139z9eZ:~#

Now, log out and log in as the new non-root account.

$ ssh myusername@IP.IP.IP.IP
Welcome to Ubuntu 16.04.3 LTS (GNU/Linux 4.4.0-96-generic x86_64)

* Documentation: https://help.ubuntu.com
 * Management: https://landscape.canonical.com
 * Support: https://ubuntu.com/advantage

The programs included with the Ubuntu system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Ubuntu comes with ABSOLUTELY NO WARRANTY, to the extent permitted by
applicable law.

Welcome to Alibaba Cloud Elastic Compute Service !

myusername@iZj6c66d14k19wi7139z9eZ:~$

We are going to install the ZFS utilities so that LXD can use ZFS as a storage backend.

myusername@iZj6c66d14k19wi7139z9eZ:~$ sudo apt install zfsutils-linux
...myusername@iZj6c66d14k19wi7139z9eZ:~$

Now, we can configure LXD. From before, the server had about 35GB free. We are allocating 20GB of that for LXD.

myusername@iZj6c66d14k19wi7139z9eZ:~$ sudo lxd init
sudo: unable to resolve host iZj6c66d14k19wi7139z9eZ
[sudo] password for myusername:  ********
Name of the storage backend to use (dir or zfs) [default=zfs]: zfs
Create a new ZFS pool (yes/no) [default=yes]? yes
Name of the new ZFS pool [default=lxd]: lxd
Would you like to use an existing block device (yes/no) [default=no]? no
Size in GB of the new loop device (1GB minimum) [default=15]: 20
Would you like LXD to be available over the network (yes/no) [default=no]? no
Do you want to configure the LXD bridge (yes/no) [default=yes]? yes
Warning: Stopping lxd.service, but it can still be activated by:
lxd.socket

LXD has been successfully configured.
myusername@iZj6c66d14k19wi7139z9eZ:~$ lxc list
Generating a client certificate. This may take a minute...
If this is your first time using LXD, you should also run: sudo lxd init
To start your first container, try: lxc launch ubuntu:16.04

+------+-------+------+------+------+-----------+
| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
+------+-------+------+------+------+-----------+
myusername@iZj6c66d14k19wi7139z9eZ:~$
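The sudo warning about being unable to resolve the host appears because the instance hostname is missing from /etc/hosts. It is harmless, but it can be silenced by adding the hostname (a quick extra step):

myusername@iZj6c66d14k19wi7139z9eZ:~$ echo "127.0.1.1 $(hostname)" | sudo tee -a /etc/hosts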

Okay, we can create now our first LXD container. We are creating just a web server.

myusername@iZj6c66d14k19wi7139z9eZ:~$ lxc launch ubuntu:16.04 web
Creating web
Retrieving image: rootfs: 100% (6.70MB/s) 
Starting web 
myusername@iZj6c66d14k19wi7139z9eZ:~$

Let’s see the container,

myusername@iZj6c66d14k19wi7139z9eZ:~$ lxc list
+------+---------+---------------------+------+------------+-----------+
| NAME | STATE   | IPV4                | IPV6 | TYPE       | SNAPSHOTS |
+------+---------+---------------------+------+------------+-----------+
| web  | RUNNING | 10.35.87.141 (eth0) |      | PERSISTENT | 0         |
+------+---------+---------------------+------+------------+-----------+
myusername@iZj6c66d14k19wi7139z9eZ:~$

Nice. We get into the container and install a web server.

myusername@iZj6c66d14k19wi7139z9eZ:~$ lxc exec web -- sudo --login --user ubuntu

To run a command as administrator (user "root"), use "sudo <command>".
See "man sudo_root" for details.

ubuntu@web:~$

We executed into the web container the command sudo --login --user ubuntu. The container has a default non-root account, ubuntu.

ubuntu@web:~$ sudo apt update
Get:1 http://security.ubuntu.com/ubuntu xenial-security InRelease [102 kB]
Hit:2 http://archive.ubuntu.com/ubuntu xenial InRelease 
...
Reading state information... Done
3 packages can be upgraded. Run 'apt list --upgradable' to see them.
ubuntu@web:~$ sudo apt install nginx
Reading package lists... Done
...
Processing triggers for ufw (0.35-0ubuntu2) ...
ubuntu@web:~$ sudo vi /var/www/html/index.nginx-debian.html 
ubuntu@web:~$ logout
myusername@iZj6c66d14k19wi7139z9eZ:~$ curl 10.35.87.141
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx running in an LXD container on Alibaba Cloud!</title>
<style>
 body {
 width: 35em;
 margin: 0 auto;
 font-family: Tahoma, Verdana, Arial, sans-serif;
 }
</style>
</head>
<body>
<h1>Welcome to nginx running in an LXD container on Alibaba Cloud!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
myusername@iZj6c66d14k19wi7139z9eZ:~$

Obviously, the web server in the container is not accessible from the Internet. We need to do something like adding iptables rules to forward the connection appropriately.

Alibaba Cloud gives two IP addresses per server. One is the public IP address and the other is a private IP address (172.[16-31].*.*). The eth0 interface of the server has that private IP address. This information is important for the iptables rule below.

myusername@iZj6c66d14k19wi7139z9eZ:~$ PORT=80 PUBLIC_IP=my172.IPAddress CONTAINER_IP=10.35.87.141 sudo -E bash -c 'iptables -t nat -I PREROUTING -i eth0 -p TCP -d $PUBLIC_IP --dport $PORT -j DNAT --to-destination $CONTAINER_IP:$PORT -m comment --comment "forward to the Nginx container"'
myusername@iZj6c66d14k19wi7139z9eZ:~$

Let’s load up our site using the public IP address from our own computer:

And that’s it!

Conclusion

Alibaba Cloud is yet another provider for cloud services. They are big in China, actually the biggest in China. They are trying to expand to the rest of the world. There are several teething problems, probably arising from the fact that the main website is in Mandarin and there is no infrastructure for immediate translation to English.

On HN there was a sort of relaunch a few months ago. It appears there is interest from them to get international users. What they need is people to attend immediately to issues as they are discovered.

If you want to learn more about LXD, see https://stgraber.org/2016/03/11/lxd-2-0-blog-post-series-012/

 

Update #1

After a day of running a VPS on Alibaba Cloud, I received this email.

From: Alibaba Cloud
Subject: 【Immediate Attention Needed】Alibaba Cloud Fraud Prevention

We have detected a security risk with the card you are using to make purchases. In order to protect your account, please provide your account ID and the following information within one working day via your registered Alibaba Cloud email to compliance_support@aliyun.com for further investigation. 

If you are using a credit card as your payment method, please provide the following information directly. Please provide clear copies of: 

1. Any ONE of the following three forms of government-issued photo identification for the credit card holder or payment account holder of this Alibaba Cloud account: (i) National identification card; (ii) Passport; (iii) Driver's License. 
2. A clear copy of the front side of your credit card in connection with this Alibaba Account; (Note: For security reasons, we advise you to conceal the middle digits of your card number. Please make sure that the card holder's name, card issuing bank and the last four digits of the card number are clearly visible). 
3. A clear copy of your card's bank statement. We will process your case within 3 working days of receiving the information listed above. NOTE: Please do not provide information in this ticket. All the information needed should be sent to this email compliance_support@aliyun.com.

If you fail to provide all the above information within one working day , your instances will be shut down. 

Best regards, 

Alibaba Cloud Customer Service Center

What this means, is that update #2 has to happen now.

 

Update #2

Newer versions of LXD have a utility called lxd-benchmark. This utility spawns, starts and stops containers, and can be used to get an idea of how efficient a server may be. I suppose primarily it is used to figure out if there is a regression in the LXD code. Let’s see it in action here anyway; the clock is ticking.

The new LXD is in a PPA at https://launchpad.net/~ubuntu-lxc/+archive/ubuntu/lxd-stable. Let’s install it on Alibaba Cloud.

sudo apt-get install software-properties-common
sudo add-apt-repository ppa:ubuntu-lxc/lxd-stable
sudo apt update
sudo apt upgrade                   # Now LXD will be upgraded.
sudo apt install lxd-tools         # Now lxd-benchmark will be installed.

Let’s see the options for lxd-benchmark.

Usage: lxd-benchmark spawn [--count=COUNT] [--image=IMAGE] [--privileged=BOOL] [--start=BOOL] [--freeze=BOOL] [--parallel=COUNT]
 lxd-benchmark start [--parallel=COUNT]
 lxd-benchmark stop [--parallel=COUNT]
 lxd-benchmark delete [--parallel=COUNT]

--count (= 100)
 Number of containers to create
 --freeze (= false)
 Freeze the container right after start
 --image (= "ubuntu:")
 Image to use for the test
 --parallel (= -1)
 Number of threads to use
 --privileged (= false)
 Use privileged containers
 --report-file (= "")
 A CSV file to write test file to. If the file is present, it will be appended to.
 --report-label (= "")
 A label for the report entry. By default, the action is used.
 --start (= true)
 Start the container after creation

First, we need to spawn new containers that we can later start, stop or delete. Ideally, I would expect the terminology to be launch instead of spawn, to keep in sync with the existing container management commands.

Second, there are defaults for each command as shown above. There is no indication yet as to how much RAM you need to spawn the default 100 containers. Obviously it would be more than the 1GB RAM we have on this server. Regarding the disk space, that would be fine because of copy-on-write with ZFS; any newly created LXD container does not use up additional space as they all are based on the space of the first container. Perhaps after a day when unattended-upgrades kicks in, each container would use up some space for any required security updates that get automatically applied.

Let’s try out with 3 containers. We have stopped and deleted the original web container that we have created in this tutorial (lxc stop web ; lxc delete web).

$ lxd-benchmark spawn --count 3
Test environment:
 Server backend: lxd
 Server version: 2.18
 Kernel: Linux
 Kernel architecture: x86_64
 Kernel version: 4.4.0-96-generic
 Storage backend: zfs
 Storage version: 0.6.5.6-0ubuntu16
 Container backend: lxc
 Container version: 2.1.0

Test variables:
 Container count: 3
 Container mode: unprivileged
 Startup mode: normal startup
 Image: ubuntu:
 Batches: 3
 Batch size: 1
 Remainder: 0

[Sep 27 17:31:41.074] Importing image into local store: 03c2fa6716b5f41684457ca5e1b7316df520715b7fea0378f9306d16fdc646ee
[Sep 27 17:32:12.825] Found image in local store: 03c2fa6716b5f41684457ca5e1b7316df520715b7fea0378f9306d16fdc646ee
[Sep 27 17:32:12.825] Batch processing start
[Sep 27 17:32:37.614] Processed 1 containers in 24.790s (0.040/s)
[Sep 27 17:32:42.611] Processed 2 containers in 29.786s (0.067/s)
[Sep 27 17:32:49.110] Batch processing completed in 36.285s
$ lxc list --columns ns4tS
+-------------+---------+---------------------+------------+-----------+
| NAME        | STATE   | IPV4                | TYPE       | SNAPSHOTS |
+-------------+---------+---------------------+------------+-----------+
| benchmark-1 | RUNNING | 10.35.87.252 (eth0) | PERSISTENT | 0         |
+-------------+---------+---------------------+------------+-----------+
| benchmark-2 | RUNNING | 10.35.87.115 (eth0) | PERSISTENT | 0         |
+-------------+---------+---------------------+------------+-----------+
| benchmark-3 | RUNNING | 10.35.87.72 (eth0)  | PERSISTENT | 0         |
+-------------+---------+---------------------+------------+-----------+
| web         | RUNNING | 10.35.87.141 (eth0) | PERSISTENT | 0         |
+-------------+---------+---------------------+------------+-----------+
$

We created three extra containers, named benchmark-1 to benchmark-3, and got them started. They were launched in three batches, which means that they were started one after another, not in parallel.

The total time on this server, with the zfs storage backend, was 36.2 seconds. The number in parentheses in lines like Processed 1 containers in 24.790s (0.040/s) appears to be the cumulative rate in containers per second (here, 1 container ÷ 24.790s ≈ 0.040/s).

The total time on this server, with the dir storage backend, was 68.6 seconds instead.

Let’s stop them!

$ lxd-benchmark stop
Test environment:
 Server backend: lxd
 Server version: 2.18
 Kernel: Linux
 Kernel architecture: x86_64
 Kernel version: 4.4.0-96-generic
 Storage backend: zfs
 Storage version: 0.6.5.6-0ubuntu16
 Container backend: lxc
 Container version: 2.1.0

[Sep 27 18:06:08.822] Stopping 3 containers
[Sep 27 18:06:08.822] Batch processing start
[Sep 27 18:06:09.680] Processed 1 containers in 0.858s (1.165/s)
[Sep 27 18:06:10.543] Processed 2 containers in 1.722s (1.162/s)
[Sep 27 18:06:11.406] Batch processing completed in 2.584s
$

With dir, it was around 2.4 seconds.

And then delete them!

$ lxd-benchmark delete
Test environment:
 Server backend: lxd
 Server version: 2.18
 Kernel: Linux
 Kernel architecture: x86_64
 Kernel version: 4.4.0-96-generic
 Storage backend: zfs
 Storage version: 0.6.5.6-0ubuntu16
 Container backend: lxc
 Container version: 2.1.0

[Sep 27 18:07:12.020] Deleting 3 containers
[Sep 27 18:07:12.020] Batch processing start
[Sep 27 18:07:12.130] Processed 1 containers in 0.110s (9.116/s)
[Sep 27 18:07:12.224] Processed 2 containers in 0.204s (9.814/s)
[Sep 27 18:07:12.317] Batch processing completed in 0.297s
$

With dir, it was 2.5 seconds.

Let’s create three containers in parallel.

$ lxd-benchmark spawn --count=3 --parallel=3
Test environment:
 Server backend: lxd
 Server version: 2.18
 Kernel: Linux
 Kernel architecture: x86_64
 Kernel version: 4.4.0-96-generic
 Storage backend: zfs
 Storage version: 0.6.5.6-0ubuntu16
 Container backend: lxc
 Container version: 2.1.0

Test variables:
 Container count: 3
 Container mode: unprivileged
 Startup mode: normal startup
 Image: ubuntu:
 Batches: 1
 Batch size: 3
 Remainder: 0

[Sep 27 18:11:01.570] Found image in local store: 03c2fa6716b5f41684457ca5e1b7316df520715b7fea0378f9306d16fdc646ee
[Sep 27 18:11:01.570] Batch processing start
[Sep 27 18:11:11.574] Processed 3 containers in 10.004s (0.300/s)
[Sep 27 18:11:11.574] Batch processing completed in 10.004s
$

With dir, it was 58.7 seconds.

Let’s push it further and try to hit some memory limits! First, we delete all the containers, and then launch 5 in parallel.

$ lxd-benchmark spawn --count=5 --parallel=5
Test environment:
 Server backend: lxd
 Server version: 2.18
 Kernel: Linux
 Kernel architecture: x86_64
 Kernel version: 4.4.0-96-generic
 Storage backend: zfs
 Storage version: 0.6.5.6-0ubuntu16
 Container backend: lxc
 Container version: 2.1.0

Test variables:
 Container count: 5
 Container mode: unprivileged
 Startup mode: normal startup
 Image: ubuntu:
 Batches: 1
 Batch size: 5
 Remainder: 0

[Sep 27 18:13:11.171] Found image in local store: 03c2fa6716b5f41684457ca5e1b7316df520715b7fea0378f9306d16fdc646ee
[Sep 27 18:13:11.172] Batch processing start
[Sep 27 18:13:33.461] Processed 5 containers in 22.290s (0.224/s)
[Sep 27 18:13:33.461] Batch processing completed in 22.290s
$

So, 5 containers can start in 1GB of RAM, in just 22 seconds.

We also tried the same with the dir storage backend, and got

[Sep 27 17:24:16.409] Batch processing start
[Sep 27 17:24:54.508] Failed to spawn container 'benchmark-5': Unpack failed, Failed to run: unsquashfs -f -d /var/lib/lxd/storage-pools/default/containers/benchmark-5/rootfs -n -da 99 -fr 99 -p 1 /var/lib/lxd/images/03c2fa6716b5f41684457ca5e1b7316df520715b7fea0378f9306d16fdc646ee.rootfs: . 
[Sep 27 17:25:11.129] Failed to spawn container 'benchmark-3': Unpack failed, Failed to run: unsquashfs -f -d /var/lib/lxd/storage-pools/default/containers/benchmark-3/rootfs -n -da 99 -fr 99 -p 1 /var/lib/lxd/images/03c2fa6716b5f41684457ca5e1b7316df520715b7fea0378f9306d16fdc646ee.rootfs: . 
[Sep 27 17:25:35.906] Processed 5 containers in 79.496s (0.063/s)
[Sep 27 17:25:35.906] Batch processing completed in 79.496s

Out of the five containers, it managed to create three (benchmark-1, -2 and -4). The reason is that unsquashfs needs to run to unpack the image into each container's rootfs, and that process uses a lot of memory. With zfs, the image is unpacked once into the storage pool and the containers are cloned from it, so this memory-hungry step does not repeat for every container.

Let’s delete the five containers (storage backend: zfs):

[Sep 27 18:18:37.432] Batch processing completed in 5.006s

Let’s close the post with

$ lxd-benchmark spawn --count=10 --parallel=5
Test environment:
 Server backend: lxd
 Server version: 2.18
 Kernel: Linux
 Kernel architecture: x86_64
 Kernel version: 4.4.0-96-generic
 Storage backend: zfs
 Storage version: 0.6.5.6-0ubuntu16
 Container backend: lxc
 Container version: 2.1.0

Test variables:
 Container count: 10
 Container mode: unprivileged
 Startup mode: normal startup
 Image: ubuntu:
 Batches: 2
 Batch size: 5
 Remainder: 0

[Sep 27 18:19:44.706] Found image in local store: 03c2fa6716b5f41684457ca5e1b7316df520715b7fea0378f9306d16fdc646ee
[Sep 27 18:19:44.706] Batch processing start
[Sep 27 18:20:07.705] Processed 5 containers in 22.998s (0.217/s)
[Sep 27 18:20:57.114] Processed 10 containers in 72.408s (0.138/s)
[Sep 27 18:20:57.114] Batch processing completed in 72.408s

We launched 10 containers in two batches of five containers each. The lxd-benchmark command completed successfully, in just 72 seconds. However, after the command completed, each container still had to boot, get an IP address and start its services. We hit the memory limit while the second batch of five containers were starting up. The monitor on the Alibaba Cloud management console shows 100% CPU utilization, and it is not possible to access the server over SSH. Let’s delete the server from the management console and wind down this trial of Alibaba Cloud.

lxd-benchmark is quite useful; it can give you a practical idea of how many containers a server can handle, and much more.
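For example, the --report-file and --report-label options shown in the help output above can be used to collect several runs into one CSV file and compare storage backends later. A minimal sketch, assuming these options apply to each subcommand as the help output suggests (results.csv and the labels are just placeholders):

lxd-benchmark spawn --count=5 --parallel=5 --report-file=results.csv --report-label=zfs-spawn-5x5
lxd-benchmark delete --report-file=results.csv --report-label=zfs-delete
cat results.csv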

Update #3

I just restarted the server from the management console and connected using SSH.

Here are the ten containers from Update #2,

$ lxc list --columns ns4
+--------------+---------+------+
| NAME         | STATE   | IPV4 |
+--------------+---------+------+
| benchmark-01 | STOPPED |      |
+--------------+---------+------+
| benchmark-02 | STOPPED |      |
+--------------+---------+------+
| benchmark-03 | STOPPED |      |
+--------------+---------+------+
| benchmark-04 | STOPPED |      |
+--------------+---------+------+
| benchmark-05 | STOPPED |      |
+--------------+---------+------+
| benchmark-06 | STOPPED |      |
+--------------+---------+------+
| benchmark-07 | STOPPED |      |
+--------------+---------+------+
| benchmark-08 | STOPPED |      |
+--------------+---------+------+
| benchmark-09 | STOPPED |      |
+--------------+---------+------+
| benchmark-10 | STOPPED |      |
+--------------+---------+------+

The containers are in the stopped state. That is, they do not consume memory. How much free memory is there?

$ free
              total        used        free      shared  buff/cache   available
Mem:        1016020       56192      791752        2928      168076      805428
Swap:             0           0           0

That is about 792MB of free memory.

There is not enough memory to run them all at the same time. It is good that they come up in the stopped state after a reboot, so that you have a chance to recover the server.
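If you still want to start a few of them without risking the host again, LXD can cap the memory of a container. A minimal sketch, using an illustrative 128MB limit on one of the benchmark containers:

lxc config set benchmark-01 limits.memory 128MB
lxc start benchmark-01
lxc exec benchmark-01 -- free -m      # with lxcfs, free inside the container reflects the limit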

Permanent link to this article: https://blog.simos.info/how-to-use-ubuntu-and-lxd-on-alibaba-cloud/

Jun 06 2017

How to use LXD containers on Ubuntu and other distributions

We know about virtual machines (virtual machines) such as Virtualbox and VMWare, but there are also containers, such as Docker and LXD (pronounced "lex-dee").

Here we will look at LXD containers; support for them is already available to anyone running Ubuntu 16.04 or newer. On other distributions you need to install the LXD package.

Specifically, today we will see:

  1. What is LXD and what does it offer?
  2. How do we do the initial configuration of LXD on Ubuntu Desktop (or Ubuntu Server)?
  3. How do we create our first container?
  4. How do we install nginx inside a container?
  5. What are some other practical uses of LXD containers?

In the following, we assume that we have Ubuntu 16.04 or newer. Ubuntu Desktop or Ubuntu Server is fine.

What is LXD and what does it offer?

The term Linux containers (LXC) describes the capability of the Linux kernel to confine the execution of a child process (through namespaces and cgroups) so that it is only allowed to do what we have explicitly permitted. With Docker, we can (typically) run a single process under confinement (a process container). With LXD, however, we can run an entire distribution under confinement (a machine container).

LXD is hypervisor software that allows full control of the life cycle of containers. Specifically,

  • it lets us initialize the settings as well as the storage where the containers will be kept. After this initialization, we do not need to deal with these details again.
  • it provides repositories with ready-made images of a range of distributions. There is Ubuntu (from 12.04 to 17.04, Ubuntu Core), Alpine, Debian (stretch, wheezy), Fedora (22, 23, 24, 25), Gentoo, OpenSUSE, Oracle, Plamo and Sabayon. These are available for the amd64, i386, armhf, armel, powerpc, ppc64el and s390x architectures.
  • it allows an image to be started within a few seconds. An image that has been started is a container.
  • we can take a backup of a container, transfer it over the network to another LXD installation, and so on.

The typical use of LXD containers is to run Internet services such as WordPress, with the goal of keeping each website in its own container. In this way we isolate the services and can manage them better. Compared to virtual machines, LXD containers require far fewer resources. For example, on a computer with Ubuntu Desktop and 4GB RAM, we can comfortably run ten LXD containers.

Initial configuration of LXD

Now we will configure LXD on our computer. If for some reason you do not want to do that, you can also try LXD over the Internet through the free LXD demo service.

We run the command lxd init as administrator, so that the initial configuration of LXD takes place.

$ sudo lxd init
Name of the storage backend to use (dir or zfs): dir
Would you like LXD to be available over the network (yes/no)? no
Do you want to configure the LXD bridge (yes/no)? yes 
> It will ask about network settings. We accept whatever it proposes and continue.
Warning: Stopping lxd.service, but it can still be activated by:
 lxd.socket
 LXD has been successfully configured.
$ _

It asked us about the storage backend (storage support) and we chose dir. This is the simplest option, and the files of each container will be placed in a subdirectory under /var/lib/lxd/. For more serious use, we would choose zfs. If you want to try zfs, accept whatever is proposed and allocate at least 15GB of space.

Configuring the LXD bridge sets up networking for the containers. What happens is that LXD provides a DHCP server (dnsmasq) for the containers, so that it can give them IP addresses of the form 10.x.x.x and allow them access to the Internet.
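As a quick sanity check after lxd init, you can look at the bridge on the host. A minimal sketch, assuming the default bridge name lxdbr0:

$ ip addr show lxdbr0

It should have a 10.x.x.x address, which is the gateway that the containers will use.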

We are now almost ready to run lxc commands to manage LXD images and containers. Let's verify that our user account can run LXD commands; our user needs to belong to the group named lxd. That is,

$ groups myusername
myusername : myusername adm cdrom sudo vboxusers lxd

If we were not a member of the lxd group, we would need to run

$ sudo usermod --append --groups lxd myusername
$ _

and then log out and log in again.

How to create a container

First, let's run the command that shows which containers exist. It will show an empty list.

$ lxc list
If this is your first time using LXD, you should also run: lxd init
To start your first container, try: lxc launch ubuntu:16.04
+---------+---------+-----------------+-------------------+------------+-----------+
|  NAME   |  STATE  |      IPV4       |       IPV6        |    TYPE    | SNAPSHOTS |
+---------+---------+-----------------+-------------------+------------+-----------+
+---------+---------+-----------------+-------------------+------------+-----------+

All LXD container management commands start with lxc, followed by a verb. lxc list (verb: list) shows the available containers.

We can already see that the verb to start our first container is launch. It is followed by the name of the repository, ubuntu:, and finally the identifier of the image (16.04).

There are two available image repositories, ubuntu: and images:. To see the available images in ubuntu:, we run

$ lxc image list ubuntu:
+--------------------+--------------+--------+-------------------------------------------------+---------+----------+-------------------------------+
|       ALIAS        | FINGERPRINT  | PUBLIC |                   DESCRIPTION                   |  ARCH   |   SIZE   |          UPLOAD DATE          |
+--------------------+--------------+--------+-------------------------------------------------+---------+----------+-------------------------------+
...
+--------------------+--------------+--------+-------------------------------------------------+---------+----------+-------------------------------+
| x (9 more)         | 8fa08537ae51 | yes    | ubuntu 16.04 LTS amd64 (release) (20170516)     | x86_64  | 153.70MB | May 16, 2017 at 12:00am (UTC) |
+--------------------+--------------+--------+-------------------------------------------------+---------+----------+-------------------------------+
...
$ _

We see that Ubuntu 16.04 has quite a few aliases; besides "16.04", there is also "x" (from xenial).
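For completeness, the images: repository can be browsed in the same way. A small sketch that filters the (long) list by a keyword:

$ lxc image list images: alpine

This shows only the images whose alias matches alpine.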

Let's use the Ubuntu 16.04 image (ubuntu:x) to create and start a container.

$ lxc launch ubuntu:x mycontainer
Creating mycontainer
Starting mycontainer
$ _

Here we used the image ubuntu:x as a template to create and start a container named mycontainer, which runs Ubuntu 16.04.

$ lxc list
+-------------+---------+---------------------+-----------------------------------------------+------------+-----------+
|    NAME     |  STATE  |        IPV4         |                     IPV6                      |    TYPE    | SNAPSHOTS |
+-------------+---------+---------------------+-----------------------------------------------+------------+-----------+
| mycontainer | RUNNING | 10.0.180.12 (eth0)  | fd42:accb:3958:4ca6:216:57ff:f0ff:1afa (eth0) | PERSISTENT | 0         |
+-------------+---------+---------------------+-----------------------------------------------+------------+-----------+
$ _

And this is our first container! It is running, and it has an IP address. Let's try it:

$ ping 10.0.180.12
PING 10.0.180.12 (10.0.180.12) 56(84) bytes of data.
64 bytes from 10.0.180.12: icmp_seq=1 ttl=64 time=0.036 ms
64 bytes from 10.0.180.12: icmp_seq=2 ttl=64 time=0.035 ms
64 bytes from 10.0.180.12: icmp_seq=3 ttl=64 time=0.035 ms
^C
--- 10.0.180.12 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2028ms
rtt min/avg/max/mdev = 0.035/0.035/0.036/0.004 ms
$ _

Let's execute a command inside the container!

$ lxc exec mycontainer -- uname -a
Linux mycontainer 4.8.0-53-generic #56~16.04.1-Ubuntu SMP Tue May 16 01:18:56 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
$ _

Here we used the verb exec, which takes as a parameter the name of the container, followed by the command that will run inside the container. The -- is a marker for our shell to stop looking for options; if we did not put the --, the bash shell would treat -a as an option of the lxc command and there would be a problem.

We can see that containers run our system's kernel. When we start a container, only the user-space software of a distribution image starts running; a new kernel does not start. As a result, all containers share the same kernel, even if they belong to different distributions.

Let's get a shell in the container so that we can run more commands!

$ lxc exec mycontainer -- /bin/bash
root@mycontainer:~# exit
$

That was it! We can run whatever we want in the container, as administrators. If we delete something, it gets deleted inside the container and does not affect our system.

The Ubuntu images come with a plain account named ubuntu, so we can also log in with that account. Here is how,

$ lxc exec mycontainer -- sudo --user ubuntu --login
To run a command as administrator (user "root"), use "sudo <command>".
See "man sudo_root" for details.

ubuntu@mycontainer:~$ exit
$

What we did was run the sudo command in order to become the user ubuntu and get a login shell.

How to install a network service in an LXD container

Let's install a web server in the container.

$ lxc exec mycontainer -- sudo --user ubuntu --login
To run a command as administrator (user "root"), use "sudo <command>".
See "man sudo_root" for details.

ubuntu@mycontainer:~$ sudo apt update
...
ubuntu@mycontainer:~$ sudo apt install nginx
...
ubuntu@mycontainer:~$ sudo lsof -i
COMMAND   PID     USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
dhclient  231     root    6u  IPv4 141913      0t0  UDP *:bootpc 
sshd      323     root    3u  IPv4 142683      0t0  TCP *:ssh (LISTEN)
sshd      323     root    4u  IPv6 142692      0t0  TCP *:ssh (LISTEN)
nginx    1183     root    6u  IPv4 151536      0t0  TCP *:http (LISTEN)
nginx    1183     root    7u  IPv6 151537      0t0  TCP *:http (LISTEN)
nginx    1184 www-data    6u  IPv4 151536      0t0  TCP *:http (LISTEN)
nginx    1184 www-data    7u  IPv6 151537      0t0  TCP *:http (LISTEN)
nginx    1185 www-data    6u  IPv4 151536      0t0  TCP *:http (LISTEN)
nginx    1185 www-data    7u  IPv6 151537      0t0  TCP *:http (LISTEN)
nginx    1186 www-data    6u  IPv4 151536      0t0  TCP *:http (LISTEN)
nginx    1186 www-data    7u  IPv6 151537      0t0  TCP *:http (LISTEN)
nginx    1187 www-data    6u  IPv4 151536      0t0  TCP *:http (LISTEN)
nginx    1187 www-data    7u  IPv6 151537      0t0  TCP *:http (LISTEN)
ubuntu@mycontainer:~$

We updated the package list inside the container and installed the nginx package (another option: apache2). Then we ran lsof -i to verify that the service is up and running.

We see that sshd runs by default. However, we need to put entries into ~/.ssh/authorized_keys ourselves in order to be able to connect. The root and ubuntu accounts are locked by default.

We also see that the nginx web server is up and running.

And indeed, it is accessible from our browser.
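You can also check from the host, without a browser. A minimal sketch, using the container IP address shown in the earlier lxc list output (yours will differ):

$ curl --head http://10.0.180.12/

A 200 OK response with a Server: nginx header confirms that the web server in the container is reachable.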

From here, we can do whatever other experiments we want. For completeness, let's see how to stop the container and delete it.

ubuntu@mycontainer:~$ exit
logout
$ lxc stop mycontainer
$ lxc delete mycontainer
$

That was it! We stopped the container mycontainer and then deleted it.

Practical uses of LXD containers

Let's look at some practical uses of LXD containers,

  1. We want to install a network service on our laptop, but we do not want the installed packages to linger afterwards. We install it in a container, and later stop (or delete) the container.
  2. We want to try an old PHP application that for some reason does not run on PHP 7 (Ubuntu 16.04). We install Ubuntu 14.04 ("ubuntu:t") in a container, which comes with PHP 5.x.
  3. We want to install an application in Wine, but we DO NOT WANT all those packages that Wine drags in to be installed on our system. We install Wine in an LXD container.
  4. We want to install a GUI application with hardware-accelerated graphics, without it getting tangled up with our system. We install the GUI application in an LXD container.
  5. We have two Steam accounts. How? We install Steam twice, in two containers.
  6. We want to host several websites on our VPS, and we want them to be separated from each other. We install each website in its own container.

If you have questions or need support, ask here or on the other services of the Ubuntu Greece community.

Permanent link to this article: https://blog.simos.info/%cf%80%cf%89%cf%82-%cf%87%cf%81%ce%b7%cf%83%ce%b9%ce%bc%ce%bf%cf%80%ce%bf%ce%b9%ce%bf%cf%8d%ce%bc%ce%b5-%cf%80%ce%b5%cf%81%ce%b9%ce%ad%ce%ba%cf%84%ce%b5%cf%82-lxd-lxd-containers-%cf%83%cf%84%ce%bf-u/

May 03 2017

How to run graphics-accelerated GUI apps in LXD containers on your Ubuntu desktop

In How to run Wine (graphics-accelerated) in an LXD container on Ubuntu we had a quick look into how to run GUI programs in an LXD (Lex-Dee) container, and have the output appear on the local X11 server (your Ubuntu desktop).

In this post, we are going to see how to

  1. generalize the instructions in order to run most GUI apps in an LXD container and have them appear on your desktop
  2. have accelerated graphics support and audio
  3. test with Firefox, Chromium and Chrome
  4. create shortcuts to easily launch those apps

The benefits of running GUI apps in an LXD container are

  • clear separation of the installation data and settings, from what we have on our desktop
  • ability to create a snapshot of this container, save, rollback, delete, recreate; all these in a few seconds or less
  • does not mess up your installed package list (for example, all those i386 packages for Wine, Google Earth)
  • ability to create an image of such a perfect container, publish, and have others launch in a few clicks

What we are doing today is similar to having a Virtualbox/VMWare VM and running a Linux distribution in it. Let’s compare,

  • It is similar to the Virtualbox Seamless Mode or the VMWare Unity mode
  • A VM virtualizes a whole machine and has to do a lot of work in order to provide somewhat good graphics acceleration
  • With a container, we directly reuse the graphics card and get graphics acceleration
  • The specific setup we show today can potentially allow a container app to interact with the desktop apps (TODO: show desktop isolation in a future post)

Browsers have started offering containers, specifically in-browser containers. This shows a trend towards containers in general, although that feature is browser-specific and dictated by usability (passwords, form and search data are shared between those containers).

In the following, our desktop computer will be called the host, and the LXD container simply the container.

Setting up LXD

LXD is supported in Ubuntu and derivatives, as well as other distributions. When you initially set up LXD, you select where to store the containers. See LXD 2.0: Installing and configuring LXD [2/12] about your options. Ideally, if you select to pre-allocate disk space or use a partition, select at least 15GB but preferably more.

If you plan to play games, increase the space by the size of that game. For best results, select ZFS as the storage backend, and place the space on an SSD disk. Also Trying out LXD containers on our Ubuntu may help.

Creating the LXD container

Let’s create the new container for LXD. We are going to call it guiapps, and install Ubuntu 16.04 in it. There are options for other Ubuntu versions, and even other distributions.

$ lxc launch ubuntu:x guiapps
Creating guiapps
Starting guiapps
$ lxc list
+---------------+---------+--------------------+--------+------------+-----------+
|     NAME      |  STATE  |        IPV4        |  IPV6  |    TYPE    | SNAPSHOTS |
+---------------+---------+--------------------+--------+------------+-----------+
| guiapps       | RUNNING | 10.0.185.204(eth0) |        | PERSISTENT | 0         |
+---------------+---------+--------------------+--------+------------+-----------+
$

We created and started an Ubuntu 16.04 (ubuntu:x) container, called guiapps.

Let’s also install our initial testing applications. The first one is xclock, the simplest X11 GUI app. The second is glxinfo, which shows details about graphics acceleration. The third is glxgears, a minimal graphics-accelerated application. The fourth is speaker-test, to test for audio. We will know that our setup works if all four of xclock, glxinfo, glxgears and speaker-test work in the container!

$ lxc exec guiapps -- sudo --login --user ubuntu
ubuntu@guiapps:~$ sudo apt update
ubuntu@guiapps:~$ sudo apt install x11-apps
ubuntu@guiapps:~$ sudo apt install mesa-utils
ubuntu@guiapps:~$ sudo apt install alsa-utils
ubuntu@guiapps:~$ exit
$

We execute a login shell in the guiapps container as user ubuntu, the default non-root user account in all Ubuntu LXD images. Other distribution images probably have another default non-root user account.

Then, we run apt update in order to update the package list and be able to install the subsequent three packages that provide xclock, glxinfo and glxgears, and speaker-test (or aplay). Finally, we exit the container.

Mapping the user ID of the host to the container (PREREQUISITE)

In the following steps we will be sharing files from the host (our desktop) to the container. There is the issue of what user ID will appear in the container for those shared files.

First, we run on the host (only once) the following command (source),

$ echo "root:$UID:1" | sudo tee -a /etc/subuid /etc/subgid
[sudo] password for myusername: 
root:1000:1
$

The command appends a new entry in both the /etc/subuid and /etc/subgid subordinate UID/GID files. It allows the LXD service (runs as root) to remap our user’s ID ($UID, from the host) as requested.

Then, we specify that we want this feature in our guiapps LXD container, and restart the container for the change to take effect.

$ lxc config set guiapps raw.idmap "both $UID 1000"
$ lxc restart guiapps
$

This “both $UID 1000” syntax is a shortcut that means to map the $UID/$GID of our username in the host, to the default non-root username in the container (which should be 1000 for Ubuntu images, at least).
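To double-check that the mapping took effect, we can read the configuration back. A small sketch (the output will show your own UID):

$ lxc config get guiapps raw.idmap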

Configuring graphics and graphics acceleration

For graphics acceleration, we are going to use the host graphics card and graphics acceleration. By default, the applications that run in a container do not have access to the host system and cannot start GUI apps.

We need two things: let the container access the GPU devices of the host, and make sure that there are no restrictions because of different user IDs.

Let’s attempt to run xclock in the container.

$ lxc exec guiapps -- sudo --login --user ubuntu
ubuntu@guiapps:~$ xclock
Error: Can't open display: 
ubuntu@guiapps:~$ export DISPLAY=:0
ubuntu@guiapps:~$ xclock
Error: Can't open display: :0
ubuntu@guiapps:~$ exit
$

We run xclock in the container and, as expected, it does not run because we did not indicate where to send the display. We set the DISPLAY environment variable to the default :0 (send to either a Unix socket or port 6000), which does not work either because we have not fully set things up yet. Let’s do that.

$ lxc config device add guiapps X0 disk path=/tmp/.X11-unix/X0 source=/tmp/.X11-unix/X0 
$ lxc config device add guiapps Xauthority disk path=/home/ubuntu/.Xauthority source=${XAUTHORITY}

We give the container access to the Unix socket of the X server (/tmp/.X11-unix/X0), and make it available at exactly the same path inside the container. In this way, DISPLAY=:0 allows the apps in the container to access our host’s X server through the Unix socket.

Then, we repeat this task with the ~/.Xauthority file that resides in our home directory. This file is for access control, and simply makes our host X server allow access from applications inside that container. On the host, this file can be found through the $XAUTHORITY variable and should be either at ~/.Xauthority or /run/user/1000/gdm/Xauthority. Obviously, we can set the source= part correctly; however, the distribution in the container needs to be able to find the .Xauthority at the given location. If the container is the official Ubuntu, then it should be /home/ubuntu/.Xauthority. Adjust accordingly if you use a different distribution. If something goes wrong in this whole guide, it will most probably be in these two commands above.
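A quick way to see which path to use for source= on your particular host; a minimal sketch:

$ echo ${XAUTHORITY:-$HOME/.Xauthority}

This prints $XAUTHORITY if it is set, and falls back to ~/.Xauthority otherwise.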

How do we get hardware acceleration for the GPU to the container apps? There is a special device for that, and it’s gpu. The hardware acceleration for the graphics card is collectively enabled by running the following,

$ lxc config device add guiapps mygpu gpu
$ lxc config device set guiapps mygpu uid 1000
$ lxc config device set guiapps mygpu gid 1000

We add the gpu device, and we happen to name it mygpu (any name would suffice). In addition to gpu device, we also set the permissions accordingly so that the device is fully accessible in  the container. The gpu device has been introduced in LXD 2.7, therefore if it is not found, you may have to upgrade your LXD according to https://launchpad.net/~ubuntu-lxc/+archive/ubuntu/lxd-stable Please leave a comment below if this was your case (mention what LXD version you have been running). Note that for Intel GPUs (my case), you may not need to add this device.
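We can verify which devices the container now has, along with their settings; a small sketch:

$ lxc config device show guiapps

It should list the X0 and Xauthority disk devices and the mygpu gpu device that we added.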

Let’s see what we got now.

$ lxc exec guiapps -- sudo --login --user ubuntu
ubuntu@guiapps:~$ export DISPLAY=:0
ubuntu@guiapps:~$ xclock

ubuntu@guiapps:~$ glxinfo -B
name of display: :0
display: :0  screen: 0
direct rendering: Yes
Extended renderer info (GLX_MESA_query_renderer):
    Vendor: Intel Open Source Technology Center (0x8086)
...
ubuntu@guiapps:~$ glxgears 

Running synchronized to the vertical refresh.  The framerate should be
approximately the same as the monitor refresh rate.
345 frames in 5.0 seconds = 68.783 FPS
309 frames in 5.0 seconds = 61.699 FPS
300 frames in 5.0 seconds = 60.000 FPS
^C
ubuntu@guiapps:~$ echo "export DISPLAY=:0" >> ~/.profile 
ubuntu@guiapps:~$ exit
$

Looks good, we are good to go! Note that we edited the ~/.profile file in order to set the $DISPLAY variable automatically whenever we connect to the container.

Configuring audio

The audio server on the Ubuntu desktop is PulseAudio, and PulseAudio has a feature that allows authenticated access over the network, just like the X11 server we dealt with earlier. Let’s set this up.

We install the paprefs (PulseAudio Preferences) package on the host.

$ sudo apt install paprefs
...
$ paprefs

This is the only option we need to enable (by default, all the other options are not checked and can remain unchecked).

That is, under the Network Server tab, we tick Enable network access to local sound devices.

Then, just like with the X11 configuration, we need to deal with two things: access to the host’s PulseAudio server (either through a Unix socket or an IP address), and some way to get authorization to access that PulseAudio server. Regarding the Unix socket of the PulseAudio server, it is a bit hit and miss (I could not figure out how to use it reliably), so we are going to use the IP address of the host (the lxdbr0 interface).

First, the IP address of the host (that runs PulseAudio) is the IP of the lxdbr0 interface, which is also the default gateway of the container (ip route show 0/0). Second, the authorization is provided through the cookie on the host at /home/${USER}/.config/pulse/cookie. Let’s connect these two to the container.

$ lxc exec guiapps -- sudo --login --user ubuntu
ubuntu@guiapps:~$ echo export PULSE_SERVER="tcp:`ip route show 0/0 | awk '{print $3}'`" >> ~/.profile

This command will automatically set the variable PULSE_SERVER to a value like tcp:10.0.185.1, which is the IP address of the host, for the lxdbr0 interface. The next time we log in to the container, PULSE_SERVER will be configured properly.

ubuntu@guiapps:~$ mkdir -p ~/.config/pulse/
ubuntu@guiapps:~$ echo export PULSE_COOKIE=/home/ubuntu/.config/pulse/cookie >> ~/.profile
ubuntu@guiapps:~$ exit
$ lxc config device add guiapps PACookie disk path=/home/ubuntu/.config/pulse/cookie source=/home/${USER}/.config/pulse/cookie

Now, this is a tough cookie. By default, the PulseAudio cookie is found at ~/.config/pulse/cookie. The directory tree ~/.config/pulse/ does not exist in the container, and if we do not create it ourselves, LXD will autocreate it with the wrong ownership. So, we create it (mkdir -p), then add the correct PULSE_COOKIE line to the configuration file ~/.profile. Finally, we exit the container and bind-mount the cookie from the host into the container. When we log in to the container again, the cookie variable will be set correctly!
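Optionally, before testing with the speakers, we can verify the connection to the PulseAudio server from inside the container. A minimal sketch, assuming we install the pulseaudio-utils package in the container first:

$ lxc exec guiapps -- sudo --login --user ubuntu
ubuntu@guiapps:~$ sudo apt install pulseaudio-utils
ubuntu@guiapps:~$ pactl info
ubuntu@guiapps:~$ exit

pactl info should report the tcp:10.x.x.1 server it connected to, instead of a connection error.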

Let’s test the audio!

$ lxc exec guiapps -- sudo --login --user ubuntu
ubuntu@pulseaudio:~$ speaker-test -c6 -twav

speaker-test 1.1.0

Playback device is default
Stream parameters are 48000Hz, S16_LE, 6 channels
WAV file(s)
Rate set to 48000Hz (requested 48000Hz)
Buffer size range from 32 to 349525
Period size range from 10 to 116509
Using max buffer size 349524
Periods = 4
was set period_size = 87381
was set buffer_size = 349524
 0 - Front Left
 4 - Center
 1 - Front Right
 3 - Rear Right
 2 - Rear Left
 5 - LFE
Time per period = 8.687798 ^C
ubuntu@pulseaudio:~$

If you do not have 6-channel audio output, you will hear audio on some of the channels only.

Let’s also test with an MP3 file, like that one from https://archive.org/details/testmp3testfile

ubuntu@pulseaudio:~$ sudo apt install mplayer
...
ubuntu@pulseaudio:~$ wget https://archive.org/download/testmp3testfile/mpthreetest.mp3
...
ubuntu@pulseaudio:~$ mplayer mpthreetest.mp3 
MPlayer 1.2.1 (Debian), built with gcc-5.3.1 (C) 2000-2016 MPlayer Team
...
AO: [pulse] 44100Hz 2ch s16le (2 bytes per sample)
Video: no video
Starting playback...
A:   3.7 (03.7) of 12.0 (12.0)  0.2% 

Exiting... (Quit)
ubuntu@pulseaudio:~$

All nice and loud!

Troubleshooting sound issues

AO: [pulse] Init failed: Connection refused

An application tries to connect to a PulseAudio server, but no PulseAudio server is found (either none autodetected, or the one we specified is not really there).

AO: [pulse] Init failed: Access denied

We specified a PulseAudio server, but we do not have access to connect to it. We need a valid cookie.

AO: [pulse] Init failed: Protocol error

You were also trying to make the Unix socket work, but something went wrong. If you can make it work, write a comment below.

Testing with Firefox

Let’s test with Firefox!

ubuntu@guiapps:~$ sudo apt install firefox
...
ubuntu@guiapps:~$ firefox 
Gtk-Message: Failed to load module "canberra-gtk-module"

We get a message that the GTK+ module is missing. Let’s close Firefox, install the module and start Firefox again.

ubuntu@guiapps:~$ sudo apt-get install libcanberra-gtk3-module
ubuntu@guiapps:~$ firefox

Here we are playing a Youtube music video at 1080p. It works as expected. The Firefox session is separated from the host’s Firefox.

Note that the theming is not exactly what you get with Ubuntu. This is due to the container being so lightweight that it does not have any theming support.

The screenshot may look a bit grainy; this is due to some plugin I use in WordPress that does too much compression.

You may notice that no menubar is showing. Just like with Windows, simply press the Alt key for a second, and the menu bar will appear.

Testing with Chromium

Let’s test with Chromium!

ubuntu@guiapps:~$ sudo apt install chromium-browser
ubuntu@guiapps:~$ chromium-browser
Gtk-Message: Failed to load module "canberra-gtk-module"

So, chromium-browser also needs a libcanberra package, and it’s the GTK+ 2 package.

ubuntu@guiapps:~$ sudo apt install libcanberra-gtk-module
ubuntu@guiapps:~$ chromium-browser

There is no menubar and there is no easy way to get to it. The menu on the top-right is available though.

Testing with Chrome

Let’s download Chrome, install it and launch it.

ubuntu@guiapps:~$ wget https://dl.google.com/linux/direct/google-chrome-stable_current_amd64.deb
...
ubuntu@guiapps:~$ sudo dpkg -i google-chrome-stable_current_amd64.deb
...
Errors were encountered while processing:
 google-chrome-stable
ubuntu@guiapps:~$ sudo apt install -f
...
ubuntu@guiapps:~$ google-chrome
[11180:11945:0503/222317.923975:ERROR:object_proxy.cc(583)] Failed to call method: org.freedesktop.UPower.GetDisplayDevice: object_path= /org/freedesktop/UPower: org.freedesktop.DBus.Error.ServiceUnknown: The name org.freedesktop.UPower was not provided by any .service files
[11180:11945:0503/222317.924441:ERROR:object_proxy.cc(583)] Failed to call method: org.freedesktop.UPower.EnumerateDevices: object_path= /org/freedesktop/UPower: org.freedesktop.DBus.Error.ServiceUnknown: The name org.freedesktop.UPower was not provided by any .service files
^C
ubuntu@guiapps:~$ sudo apt install upower
ubuntu@guiapps:~$ google-chrome

There are these two errors regarding UPower and they go away when we install the upower package.

Creating shortcuts to the container apps

If we want to run Firefox from the container, we can simply run

$ lxc exec guiapps -- sudo --login --user ubuntu firefox

and that’s it.

To make a shortcut, we create the following file on the host,

$ cat > ~/.local/share/applications/lxd-firefox.desktop
[Desktop Entry]
Version=1.0
Name=Firefox in LXD
Comment=Access the Internet through an LXD container
Exec=/usr/bin/lxc exec guiapps -- sudo --login --user ubuntu firefox %U
Icon=/usr/share/icons/HighContrast/scalable/apps-extra/firefox-icon.svg
Type=Application
Categories=Network;WebBrowser;
^D
$ chmod +x ~/.local/share/applications/lxd-firefox.desktop

We need to make it executable so that it gets picked up and we can then run it by double-clicking.

If it does not appear immediately in the Dash, use your File Manager to locate the directory ~/.local/share/applications/

This is how the icon looks in a File Manager. The icon comes from the high-contrast set, which, as I now remember, means just two colors 🙁

Here is the app on the Launcher. Simply drag from the File Manager and drop to the Launcher in order to get the app at your fingertips.
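A similar shortcut can be created for the Chromium we installed earlier; a minimal sketch (the Icon line is omitted here, so add one pointing to an icon available on your host if you want):

$ cat > ~/.local/share/applications/lxd-chromium.desktop
[Desktop Entry]
Version=1.0
Name=Chromium in LXD
Comment=Browse the Web through an LXD container
Exec=/usr/bin/lxc exec guiapps -- sudo --login --user ubuntu chromium-browser %U
Type=Application
Categories=Network;WebBrowser;
^D
$ chmod +x ~/.local/share/applications/lxd-chromium.desktop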

I hope the tutorial was useful. We explained the commands in detail. In a future tutorial, we are going to try to figure out how to automate these!

Permanent link to this article: https://blog.simos.info/how-to-run-graphics-accelerated-gui-apps-in-lxd-containers-on-your-ubuntu-desktop/

May 01 2017

How to run Wine (graphics-accelerated) in an LXD container on Ubuntu

Update #1: Added info about adding the gpu configuration device to the container, for hardware acceleration to work (required for some users).

Update #2: Added info about setting the permissions for the gpu device.

Wine lets you run Windows programs on your GNU/Linux distribution.

When you install Wine, it adds all sorts of packages, including 32-bit packages. It looks quite messy; could there be a way to place all those Wine files in a container and keep them there?

This is what we are going to see today. Specifically,

  1. We are going to create an LXD container, called wine-games
  2. We are going to set it up so that it runs graphics-accelerated programs. glxinfo will show the host GPU details.
  3. We are going to install the latest Wine package.
  4. We are going to install and play one of those Windows games.

Creating the LXD container

Let’s create the new container for LXD. If this is the first time you use LXD, have a look at Trying out LXD containers on our Ubuntu.

$ lxc launch ubuntu:x wine-games
Creating wine-games
Starting wine-games
$ lxc list
+---------------+---------+--------------------+--------+------------+-----------+
|     NAME      |  STATE  |        IPV4        |  IPV6  |    TYPE    | SNAPSHOTS |
+---------------+---------+--------------------+--------+------------+-----------+
| wine-games    | RUNNING | 10.0.185.63 (eth0) |        | PERSISTENT | 0         |
+---------------+---------+--------------------+--------+------------+-----------+
$

We created and started an Ubuntu 16.04 (ubuntu:x) container, called wine-games.

Let’s also install our initial testing applications. The first one is xclock, the simplest X11 GUI app. The second is glxinfo, which shows details about graphics acceleration. We will know that our setup for Wine will work if both xclock and glxinfo work in the container!

$ lxc exec wine-games -- sudo --login --user ubuntu
ubuntu@wine-games:~$ sudo apt update
ubuntu@wine-games:~$ sudo apt install x11-apps
ubuntu@wine-games:~$ sudo apt install mesa-utils
ubuntu@wine-games:~$ exit
$

We execute a login shell in the wine-games container as user ubuntu, the default non-root username in Ubuntu LXD images.

Then, we run apt update in order to update the package list and be able to install the subsequent two packages that provide xclock and glxinfo respectively. Finally, we exit the container.

Setting up for graphics acceleration

For graphics acceleration, we are going to use the host graphics card and graphics acceleration. By default, the applications that run in a container do not have access to the host system and cannot start GUI apps.

We need two things: let the container access the GPU devices of the host, and make sure that there are no restrictions because of different user IDs.

First, we run (only once) the following command (source),

$ echo "root:$UID:1" | sudo tee -a /etc/subuid /etc/subgid
[sudo] password for myusername: 
root:1000:1
$

The command adds a new entry in both the /etc/subuid and /etc/subgid subordinate UID/GID files. It allows the LXD service (runs as root) to remap our user’s ID ($UID, from the host) as requested.

Then, we specify that we want this feature in our wine-games LXD container, and restart the container for the change to take effect.

$ lxc config set wine-games raw.idmap "both $UID 1000"
$ lxc restart wine-games
$

This “both $UID 1000” syntax is a shortcut that means to map the $UID/$GID of our username in the host, to the default non-root username in the container (which should be 1000 for Ubuntu images, at least).

Let’s attempt to run xclock in the container.

$ lxc exec wine-games -- sudo --login --user ubuntu
ubuntu@wine-games:~$ xclock
Error: Can't open display: 
ubuntu@wine-games:~$ export DISPLAY=:0
ubuntu@wine-games:~$ xclock
Error: Can't open display: :0
ubuntu@wine-games:~$ exit
$

We run xclock in the container and, as expected, it does not run because we did not indicate where to send the display. We set the DISPLAY environment variable to the default :0 (send to either a Unix socket or port 6000), which does not work either because we have not fully set things up yet. Let’s do that.

$ lxc config device add wine-games X0 disk path=/tmp/.X11-unix/X0 source=/tmp/.X11-unix/X0 
$ lxc config device add wine-games Xauthority disk path=/home/ubuntu/.Xauthority source=/home/MYUSERNAME/.Xauthority

We give the container access to the Unix socket of the X server (/tmp/.X11-unix/X0), and make it available at exactly the same path inside the container. In this way, DISPLAY=:0 allows the apps in the container to access our host’s X server through the Unix socket.

Then, we repeat this task with the ~/.Xauthority file that resides in our home directory. This file is for access control, and simply makes our host X server to allow the access from applications inside that container.

How do we get hardware acceleration for the GPU to the container apps? There is a special device for that, and it’s gpu. The hardware acceleration for the graphics card is collectively enabled by running the following,

$ lxc config device add wine-games mygpu gpu
$ lxc config device set wine-games mygpu uid 1000
$ lxc config device set wine-games mygpu gid 1000

We add the gpu device, and we happen to name it mygpu (any name would suffice). [UPDATED] In addition, we set the uid/gid of the gpu device to 1000 (the default uid/gid of the first non-root account on Ubuntu; adapt accordingly on other distributions). The gpu device has been introduced in LXD 2.7, therefore if it is not found, you may have to upgrade your LXD according to https://launchpad.net/~ubuntu-lxc/+archive/ubuntu/lxd-stable Please leave a comment below if this was your case (mention what LXD version you have been running).

Let’s see what we got now.

$ lxc exec wine-games -- sudo --login --user ubuntu
ubuntu@wine-games:~$ export DISPLAY=:0
ubuntu@wine-games:~$ xclock

ubuntu@wine-games:~$ glxinfo 
name of display: :0
display: :0  screen: 0
direct rendering: Yes
server glx vendor string: SGI
server glx version string: 1.4
...
ubuntu@wine-games:~$ echo "export DISPLAY=:0" >> ~/.profile 
ubuntu@wine-games:~$ exit
$

Looks good, we are good to go! Note that we edited the ~/.profile file in order to set the $DISPLAY variable automatically whenever we connect to the container.
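Since the container works at this point, it is a good moment to take a snapshot of the clean state, so we can roll back if the Wine installation goes wrong. A small sketch (the snapshot name clean-before-wine is arbitrary):

$ lxc snapshot wine-games clean-before-wine
$ lxc info wine-games            # the snapshot is listed at the bottom

To roll back later, we would run lxc restore wine-games clean-before-wine.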

Installing Wine

We install Wine in the container according to the instructions at https://wiki.winehq.org/Ubuntu.

$ lxc exec wine-games -- sudo --login --user ubuntu
ubuntu@wine-games:~$ sudo dpkg --add-architecture i386 
ubuntu@wine-games:~$ wget https://dl.winehq.org/wine-builds/Release.key
--2017-05-01 21:30:14--  https://dl.winehq.org/wine-builds/Release.key
Resolving dl.winehq.org (dl.winehq.org)... 151.101.112.69
Connecting to dl.winehq.org (dl.winehq.org)|151.101.112.69|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 3122 (3.0K) [application/pgp-keys]
Saving to: ‘Release.key’

Release.key                100%[=====================================>]   3.05K  --.-KB/s    in 0s      

2017-05-01 21:30:15 (24.9 MB/s) - ‘Release.key’ saved [3122/3122]

ubuntu@wine-games:~$ sudo apt-key add Release.key
OK
ubuntu@wine-games:~$ sudo apt-add-repository https://dl.winehq.org/wine-builds/ubuntu/
ubuntu@wine-games:~$ sudo apt-get update
...
Reading package lists... Done
ubuntu@wine-games:~$ sudo apt-get install --install-recommends winehq-devel
...
Need to get 115 MB of archives.
After this operation, 715 MB of additional disk space will be used.
Do you want to continue? [Y/n] Y
...
ubuntu@wine-games:~$

715MB?!? Sure, bring it on. Whatever is installed in the container, stays in the container! 🙂

Let’s run a game in the container

Here is a game that looks good for our test, Season Match 4. Let’s play it.

ubuntu@wine-games:~$ wget http://cdn.gametop.com/free-games-download/Season-Match4.exe
ubuntu@wine-games:~$ wine Season-Match4.exe 
...
ubuntu@wine-games:~$ cd .wine/drive_c/Program\ Files\ \(x86\)/GameTop.com/Season\ Match\ 4/
ubuntu@wine-games:~/.wine/drive_c/Program Files (x86)/GameTop.com/Season Match 4$ wine SeasonMatch4.exe

Here is the game, and it works. It runs full screen and it is a bit weird to navigate between windows. The animations, though, are smooth.

We did not set up sound either in this post, nor did we make nice shortcuts so that we can run these apps with a single click. That’s material for a future tutorial!

Permanent link to this article: https://blog.simos.info/how-to-run-wine-graphics-accelerated-in-an-lxd-container-on-ubuntu/

Apr 28 2017

A closer look at the new ARM64 Scaleway servers and LXD

Update #1: I posted at the Scaleway Linux kernel discussion thread to add support for the Ubuntu Linux kernel and Add new bootscript with stock Ubuntu Linux kernel #349.

Scaleway has been offering ARM (armv7) cloud servers (baremetal) since 2015 and now they have ARM64 (armv8, from Cavium) cloud servers (through KVM, not baremetal).

But can you run LXD on them? Let’s see.

Launching a new server

We go through the management panel and select to create a new server. At the moment, only the Paris datacenter has availability of ARM64 servers and we select ARM64-2GB.

They use Cavium ThunderX hardware, and those boards have up to 48 cores. You can allocate either 2, 4, or 8 cores, for 2GB, 4GB, and 8GB RAM respectively. KVM is the virtualization platform.

There is an option of either Ubuntu 16.04 or Debian Jessie. We try Ubuntu.

It takes under a minute to provision and boot the server.

Connecting to the server

It runs Linux 4.9.23. Also, the disk is vda, specifically, /dev/vda. That is, there is no partitioning and the filesystem takes over the whole device.

Here are /proc/cpuinfo and uname -a. These are the two cores (out of 48) as provided by KVM. The BogoMIPS are really bogo on these platforms, so do not take them at face value.

Currently, Scaleway does not have their own mirror of the distribution packages but use ports.ubuntu.com. It’s 16ms away (ping time).

Depending on where you are, the ping times for google.com and www.google.com tend to be different. google.com redirects to www.google.com, so it somewhat makes sense that google.com reacts faster. At other locations (different country), could be the other way round.

This is /var/log/auth.log, and already there are some hackers trying to brute-force SSH. They have been trying with username ubnt. Note to self: do not use ubnt as the username for the non-root account.

The default configuration for the SSH server on Scaleway is to allow password authentication. You need to change this at /etc/ssh/sshd_config to look like

# Change to no to disable tunnelled clear text passwords
PasswordAuthentication no

Originally, it was commented out, and had a default yes.

Finally, run

sudo systemctl reload sshd

This will not break your existing SSH session (even restart will not break your existing SSH session, how cool is that?). Now, you can create your non-root account. To get that user to sudo as root, you need to usermod -a -G sudo myusername.

There is a recovery console, accessible through the Web management screen. For this to work, it says that "You must first login and set a password via SSH to use this serial console". In reality, the root account already has a password set, and this password is stored in /root/.pw. It is not known how good this password is; therefore, when you boot a cloud server on Scaleway,

  1. Disable PasswordAuthentication for SSH as shown above and reload the sshd configuration. You are supposed to have already added your SSH public key in the Scaleway Web management screen BEFORE starting the cloud server.
  2. Change the root password so that it is not the one found at /root/.pw. Store somewhere safe that password, because it is needed if you want to connect through the recovery console
  3. Create a non-root user that can sudo and can do PubkeyAuthentication, preferably with a username other than ubnt (a sketch of steps 2 and 3 follows below).
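A minimal sketch of steps 2 and 3, run as root (myusername is a placeholder):

passwd root                      # set a new root password; store it safely, it is needed for the recovery console
adduser myusername               # create the non-root account
usermod -a -G sudo myusername    # allow it to sudo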

Setting up ZFS support

The Ubuntu Linux kernels at Scaleway do not have ZFS support and you need to compile as a kernel module according to the instructions at https://github.com/scaleway/kernel-tools.

Actually, those instructions are apparently obsolete now with newer versions of the Linux kernel, and you need to compile and install both spl and zfs manually.

Naturally, when you compile spl and zfs, you can create .deb packages that can be installed in a nice and clean way. However, spl and zfs originally create .rpm packages and then call alien to convert them to .deb packages. Then we hit some alien bug (no pun intended) which gives the error "zfs-0.6.5.9-1.aarch64.rpm is for architecture aarch64 ; the package cannot be built on this system", which is weird since we are only working on aarch64.

The running Linux kernel on Scaleway for these ARM64 SoCs has its important files at http://mirror.scaleway.com/kernel/aarch64/4.9.23-std-1/

Therefore, run as root the following:

# Determine versions
arch="$(uname -m)"
release="$(uname -r)"
upstream="${release%%-*}"
local="${release#*-}"

# Get kernel sources
mkdir -p /usr/src
wget -O "/usr/src/linux-${upstream}.tar.xz" "https://cdn.kernel.org/pub/linux/kernel/v4.x/linux-${upstream}.tar.xz"
tar xf "/usr/src/linux-${upstream}.tar.xz" -C /usr/src/
ln -fns "/usr/src/linux-${upstream}" /usr/src/linux
ln -fns "/usr/src/linux-${upstream}" "/lib/modules/${release}/build"

# Get the kernel's .config and Module.symvers files
wget -O "/usr/src/linux/.config" "http://mirror.scaleway.com/kernel/${arch}/${release}/config"
wget -O /usr/src/linux/Module.symvers "http://mirror.scaleway.com/kernel/${arch}/${release}/Module.symvers"

# Set the LOCALVERSION to the locally running local version (or edit the file manually)
printf 'CONFIG_LOCALVERSION="%s"\n' "${local:+-$local}" >> /usr/src/linux/.config

# Let's get ready to compile. The following are essential for the kernel module compilation.
apt install -y build-essential
apt install -y libssl-dev
make -C /usr/src/linux prepare modules_prepare

# Now, let's grab the latest spl and zfs (see http://zfsonlinux.org/).
cd /usr/src/
wget https://github.com/zfsonlinux/zfs/releases/download/zfs-0.6.5.9/spl-0.6.5.9.tar.gz
wget https://github.com/zfsonlinux/zfs/releases/download/zfs-0.6.5.9/zfs-0.6.5.9.tar.gz

# Install some dev packages that are needed for spl and zfs,
apt install -y uuid-dev
apt install -y dh-autoreconf
# Let's do spl first
tar xvfa spl-0.6.5.9.tar.gz
cd spl-0.6.5.9/
./autogen.sh
./configure      # Takes about 2 minutes
make             # Takes about 1:10 minutes
make install
cd ..

# Let's do zfs next
tar xvfa zfs-0.6.5.9.tar.gz
cd zfs-0.6.5.9/
./autogen.sh
./configure      # Takes about 6:10 minutes
make             # Takes about 13:20 minutes
make install

# Let's get ZFS loaded
depmod -a
ldconfig
modprobe zfs
zfs list
zpool list

And that’s it! The last two commands will show that there are no datasets or pools available (yet), meaning that it all works.

Setting up LXD

We are going to use a file (a ZFS loop file) as the storage. Let’s check what space we have left for this (out of the 50GB disk),

root@scw-ubuntu-arm64:~# df -h /
Filesystem      Size  Used Avail Use% Mounted on
/dev/vda         46G  2.0G   42G   5% /

Initially, only 800MB was used; now 2GB is used. Let’s allocate 30GB for LXD.

LXD is not already installed on the Scaleway image (other VPS providers already have LXD installed). Therefore,

apt install lxd

Then, we can run lxd init. There is a weird situation when you run lxd init. It takes quite some time for this command to show the first questions (choose storage backend, etc). In fact, it takes 1:42 minutes before you are prompted for the first question. When you subsequently run lxd init, you get at once the first question. There is quite some work that lxd init does for the first time, and I did not look into what it is.

root@scw-ubuntu-arm64:~# lxd init
Name of the storage backend to use (dir or zfs) [default=zfs]: 
Create a new ZFS pool (yes/no) [default=yes]? 
Name of the new ZFS pool [default=lxd]: 
Would you like to use an existing block device (yes/no) [default=no]? 
Size in GB of the new loop device (1GB minimum) [default=15]: 30
Would you like LXD to be available over the network (yes/no) [default=no]? 
Do you want to configure the LXD bridge (yes/no) [default=yes]? 
Warning: Stopping lxd.service, but it can still be activated by:
  lxd.socket

LXD has been successfully configured.
root@scw-ubuntu-arm64:~#

Now, let’s run lxc list. This will first create the client certificate. There is quite a bit of cryptography going on, and it takes a lot of time.

ubuntu@scw-ubuntu-arm64:~$ time lxc list
Generating a client certificate. This may take a minute...
If this is your first time using LXD, you should also run: sudo lxd init
To start your first container, try: lxc launch ubuntu:16.04

+------+-------+------+------+------+-----------+
| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
+------+-------+------+------+------+-----------+

real    5m25.717s
user    5m25.460s
sys    0m0.372s
ubuntu@scw-ubuntu-arm64:~$

Five and a half minutes for a simple lxc list is weird and warrants closer examination. In any case, the available entropy looks sufficient, so that does not appear to be the culprit,

ubuntu@scw-ubuntu-arm64:~$ cat /proc/sys/kernel/random/entropy_avail
2446
ubuntu@scw-ubuntu-arm64:~$

Creating containers

Let’s create a container. We are going to do each step at a time, in order to measure the time it takes to complete.

ubuntu@scw-ubuntu-arm64:~$ time lxc image copy ubuntu:x local:
Image copied successfully!         

real    1m5.151s
user    0m1.244s
sys    0m0.200s
ubuntu@scw-ubuntu-arm64:~$

Out of the 65 seconds, 25 seconds was the time to download the image and the rest (40 seconds) was for initialization before the prompt was returned.

Let’s see how long it takes to launch a container.

ubuntu@scw-ubuntu-arm64:~$ time lxc launch ubuntu:x c1
Creating c1
Starting c1
error: Error calling 'lxd forkstart c1 /var/lib/lxd/containers /var/log/lxd/c1/lxc.conf': err='exit status 1'
  lxc 20170428125239.730 ERROR lxc_apparmor - lsm/apparmor.c:apparmor_process_label_set:220 - If you really want to start this container, set
  lxc 20170428125239.730 ERROR lxc_apparmor - lsm/apparmor.c:apparmor_process_label_set:221 - lxc.aa_allow_incomplete = 1
  lxc 20170428125239.730 ERROR lxc_apparmor - lsm/apparmor.c:apparmor_process_label_set:222 - in your container configuration file
  lxc 20170428125239.730 ERROR lxc_sync - sync.c:__sync_wait:57 - An error occurred in another process (expected sequence number 5)
  lxc 20170428125239.730 ERROR lxc_start - start.c:__lxc_start:1346 - Failed to spawn container "c1".
  lxc 20170428125240.408 ERROR lxc_conf - conf.c:run_buffer:405 - Script exited with status 1.
  lxc 20170428125240.408 ERROR lxc_start - start.c:lxc_fini:546 - Failed to run lxc.hook.post-stop for container "c1".

Try `lxc info --show-log local:c1` for more info

real    0m21.347s
user    0m0.040s
sys    0m0.048s
ubuntu@scw-ubuntu-arm64:~$

What this means is that the Scaleway Linux kernel does not have all the AppArmor (“aa”) features that LXD requires. If we want to continue, we must explicitly configure the container to state that we are OK with this incomplete support.

What features are missing?

ubuntu@scw-ubuntu-arm64:~$ lxc info --show-log local:c1
Name: c1
Remote: unix:/var/lib/lxd/unix.socket
Architecture: aarch64
Created: 2017/04/28 12:52 UTC
Status: Stopped
Type: persistent
Profiles: default

Log:

            lxc 20170428125239.730 WARN     lxc_apparmor - lsm/apparmor.c:apparmor_process_label_set:218 - Incomplete AppArmor support in your kernel
            lxc 20170428125239.730 ERROR    lxc_apparmor - lsm/apparmor.c:apparmor_process_label_set:220 - If you really want to start this container, set
            lxc 20170428125239.730 ERROR    lxc_apparmor - lsm/apparmor.c:apparmor_process_label_set:221 - lxc.aa_allow_incomplete = 1
            lxc 20170428125239.730 ERROR    lxc_apparmor - lsm/apparmor.c:apparmor_process_label_set:222 - in your container configuration file
            lxc 20170428125239.730 ERROR    lxc_sync - sync.c:__sync_wait:57 - An error occurred in another process (expected sequence number 5)
            lxc 20170428125239.730 ERROR    lxc_start - start.c:__lxc_start:1346 - Failed to spawn container "c1".
            lxc 20170428125240.408 ERROR    lxc_conf - conf.c:run_buffer:405 - Script exited with status 1.
            lxc 20170428125240.408 ERROR    lxc_start - start.c:lxc_fini:546 - Failed to run lxc.hook.post-stop for container "c1".
            lxc 20170428125240.409 WARN     lxc_commands - commands.c:lxc_cmd_rsp_recv:172 - Command get_cgroup failed to receive response: Connection reset by peer.
            lxc 20170428125240.409 WARN     lxc_commands - commands.c:lxc_cmd_rsp_recv:172 - Command get_cgroup failed to receive response: Connection reset by peer.

ubuntu@scw-ubuntu-arm64:~$

There are two hints here: some issue with process_label_set, and another with get_cgroup.
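
If we wanted to dig deeper before working around it, we could compare what the running kernel exposes under /sys/kernel/security/apparmor/features/ with a stock Ubuntu kernel; as far as I can tell, the “incomplete support” warning refers to missing mount mediation. A quick peek (the directory only exists when AppArmor is enabled at all),

ubuntu@scw-ubuntu-arm64:~$ sudo ls /sys/kernel/security/apparmor/features/

On a stock Ubuntu kernel this lists subdirectories such as domain, file and policy; comparing the two listings should show what the Scaleway kernel is missing.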

Let’s allow the incomplete AppArmor support for now, and start the container,

ubuntu@scw-ubuntu-arm64:~$ lxc config set c1 raw.lxc 'lxc.aa_allow_incomplete=1'
ubuntu@scw-ubuntu-arm64:~$ time lxc start c1

real    0m0.577s
user    0m0.016s
sys    0m0.012s
ubuntu@scw-ubuntu-arm64:~$ lxc list
+------+---------+------+------+------------+-----------+
| NAME |  STATE  | IPV4 | IPV6 |    TYPE    | SNAPSHOTS |
+------+---------+------+------+------------+-----------+
| c1   | RUNNING |      |      | PERSISTENT | 0         |
+------+---------+------+------+------------+-----------+
ubuntu@scw-ubuntu-arm64:~$ lxc list
+------+---------+-----------------------+------+------------+-----------+
| NAME |  STATE  |         IPV4          | IPV6 |    TYPE    | SNAPSHOTS |
+------+---------+-----------------------+------+------------+-----------+
| c1   | RUNNING | 10.237.125.217 (eth0) |      | PERSISTENT | 0         |
+------+---------+-----------------------+------+------------+-----------+
ubuntu@scw-ubuntu-arm64:~$

Let’s run nginx in the container.

ubuntu@scw-ubuntu-arm64:~$ lxc exec c1 -- sudo --login --user ubuntu
To run a command as administrator (user "root"), use "sudo <command>".
See "man sudo_root" for details.

ubuntu@c1:~$ sudo apt update
Hit:1 http://ports.ubuntu.com/ubuntu-ports xenial InRelease
...
37 packages can be upgraded. Run 'apt list --upgradable' to see them.
ubuntu@c1:~$ sudo apt install nginx
...
ubuntu@c1:~$ exit
ubuntu@scw-ubuntu-arm64:~$ curl http://10.237.125.217/
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
...
ubuntu@scw-ubuntu-arm64:~$

That’s it! We are running LXD on the new Scaleway ARM64 servers. The issues above (the slow first-time initialization and the incomplete AppArmor support) should be fixed in order to have a nicer user experience.

Permanent link to this article: https://blog.simos.info/a-closer-look-at-the-new-arm64-scaleway-servers-and-lxd/

Mar 15 2017

How to initialize LXD again

LXD is the pure-container hypervisor that is pre-installed in Ubuntu 16.04 (or newer) and also available in other GNU/Linux distributions.

When you first configure LXD, you need to make important decisions. Decisions that relate to where you are storing the containers, how big that space will be and also how to set up networking.

In this post we are going to see how to properly clean up LXD with the aim to initialize it again (lxd init).

If you haven’t used LXD at all yet, have a look first at how to set up LXD on your desktop, and then come back so that we can reinitialize it together.

Before initializing again, let’s have a look as to what is going on on our system.

What LXD packages have we got installed?

LXD comes in two packages, the lxd package for the hypervisor and the lxd-client for the client utility. There is an extra package, lxd-tools, however this one is not essential at all.

Let’s check which versions we have installed.

$ apt policy lxd lxd-client
lxd:
  Installed: 2.0.9-0ubuntu1~16.04.2
  Candidate: 2.0.9-0ubuntu1~16.04.2
  Version table:
 *** 2.0.9-0ubuntu1~16.04.2 500
        500 http://gb.archive.ubuntu.com/ubuntu xenial-updates/main amd64 Packages
        100 /var/lib/dpkg/status
     2.0.2-0ubuntu1~16.04.1 500
        500 http://security.ubuntu.com/ubuntu xenial-security/main amd64 Packages
     2.0.0-0ubuntu4 500
        500 http://gb.archive.ubuntu.com/ubuntu xenial/main amd64 Packages
lxd-client:
  Installed: 2.0.9-0ubuntu1~16.04.2
  Candidate: 2.0.9-0ubuntu1~16.04.2
  Version table:
 *** 2.0.9-0ubuntu1~16.04.2 500
        500 http://gb.archive.ubuntu.com/ubuntu xenial-updates/main amd64 Packages
        100 /var/lib/dpkg/status
     2.0.2-0ubuntu1~16.04.1 500
        500 http://security.ubuntu.com/ubuntu xenial-security/main amd64 Packages
     2.0.0-0ubuntu4 500
        500 http://gb.archive.ubuntu.com/ubuntu xenial/main amd64 Packages
$ _

I am running Ubuntu 16.04 LTS, currently updated to 16.04.2. The current version of the LXD package is 2.0.9-0ubuntu1~16.04.2. You can see that there is an older version, which was a security update. And an even older version, version 2.0.0, which was the initial version that Ubuntu 16.04 was released with.

There is a PPA that has even more recent versions of LXD (currently at version 2.11), however as it is shown above, we do not have that one enabled here.

We will be uninstalling those two packages in a bit. There is an option to simply uninstall them, or to uninstall with --purge. We need to figure out what LXD keeps on disk in terms of installed files, in order to decide whether to purge or not.
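
For reference (we are not running these just yet), this is what the two options look like, along with a standard way to list the files a package installed,

$ dpkg -L lxd                       # list the files installed by the lxd package
$ sudo apt remove lxd lxd-client    # uninstall, keeping configuration files around
$ sudo apt purge lxd lxd-client     # uninstall and also remove the configuration files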

How are the containers stored and where are they located?

The containers can be stored either

  1. in subdirectories on the root (/) filesystem, located at /var/lib/lxd/containers/. You get this when you configure LXD to use the dir storage backend.
  2. in a loop file that is formatted internally with the ZFS filesystem, located at /var/lib/lxd/zfs.img (or under /var/lib/lxd/disks/ in newer versions). You get this when you configure LXD to use the zfs storage backend (on a loop file and not a block device).
  3. in a block device (partition) that is formatted with ZFS (or btrfs). You get this when you configure LXD to use the zfs storage backend (not on a loop file but on a block device).

Let’s see all three cases!

In the following we assume we have a container called mytest, which is running.

$ lxc list
+--------+---------+----------------------+------+------------+-----------+
|  NAME  |  STATE  |         IPV4         | IPV6 |    TYPE    | SNAPSHOTS |
+--------+---------+----------------------+------+------------+-----------+
| mytest | RUNNING | 10.177.65.166 (eth0) |      | PERSISTENT | 0         |
+--------+---------+----------------------+------+------------+-----------+

Let’s see how it looks depending on the type of the storage backend.

Storage backend: dir

Let’s see the config!

$ lxc config show
config: {}
$ _

We are looking for configuration that refers to storage. We do not see any, therefore, this installation uses the dir storage backend.

Where are the files for the mytest container stored?

$ sudo ls -l /var/lib/lxd/containers/
total 8
drwxr-xr-x+ 4 165536 165536 4096 Μάρ  15 23:28 mytest
$ sudo ls -l /var/lib/lxd/containers/mytest/
total 12
-rw-r--r--  1 root   root   1566 Μάρ   8 05:16 metadata.yaml
drwxr-xr-x 22 165536 165536 4096 Μάρ  15 23:28 rootfs
drwxr-xr-x  2 root   root   4096 Μάρ   8 05:16 templates
$ _

Each container can be found in /var/lib/lxd/containers/, in a subdirectory with the same name as the container.

Inside there, in the rootfs/ directory we can find the filesystem of the container.
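
For example, a quick peek inside rootfs/ of the mytest container should show the usual top-level directories of a Linux filesystem (bin, etc, home, usr, var and so on),

$ sudo ls /var/lib/lxd/containers/mytest/rootfs/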

Storage backend: zfs

Let’s see what the config looks like!

$ lxc config show
config:
  storage.zfs_pool_name: lxd
$

Okay, we are using ZFS for the storage backend. It is not clear yet whether we are using a loop file or a block device. How do we find that? With zpool status.

$ sudo zpool status
  pool: lxd
 state: ONLINE
  scan: none requested
config:

    NAME                    STATE     READ WRITE CKSUM
    lxd                     ONLINE       0     0     0
      /var/lib/lxd/zfs.img  ONLINE       0     0     0

errors: No known data errors

In the above example, the ZFS filesystem is stored in a loop file, located at /var/lib/lxd/zfs.img

However, in the following example,

$ sudo zpool status
  pool: lxd
 state: ONLINE
  scan: none requested
config:

    NAME        STATE     READ WRITE CKSUM
    lxd         ONLINE       0     0     0
      sda8      ONLINE       0     0     0

errors: No known data errors

the ZFS filesystem is located in a block device, in /dev/sda8.

Here is how the container files look with ZFS (either on a loop file or on a block device),

$ sudo ls -l /var/lib/lxd/containers/
total 5
lrwxrwxrwx 1 root   root     34 Mar 15 23:43 mytest -> /var/lib/lxd/containers/mytest.zfs
drwxr-xr-x 4 165536 165536    5 Mar 15 23:43 mytest.zfs
$ sudo ls -l /var/lib/lxd/containers/mytest/
total 4
-rw-r--r--  1 root   root   1566 Mar  8 05:16 metadata.yaml
drwxr-xr-x 22 165536 165536   22 Mar 15 23:43 rootfs
drwxr-xr-x  2 root   root      8 Mar  8 05:16 templates
$ mount | grep mytest.zfs
lxd/containers/mytest on /var/lib/lxd/containers/mytest.zfs type zfs (rw,relatime,xattr,noacl)
$ _

How to clean up the storage backend

When we try to run lxd init without cleaning up our storage, we get the following error,

$ lxd init
LXD init cannot be used at this time.
However if all you want to do is reconfigure the network,
you can still do so by running "sudo dpkg-reconfigure -p medium lxd"

error: You have existing containers or images. lxd init requires an empty LXD.
$ _

Yep, we need to clean up both the containers and any cached images.

Cleaning up the containers

We are going to list the containers, then stop them, and finally delete them. Until the list is empty.

$ lxc list
+--------+---------+----------------------+------+------------+-----------+
|  NAME  |  STATE  |         IPV4         | IPV6 |    TYPE    | SNAPSHOTS |
+--------+---------+----------------------+------+------------+-----------+
| mytest | RUNNING | 10.177.65.205 (eth0) |      | PERSISTENT | 0         |
+--------+---------+----------------------+------+------------+-----------+
$ lxc stop mytest
$ lxc delete mytest
$ lxc list
+------+-------+------+------+------+-----------+
| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
+------+-------+------+------+------+-----------+
$ _

It’s empty now!
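
If there are many containers, a small loop saves some typing. This is a minimal sketch, assuming a recent enough lxc client that supports csv output for lxc list; the --force flag also stops a container that is still running before deleting it,

$ for c in $(lxc list --format csv -c n); do lxc delete --force "$c"; done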

Cleaning up the images

We are going to list the cached images, then delete them. Until the list is empty!

$ lxc image list
+-------+--------------+--------+---------------------------------------------+--------+----------+-------------------------------+
| ALIAS | FINGERPRINT  | PUBLIC |                 DESCRIPTION                 |  ARCH  |   SIZE   |          UPLOAD DATE          |
+-------+--------------+--------+---------------------------------------------+--------+----------+-------------------------------+
|       | 2cab90c0c342 | no     | ubuntu 16.04 LTS amd64 (release) (20170307) | x86_64 | 146.32MB | Mar 15, 2017 at 10:02pm (UTC) |
+-------+--------------+--------+---------------------------------------------+--------+----------+-------------------------------+
$ lxc image delete 2cab90c0c342
$ lxc image list
+-------+-------------+--------+-------------+------+------+-------------+
| ALIAS | FINGERPRINT | PUBLIC | DESCRIPTION | ARCH | SIZE | UPLOAD DATE |
+-------+-------------+--------+-------------+------+------+-------------+
$ _
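
The same trick works for the cached images (again, assuming an lxc client that supports csv output; the f column is the image fingerprint),

$ for f in $(lxc image list --format csv -c f); do lxc image delete "$f"; done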

Cleaning up the ZFS storage

If we are using ZFS, here is how we clean up the ZFS pool.

First, we need to remove any reference to the ZFS pool from the LXD configuration. We just need to unset the configuration directive storage.zfs_pool_name.

$ lxc config show
config:
  storage.zfs_pool_name: lxd
$ lxc config unset storage.zfs_pool_name
$ lxc config show
config: {}
$ _

Then, we can destroy the ZFS pool.

$ sudo zpool list
NAME   SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
lxd   2,78G   664K  2,78G         -     7%     0%  1.00x  ONLINE  -
$ sudo zfs list
NAME             USED  AVAIL  REFER  MOUNTPOINT
lxd              544K  2,69G    19K  none
lxd/containers    19K  2,69G    19K  none
lxd/images        19K  2,69G    19K  none
$ sudo zpool destroy lxd
$ sudo zpool list
no pools available
$ sudo zfs list
no datasets available
$ _

Running “lxd init” again

At this point we are able to run lxd init once more in order to initialize LXD from scratch.
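
As a quick recap, the whole cleanup boils down to the following commands from this post, using the example container and image from above (the two ZFS commands apply only to the zfs storage backend),

$ lxc stop mytest
$ lxc delete mytest
$ lxc image delete 2cab90c0c342
$ lxc config unset storage.zfs_pool_name   # zfs backend only
$ sudo zpool destroy lxd                   # zfs backend only
$ sudo lxd init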

Common errors

Here is a collection of errors that I encountered when running lxd init. These errors can appear if we did not clean up properly, as described earlier in this post.

I had been trying lots of variations, including different versions of LXD. You probably need to try hard to get these errors.

error: Provided ZFS pool (or dataset) isn’t empty

Here is how it looks:

$ sudo lxd init
Name of the storage backend to use (dir or zfs) [default=zfs]: zfs
Create a new ZFS pool (yes/no) [default=yes]? no
Name of the existing ZFS pool or dataset: lxd
Would you like LXD to be available over the network (yes/no) [default=no]? no
Do you want to configure the LXD bridge (yes/no) [default=yes]? no
error: Provided ZFS pool (or dataset) isn't empty
Exit 1

Whaaaat??? Something is wrong. The ZFS pool is not empty? What’s inside the ZFS pool?

$ sudo zfs list
NAME             USED  AVAIL  REFER  MOUNTPOINT
lxd              642K  14,4G    19K  none
lxd/containers    19K  14,4G    19K  none
lxd/images        19K  14,4G    19K  none

Okay, it’s just the two datasets that are left over. Let’s destroy them!

$ sudo zfs destroy lxd/containers
$ sudo zfs destroy lxd/images
$ sudo zfs list
NAME   USED  AVAIL  REFER  MOUNTPOINT
lxd    349K  14,4G    19K  none
$ _

Nice! Let’s now run lxd init.

$ sudo lxd init
Name of the storage backend to use (dir or zfs) [default=zfs]: zfs
Create a new ZFS pool (yes/no) [default=yes]? no
Name of the existing ZFS pool or dataset: lxd
Would you like LXD to be available over the network (yes/no) [default=no]? no
Do you want to configure the LXD bridge (yes/no) [default=yes]? yes
Warning: Stopping lxd.service, but it can still be activated by:
  lxd.socket
LXD has been successfully configured.
$ _

That’s it! LXD is freshly configured!

error: Failed to create the ZFS pool: cannot create ‘lxd’: pool already exists

Here is how it looks,

$ sudo lxd init
Name of the storage backend to use (dir or zfs) [default=zfs]: 
Create a new ZFS pool (yes/no) [default=yes]? 
Name of the new ZFS pool [default=lxd]: 
Would you like to use an existing block device (yes/no) [default=no]? yes
Path to the existing block device: /dev/sdb9 
Would you like LXD to be available over the network (yes/no) [default=no]? 
Do you want to configure the LXD bridge (yes/no) [default=yes]? 
error: Failed to create the ZFS pool: cannot create 'lxd': pool already exists
$ _

Here we forgot to destroy the ZFS pool called lxd. See earlier in this post for how to destroy the pool so that lxd init can recreate it.
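
For convenience, here is that command again (using the default pool name lxd),

$ sudo zpool destroy lxd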

Permission denied, are you in the lxd group?

This is a common error when you first install the lxd package, because your non-root account needs to log out and log back in for the membership of the lxd Unix group to take effect.
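
In that common case, a quick way to add the account to the group and refresh the membership in the current shell (instead of logging out and back in) looks like this, using the example username from below,

$ sudo adduser myusername lxd
$ newgrp lxd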

However, we got this error when we were casually uninstalling and reinstalling the lxd package, and doing nasty tests. Let’s see more details.

$ lxc list
Permission denied, are you in the lxd group?
Exit 1
$ groups myusername
myusername : myusername adm cdrom sudo plugdev lpadmin lxd
$ newgrp lxd
$ lxc list
Permission denied, are you in the lxd group?
Exit 1
$ _

Whaaat!?! Permission denied and we are asked whether we are in the lxd group? We are members of the lxd group!

Well, the problem is that the Unix socket that allows non-root users (members of the lxd Unix group) to access LXD does not have the proper group ownership.

$ ls -l /var/lib/lxd/unix.socket 
srw-rw---- 1 root root 0 Mar 15 23:20 /var/lib/lxd/unix.socket
$ sudo chown :lxd /var/lib/lxd/unix.socket 
$ ls -l /var/lib/lxd/unix.socket 
srw-rw---- 1 root lxd 0 Mar 15 23:20 /var/lib/lxd/unix.socket
$ lxc list
+------+-------+------+------+------+-----------+
| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
+------+-------+------+------+------+-----------+
$ _

The group of the Unix socket /var/lib/lxd/unix.socket was not set to the proper value lxd, therefore we set it ourselves. And then the LXD commands work just fine with our non-root user account!

error: Error checking if a pool is already in use: Failed to list ZFS filesystems: cannot open ‘lxd’: dataset does not exist

Here is a tricky error.

$ sudo lxd init
Name of the storage backend to use (dir or zfs) [default=zfs]: 
Create a new ZFS pool (yes/no) [default=yes]? 
Name of the new ZFS pool [default=lxd]: lxd2
Would you like to use an existing block device (yes/no) [default=no]? 
Size in GB of the new loop device (1GB minimum) [default=15]: 
Would you like LXD to be available over the network (yes/no) [default=no]? 
Do you want to configure the LXD bridge (yes/no) [default=yes]? 
error: Error checking if a pool is already in use: Failed to list ZFS filesystems: cannot open 'lxd': dataset does not exist
$ _

We cleaned up the ZFS pool just fine and we are running lxd init. But we got an error relating to the lxd pool that is already gone. Whaat?!?

What happened is that, in this case, we forgot to FIRST unset the LXD configuration option that refers to the ZFS pool. We simply forgot to run lxc config unset storage.zfs_pool_name.

It’s fine then, let’s unset it now and go on with life.

$ lxc config unset storage.zfs_pool_name
error: Error checking if a pool is already in use: Failed to list ZFS filesystems: cannot open 'lxd': dataset does not exist
Exit 1
$ _

Alright, we really messed up!

There are two ways to move forward. One is to rm -fr /var/lib/lxd/ and start over, losing any remaining containers and images, roughly as sketched below.
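
A rough sketch of that first option; it assumes the deb packaging with the lxd.service and lxd.socket units (as seen in the lxd init output earlier), and that LXD recreates its directory structure when the service starts again,

$ sudo systemctl stop lxd.service lxd.socket
$ sudo rm -fr /var/lib/lxd/       # removes all remaining containers and images
$ sudo systemctl start lxd.service
$ sudo lxd init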

The other way is to edit the /var/lib/lxd/lxd.db SQLite3 database file and change the configuration setting from there. Here is how it works,

First, install the sqlitebrowser package and run sudo sqlitebrowser /var/lib/lxd/lxd.db

Second, open the config table in sqlitebrowser and browse its data.

Third, double-click on the value field of the storage.zfs_pool_name row (it currently says lxd) and clear it so that it is empty.

Fourth, click on File→Close Database and choose to save the database. Let’s see now!

$ lxc config show
config:
  storage.zfs_pool_name: lxd

What?

Fifth, we need to restart the LXD service so that LXD reads the configuration again.

$ sudo systemctl restart lxd.service
$ lxc config show
config: {}
$ _

That’s it! We are good to go!
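
Alternatively, if you prefer the command line over sqlitebrowser, the same change can probably be made with the sqlite3 tool. This is just a sketch, assuming the config table has key and value columns as seen in sqlitebrowser,

$ sudo apt install sqlite3
$ sudo sqlite3 /var/lib/lxd/lxd.db "DELETE FROM config WHERE key='storage.zfs_pool_name';"
$ sudo systemctl restart lxd.service
$ lxc config show
config: {}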

Permanent link to this article: https://blog.simos.info/how-to-initialize-lxd-again/
