

How to set up LXD on Packet.net (baremetal servers)

Packet.net has premium baremetal servers that start at $36.50 per month for a quad-core Atom C2550 with 8GB RAM and 80GB SSD, on a 1Gbps Internet connection. On the other end of the scale, there is an option for a 24-core (two Intel CPUs) system with 256GB RAM and a total of 2.8TB SSD disk space at around $1000 per month.

In this post we are trying out the most affordable baremetal server (type 0 from the list) with Ubuntu and LXD.

Starting the server is quite uneventful. Being baremetal, it takes a bit longer to boot than a VPS would. Once it is up, we SSH into it.

$ ssh root@ip.ip.ip.ip
Welcome to Ubuntu 16.04.2 LTS (GNU/Linux 4.10.0-24-generic x86_64)

* Documentation: https://help.ubuntu.com
 * Management: https://landscape.canonical.com
 * Support: https://ubuntu.com/advantage

The programs included with the Ubuntu system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Ubuntu comes with ABSOLUTELY NO WARRANTY, to the extent permitted by
applicable law.

root@lxd:~#

Here is some information about the booted system,

root@lxd:~# lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 16.04.2 LTS
Release: 16.04
Codename: xenial
root@lxd:~#

And the CPU details,

root@lxd:~# cat /proc/cpuinfo 
processor : 0
vendor_id : GenuineIntel
cpu family : 6
model : 77
model name : Intel(R) Atom(TM) CPU C2550 @ 2.40GHz
stepping : 8
microcode : 0x122
cpu MHz : 1200.000
cache size : 1024 KB
physical id : 0
siblings : 4
core id : 0
cpu cores : 4
apicid : 0
initial apicid : 0
fpu : yes
fpu_exception : yes
cpuid level : 11
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm sse4_1 sse4_2 movbe popcnt tsc_deadline_timer aes rdrand lahf_lm 3dnowprefetch epb tpr_shadow vnmi flexpriority ept vpid tsc_adjust smep erms dtherm ida arat
bugs :
bogomips : 4800.19
clflush size : 64
cache_alignment : 64
address sizes : 36 bits physical, 48 bits virtual
power management:

... omitting the other three cores ...

Let’s update the package list,

root@lxd:~# apt update
Hit:1 http://archive.ubuntu.com/ubuntu xenial InRelease
Get:2 http://archive.ubuntu.com/ubuntu xenial-updates InRelease [102 kB]
Get:3 http://archive.ubuntu.com/ubuntu xenial-security InRelease [102 kB]
...

Packet.net uses the official Ubuntu repositories instead of caching packages on local mirrors. In practice this is not an issue, because the Internet connectivity is 1Gbps, bonded from two identical interfaces.

Let’s upgrade the packages and deal with any issues. Upgraded packages sometimes complain that the local configuration files differ from what they expect.

root@lxd:~# apt upgrade
Reading package lists... Done
Building dependency tree 
Reading state information... Done
Calculating upgrade... Done
The following packages will be upgraded:
 apt apt-utils base-files cloud-init gcc-5-base grub-common grub-pc grub-pc-bin grub2-common
 initramfs-tools initramfs-tools-bin initramfs-tools-core kmod libapparmor1 libapt-inst2.0
 libapt-pkg5.0 libasn1-8-heimdal libcryptsetup4 libcups2 libdns-export162 libexpat1 libgdk-pixbuf2.0-0
 libgdk-pixbuf2.0-common libgnutls-openssl27 libgnutls30 libgraphite2-3 libgssapi3-heimdal libgtk2.0-0
 libgtk2.0-bin libgtk2.0-common libhcrypto4-heimdal libheimbase1-heimdal libheimntlm0-heimdal
 libhx509-5-heimdal libisc-export160 libkmod2 libkrb5-26-heimdal libpython3.5 libpython3.5-minimal
 libpython3.5-stdlib libroken18-heimdal libstdc++6 libsystemd0 libudev1 libwind0-heimdal libxml2
 logrotate mdadm ntp ntpdate open-iscsi python3-jwt python3.5 python3.5-minimal systemd systemd-sysv
 tcpdump udev unattended-upgrades
59 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
Need to get 24.3 MB of archives.
After this operation, 77.8 kB of additional disk space will be used.
Do you want to continue? [Y/n] 
...

First comes grub, and the diff shows (not shown here) that it is a minor issue. The new version of grub.cfg makes the system appear as Debian instead of Ubuntu. We did not investigate this further.

We are then asked where to install grub. We select /dev/sda and hope that the server can successfully reboot. We note that instead of the 80GB SSD written in the description, we got a 160GB SSD. Not bad.

Setting up cloud-init (0.7.9-233-ge586fe35-0ubuntu1~16.04.2) ...

Configuration file '/etc/cloud/cloud.cfg'
 ==> Modified (by you or by a script) since installation.
 ==> Package distributor has shipped an updated version.
 What would you like to do about it ? Your options are:
 Y or I : install the package maintainer's version
 N or O : keep your currently-installed version
 D : show the differences between the versions
 Z : start a shell to examine the situation
 The default action is to keep your current version.
*** cloud.cfg (Y/I/N/O/D/Z) [default=N] ? N
Progress: [ 98%] [##################################################################################.]

Still during apt upgrade, it complains about /etc/cloud/cloud.cfg. Here is the diff between the installed and packaged versions. We keep the existing file and do not install the new generic packaged version (with the generic version, the server would not boot correctly).
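
As a side note, if you prefer that apt keep the currently-installed configuration files without prompting at all, you can pass the corresponding dpkg option for a single run; a minimal sketch:

root@lxd:~# apt-get -o Dpkg::Options::="--force-confold" upgrade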

At the end, it complains about

W: Possible missing firmware /lib/firmware/ast_dp501_fw.bin for module ast

Time to reboot the server and check if we messed it up.

root@lxd:~# shutdown -r now

$ ssh root@ip.ip.ip.ip
Welcome to Ubuntu 16.04.3 LTS (GNU/Linux 4.10.0-24-generic x86_64)

* Documentation: https://help.ubuntu.com
 * Management: https://landscape.canonical.com
 * Support: https://ubuntu.com/advantage
Last login: Tue Sep 26 15:29:58 2017 from 1.2.3.4
root@lxd:~#

We are good! Note that now it says Ubuntu 16.04.3 while before it was Ubuntu 16.04.2.

LXD is not installed by default,

root@lxd:~# apt policy lxd
lxd:
      Installed: (none)
      Candidate: 2.0.10-0ubuntu1~16.04.1
      Version table:
              2.0.10-0ubuntu1~16.04.1 500
                      500 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 Packages
              2.0.0-0ubuntu4 500
                      500 http://archive.ubuntu.com/ubuntu xenial/main amd64 Packages

There are two versions: 2.0.0, the stock version that was initially released with Ubuntu 16.04, and 2.0.10, currently the latest stable version for Ubuntu 16.04. Let’s install it.
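
As an aside, apt also lets us request the stock version explicitly, if we ever wanted it (we do not, here):

root@lxd:~# apt install lxd=2.0.0-0ubuntu4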

root@lxd:~# apt install lxd
...

We are now ready to add the non-root user account.

root@lxd:~# adduser myusername
Adding user `myusername' ...
Adding new group `myusername' (1000) ...
Adding new user `myusername' (1000) with group `myusername' ...
Creating home directory `/home/myusername' ...
Copying files from `/etc/skel' ...
Enter new UNIX password: 
Retype new UNIX password: 
passwd: password updated successfully
Changing the user information for myusername
Enter the new value, or press ENTER for the default
 Full Name []: 
 Room Number []: 
 Work Phone []: 
 Home Phone []: 
 Other []: 
Is the information correct? [Y/n] Y

root@lxd:~# ssh myusername@localhost
Permission denied (publickey).
root@lxd:~# cp -R ~/.ssh/ ~myusername/
root@lxd:~# chown -R myusername:myusername ~myusername/

We added the new user account, then verified that password authentication is indeed disabled. Finally, we copied the authorized_keys file from ~root/ to the new non-root account and adjusted the ownership of those files.
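
Since we will be administering the server and using LXD from this account, we also add it to the sudo and lxd groups (the lxd group exists now that the lxd package is installed):

root@lxd:~# usermod --append --groups sudo,lxd myusername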

Let’s log out from the server and log in again as the new non-root account.

$ ssh myusername@ip.ip.ip.ip
Welcome to Ubuntu 16.04.3 LTS (GNU/Linux 4.10.0-24-generic x86_64)

* Documentation: https://help.ubuntu.com
 * Management: https://landscape.canonical.com
 * Support: https://ubuntu.com/advantage

The programs included with the Ubuntu system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Ubuntu comes with ABSOLUTELY NO WARRANTY, to the extent permitted by
applicable law.

**************************************************************************
# This system is using the EC2 Metadata Service, but does not appear to #
# be running on Amazon EC2 or one of cloud-init's known platforms that #
# provide a EC2 Metadata service. In the future, cloud-init may stop #
# reading metadata from the EC2 Metadata Service unless the platform can #
# be identified. #
# #
# If you are seeing this message, please file a bug against #
# cloud-init at #
# https://bugs.launchpad.net/cloud-init/+filebug?field.tags=dsid #
# Make sure to include the cloud provider your instance is #
# running on. #
# #
# For more information see #
# https://bugs.launchpad.net/bugs/1660385 #
# #
# After you have filed a bug, you can disable this warning by #
# launching your instance with the cloud-config below, or #
# putting that content into #
# /etc/cloud/cloud.cfg.d/99-ec2-datasource.cfg #
# #
# #cloud-config #
# datasource: #
# Ec2: #
# strict_id: false #
**************************************************************************

Disable the warnings above by:
 touch /home/myusername/.cloud-warnings.skip
or
 touch /var/lib/cloud/instance/warnings/.skip
myusername@lxd:~$

This warning is related to our decision to keep the existing cloud.cfg when we upgraded the cloud-init package. It is something that Packet.net (the provider) should deal with.
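
To silence the warning for every account on this server, we can create the file that the message itself suggests, with the cloud-config content it shows:

myusername@lxd:~$ sudo tee /etc/cloud/cloud.cfg.d/99-ec2-datasource.cfg <<'EOF'
#cloud-config
datasource:
  Ec2:
    strict_id: false
EOF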

We are ready to try out LXD on packet.net.

Configuring LXD

Let’s configure LXD. First, how much free space do we have?

myusername@lxd:~$ df -h /
Filesystem Size Used Avail Use% Mounted on
/dev/sda3 136G 1.1G 128G 1% /
myusername@lxd:~$

There is plenty of space; we will use 100GB of it for LXD.

We are using ZFS as the LXD storage backend, therefore,

myusername@lxd:~$ sudo apt install zfsutils-linux

Now, we set up LXD.

myusername@lxd:~$ sudo lxd init
Name of the storage backend to use (dir or zfs) [default=zfs]: zfs 
Create a new ZFS pool (yes/no) [default=yes]? yes
Name of the new ZFS pool [default=lxd]: lxd 
Would you like to use an existing block device (yes/no) [default=no]? no
Size in GB of the new loop device (1GB minimum) [default=27]: 100
Would you like LXD to be available over the network (yes/no) [default=no]? no
Do you want to configure the LXD bridge (yes/no) [default=yes]? yes

LXD has been successfully configured.
myusername@lxd:~$ lxc list
+------+-------+------+------+------+-----------+
| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
+------+-------+------+------+------+-----------+
myusername@lxd:~$

Trying out LXD

Let’s create a container, install nginx and then make the web server accessible through the Internet.

myusername@lxd:~$ lxc launch ubuntu:16.04 web
Creating web
Retrieving image: rootfs: 100% (47.99MB/s) 
Starting web 
myusername@lxd:~$

Let’s see the details of the container, called web.

myusername@lxd:~$ lxc list --columns ns4tS
+------+---------+---------------------+------------+-----------+
| NAME | STATE   | IPV4                | TYPE       | SNAPSHOTS |
+------+---------+---------------------+------------+-----------+
| web  | RUNNING | 10.253.67.97 (eth0) | PERSISTENT | 0         |
+------+---------+---------------------+------------+-----------+
myusername@lxd:~$

We can see the container IP address. The parameter ns4tS simply omits the IPv6 address (‘6’) so that the table looks nice in the blog post.
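
For reference, the default column set is ns46tS (Name, State, IPv4, IPv6, Type, Snapshots), so the command below is the full equivalent of a plain lxc list:

myusername@lxd:~$ lxc list --columns ns46tS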

Let’s enter the container and install nginx.

myusername@lxd:~$ lxc exec web -- sudo --login --user ubuntu
To run a command as administrator (user "root"), use "sudo <command>".
See "man sudo_root" for details.

ubuntu@web:~$

We execute in the web container the full command sudo --login --user ubuntu, which gives us a login shell in the container. All Ubuntu containers have a default non-root account called ubuntu.

ubuntu@web:~$ sudo apt update
Hit:1 http://archive.ubuntu.com/ubuntu xenial InRelease

3 packages can be upgraded. Run 'apt list --upgradable' to see them.
ubuntu@web:~$ sudo apt install nginx
Reading package lists… Done

Processing triggers for ufw (0.35-0ubuntu2) …
ubuntu@web:~$ sudo vi /var/www/html/index.nginx-debian.html
ubuntu@web:~$ logout

Before installing a package, we must update the package list. We updated, then installed nginx. Subsequently, we touched up the default HTML file a bit to mention Packet.net and LXD. Finally, we logged out of the container.

Let’s test that the web server in the container is working.

myusername@lxd:~$ curl 10.253.67.97
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx on Packet.net in an LXD container!</title>
<style>
 body {
 width: 35em;
 margin: 0 auto;
 font-family: Tahoma, Verdana, Arial, sans-serif;
 }
</style>
</head>
<body>
<h1>Welcome to nginx on Packet.net in an LXD container!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
myusername@lxd:~$

The last step is to get Ubuntu to forward any Internet connections from port 80 to the container at port 80. For this, we need the public IP of the server and the private IP of the container (it’s 10.253.67.97).

myusername@lxd:~$ ifconfig 
bond0 Link encap:Ethernet HWaddr 0c:c4:7a:de:51:a8 
      inet addr:147.75.82.251 Bcast:255.255.255.255 Mask:255.255.255.254
      inet6 addr: 2604:1380:2000:600::1/127 Scope:Global
      inet6 addr: fe80::ec4:7aff:fee5:4462/64 Scope:Link
      UP BROADCAST RUNNING MASTER MULTICAST MTU:1500 Metric:1
      RX packets:144216 errors:0 dropped:0 overruns:0 frame:0
      TX packets:14181 errors:0 dropped:0 overruns:0 carrier:0
      collisions:0 txqueuelen:1000 
      RX bytes:211518302 (211.5 MB) TX bytes:1443508 (1.4 MB)

The interface is a bond, bond0. Two 1Gbps connections are bonded together.

myusername@lxd:~$ PORT=80 PUBLIC_IP=147.75.82.251 CONTAINER_IP=10.253.67.97 sudo -E bash -c 'iptables -t nat -I PREROUTING -i bond0 -p TCP -d $PUBLIC_IP --dport $PORT -j DNAT --to-destination $CONTAINER_IP:$PORT -m comment --comment "forward to the Nginx container"'
myusername@lxd:~$

Let’s test it out!
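
From any computer on the Internet, we can now fetch the page through the server’s public IP:

$ curl http://147.75.82.251/
<!DOCTYPE html>
...

On the server itself, we can list the NAT rule (and delete it by line number if we ever want to remove the forwarding):

myusername@lxd:~$ sudo iptables -t nat -L PREROUTING -n --line-numbers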

That’s it!


How to use LXD containers on Ubuntu and other distributions

We know about virtual machines such as VirtualBox and VMWare, but there are also containers, such as Docker and LXD (pronounced lex-dee).

Here we will look at LXD containers; support is already available to anyone running Ubuntu 16.04 or newer. On other distributions, you need to install the LXD package first.

Specifically, today we will cover:

  1. What is LXD and what does it offer?
  2. How do we perform the initial configuration of LXD on Ubuntu Desktop (or Ubuntu Server)?
  3. How do we create our first container?
  4. How do we install nginx inside a container?
  5. What are some of the other practical uses of LXD containers?

In what follows, we assume Ubuntu 16.04 or newer. Either Ubuntu Desktop or Ubuntu Server is fine.

What is LXD and what does it offer?

The term Linux containers (LXC) describes the Linux kernel’s relatively new ability to confine the execution of a child process (through namespaces and cgroups) so that it may only do what we have explicitly allowed. With Docker, we typically run a single process under such confinement (a process container). With LXD, however, we can run a whole distribution under confinement (a machine container).

LXD is hypervisor software that gives us full control over the life cycle of containers. Specifically,

  • it handles the initial configuration, including where the containers will be stored. After the initial setup, we never need to deal with these details again.
  • it provides repositories with ready-made images of a range of distributions. There is Ubuntu (from 12.04 to 17.04, plus Ubuntu Core), Alpine, Debian (stretch, wheezy), Fedora (22, 23, 24, 25), Gentoo, OpenSUSE, Oracle, Plamo and Sabayon. These are available for the amd64, i386, armhf, armel, powerpc, ppc64el and s390x architectures.
  • it can launch an image within a few seconds. A launched image constitutes a container.
  • we can back up a container, transfer it over the network to another LXD installation, and so on.

The typical use of LXD containers is to run Internet services such as WordPress, keeping each separate website in its own container. That way, we isolate the services and can manage them better. Compared to virtual machines, LXD containers require far fewer resources. For example, on a computer with Ubuntu Desktop and 4GB RAM, we can comfortably run as many as ten LXD containers.

Initial configuration of LXD

Now we will configure LXD on our computer. If for some reason you would rather not, you can also try LXD over the Internet, through the free LXD online demo service.

We will run the command lxd init as administrator to perform the initial configuration of LXD.

$ sudo lxd init
Name of the storage backend to use (dir or zfs): dir
Would you like LXD to be available over the network (yes/no)? no
Do you want to configure the LXD bridge (yes/no)? yes 
> It will ask about network settings. We accept whatever is proposed and continue.
Warning: Stopping lxd.service, but it can still be activated by:
 lxd.socket
 LXD has been successfully configured.
$ _

It asked us about the storage backend (the storage support) and we chose dir. This is the simplest option; the files of each container are placed in a subdirectory under /var/lib/lxd/. For more serious use, we would choose zfs. If you want to try zfs, accept whatever is proposed and allocate at least 15GB of space.
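
With the dir backend, we can also see where each container’s files will live on the host; the directory is empty until we create our first container:

$ sudo ls /var/lib/lxd/containers/
$ _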

Configuring the LXD bridge sets up networking for the containers. What happens is that LXD runs a DHCP server (dnsmasq) for the containers, handing them IP addresses of the form 10.x.x.x and giving them access to the Internet.
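
We can inspect the bridge on the host; the 10.x.x.x address it carries is the gateway that the containers will use:

$ ifconfig lxdbr0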

We are now almost ready to run lxc commands to manage LXD images and containers. Let’s verify that our user account is allowed to run LXD commands. Our user needs to be a member of the group named lxd. That is,

$ groups myusername
myusername : myusername adm cdrom sudo vboxusers lxd

If we were not a member of the lxd group, we would need to run

$ sudo usermod --append --groups lxd myusername
$ _

and then log out and log in again.
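
Alternatively, to avoid logging out and back in, we can start a new shell that has the lxd group active (this applies to that shell only):

$ newgrp lxd
$ _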

How to create a container

First, let’s run the command that shows which containers exist. It will show an empty list.

$ lxc list
If this is your first time using LXD, you should also run: lxd init
To start your first container, try: lxc launch ubuntu:16.04
+---------+---------+-----------------+-------------------+------------+-----------+
|  NAME   |  STATE  |      IPV4       |       IPV6        |    TYPE    | SNAPSHOTS |
+---------+---------+-----------------+-------------------+------------+-----------+
+---------+---------+-----------------+-------------------+------------+-----------+

All LXD container-management commands start with lxc, followed by a verb. lxc list (verb: list) shows the available containers.

We can already see that the verb to launch our first container is launch. It is followed by the name of the repository, ubuntu:, and finally the identifier of the image (16.04).

There are two available image repositories, ubuntu: and images:. To see the available images in ubuntu:, we run

$ lxc image list ubuntu:
+--------------------+--------------+--------+-------------------------------------------------+---------+----------+-------------------------------+
|       ALIAS        | FINGERPRINT  | PUBLIC |                   DESCRIPTION                   |  ARCH   |   SIZE   |          UPLOAD DATE          |
+--------------------+--------------+--------+-------------------------------------------------+---------+----------+-------------------------------+
...
+--------------------+--------------+--------+-------------------------------------------------+---------+----------+-------------------------------+
| x (9 more)         | 8fa08537ae51 | yes    | ubuntu 16.04 LTS amd64 (release) (20170516)     | x86_64  | 153.70MB | May 16, 2017 at 12:00am (UTC) |
+--------------------+--------------+--------+-------------------------------------------------+---------+----------+-------------------------------+
...
$ _

We see that Ubuntu 16.04 has several aliases; apart from “16.04”, there is also “x” (for xenial).

Let’s use the Ubuntu 16.04 image (ubuntu:x) to create and launch a container.

$ lxc launch ubuntu:x mycontainer
Creating mycontainer
Starting mycontainer
$ _

Here we used the image ubuntu:x as a template to create and launch a container named mycontainer, which runs Ubuntu 16.04.

$ lxc list
+-------------+---------+---------------------+-----------------------------------------------+------------+-----------+
|    NAME     |  STATE  |        IPV4         |                     IPV6                      |    TYPE    | SNAPSHOTS |
+-------------+---------+---------------------+-----------------------------------------------+------------+-----------+
| mycontainer | RUNNING | 10.0.180.12 (eth0)  | fd42:accb:3958:4ca6:216:57ff:f0ff:1afa (eth0) | PERSISTENT | 0         |
+-------------+---------+---------------------+-----------------------------------------------+------------+-----------+
$ _

And that is our first container! It is running, and it even has an IP address. Let’s try it:

$ ping 10.0.180.12
PING 10.0.180.12 (10.0.180.12) 56(84) bytes of data.
64 bytes from 10.0.180.12: icmp_seq=1 ttl=64 time=0.036 ms
64 bytes from 10.0.180.12: icmp_seq=2 ttl=64 time=0.035 ms
64 bytes from 10.0.180.12: icmp_seq=3 ttl=64 time=0.035 ms
^C
--- 10.0.180.12 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2028ms
rtt min/avg/max/mdev = 0.035/0.035/0.036/0.004 ms
$ _

Let’s execute a command inside the container!

$ lxc exec mycontainer -- uname -a
Linux mycontainer 4.8.0-53-generic #56~16.04.1-Ubuntu SMP Tue May 16 01:18:56 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
$ _

Here we used the verb exec, which takes as a parameter the name of the container, followed by the command to run inside the container. The -- serves as a marker that tells our shell to stop looking for options. If we did not put the --, then the bash shell would treat -a as an option to the lxc command and there would be a problem.

We see that containers run our system’s kernel. When we launch a container, only the user-space software of a distribution image starts executing. No new kernel is booted, so all containers share the same kernel, even if they belong to different distributions.
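
We can confirm this by comparing kernels; the host and the container report the same version:

$ uname -r
4.8.0-53-generic
$ lxc exec mycontainer -- uname -r
4.8.0-53-generic
$ _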

Let’s get a shell in the container so that we can run more commands!

$ lxc exec mycontainer -- /bin/bash
root@mycontainer:~# exit
$

That’s it! We can run whatever we want in the container, as administrators. If we delete something, it is deleted inside the container only and does not affect our system.

Ubuntu images come with a plain account named ubuntu, so we can also log in with that account. Here is how,

$ lxc exec mycontainer -- sudo --user ubuntu --login
To run a command as administrator (user "root"), use "sudo <command>".
See "man sudo_root" for details.

ubuntu@mycontainer:~$ exit
$

What we did was run the sudo command in order to become the ubuntu user and get a login shell.

How to install a network service in an LXD container

Let’s install a Web server in the container.

$ lxc exec mycontainer -- sudo --user ubuntu --login
To run a command as administrator (user "root"), use "sudo <command>".
See "man sudo_root" for details.

ubuntu@mycontainer:~$ sudo apt update
...
ubuntu@mycontainer:~$ sudo apt install nginx
...
ubuntu@mycontainer:~$ sudo lsof -i
COMMAND   PID     USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
dhclient  231     root    6u  IPv4 141913      0t0  UDP *:bootpc 
sshd      323     root    3u  IPv4 142683      0t0  TCP *:ssh (LISTEN)
sshd      323     root    4u  IPv6 142692      0t0  TCP *:ssh (LISTEN)
nginx    1183     root    6u  IPv4 151536      0t0  TCP *:http (LISTEN)
nginx    1183     root    7u  IPv6 151537      0t0  TCP *:http (LISTEN)
nginx    1184 www-data    6u  IPv4 151536      0t0  TCP *:http (LISTEN)
nginx    1184 www-data    7u  IPv6 151537      0t0  TCP *:http (LISTEN)
nginx    1185 www-data    6u  IPv4 151536      0t0  TCP *:http (LISTEN)
nginx    1185 www-data    7u  IPv6 151537      0t0  TCP *:http (LISTEN)
nginx    1186 www-data    6u  IPv4 151536      0t0  TCP *:http (LISTEN)
nginx    1186 www-data    7u  IPv6 151537      0t0  TCP *:http (LISTEN)
nginx    1187 www-data    6u  IPv4 151536      0t0  TCP *:http (LISTEN)
nginx    1187 www-data    7u  IPv6 151537      0t0  TCP *:http (LISTEN)
ubuntu@mycontainer:~$

We updated the package list inside the container and installed the nginx package (an alternative choice: apache2). Then we ran lsof -i to verify that the service is running.

We see that sshd runs by default. However, we need to populate ~/.ssh/authorized_keys ourselves before we can connect. The root and ubuntu accounts are locked by default.

We also see that the nginx Web server is up and running.

And indeed, it is accessible from our browser.
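
From the host, we can also verify with curl, using the container IP address that we saw earlier:

$ curl http://10.0.180.12/
<!DOCTYPE html>
...
$ _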

From here, we can run whatever other experiments we like. For completeness, let’s see how to stop the container and delete it.

ubuntu@mycontainer:~$ exit
logout
$ lxc stop mycontainer
$ lxc delete mycontainer
$

That’s it! We stopped the mycontainer container and then deleted it.

Practical uses of LXD containers

Let’s look at a few practical uses of LXD containers,

  1. We want to install a network service on our laptop, but do not want leftover packages afterwards. We install it in a container, and later stop (or delete) the container.
  2. We want to test an old PHP application that for some reason does not run on PHP 7 (Ubuntu 16.04). We install Ubuntu 14.04 (“ubuntu:t”) in a container, which comes with PHP 5.x.
  3. We want to install an application under Wine, but do NOT want all those packages that Wine pulls in. We install Wine in an LXD container.
  4. We want to install a GUI application with hardware-accelerated graphics, without it getting entangled with our system. We install the GUI application in an LXD container.
  5. We have two Steam accounts. How? We install Steam twice, in two containers.
  6. We want to host many websites on our VPS, with separation between them. We install each website in a separate container.

If you have questions or need support, ask here or on the other community services of Ubuntu Greece.


How to run graphics-accelerated GUI apps in LXD containers on your Ubuntu desktop

In How to run Wine (graphics-accelerated) in an LXD container on Ubuntu we had a quick look at how to run GUI programs in an LXD (Lex-Dee) container, and have the output appear on the local X11 server (your Ubuntu desktop).

In this post, we are going to see how to

  1. generalize the instructions in order to run most GUI apps in an LXD container and have them appear on your desktop
  2. have accelerated graphics support and audio
  3. test with Firefox, Chromium and Chrome
  4. create shortcuts to easily launch those apps

The benefits of running GUI apps in an LXD container are

  • clear separation of the installation data and settings, from what we have on our desktop
  • ability to create a snapshot of this container, save, rollback, delete, recreate; all these in a few seconds or less
  • does not mess up your installed package list (for example, all those i386 packages for Wine, Google Earth)
  • ability to create an image of such a perfect container, publish it, and have others launch it in a few clicks

What we are doing today is similar to having a Virtualbox/VMWare VM and running a Linux distribution in it. Let’s compare,

  • It is similar to the Virtualbox Seamless Mode or the VMWare Unity mode
  • A VM virtualizes a whole machine and has to do a lot of work in order to provide somewhat good graphics acceleration
  • With a container, we directly reuse the graphics card and get graphics acceleration
  • The specific setup we show today can potentially allow a container app to interact with the desktop apps (TODO: show desktop isolation in a future post)

Browsers have started offering containers, specifically in-browser containers. This shows a trend towards containers in general, although that feature is browser-specific and dictated by usability (passwords, form and search data are shared between those containers).

In the following, our desktop computer will be called the host, and the LXD container the container.

Setting up LXD

LXD is supported in Ubuntu and derivatives, as well as other distributions. When you initially set up LXD, you select where to store the containers. See LXD 2.0: Installing and configuring LXD [2/12] about your options. Ideally, if you select to pre-allocate disk space or use a partition, select at least 15GB but preferably more.

If you plan to play games, increase the space by the size of that game. For best results, select ZFS as the storage backend, and place the space on an SSD disk. The post Trying out LXD containers on our Ubuntu may also help.

Creating the LXD container

Let’s create the new container for LXD. We are going to call it guiapps, and install Ubuntu 16.04 in it. There are options for other Ubuntu versions, and even other distributions.

$ lxc launch ubuntu:x guiapps
Creating guiapps
Starting guiapps
$ lxc list
+---------------+---------+--------------------+--------+------------+-----------+
|     NAME      |  STATE  |        IPV4        |  IPV6  |    TYPE    | SNAPSHOTS |
+---------------+---------+--------------------+--------+------------+-----------+
| guiapps       | RUNNING | 10.0.185.204(eth0) |        | PERSISTENT | 0         |
+---------------+---------+--------------------+--------+------------+-----------+
$

We created and started an Ubuntu 16.04 (ubuntu:x) container, called guiapps.

Let’s also install our initial testing applications. The first one is xclock, the simplest X11 GUI app. The second is glxinfo, which shows details about graphics acceleration. The third is glxgears, a minimal graphics-accelerated application. The fourth is speaker-test, to test the audio. We will know that our setup works if all four of xclock, glxinfo, glxgears and speaker-test work in the container!

$ lxc exec guiapps -- sudo --login --user ubuntu
ubuntu@guiapps:~$ sudo apt update
ubuntu@guiapps:~$ sudo apt install x11-apps
ubuntu@guiapps:~$ sudo apt install mesa-utils
ubuntu@guiapps:~$ sudo apt install alsa-utils
ubuntu@guiapps:~$ exit
$

We execute a login shell in the guiapps container as user ubuntu, the default non-root user account in all Ubuntu LXD images. Other distribution images probably have another default non-root user account.

Then, we run apt update in order to update the package list and be able to install the subsequent three packages that provide xclock, glxinfo and glxgears, and speaker-test (or aplay). Finally, we exit the container.

Mapping the user ID of the host to the container (PREREQUISITE)

In the following steps we will be sharing files from the host (our desktop) to the container. There is the issue of what user ID will appear in the container for those shared files.

First, we run on the host (only once) the following command (source),

$ echo "root:$UID:1" | sudo tee -a /etc/subuid /etc/subgid
[sudo] password for myusername: 
root:1000:1
$

The command appends a new entry to both the /etc/subuid and /etc/subgid subordinate UID/GID files. It allows the LXD service (which runs as root) to remap our user’s ID ($UID, from the host) as requested.

Then, we specify that we want this feature in our guiapps LXD container, and restart the container for the change to take effect.

$ lxc config set guiapps raw.idmap "both $UID 1000"
$ lxc restart guiapps
$

This “both $UID 1000” syntax is a shortcut that means to map the $UID/$GID of our username in the host, to the default non-root username in the container (which should be 1000 for Ubuntu images, at least).
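
We can verify that the setting took effect; assuming our host UID is 1000, it looks like this:

$ lxc config get guiapps raw.idmap
both 1000 1000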

Configuring graphics and graphics acceleration

For graphics acceleration, we are going to use the host graphics card and graphics acceleration. By default, the applications that run in a container do not have access to the host system and cannot start GUI apps.

We need two things: to let the container access the GPU devices of the host, and to make sure that there are no restrictions because of different user IDs.

Let’s attempt to run xclock in the container.

$ lxc exec guiapps -- sudo --login --user ubuntu
ubuntu@guiapps:~$ xclock
Error: Can't open display: 
ubuntu@guiapps:~$ export DISPLAY=:0
ubuntu@guiapps:~$ xclock
Error: Can't open display: :0
ubuntu@guiapps:~$ exit
$

We run xclock in the container and, as expected, it does not run, because we did not indicate where to send the display output. We then set the DISPLAY environment variable to the default :0 (send to either a Unix socket or port 6000), which does not work either, because we have not fully set things up yet. Let’s do that.

$ lxc config device add guiapps X0 disk path=/tmp/.X11-unix/X0 source=/tmp/.X11-unix/X0 
$ lxc config device add guiapps Xauthority disk path=/home/ubuntu/.Xauthority source=${XAUTHORITY}

We give the container access to the Unix socket of the X server (/tmp/.X11-unix/X0), and make it available at the exact same path inside the container. In this way, DISPLAY=:0 allows the apps in the container to access our host’s X server through the Unix socket.

Then, we repeat this task with the ~/.Xauthority file that resides in our home directory. This file is for access control; it simply makes our host’s X server allow access from applications inside that container. On the host, this file can be found through the variable $XAUTHORITY and should be either at ~/.Xauthority or /run/myusername/1000/gdm/Xauthority. Obviously, we must set the source= part correctly; in addition, the distribution in the container needs to be able to find the .Xauthority at the given location. If the container runs the official Ubuntu image, that location should be /home/ubuntu/.Xauthority. Adjust accordingly if you use a different distribution. If something goes wrong in this whole guide, it will most probably be in these two commands.
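
To find the correct source= value on your host, check the variable first; the path below is only an example:

$ echo $XAUTHORITY
/home/myusername/.Xauthority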

How do we get hardware acceleration for the GPU to the container apps? There is a special device for that, and it’s gpu. The hardware acceleration for the graphics card is collectively enabled by running the following,

$ lxc config device add guiapps mygpu gpu
$ lxc config device set guiapps mygpu uid 1000
$ lxc config device set guiapps mygpu gid 1000

We add the gpu device, and we happen to name it mygpu (any name would suffice). In addition to adding the gpu device, we also set the permissions so that the device is fully accessible in the container. The gpu device was introduced in LXD 2.7, so if it is not found, you may have to upgrade your LXD according to https://launchpad.net/~ubuntu-lxc/+archive/ubuntu/lxd-stable Please leave a comment below if this was your case (mention what LXD version you have been running). Note that for Intel GPUs (my case), you may not need to add this device.
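
At this point, we can review all the devices we have added so far (output abbreviated):

$ lxc config device show guiapps
X0:
  path: /tmp/.X11-unix/X0
  source: /tmp/.X11-unix/X0
  type: disk
...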

Let’s see what we got now.

$ lxc exec guiapps -- sudo --login --user ubuntu
ubuntu@guiapps:~$ export DISPLAY=:0
ubuntu@guiapps:~$ xclock

ubuntu@guiapps:~$ glxinfo -B
name of display: :0
display: :0  screen: 0
direct rendering: Yes
Extended renderer info (GLX_MESA_query_renderer):
    Vendor: Intel Open Source Technology Center (0x8086)
...
ubuntu@guiapps:~$ glxgears 

Running synchronized to the vertical refresh.  The framerate should be
approximately the same as the monitor refresh rate.
345 frames in 5.0 seconds = 68.783 FPS
309 frames in 5.0 seconds = 61.699 FPS
300 frames in 5.0 seconds = 60.000 FPS
^C
ubuntu@guiapps:~$ echo "export DISPLAY=:0" >> ~/.profile 
ubuntu@guiapps:~$ exit
$

Looks good, we are good to go! Note that we edited the ~/.profile file in order to set the $DISPLAY variable automatically whenever we connect to the container.

Configuring audio

The audio server on the Ubuntu desktop is PulseAudio, and PulseAudio has a feature that allows authenticated access over the network, just like what we did with the X11 server earlier. Let’s set this up.

We install the paprefs (PulseAudio Preferences) package on the host.

$ sudo apt install paprefs
...
$ paprefs

This is the only option we need to enable (by default, all other options are unchecked and can remain unchecked).

That is, under the Network Server tab, we tick Enable network access to local sound devices.

Then, just like with the X11 configuration, we need to deal with two things: access to the Pulseaudio server of the host (either through a Unix socket or an IP address), and some way to get authorization to access the Pulseaudio server. The Unix socket of the Pulseaudio server is a bit hit and miss (we could not figure out how to use it reliably), so we are going to use the IP address of the host (on the lxdbr0 interface).

First, the IP address of the host (which runs Pulseaudio) is the IP of the lxdbr0 interface, which is also the container’s default gateway (ip route show). Second, the authorization is provided through the cookie on the host at /home/${USER}/.config/pulse/cookie. Let’s connect these two files inside the container.

$ lxc exec guiapps -- sudo --login --user ubuntu
ubuntu@guiapps:~$ echo export PULSE_SERVER="tcp:`ip route show 0/0 | awk '{print $3}'`" >> ~/.profile

This command will automatically set the variable PULSE_SERVER to a value like tcp:10.0.185.1, which is the IP address of the host, for the lxdbr0 interface. The next time we log in to the container, PULSE_SERVER will be configured properly.

ubuntu@guiapps:~$ mkdir -p ~/.config/pulse/
ubuntu@guiapps:~$ echo export PULSE_COOKIE=/home/ubuntu/.config/pulse/cookie >> ~/.profile
ubuntu@guiapps:~$ exit
$ lxc config device add guiapps PACookie disk path=/home/ubuntu/.config/pulse/cookie source=/home/${USER}/.config/pulse/cookie

Now, this is a tough cookie. By default, the Pulseaudio cookie is found at ~/.config/pulse/cookie. The directory tree ~/.config/pulse/ does not exist in the container, and if we do not create it ourselves, then LXD will autocreate it with the wrong ownership. So, we create it (mkdir -p), then add the correct PULSE_COOKIE line to the configuration file ~/.profile. Finally, we exit the container and mount-bind the cookie from the host into the container. When we log in to the container again, the cookie variable will be correctly set!

Let’s test the audio!

$ lxc exec guiapps -- sudo --login --user ubuntu
ubuntu@pulseaudio:~$ speaker-test -c6 -twav

speaker-test 1.1.0

Playback device is default
Stream parameters are 48000Hz, S16_LE, 6 channels
WAV file(s)
Rate set to 48000Hz (requested 48000Hz)
Buffer size range from 32 to 349525
Period size range from 10 to 116509
Using max buffer size 349524
Periods = 4
was set period_size = 87381
was set buffer_size = 349524
 0 - Front Left
 4 - Center
 1 - Front Right
 3 - Rear Right
 2 - Rear Left
 5 - LFE
Time per period = 8.687798 ^C
ubuntu@pulseaudio:~$

If you do not have 6-channel audio output, you will hear audio on some of the channels only.

Let’s also test with an MP3 file, like the one from https://archive.org/details/testmp3testfile

ubuntu@pulseaudio:~$ sudo apt install mplayer
...
ubuntu@pulseaudio:~$ wget https://archive.org/download/testmp3testfile/mpthreetest.mp3
...
ubuntu@pulseaudio:~$ mplayer mpthreetest.mp3 
MPlayer 1.2.1 (Debian), built with gcc-5.3.1 (C) 2000-2016 MPlayer Team
...
AO: [pulse] 44100Hz 2ch s16le (2 bytes per sample)
Video: no video
Starting playback...
A:   3.7 (03.7) of 12.0 (12.0)  0.2% 

Exiting... (Quit)
ubuntu@pulseaudio:~$

All nice and loud!

Troubleshooting sound issues

AO: [pulse] Init failed: Connection refused

An application tries to connect to a PulseAudio server, but no PulseAudio server is found (either none autodetected, or the one we specified is not really there).

AO: [pulse] Init failed: Access denied

We specified a PulseAudio server, but we do not have access to connect to it. We need a valid cookie.

AO: [pulse] Init failed: Protocol error

This appears if you were also trying to make the Unix socket work, but something was wrong. If you can make it work, write a comment below.

Testing with Firefox

Let’s test with Firefox!

ubuntu@guiapps:~$ sudo apt install firefox
...
ubuntu@guiapps:~$ firefox 
Gtk-Message: Failed to load module "canberra-gtk-module"

We get a message that the GTK+ module is missing. Let’s close Firefox, install the module and start Firefox again.

ubuntu@guiapps:~$ sudo apt-get install libcanberra-gtk3-module
ubuntu@guiapps:~$ firefox

Here we are playing a Youtube music video at 1080p. It works as expected. The Firefox session is separated from the host’s Firefox.

Note that the theming is not exactly what you get with Ubuntu. This is due to the container being so lightweight that it does not have any theming support.

The screenshot may look a bit grainy; this is due to some plugin I use in WordPress that does too much compression.

You may notice that no menubar is showing. Just like with Windows, simply press the Alt key for a second, and the menu bar will appear.

Testing with Chromium

Let’s test with Chromium!

ubuntu@guiapps:~$ sudo apt install chromium-browser
ubuntu@guiapps:~$ chromium-browser
Gtk-Message: Failed to load module "canberra-gtk-module"

So, chromium-browser also needs a libcanberra package, and it’s the GTK+ 2 package.

ubuntu@guiapps:~$ sudo apt install libcanberra-gtk-module
ubuntu@guiapps:~$ chromium-browser

There is no menubar and there is no easy way to get to it. The menu on the top-right is available though.

Testing with Chrome

Let’s download Chrome, install it and launch it.

ubuntu@guiapps:~$ wget https://dl.google.com/linux/direct/google-chrome-stable_current_amd64.deb
...
ubuntu@guiapps:~$ sudo dpkg -i google-chrome-stable_current_amd64.deb
...
Errors were encountered while processing:
 google-chrome-stable
ubuntu@guiapps:~$ sudo apt install -f
...
ubuntu@guiapps:~$ google-chrome
[11180:11945:0503/222317.923975:ERROR:object_proxy.cc(583)] Failed to call method: org.freedesktop.UPower.GetDisplayDevice: object_path= /org/freedesktop/UPower: org.freedesktop.DBus.Error.ServiceUnknown: The name org.freedesktop.UPower was not provided by any .service files
[11180:11945:0503/222317.924441:ERROR:object_proxy.cc(583)] Failed to call method: org.freedesktop.UPower.EnumerateDevices: object_path= /org/freedesktop/UPower: org.freedesktop.DBus.Error.ServiceUnknown: The name org.freedesktop.UPower was not provided by any .service files
^C
ubuntu@guiapps:~$ sudo apt install upower
ubuntu@guiapps:~$ google-chrome

There are these two errors regarding UPower and they go away when we install the upower package.

Creating shortcuts to the container apps

If we want to run Firefox from the container, we can simply run

$ lxc exec guiapps -- sudo --login --user ubuntu firefox

and that’s it.

To make a shortcut, we create the following file on the host,

$ cat > ~/.local/share/applications/lxd-firefox.desktop
[Desktop Entry]
Version=1.0
Name=Firefox in LXD
Comment=Access the Internet through an LXD container
Exec=/usr/bin/lxc exec guiapps -- sudo --login --user ubuntu firefox %U
Icon=/usr/share/icons/HighContrast/scalable/apps-extra/firefox-icon.svg
Type=Application
Categories=Network;WebBrowser;
^D
$ chmod +x ~/.local/share/applications/lxd-firefox.desktop

We need to make it executable so that it gets picked up and we can then run it by double-clicking.

If it does not appear immediately in the Dash, use your File Manager to locate the directory ~/.local/share/applications/

This is how the icon looks in a File Manager. The icon comes from the high-contrast set, which, I now remember, means just two colors 🙁

Here is the app on the Launcher. Simply drag from the File Manager and drop to the Launcher in order to get the app at your fingertips.

I hope the tutorial was useful. We explained the commands in detail. In a future tutorial, we are going to try to figure out how to automate all this!


How to run Wine (graphics-accelerated) in an LXD container on Ubuntu

Update #1: Added info about adding the gpu configuration device to the container, for hardware acceleration to work (required for some users).

Update #2: Added info about setting the permissions for the gpu device.

Wine lets you run Windows programs on your GNU/Linux distribution.

When you install Wine, it adds all sorts of packages, including 32-bit packages. It looks quite messy; could there be a way to place all those Wine files in a container and keep them there?

This is what we are going to see today. Specifically,

  1. We are going to create an LXD container, called wine-games
  2. We are going to set it up so that it runs graphics-accelerated programs. glxinfo will show the host GPU details.
  3. We are going to install the latest Wine package.
  4. We are going to install and play one of those Windows games.

Creating the LXD container

Let’s create the new container for LXD. If this is the first time you use LXD, have a look at Trying out LXD containers on our Ubuntu.

$ lxc launch ubuntu:x wine-games
Creating wine-games
Starting wine-games
$ lxc list
+---------------+---------+--------------------+--------+------------+-----------+
|     NAME      |  STATE  |        IPV4        |  IPV6  |    TYPE    | SNAPSHOTS |
+---------------+---------+--------------------+--------+------------+-----------+
| wine-games    | RUNNING | 10.0.185.63 (eth0) |        | PERSISTENT | 0         |
+---------------+---------+--------------------+--------+------------+-----------+
$

We created and started an Ubuntu 16.04 (ubuntu:x) container, called wine-games.

Let’s also install our initial testing applications. The first one is xclock, the simplest X11 GUI app. The second is glxinfo, which shows details about graphics acceleration. We will know that our setup for Wine works if both xclock and glxinfo work in the container!

$ lxc exec wine-games -- sudo --login --user ubuntu
ubuntu@wine-games:~$ sudo apt update
ubuntu@wine-games:~$ sudo apt install x11-apps
ubuntu@wine-games:~$ sudo apt install mesa-utils
ubuntu@wine-games:~$ exit
$

We execute a login shell in the wine-games container as user ubuntu, the default non-root username in Ubuntu LXD images.

Then, we run apt update in order to update the package list and be able to install the subsequent two packages that provide xclock and glxinfo respectively. Finally, we exit the container.

Setting up for graphics acceleration

For graphics acceleration, we are going to use the host graphics card and graphics acceleration. By default, the applications that run in a container do not have access to the host system and cannot start GUI apps.

We need two things: to let the container access the GPU devices of the host, and to make sure that there are no restrictions because of different user IDs.

First, we run (only once) the following command (source),

$ echo "root:$UID:1" | sudo tee -a /etc/subuid /etc/subgid
[sudo] password for myusername: 
root:1000:1
$

The command adds a new entry to both the /etc/subuid and /etc/subgid subordinate UID/GID files. It allows the LXD service (which runs as root) to remap our user’s ID ($UID, from the host) as requested.

Then, we specify that we want this feature in our wine-games LXD container, and restart the container for the change to take effect.

$ lxc config set wine-games raw.idmap "both $UID 1000"
$ lxc restart wine-games
$

This “both $UID 1000” syntax is a shortcut that means to map the $UID/$GID of our username in the host, to the default non-root username in the container (which should be 1000 for Ubuntu images, at least).

Let’s attempt to run xclock in the container.

$ lxc exec wine-games -- sudo --login --user ubuntu
ubuntu@wine-games:~$ xclock
Error: Can't open display: 
ubuntu@wine-games:~$ export DISPLAY=:0
ubuntu@wine-games:~$ xclock
Error: Can't open display: :0
ubuntu@wine-games:~$ exit
$

We run xclock in the container and, as expected, it does not run, because we did not indicate where to send the display output. We then set the DISPLAY environment variable to the default :0 (send to either a Unix socket or port 6000), which does not work either, because we have not fully set things up yet. Let’s do that.

$ lxc config device add wine-games X0 disk path=/tmp/.X11-unix/X0 source=/tmp/.X11-unix/X0 
$ lxc config device add wine-games Xauthority disk path=/home/ubuntu/.Xauthority source=/home/MYUSERNAME/.Xauthority

We give the container access to the Unix socket of the X server (/tmp/.X11-unix/X0), and make it available at the exact same path inside the container. In this way, DISPLAY=:0 allows the apps in the container to access our host’s X server through the Unix socket.

Then, we repeat this task with the ~/.Xauthority file that resides in our home directory. This file is for access control; it simply makes our host’s X server allow access from applications inside that container.

How do we get hardware acceleration for the GPU to the container apps? There is a special device for that, and it’s gpu. The hardware acceleration for the graphics card is collectively enabled by running the following,

$ lxc config device add wine-games mygpu gpu
$ lxc config device set wine-games mygpu uid 1000
$ lxc config device set wine-games mygpu gid 1000

We add the gpu device, and we happen to name it mygpu (any name would suffice). [UPDATED] In addition, we set the uid/gid of the gpu device to 1000 (the default uid/gid of the first non-root account on Ubuntu; adapt accordingly on other distributions). The gpu device was introduced in LXD 2.7, so if it is not found, you may have to upgrade your LXD according to https://launchpad.net/~ubuntu-lxc/+archive/ubuntu/lxd-stable Please leave a comment below if this was your case (mention what LXD version you have been running).
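
To check which LXD version you are running (the output below is just an example; yours will differ):

$ lxd --version
2.12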

Let’s see what we got now.

$ lxc exec wine-games -- sudo --login --user ubuntu
ubuntu@wine-games:~$ export DISPLAY=:0
ubuntu@wine-games:~$ xclock

ubuntu@wine-games:~$ glxinfo 
name of display: :0
display: :0  screen: 0
direct rendering: Yes
server glx vendor string: SGI
server glx version string: 1.4
...
ubuntu@wine-games:~$ echo "export DISPLAY=:0" >> ~/.profile 
ubuntu@wine-games:~$ exit
$

Looks good, we are good to go! Note that we edited the ~/.profile file in order to set the $DISPLAY variable automatically whenever we connect to the container.

Installing Wine

We install Wine in the container according to the instructions at https://wiki.winehq.org/Ubuntu.

$ lxc exec wine-games -- sudo --login --user ubuntu
ubuntu@wine-games:~$ sudo dpkg --add-architecture i386 
ubuntu@wine-games:~$ wget https://dl.winehq.org/wine-builds/Release.key
--2017-05-01 21:30:14--  https://dl.winehq.org/wine-builds/Release.key
Resolving dl.winehq.org (dl.winehq.org)... 151.101.112.69
Connecting to dl.winehq.org (dl.winehq.org)|151.101.112.69|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 3122 (3.0K) [application/pgp-keys]
Saving to: ‘Release.key’

Release.key                100%[=====================================>]   3.05K  --.-KB/s    in 0s      

2017-05-01 21:30:15 (24.9 MB/s) - ‘Release.key’ saved [3122/3122]

ubuntu@wine-games:~$ sudo apt-key add Release.key
OK
ubuntu@wine-games:~$ sudo apt-add-repository https://dl.winehq.org/wine-builds/ubuntu/
ubuntu@wine-games:~$ sudo apt-get update
...
Reading package lists... Done
ubuntu@wine-games:~$ sudo apt-get install --install-recommends winehq-devel
...
Need to get 115 MB of archives.
After this operation, 715 MB of additional disk space will be used.
Do you want to continue? [Y/n] Y
...
ubuntu@wine-games:~$

715MB?!? Sure, bring it on. Whatever is installed in the container, stays in the container! 🙂

Let’s run a game in the container

Here is a game that looks good for our test, Season Match 4. Let’s play it.

ubuntu@wine-games:~$ wget http://cdn.gametop.com/free-games-download/Season-Match4.exe
ubuntu@wine-games:~$ wine Season-Match4.exe 
...
ubuntu@wine-games:~$ cd .wine/drive_c/Program\ Files\ \(x86\)/GameTop.com/Season\ Match\ 4/
ubuntu@wine-games:~/.wine/drive_c/Program Files (x86)/GameTop.com/Season Match 4$ wine SeasonMatch4.exe

Here is the game, and it works. It runs full screen, and it is a bit awkward to navigate between windows. The animations, though, are smooth.

We did not set up sound either in this post, nor did we make nice shortcuts so that we can run these apps with a single click. That’s material for a future tutorial!


A closer look at the new ARM64 Scaleway servers and LXD

Update #1: I posted at the Scaleway Linux kernel discussion thread to add support for the Ubuntu Linux kernel and Add new bootscript with stock Ubuntu Linux kernel #349.

Scaleway has been offering ARM (armv7) cloud servers (baremetal) since 2015 and now they have ARM64 (armv8, from Cavium) cloud servers (through KVM, not baremetal).

But can you run LXD on them? Let’s see.

Launching a new server

We go through the management panel and select to create a new server. At the moment, only the Paris datacenter has availability of ARM64 servers and we select ARM64-2GB.

They use Cavium ThunderX hardware, and those boards have up to 48 cores. You can allocate either 2, 4, or 8 cores, for 2GB, 4GB, and 8GB RAM respectively. KVM is the virtualization platform.

There is an option of either Ubuntu 16.04 or Debian Jessie. We try Ubuntu.

It takes under a minute to provision and boot the server.

Connecting to the server

It runs Linux 4.9.23. Also, the disk is vda, specifically, /dev/vda. That is, there is no partitioning and the filesystem takes over the whole device.

Here are /proc/cpuinfo and uname -a. These are the two cores (out of the 48) provided through KVM. The BogoMIPS figures are really bogus on these platforms, so do not take them at face value.

Currently, Scaleway does not have their own mirror of the distribution packages but uses ports.ubuntu.com. It’s 16ms away (ping time).

Depending on where you are, the ping times for google.com and www.google.com tend to differ. google.com redirects to www.google.com, so it somewhat makes sense that google.com responds faster. At other locations (a different country), it could be the other way round.

This is /var/log/auth.log, and already there are some hackers trying to brute-force SSH. They have been trying with username ubnt. Note to self: do not use ubnt as the username for the non-root account.

The default configuration for the SSH server on Scaleway is to allow password authentication. You need to change this in /etc/ssh/sshd_config so that it looks like

# Change to no to disable tunnelled clear text passwords
PasswordAuthentication no

Originally, it was commented out, and had a default yes.
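
Before reloading, you can ask sshd to validate the configuration file; it prints nothing when all is well:

$ sudo sshd -t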

Finally, run

sudo systemctl reload sshd

This will not break your existing SSH session (even a restart will not break your existing SSH session; how cool is that?). Now you can create your non-root account. To allow that user to sudo as root, run usermod -a -G sudo myusername.

There is a recovery console, accessible through the Web management screen. For this to work, it says that you “must first login and set a password via SSH to use this serial console”. In reality, the root account already has a password set, and this password is stored in /root/.pw. It is not known how strong this password is; therefore, when you boot a cloud server on Scaleway,

  1. Disable PasswordAuthentication for SSH as shown above and reload the sshd configuration. You are supposed to have already added your SSH public key in the Scaleway Web management screen BEFORE starting the cloud server.
  2. Change the root password so that it is not the one found at /root/.pw. Store that password somewhere safe, because it is needed if you want to connect through the recovery console.
  3. Create a non-root user that can sudo and can do PubkeyAuthentication, preferably with a username other than ubnt. All three steps are sketched right below.
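
Here is a minimal sketch of those three steps, run as root, assuming a placeholder username myusername (adjust to taste):

# 1. Disable password authentication and reload sshd
sed -i 's/^#\?PasswordAuthentication.*/PasswordAuthentication no/' /etc/ssh/sshd_config
systemctl reload sshd

# 2. Change the root password (needed later for the recovery console)
passwd root

# 3. Create a non-root user that can sudo
adduser myusername
usermod -a -G sudo myusername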

Setting up ZFS support

The Ubuntu Linux kernels at Scaleway do not have ZFS support, and you need to compile ZFS as a kernel module according to the instructions at https://github.com/scaleway/kernel-tools.

Actually, those instructions are apparently obsolete now with newer versions of the Linux kernel, and you need to compile both spl and zfs manually, then install them.

Naturally, when you compile spl and zfs, you can create .deb packages that can be installed in a nice and clean way. However, spl and zfs originally create .rpm packages and then call alien to convert them to .deb packages. There, we hit an alien bug (no pun intended) which gives the error zfs-0.6.5.9-1.aarch64.rpm is for architecture aarch64 ; the package cannot be built on this system, which is weird since we are working on aarch64 in the first place.

The running Linux kernel on Scaleway for these ARM64 SoCs has the following important files, at http://mirror.scaleway.com/kernel/aarch64/4.9.23-std-1/

Therefore, run as root the following:

# Determine versions
arch="$(uname -m)"
release="$(uname -r)"
upstream="${release%%-*}"
local="${release#*-}"

# Get kernel sources
mkdir -p /usr/src
wget -O "/usr/src/linux-${upstream}.tar.xz" "https://cdn.kernel.org/pub/linux/kernel/v4.x/linux-${upstream}.tar.xz"
tar xf "/usr/src/linux-${upstream}.tar.xz" -C /usr/src/
ln -fns "/usr/src/linux-${upstream}" /usr/src/linux
ln -fns "/usr/src/linux-${upstream}" "/lib/modules/${release}/build"

# Get the kernel's .config and Module.symvers files
wget -O "/usr/src/linux/.config" "http://mirror.scaleway.com/kernel/${arch}/${release}/config"
wget -O /usr/src/linux/Module.symvers "http://mirror.scaleway.com/kernel/${arch}/${release}/Module.symvers"

# Set the LOCALVERSION to the locally running local version (or edit the file manually)
printf 'CONFIG_LOCALVERSION="%s"\n' "${local:+-$local}" >> /usr/src/linux/.config

# Let's get ready to compile. The following are essential for the kernel module compilation.
apt install -y build-essential
apt install -y libssl-dev
make -C /usr/src/linux prepare modules_prepare

# Now, let's grab the latest spl and zfs (see http://zfsonlinux.org/).
cd /usr/src/
wget https://github.com/zfsonlinux/zfs/releases/download/zfs-0.6.5.9/spl-0.6.5.9.tar.gz
wget https://github.com/zfsonlinux/zfs/releases/download/zfs-0.6.5.9/zfs-0.6.5.9.tar.gz

# Install some dev packages that are needed for spl and zfs,
apt install -y uuid-dev
apt install -y dh-autoreconf
# Let's do spl first
tar xvfa spl-0.6.5.9.tar.gz
cd spl-0.6.5.9/
./autogen.sh
./configure      # Takes about 2 minutes
make             # Takes about 1:10 minutes
make install
cd ..

# Let's do zfs next
tar xvfa zfs-0.6.5.9.tar.gz
cd zfs-0.6.5.9/
./autogen.sh
./configure      # Takes about 6:10 minutes
make             # Takes about 13:20 minutes
make install

# Let's get ZFS loaded
depmod -a
ldconfig
modprobe zfs
zfs list
zpool list

And that’s it! The last two commands will show that there are no datasets or pools available (yet), meaning that it all works.
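
If you want an extra sanity check that the module really works, here is a quick test that creates and destroys a throwaway pool on a loop file (an assumption: about 1GB free in /tmp):

truncate -s 1G /tmp/zfstest.img
zpool create testpool /tmp/zfstest.img
zpool status testpool
zpool destroy testpool
rm /tmp/zfstest.img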

Setting up LXD

We are going to use a file (formatted with ZFS) as the storage for LXD. Let’s check what space we have left for this (from the 50GB disk),

root@scw-ubuntu-arm64:~# df -h /
Filesystem      Size  Used Avail Use% Mounted on
/dev/vda         46G  2.0G   42G   5% /

Initially, it was only 800MB used, now it is 2GB used. Let’s allocate 30GB for LXD.

LXD is not preinstalled on the Scaleway image (other VPS providers have LXD already installed). Therefore,

apt install lxd

Then, we can run lxd init. There is a weird situation the first time you run lxd init: it takes quite some time for the command to show the first questions (choose storage backend, etc.). In fact, it takes 1:42 minutes before you are prompted with the first question. When you subsequently run lxd init, you get the first question at once. There is quite some work that lxd init does on its first run, and I did not look into what it is.

root@scw-ubuntu-arm64:~# lxd init
Name of the storage backend to use (dir or zfs) [default=zfs]: 
Create a new ZFS pool (yes/no) [default=yes]? 
Name of the new ZFS pool [default=lxd]: 
Would you like to use an existing block device (yes/no) [default=no]? 
Size in GB of the new loop device (1GB minimum) [default=15]: 30
Would you like LXD to be available over the network (yes/no) [default=no]? 
Do you want to configure the LXD bridge (yes/no) [default=yes]? 
Warning: Stopping lxd.service, but it can still be activated by:
  lxd.socket

LXD has been successfully configured.
root@scw-ubuntu-arm64:~#

Now, let’s run lxc list. This will first create the client certificate. There is quite a bit of cryptography going on, and it takes a lot of time.

ubuntu@scw-ubuntu-arm64:~$ time lxc list
Generating a client certificate. This may take a minute...
If this is your first time using LXD, you should also run: sudo lxd init
To start your first container, try: lxc launch ubuntu:16.04

+------+-------+------+------+------+-----------+
| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
+------+-------+------+------+------+-----------+

real    5m25.717s
user    5m25.460s
sys    0m0.372s
ubuntu@scw-ubuntu-arm64:~$

It is weird and warrants closer examination. In any case,

ubuntu@scw-ubuntu-arm64:~$ cat /proc/sys/kernel/random/entropy_avail
2446
ubuntu@scw-ubuntu-arm64:~$

Creating containers

Let’s create a container. We are going to do one step at a time, in order to measure how long each step takes to complete.

ubuntu@scw-ubuntu-arm64:~$ time lxc image copy ubuntu:x local:
Image copied successfully!         

real    1m5.151s
user    0m1.244s
sys    0m0.200s
ubuntu@scw-ubuntu-arm64:~$

Out of the 65 seconds, 25 seconds was the time to download the image and the rest (40 seconds) was for initialization before the prompt was returned.

Let’s see how long it takes to launch a container.

ubuntu@scw-ubuntu-arm64:~$ time lxc launch ubuntu:x c1
Creating c1
Starting c1
error: Error calling 'lxd forkstart c1 /var/lib/lxd/containers /var/log/lxd/c1/lxc.conf': err='exit status 1'
  lxc 20170428125239.730 ERROR lxc_apparmor - lsm/apparmor.c:apparmor_process_label_set:220 - If you really want to start this container, set
  lxc 20170428125239.730 ERROR lxc_apparmor - lsm/apparmor.c:apparmor_process_label_set:221 - lxc.aa_allow_incomplete = 1
  lxc 20170428125239.730 ERROR lxc_apparmor - lsm/apparmor.c:apparmor_process_label_set:222 - in your container configuration file
  lxc 20170428125239.730 ERROR lxc_sync - sync.c:__sync_wait:57 - An error occurred in another process (expected sequence number 5)
  lxc 20170428125239.730 ERROR lxc_start - start.c:__lxc_start:1346 - Failed to spawn container "c1".
  lxc 20170428125240.408 ERROR lxc_conf - conf.c:run_buffer:405 - Script exited with status 1.
  lxc 20170428125240.408 ERROR lxc_start - start.c:lxc_fini:546 - Failed to run lxc.hook.post-stop for container "c1".

Try `lxc info --show-log local:c1` for more info

real    0m21.347s
user    0m0.040s
sys    0m0.048s
ubuntu@scw-ubuntu-arm64:~$

What this means is that the Scaleway Linux kernel does not have all the AppArmor (“aa”) features that LXD requires. And if we want to continue, we must explicitly configure that we are OK with this situation.

What features are missing?

ubuntu@scw-ubuntu-arm64:~$ lxc info --show-log local:c1
Name: c1
Remote: unix:/var/lib/lxd/unix.socket
Architecture: aarch64
Created: 2017/04/28 12:52 UTC
Status: Stopped
Type: persistent
Profiles: default

Log:

            lxc 20170428125239.730 WARN     lxc_apparmor - lsm/apparmor.c:apparmor_process_label_set:218 - Incomplete AppArmor support in your kernel
            lxc 20170428125239.730 ERROR    lxc_apparmor - lsm/apparmor.c:apparmor_process_label_set:220 - If you really want to start this container, set
            lxc 20170428125239.730 ERROR    lxc_apparmor - lsm/apparmor.c:apparmor_process_label_set:221 - lxc.aa_allow_incomplete = 1
            lxc 20170428125239.730 ERROR    lxc_apparmor - lsm/apparmor.c:apparmor_process_label_set:222 - in your container configuration file
            lxc 20170428125239.730 ERROR    lxc_sync - sync.c:__sync_wait:57 - An error occurred in another process (expected sequence number 5)
            lxc 20170428125239.730 ERROR    lxc_start - start.c:__lxc_start:1346 - Failed to spawn container "c1".
            lxc 20170428125240.408 ERROR    lxc_conf - conf.c:run_buffer:405 - Script exited with status 1.
            lxc 20170428125240.408 ERROR    lxc_start - start.c:lxc_fini:546 - Failed to run lxc.hook.post-stop for container "c1".
            lxc 20170428125240.409 WARN     lxc_commands - commands.c:lxc_cmd_rsp_recv:172 - Command get_cgroup failed to receive response: Connection reset by peer.
            lxc 20170428125240.409 WARN     lxc_commands - commands.c:lxc_cmd_rsp_recv:172 - Command get_cgroup failed to receive response: Connection reset by peer.

ubuntu@scw-ubuntu-arm64:~$

Two hints here: some issue with process_label_set, and another with get_cgroup.

Let’s allow incomplete AppArmor support for now, and start the container,

ubuntu@scw-ubuntu-arm64:~$ lxc config set c1 raw.lxc 'lxc.aa_allow_incomplete=1'
ubuntu@scw-ubuntu-arm64:~$ time lxc start c1

real    0m0.577s
user    0m0.016s
sys    0m0.012s
ubuntu@scw-ubuntu-arm64:~$ lxc list
+------+---------+------+------+------------+-----------+
| NAME |  STATE  | IPV4 | IPV6 |    TYPE    | SNAPSHOTS |
+------+---------+------+------+------------+-----------+
| c1   | RUNNING |      |      | PERSISTENT | 0         |
+------+---------+------+------+------------+-----------+
ubuntu@scw-ubuntu-arm64:~$ lxc list
+------+---------+-----------------------+------+------------+-----------+
| NAME |  STATE  |         IPV4          | IPV6 |    TYPE    | SNAPSHOTS |
+------+---------+-----------------------+------+------------+-----------+
| c1   | RUNNING | 10.237.125.217 (eth0) |      | PERSISTENT | 0         |
+------+---------+-----------------------+------+------------+-----------+
ubuntu@scw-ubuntu-arm64:~$

Let’s run nginx in the container.

ubuntu@scw-ubuntu-arm64:~$ lxc exec c1 -- sudo --login --user ubuntu
To run a command as administrator (user "root"), use "sudo <command>".
See "man sudo_root" for details.

ubuntu@c1:~$ sudo apt update
Hit:1 http://ports.ubuntu.com/ubuntu-ports xenial InRelease
...
37 packages can be upgraded. Run 'apt list --upgradable' to see them.
ubuntu@c1:~$ sudo apt install nginx
...
ubuntu@c1:~$ exit
ubuntu@scw-ubuntu-arm64:~$ curl http://10.237.125.217/
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
...
ubuntu@scw-ubuntu-arm64:~$

That’s it! We are running LXD on Scaleway and their new ARM64 servers. The issues above should be fixed in order to have a nicer user experience.

post image

How to initialize LXD again

LXD is the pure-container hypervisor that is pre-installed in Ubuntu 16.04 (or newer) and also available in other GNU/Linux distributions.

When you first configure LXD, you need to make important decisions: decisions about where to store the containers, how big that space will be, and also how to set up networking.

In this post we are going to see how to properly clean up LXD with the aim to initialize it again (lxd init).

If you haven’t used LXD at all, have a look at how to set up LXD on your desktop and come back in order to reinitialize together.

Before initializing again, let’s have a look at what is going on on our system.

What LXD packages have we got installed?

LXD comes in two packages: the lxd package for the hypervisor and the lxd-client package for the client utility. There is an extra package, lxd-tools; however, it is not essential at all.

Let’s check which versions we have installed.

$ apt policy lxd lxd-client
lxd:
  Installed: 2.0.9-0ubuntu1~16.04.2
  Candidate: 2.0.9-0ubuntu1~16.04.2
  Version table:
 *** 2.0.9-0ubuntu1~16.04.2 500
        500 http://gb.archive.ubuntu.com/ubuntu xenial-updates/main amd64 Packages
        100 /var/lib/dpkg/status
     2.0.2-0ubuntu1~16.04.1 500
        500 http://security.ubuntu.com/ubuntu xenial-security/main amd64 Packages
     2.0.0-0ubuntu4 500
        500 http://gb.archive.ubuntu.com/ubuntu xenial/main amd64 Packages
lxd-client:
  Installed: 2.0.9-0ubuntu1~16.04.2
  Candidate: 2.0.9-0ubuntu1~16.04.2
  Version table:
 *** 2.0.9-0ubuntu1~16.04.2 500
        500 http://gb.archive.ubuntu.com/ubuntu xenial-updates/main amd64 Packages
        100 /var/lib/dpkg/status
     2.0.2-0ubuntu1~16.04.1 500
        500 http://security.ubuntu.com/ubuntu xenial-security/main amd64 Packages
     2.0.0-0ubuntu4 500
        500 http://gb.archive.ubuntu.com/ubuntu xenial/main amd64 Packages
$ _

I am running Ubuntu 16.04 LTS, currently updated to 16.04.2. The current version of the LXD package is 2.0.9-0ubuntu1~16.04.2. You can see that there is an older version, which was a security update. And an even older version, version 2.0.0, which was the initial version that Ubuntu 16.04 was released with.

There is a PPA that has even more recent versions of LXD (currently at version 2.11); however, as shown above, we do not have that one enabled here.

We will be uninstalling those two packages in a bit. There is an option to simply uninstall, but also to uninstall with --purge. We need to figure out what LXD means in terms of installed files, in order to decide whether to purge or not.
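
For reference, here is the difference between the two (standard apt usage, nothing LXD-specific):

$ sudo apt remove lxd lxd-client   # removes the packages, keeps configuration files
$ sudo apt purge lxd lxd-client    # removes the packages AND their configuration files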

How are the containers stored and where are they located?

The containers can be stored either

  1. in subdirectories on the root (/) filesystem, located at /var/lib/lxd/containers/. You get this when you configure LXD to use the dir storage backend.
  2. in a loop file that is formatted internally with the ZFS filesystem, located at /var/lib/lxd/zfs.img (or under /var/lib/lxd/disks/ in newer versions). You get this when you configure LXD to use the zfs storage backend (on a loop file and not a block device).
  3. in a block device (partition) that is formatted with ZFS (or btrfs). You get this when you configure LXD to use the zfs storage backend (not on a loop file but on a block device).

Let’s see all three cases!

In the following we assume we have a container called mytest, which is running.

$ lxc list
+--------+---------+----------------------+------+------------+-----------+
|  NAME  |  STATE  |         IPV4         | IPV6 |    TYPE    | SNAPSHOTS |
+--------+---------+----------------------+------+------------+-----------+
| mytest | RUNNING | 10.177.65.166 (eth0) |      | PERSISTENT | 0         |
+--------+---------+----------------------+------+------------+-----------+

Let’s see how it looks depending on the type of the storage backend.

Storage backend: dir

Let’s see the config!

$ lxc config show
config: {}
$ _

We are looking for configuration that refers to storage. We do not see any, therefore, this installation uses the dir storage backend.

Where are the files for the mytest container stored?

$ sudo ls -l /var/lib/lxd/containers/
total 8
drwxr-xr-x+ 4 165536 165536 4096 Μάρ  15 23:28 mytest
$ sudo ls -l /var/lib/lxd/containers/mytest/
total 12
-rw-r--r--  1 root   root   1566 Μάρ   8 05:16 metadata.yaml
drwxr-xr-x 22 165536 165536 4096 Μάρ  15 23:28 rootfs
drwxr-xr-x  2 root   root   4096 Μάρ   8 05:16 templates
$ _

Each container can be found in /var/lib/lxd/containers/, in a subdirectory with the same name as the container.

Inside there, in the rootfs/ directory, we can find the filesystem of the container.

Storage backend: zfs

Let’s see how the config looks!

$ lxc config show
config:
  storage.zfs_pool_name: lxd
$

Okay, we are using ZFS for the storage backend. It is not clear yet whether we are using a loop file or a block device. How do we find that? With zpool status.

$ sudo zpool status
  pool: lxd
 state: ONLINE
  scan: none requested
config:

    NAME                    STATE     READ WRITE CKSUM
    lxd                     ONLINE       0     0     0
      /var/lib/lxd/zfs.img  ONLINE       0     0     0

errors: No known data errors

In the above example, the ZFS filesystem is stored in a loop file, located at /var/lib/lxd/zfs.img

However, in the following example,

$ sudo zpool status
  pool: lxd
 state: ONLINE
  scan: none requested
config:

    NAME        STATE     READ WRITE CKSUM
    lxd         ONLINE       0     0     0
      sda8      ONLINE       0     0     0

errors: No known data errors

the ZFS filesystem is located in a block device, in /dev/sda8.

Here is how the container files look with ZFS (either on a loop file or on a block device),

$ sudo ls -l /var/lib/lxd/containers/
total 5
lrwxrwxrwx 1 root   root     34 Mar 15 23:43 mytest -> /var/lib/lxd/containers/mytest.zfs
drwxr-xr-x 4 165536 165536    5 Mar 15 23:43 mytest.zfs
$ sudo ls -l /var/lib/lxd/containers/mytest/
total 4
-rw-r--r--  1 root   root   1566 Mar  8 05:16 metadata.yaml
drwxr-xr-x 22 165536 165536   22 Mar 15 23:43 rootfs
drwxr-xr-x  2 root   root      8 Mar  8 05:16 templates
$ mount | grep mytest.zfs
lxd/containers/mytest on /var/lib/lxd/containers/mytest.zfs type zfs (rw,relatime,xattr,noacl)
$ _

How to clean up the storage backend

When we try to run lxd init without cleaning up our storage, we get the following error,

$ lxd init
LXD init cannot be used at this time.
However if all you want to do is reconfigure the network,
you can still do so by running "sudo dpkg-reconfigure -p medium lxd"

error: You have existing containers or images. lxd init requires an empty LXD.
$ _

Yep, we need to clean up both the containers and any cached images.

Cleaning up the containers

We are going to list the containers, then stop them, and finally delete them. Until the list is empty.

$ lxc list
+--------+---------+----------------------+------+------------+-----------+
|  NAME  |  STATE  |         IPV4         | IPV6 |    TYPE    | SNAPSHOTS |
+--------+---------+----------------------+------+------------+-----------+
| mytest | RUNNING | 10.177.65.205 (eth0) |      | PERSISTENT | 0         |
+--------+---------+----------------------+------+------------+-----------+
$ lxc stop mytest
$ lxc delete mytest
$ lxc list
+------+-------+------+------+------+-----------+
| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
+------+-------+------+------+------+-----------+
$ _

It’s empty now!

Cleaning up the images

We are going to list the cached images, then delete them. Until the list is empty!

$ lxc image list
+-------+--------------+--------+---------------------------------------------+--------+----------+-------------------------------+
| ALIAS | FINGERPRINT  | PUBLIC |                 DESCRIPTION                 |  ARCH  |   SIZE   |          UPLOAD DATE          |
+-------+--------------+--------+---------------------------------------------+--------+----------+-------------------------------+
|       | 2cab90c0c342 | no     | ubuntu 16.04 LTS amd64 (release) (20170307) | x86_64 | 146.32MB | Mar 15, 2017 at 10:02pm (UTC) |
+-------+--------------+--------+---------------------------------------------+--------+----------+-------------------------------+
$ lxc image delete 2cab90c0c342
$ lxc image list
+-------+-------------+--------+-------------+------+------+-------------+
| ALIAS | FINGERPRINT | PUBLIC | DESCRIPTION | ARCH | SIZE | UPLOAD DATE |
+-------+-------------+--------+-------------+------+------+-------------+
$ _
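
If you have many containers or images, deleting them one by one gets tedious. Here is a small loop sketch that empties both lists, assuming a recent lxc client that supports --format csv (older clients may not have it):

$ for c in $(lxc list --format csv -c n); do lxc stop "$c" --force; lxc delete "$c"; done
$ for f in $(lxc image list --format csv -c f); do lxc image delete "$f"; done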

Clearing up the ZFS storage

If we are using ZFS, here is how we clear up the ZFS pool.

First, we need to remove any reference of the ZFS pool from LXD. We just need to unset the configuration directive storage.zfs_pool_name.

$ lxc config show
config:
  storage.zfs_pool_name: lxd
$ lxc config unset storage.zfs_pool_name
$ lxc config show
config: {}
$ _

Then, we can destroy the ZFS pool.

$ sudo zpool list
NAME   SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
lxd   2,78G   664K  2,78G         -     7%     0%  1.00x  ONLINE  -
$ sudo zfs list
NAME             USED  AVAIL  REFER  MOUNTPOINT
lxd              544K  2,69G    19K  none
lxd/containers    19K  2,69G    19K  none
lxd/images        19K  2,69G    19K  none
$ sudo zpool destroy lxd
$ sudo zpool list
no pools available
$ sudo zfs list
no datasets available
$ _

Running “lxd init” again

At this point, we are able to run lxd init in order to initialize LXD again.

Common errors

Here is a collection of errors that I encountered when running lxd init. These errors appear if we did not clean up properly, as described earlier in this post.

I had been trying lots of variations, including different versions of LXD. You probably need to try hard to get these errors.

error: Provided ZFS pool (or dataset) isn’t empty

Here is how it looks:

$ sudo lxd init
Name of the storage backend to use (dir or zfs) [default=zfs]: zfs
Create a new ZFS pool (yes/no) [default=yes]? no
Name of the existing ZFS pool or dataset: lxd
Would you like LXD to be available over the network (yes/no) [default=no]? no
Do you want to configure the LXD bridge (yes/no) [default=yes]? no
error: Provided ZFS pool (or dataset) isn't empty
Exit 1

Whaaaat??? Something is wrong. The ZFS pool is not empty? What’s inside the ZFS pool?

$ sudo zfs list
NAME             USED  AVAIL  REFER  MOUNTPOINT
lxd              642K  14,4G    19K  none
lxd/containers    19K  14,4G    19K  none
lxd/images        19K  14,4G    19K  none

Okay, it’s just the two child datasets that are left over. Let’s erase them!

$ sudo zfs destroy lxd/containers
$ sudo zfs destroy lxd/images
$ sudo zfs list
NAME   USED  AVAIL  REFER  MOUNTPOINT
lxd    349K  14,4G    19K  none
$ _

Nice! Now let’s run lxd init.

$ sudo lxd init
Name of the storage backend to use (dir or zfs) [default=zfs]: zfs
Create a new ZFS pool (yes/no) [default=yes]? no
Name of the existing ZFS pool or dataset: lxd
Would you like LXD to be available over the network (yes/no) [default=no]? no
Do you want to configure the LXD bridge (yes/no) [default=yes]? yes
Warning: Stopping lxd.service, but it can still be activated by:
  lxd.socket
LXD has been successfully configured.
$ _

That’s it! LXD is freshly configured!

error: Failed to create the ZFS pool: cannot create ‘lxd’: pool already exists

Here is how it looks,

$ sudo lxd init
Name of the storage backend to use (dir or zfs) [default=zfs]: 
Create a new ZFS pool (yes/no) [default=yes]? 
Name of the new ZFS pool [default=lxd]: 
Would you like to use an existing block device (yes/no) [default=no]? yes
Path to the existing block device: /dev/sdb9 
Would you like LXD to be available over the network (yes/no) [default=no]? 
Do you want to configure the LXD bridge (yes/no) [default=yes]? 
error: Failed to create the ZFS pool: cannot create 'lxd': pool already exists
$ _

Here we forgot to destroy the ZFS pool called lxd. See earlier in this post on how to destroy the pool so that lxd init can recreate it.

Permission denied, are you in the lxd group?

This is a common error when you first install the lxd package, because your non-root account needs to log out and log in again for the membership in the lxd Unix group to take effect.

However, we got this error when we were casually uninstalling and reinstalling the lxd package, and doing nasty tests. Let’s see more details.

$ lxc list
Permission denied, are you in the lxd group?
Exit 1
$ groups myusername
myusername : myusername adm cdrom sudo plugdev lpadmin lxd
$ newgrp lxd
$ lxc list
Permission denied, are you in the lxd group?
Exit 1
$ _

Whaaat!?! Permission denied and we are asked whether we are in the lxd group? We are members of the lxd group!

Well, the problem is that the Unix socket that allows non-root users (members of the lxd Unix group) to access LXD does not have the proper ownership.

$ ls -l /var/lib/lxd/unix.socket 
srw-rw---- 1 root root 0 Mar 15 23:20 /var/lib/lxd/unix.socket
$ sudo chown :lxd /var/lib/lxd/unix.socket 
$ ls -l /var/lib/lxd/unix.socket 
srw-rw---- 1 root lxd 0 Mar 15 23:20 /var/lib/lxd/unix.socket
$ lxc list
+------+-------+------+------+------+-----------+
| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
+------+-------+------+------+------+-----------+
$ _

The group of the Unix socket /var/lib/lxd/unix.socket was not set to the proper value lxd, therefore we set it ourselves. And then the LXD commands work just fine with our non-root user account!

error: Error checking if a pool is already in use: Failed to list ZFS filesystems: cannot open ‘lxd’: dataset does not exist

Here is a tricky error.

$ sudo lxd init
Name of the storage backend to use (dir or zfs) [default=zfs]: 
Create a new ZFS pool (yes/no) [default=yes]? 
Name of the new ZFS pool [default=lxd]: lxd2
Would you like to use an existing block device (yes/no) [default=no]? 
Size in GB of the new loop device (1GB minimum) [default=15]: 
Would you like LXD to be available over the network (yes/no) [default=no]? 
Do you want to configure the LXD bridge (yes/no) [default=yes]? 
error: Error checking if a pool is already in use: Failed to list ZFS filesystems: cannot open 'lxd': dataset does not exist
$ _

We cleaned up the ZFS pool just fine and we are running lxd init. But we got an error relating to the lxd pool that is already gone. Whaat?!?

What happened is that, in this case, we forgot to FIRST unset the configuration option in LXD regarding the ZFS pool. We simply forgot to run lxc config unset storage.zfs_pool_name.

It’s fine then, let’s unset it now and go on with life.

$ lxc config unset storage.zfs_pool_name
error: Error checking if a pool is already in use: Failed to list ZFS filesystems: cannot open 'lxd': dataset does not exist
Exit 1
$ _

Alright, we really messed up!

There are two ways to move forward. One is to rm -fr /var/lib/lxd/ and start over.

The other way is to edit the /var/lib/lxd/lxd.db SQLite3 database file and change the configuration setting in there. Here is how it works.

First, install the sqlitebrowser package and run sudo sqlitebrowser /var/lib/lxd/lxd.db

Second, get to the config table in sqlitebrowser as shown below.

Third, double-click on the value field (which as shown, says lxd) and clear it so it is shown as empty.

Fourth, click on File→Close Database and select to save the database. Let’s see now!

$ lxc config show
config:
  storage.zfs_pool_name: lxd

What?

Fifth, we need to restart the LXD service so that LXD reads the configuration again.

$ sudo systemctl restart lxd.service
$ lxc config show
config: {}
$ _
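
As an aside, the same fix can be performed without a GUI, using the sqlite3 command-line tool. Here is a sketch; it assumes the config table layout of the LXD 2.0.x lxd.db, so double-check on your version:

$ sudo apt install sqlite3
$ sudo sqlite3 /var/lib/lxd/lxd.db "DELETE FROM config WHERE key='storage.zfs_pool_name';"
$ sudo systemctl restart lxd.service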

That’s it! We are good to go!

post image

How to install neo4j in an LXD container on Ubuntu

Neo4j is a different type of database; it is a graph database. It is quite cool, and it is worth spending the time to learn how it works.

The main benefit of a graph database is that the information is interconnected as a graph, which allows you to execute complex queries very quickly.

One of the sample databases in Neo4j is (a big part of) the content of IMDb.com (the movie database). Here is a description of some possible queries:

  • Find actors who worked with Gene Hackman, but not when he was also working with Robin Williams in the same movie.
  • Who are the five busiest actors?
  • Return the count of movies in which an actor and director have jointly worked

In this post

  1. we install Neo4j in an LXD container on Ubuntu (or any other GNU/Linux distribution that has installation packages)
  2. set it up so we can access Neo4j from our Ubuntu desktop browser
  3. start the cool online tutorial for Neo4j, which you can complete on your own
  4. remove the container (if you really wish!) in order to clean up the space

Creating an LXD container

See Trying out LXD containers on our Ubuntu in order to make the initial (one-time) configuration of LXD on your Ubuntu desktop.

Then, let’s start with creating a container for neo4j:

$ lxc launch ubuntu:x neo4j
Creating neo4j
Starting neo4j
$ _

Here we launched a container named neo4j, that runs Ubuntu 16.04 (Xenial, hence ubuntu:x).

Let’s see the container details:

$ lxc list
+----------+---------+----------------------+------+------------+-----------+
| NAME     | STATE   | IPV4                 | IPV6 | TYPE       | SNAPSHOTS |
+----------+---------+----------------------+------+------------+-----------+
| neo4j    | RUNNING | 10.60.117.91 (eth0)  |      | PERSISTENT | 0         |
+----------+---------+----------------------+------+------------+-----------+
$ _

It takes a few seconds for a new container to launch. Here, the container is in the RUNNING state, and also has a private IP address. It’s good to go!

Connecting to the LXD container

Let’s get a shell in the new neo4j LXD container.

$ lxc exec neo4j -- sudo --login --user ubuntu
To run a command as administrator (user "root"), use "sudo <command>".
See "man sudo_root" for details.

ubuntu@neo4j:~$ sudo apt update
Hit:1 http://archive.ubuntu.com/ubuntu xenial InRelease
Get:2 http://security.ubuntu.com/ubuntu xenial-security InRelease [102 kB]
Get:3 http://archive.ubuntu.com/ubuntu xenial-updates InRelease [102 kB] 
Get:4 http://archive.ubuntu.com/ubuntu xenial-backports InRelease [102 kB] 
Get:5 http://archive.ubuntu.com/ubuntu xenial/main Sources [868 kB] 
Get:6 http://security.ubuntu.com/ubuntu xenial-security/main Sources [61.1 kB]
Get:7 http://security.ubuntu.com/ubuntu xenial-security/restricted Sources [2,288 B]
Get:8 http://archive.ubuntu.com/ubuntu xenial/restricted Sources [4,808 B] 
Get:9 http://archive.ubuntu.com/ubuntu xenial/universe Sources [7,728 kB] 
Get:10 http://security.ubuntu.com/ubuntu xenial-security/universe Sources [20.9 kB]
Get:11 http://security.ubuntu.com/ubuntu xenial-security/multiverse Sources [1,148 B]
Get:12 http://security.ubuntu.com/ubuntu xenial-security/main amd64 Packages [219 kB]
Get:13 http://security.ubuntu.com/ubuntu xenial-security/main Translation-en [92.0 kB]
Get:14 http://security.ubuntu.com/ubuntu xenial-security/universe amd64 Packages [79.1 kB]
Get:15 http://security.ubuntu.com/ubuntu xenial-security/universe Translation-en [43.9 kB]
Get:16 http://archive.ubuntu.com/ubuntu xenial/multiverse Sources [179 kB] 
Get:17 http://archive.ubuntu.com/ubuntu xenial-updates/main Sources [234 kB] 
Get:18 http://archive.ubuntu.com/ubuntu xenial-updates/restricted Sources [2,688 B] 
Get:19 http://archive.ubuntu.com/ubuntu xenial-updates/universe Sources [134 kB] 
Get:20 http://archive.ubuntu.com/ubuntu xenial-updates/multiverse Sources [4,556 B] 
Get:21 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 Packages [485 kB] 
Get:22 http://archive.ubuntu.com/ubuntu xenial-updates/main Translation-en [193 kB] 
Get:23 http://archive.ubuntu.com/ubuntu xenial-updates/universe amd64 Packages [411 kB] 
Get:24 http://archive.ubuntu.com/ubuntu xenial-updates/universe Translation-en [155 kB] 
Get:25 http://archive.ubuntu.com/ubuntu xenial-backports/main Sources [3,200 B] 
Get:26 http://archive.ubuntu.com/ubuntu xenial-backports/universe Sources [1,868 B] 
Get:27 http://archive.ubuntu.com/ubuntu xenial-backports/main amd64 Packages [4,672 B] 
Get:28 http://archive.ubuntu.com/ubuntu xenial-backports/main Translation-en [3,200 B] 
Get:29 http://archive.ubuntu.com/ubuntu xenial-backports/universe amd64 Packages [2,512 B] 
Get:30 http://archive.ubuntu.com/ubuntu xenial-backports/universe Translation-en [1,216 B] 
Fetched 11.2 MB in 8s (1,270 kB/s) 
Reading package lists... Done
Building dependency tree 
Reading state information... Done
2 packages can be upgraded. Run 'apt list --upgradable' to see them.
ubuntu@neo4j:~$ exit
logout
$ _

The command we used to get a shell is this: sudo --login --user ubuntu

We instructed LXD to execute (exec), in the neo4j container, the command that appears after the -- separator.

The images for the LXD containers have both a root account and a user account, which in the Ubuntu images is called ubuntu. Both accounts are locked (no default password is set). lxc exec runs commands in the containers as root; therefore, the sudo --login --user ubuntu command obviously runs without asking for a password. This sudo command creates a login shell for the specified user, user ubuntu.

Once we are connected to the container as user ubuntu, we can then run commands as root simply by prefixing them with sudo. Since user ubuntu is in /etc/sudoers, no password is asked for. That is the reason why sudo apt update was run earlier without asking for a password.
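
As an aside, any single command can be run in the container in the same way; for example (a trivial illustration),

$ lxc exec neo4j -- uname -a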

The Ubuntu LXD containers auto-update themselves by running unattended-upgrade, which means that we do not need to run sudo apt upgrade. We do run sudo apt update, though, just to get an updated list of available packages and avoid any errors when installing, in case the package list changed recently.

After we updated the package list, we exit the container with exit.

Installing neo4j

This is the download page for Neo4j, https://neo4j.com/download/ and we click to get the community edition.

We download Neo4j (the Linux (tar) version) to our desktop. When we tried this, version 3.1.1 was available.

We downloaded the file and it can be found in ~/Downloads/ (or the localized name). Let’s copy it to the container,

$ cd ~/Downloads/
$ ls -l neo4j-community-3.1.1-unix.tar.gz 
-rw-rw-r-- 1 user user 77401077 Mar 1 16:04 neo4j-community-3.1.1-unix.tar.gz
$ lxc file push neo4j-community-3.1.1-unix.tar.gz neo4j/home/ubuntu/
$ _

The tarball is about 80MB, and we use lxc file push to copy it inside the neo4j container, into the directory /home/ubuntu/. Note that neo4j/home/ubuntu/ ends with a / character, which specifies that the target is a directory. If you omit it, you get an error.

Let’s deal with the tarball inside the container,

$ lxc exec neo4j -- sudo --login --user ubuntu
ubuntu@neo4j:~$ ls -l
total 151401
-rw-rw-r-- 1 ubuntu ubuntu 77401077 Mar  1 12:00 neo4j-community-3.1.1-unix.tar.gz
ubuntu@neo4j:~$ tar xfvz neo4j-community-3.1.1-unix.tar.gz 
neo4j-community-3.1.1/
neo4j-community-3.1.1/bin/
neo4j-community-3.1.1/data/
neo4j-community-3.1.1/data/databases/

[...]
neo4j-community-3.1.1/lib/mimepull-1.9.3.jar
ubuntu@neo4j:~$

The files are now in the container, let’s run this thing!

Running Neo4j

The commands to manage Neo4j are in the bin/ subdirectory,

ubuntu@neo4j:~$ ls -l neo4j-community-3.1.1/bin/
total 24
-rwxr-xr-x 1 ubuntu ubuntu 1624 Jan  5 12:03 cypher-shell
-rwxr-xr-x 1 ubuntu ubuntu 7454 Jan 17 17:52 neo4j
-rwxr-xr-x 1 ubuntu ubuntu 1180 Jan 17 17:52 neo4j-admin
-rwxr-xr-x 1 ubuntu ubuntu 1159 Jan 17 17:52 neo4j-import
-rwxr-xr-x 1 ubuntu ubuntu 5120 Jan 17 17:52 neo4j-shared.sh
-rwxr-xr-x 1 ubuntu ubuntu 1093 Jan 17 17:52 neo4j-shell
drwxr-xr-x 2 ubuntu ubuntu    4 Mar  1 12:02 tools
ubuntu@neo4j:~$

According to Running Neo4j, we need to run “neo4j start”. Let’s do it.

ubuntu@neo4j:~$ neo4j-community-3.1.1/bin/neo4j start
ERROR: Unable to find Java executable.
* Please use Oracle(R) Java(TM) 8, OpenJDK(TM) or IBM J9 to run Neo4j Server.
* Please see http://docs.neo4j.org/ for Neo4j Server installation instructions.
ubuntu@neo4j:~$

We need Java, and the documentation actually said so. We just need the headless JDK, since we are accessing the UI from our desktop browser.

ubuntu@neo4j:~$ sudo apt install default-jdk-headless
Reading package lists... Done
Building dependency tree 
Reading state information... Done
The following additional packages will be installed:
 ca-certificates-java default-jre-headless fontconfig-config fonts-dejavu-core java-common libavahi-client3 libavahi-common-data libavahi-common3 libcups2 libfontconfig1 libfreetype6 libjpeg-turbo8 libjpeg8
 liblcms2-2 libnspr4 libnss3 libnss3-nssdb libpcsclite1 libxi6 libxrender1 libxtst6 openjdk-8-jdk-headless openjdk-8-jre-headless x11-common
[...]
Setting up default-jdk-headless (2:1.8-56ubuntu2) ...
Processing triggers for libc-bin (2.23-0ubuntu5) ...
Processing triggers for systemd (229-4ubuntu16) ...
Processing triggers for ureadahead (0.100.0-19) ...
ubuntu@neo4j:~$

Now, we are ready to start Neo4j!

ubuntu@neo4j:~$ neo4j-community-3.1.1/bin/neo4j start
Starting Neo4j.
Started neo4j (pid 4123). By default, it is available at http://localhost:7474/
There may be a short delay until the server is ready.
See /home/ubuntu/neo4j-community-3.1.1/logs/neo4j.log for current status.
ubuntu@neo4j:~$ tail /home/ubuntu/neo4j-community-3.1.1/logs/neo4j.log
nohup: ignoring input
2017-03-01 12:22:41.060+0000 INFO  No SSL certificate found, generating a self-signed certificate..
2017-03-01 12:22:41.517+0000 INFO  Starting...
2017-03-01 12:22:41.914+0000 INFO  Bolt enabled on localhost:7687.
2017-03-01 12:22:43.622+0000 INFO  Started.
2017-03-01 12:22:44.375+0000 INFO  Remote interface available at http://localhost:7474/
ubuntu@neo4j:~$

So, Neo4j is running just fine, but it is bound to localhost, which makes it inaccessible to our desktop browser. Let’s verify again,

ubuntu@neo4j:~$ sudo lsof -i
COMMAND   PID   USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
dhclient  225   root    6u  IPv4 110811      0t0  UDP *:bootpc 
sshd      328   root    3u  IPv4 111079      0t0  TCP *:ssh (LISTEN)
sshd      328   root    4u  IPv6 111081      0t0  TCP *:ssh (LISTEN)
java     4123 ubuntu  210u  IPv6 121981      0t0  TCP localhost:7687 (LISTEN)
java     4123 ubuntu  212u  IPv6 121991      0t0  TCP localhost:7474 (LISTEN)
java     4123 ubuntu  220u  IPv6 121072      0t0  TCP localhost:7473 (LISTEN)
ubuntu@neo4j:~$

What we actually need is for Neo4j to bind to all network interfaces, so that it becomes accessible to our desktop browser. Being in a container, all network interfaces means binding the only other interface, the one on the private network that LXD created for us:

ubuntu@neo4j:~$ ifconfig eth0
eth0      Link encap:Ethernet  HWaddr 00:16:3e:48:b7:85  
          inet addr:10.60.117.21  Bcast:10.60.117.255  Mask:255.255.255.0
          inet6 addr: fe80::216:3eff:fe48:b785/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:30029 errors:0 dropped:0 overruns:0 frame:0
          TX packets:17553 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:51824701 (51.8 MB)  TX bytes:1193503 (1.1 MB)
ubuntu@neo4j:~$

Where do we look in the configuration files of Neo4j to get it to bind to all network interfaces?

We look at the Neo4j documentation on Configuring the connectors, and we see that we need to edit the configuration file neo4j-community-3.1.1/conf/neo4j.conf

We can see that there is an overall configuration parameter for the connectors, and we can set default_listen_address to 0.0.0.0. In networking terms, 0.0.0.0 means that we want the process to bind to all network interfaces. Let’s remember that, in our case of LXD containers residing on a private network, this is OK.

# With default configuration Neo4j only accepts local connections.
# To accept non-local connections, uncomment this line:
#dbms.connectors.default_listen_address=0.0.0.0

to

# With default configuration Neo4j only accepts local connections.
# To accept non-local connections, uncomment this line:
dbms.connectors.default_listen_address=0.0.0.0
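
If you prefer to make this edit from the shell, a one-line sed does it (a sketch, assuming the stock configuration file):

ubuntu@neo4j:~$ sed -i 's/^#dbms.connectors.default_listen_address=0.0.0.0/dbms.connectors.default_listen_address=0.0.0.0/' neo4j-community-3.1.1/conf/neo4j.conf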

Let’s restart Neo4j and check whether it looks OK:

ubuntu@neo4j:~$ neo4j-community-3.1.1/bin/neo4j restart
Stopping Neo4j.. stopped
Starting Neo4j.
Started neo4j (pid 4711). By default, it is available at http://localhost:7474/
There may be a short delay until the server is ready.
See /home/ubuntu/neo4j-community-3.1.1/logs/neo4j.log for current status.
ubuntu@neo4j:~$ tail /home/ubuntu/neo4j-community-3.1.1/logs/neo4j.log
2017-03-01 12:57:52.839+0000 INFO  Started.
2017-03-01 12:57:53.624+0000 INFO  Remote interface available at http://localhost:7474/
2017-03-01 13:01:58.566+0000 INFO  Neo4j Server shutdown initiated by request
2017-03-01 13:01:58.575+0000 INFO  Stopping...
2017-03-01 13:01:58.620+0000 INFO  Stopped.
nohup: ignoring input
2017-03-01 13:02:00.088+0000 INFO  Starting...
2017-03-01 13:02:01.310+0000 INFO  Bolt enabled on 0.0.0.0:7687.
2017-03-01 13:02:02.928+0000 INFO  Started.
2017-03-01 13:02:03.599+0000 INFO  Remote interface available at http://localhost:7474/
ubuntu@neo4j:~$ sudo lsof -n -i
COMMAND   PID   USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
dhclient  225   root    6u  IPv4 110811      0t0  UDP *:bootpc 
sshd      328   root    3u  IPv4 111079      0t0  TCP *:ssh (LISTEN)
sshd      328   root    4u  IPv6 111081      0t0  TCP *:ssh (LISTEN)
java     4711 ubuntu  210u  IPv6 153406      0t0  TCP *:7687 (LISTEN)
java     4711 ubuntu  212u  IPv6 153415      0t0  TCP *:7474 (LISTEN)
java     4711 ubuntu  220u  IPv6 153419      0t0  TCP *:7473 (LISTEN)
ubuntu@neo4j:~$

The log messages still say that Neo4j is accessible at http://localhost:7474/, which is factually correct. However, lsof shows us that it is now bound to all network interfaces (the * means all).

Loading up Neo4j in the browser

We know already that in our case, the private IP address of the neo4j LXD container is 10.60.117.21. Let’s visit http://10.60.117.21:7474/ on our desktop Web browser!
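
Before switching to the browser, a quick check from the desktop confirms that the port is reachable (a sanity check; -I requests only the HTTP headers):

$ curl -I http://10.60.117.21:7474/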

It works! It asks us to log in using the default username neo4j with the default password neo4j. Then, it will ask us to change the password to something else and we are presented with the initial page of Neo4j,

The $ ▊ prompt is there for you to type instructions. According to the online tutorial at https://neo4j.com/graphacademy/online-training/introduction-graph-databases/ you can start the tutorial by typing :play movie graph and pressing the Run button. Therefore, load the tutorial in one browser tab, and in the other tab run the commands on the Neo4j server of the LXD container!

Once you are done

Once you have completed the tutorial, you can keep the container in order to try out more tutorials and learn more about neo4j.

However, if you want to remove this LXD container, it can be done by running:

$ lxc stop neo4j
$ lxc delete neo4j
$ lxc list
+----------+---------+----------------------+------+------------+-----------+
|   NAME   |  STATE  |         IPV4         | IPV6 |    TYPE    | SNAPSHOTS |
+----------+---------+----------------------+------+------------+-----------+
+----------+---------+----------------------+------+------------+-----------+
$ _

That’s it. The container is gone and LXD is ready for you to follow more LXD tutorials and create more containers!

post image

How to create a snap for timg with snapcraft on Ubuntu

In this post we are going to see how to create a snap for a utility called timg. If this is the very first time you heard about snap installation packages, see How to create your first snap.

Today we learn the following items about creating snaps with snapcraft,

  • the source of timg comes with a handmade Makefile, which requires us to tinker a bit with the make plugin parameters.
  • the program is written in C++ and requires a couple of external libraries. We make sure their relevant code has been added to the snap.
  • should we select strict confinement or classic confinement? We discuss how to choose between the two.

So, what does this timg do?

Background

The Linux terminal emulators have become quite cool and they have colors!

Apart from the standard colors, most terminal emulators (like the GNOME Terminal depicted above) support true color (16 million colors!).

Yes! True color in the terminal emulators! Get the AWK code from the page True Colour (16 million colours) support in various terminal applications and terminals and test it yourself. You may notice in the code that it uses some escape sequences to specify the RGB value (256*256*256 ~= 16 million colors).

What is timg?

Alright, back to the point of the post. What does timg do? Well, it takes an image as input and resizes it down to the character dimensions of your terminal window (for example, 80×25). Then it shows the image in color, just by using colored block characters, at whatever resolution the terminal window provides!

This one is the Ubuntu logo, a PNG image file, shown as colored block characters.

And here is a flower, by @Doug8888.

timg would be especially helpful if you are connecting remotely to your server, minding your own business, and want to see what an image file looks like. Bam!
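
Usage is as simple as pointing it at an image file (a hypothetical file name):

$ timg photo.jpg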

Apart from static images, timg can also display animated gifs! Let’s start snapping!

Getting familiar with the timg source

The source of timg can be found at https://github.com/hzeller/timg. Let’s try to compile it manually in order to get an idea of what requirements it may have.

The Makefile is in the src/ subdirectory and not in the root directory of the project. On the GitHub page they say to install these two development packages (GraphicsMagick++, WebP) and then make would simply work and generate the executable. In the screenshot I show, for brevity, that I have them already installed (after I read the Readme.md file about this). A sketch of this manual compilation is shown right after the list below.

Therefore, four mental notes when authoring the snapcraft.yaml file:

  1. The Makefile is in a subdirectory, src/, and not in the root directory of the project.
  2. The program requires two development libraries in order to compile.
  3. In order for timg to run as a snap, we need to somehow bundle these two libraries inside the snap (or link them statically).
  4. timg is written in C++ and requires g++. Let’s instruct the snapcraft.yaml file to check that the build-essential metapackage is installed before compiling.
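
For reference, the manual compilation looks roughly like this (a sketch; the package names are the ones we identify later in this post):

$ sudo apt install build-essential libgraphicsmagick++1-dev libwebp-dev
$ git clone https://github.com/hzeller/timg.git
$ cd timg/src
$ make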

Starting off with snapcraft

Let’s create a directory timg-snap/, and run snapcraft init in there in order to create a skeleton snapcraft.yaml to work on.

ubuntu@snaps:~$ mkdir timg-snap
ubuntu@snaps:~$ cd timg-snap/
ubuntu@snaps:~/timg-snap$ snapcraft init
Created snap/snapcraft.yaml.
Edit the file to your liking or run `snapcraft` to get started
ubuntu@snaps:~/timg-snap$ cat snap/snapcraft.yaml 
name: my-snap-name # you probably want to 'snapcraft register <name>'
version: '0.1' # just for humans, typically '1.2+git' or '1.3.2'
summary: Single-line elevator pitch for your amazing snap # 79 char long summary
description: |
  This is my-snap's description. You have a paragraph or two to tell the
  most important story about your snap. Keep it under 100 words though,
  we live in tweetspace and your description wants to look good in the snap
  store.

grade: devel # must be 'stable' to release into candidate/stable channels
confinement: devmode # use 'strict' once you have the right plugs and slots

parts:
  my-part:
    # See 'snapcraft plugins'
    plugin: nil

Filling in the metadata

The upper part of a snapcraft.yaml configuration file is the metadata. We fill them in in one go, and they are the easy part. The metadata consist of the following fields

  1. name, the name of the snap as it will be known publicly at the Ubuntu Store.
  2. version, the version of the snap. Can be an appropriate branch or tag in the source code repository, or the current date if no branch or tag exist.
  3. summary, a short description under 80 characters.
  4. description, a bit longer description, under 100 words.
  5. grade, either stable or devel. We want to release the snap in the stable channel of the Ubuntu Store. We make sure the snap works, and write stable here.
  6. confinement, initially we put devmode so as not to restrict the snap in any way. Once it works as devmode, we later consider whether to select strict or classic confinement.

For the name, we are going to use timg,

ubuntu@snaps:~/timg-snap$ snapcraft register timg
Registering timg.
You already own the name 'timg'.

Yeah, I registered the name the other day :-).

Next, which version of timg?

When we look for a branch or a tag in the repository, we find that there is a v0.9.5 tag, with the latest commit dated 27 June 2016.

However, in the main branch (master) there are two commits that look significant. Therefore, instead of using tag v0.9.5, we are snapping the master branch. We are using today’s date as the version, 20170226.

We glean the summary and description from the repository. Therefore, the summary is A terminal image viewer, and the description is A viewer that uses 24-Bit color capabilities and unicode character blocks to display images in the terminal.

Finally, the grade will be stable and the confinement will be devmode (initially, until the snap actually works).

Here is the updated snapcraft.yaml with the completed metadata section:

ubuntu@snaps:~/timg-snap$ cat snap/snapcraft.yaml 
name: timg
version: '20170226'
summary: A terminal image viewer
description: |
  A viewer that uses 24-Bit color capabilities and unicode character blocks 
  to display images in the terminal.
 
grade: stable 
confinement: devmode

parts:
  my-part:
    # See 'snapcraft plugins'
    plugin: nil

Figuring out the “parts:”

This is the moment where we want to replace the stub parts: section shown above with a real parts:.

We know the URL of the git repository and we have seen that there is an existing Makefile. An existing Makefile calls for the make Snapcraft plugin. As the documentation says, this plugin always runs ‘make’ followed by ‘make install’. We identified the make plugin by going through the list of available Snapcraft plugins.

Therefore, we initially change from the stub

parts:
  my-part:
    # See 'snapcraft plugins'
    plugin: nil

to

parts:
  timg:
    source: https://github.com/hzeller/timg.git
    plugin: make

Here is the current snapcraft.yaml,

name: timg
version: '20170226'
summary: A terminal image viewer
description: |
  A viewer that uses 24-Bit color capabilities and unicode character blocks 
  to display images in the terminal.
 
grade: stable 
confinement: devmode

parts:
  timg:
    source: https://github.com/hzeller/timg.git
    plugin: make

Let’s run the snapcraft prime command and see what happens!

ubuntu@snaps:~/timg-snap$ snapcraft prime
Preparing to pull timg 
Pulling timg 
Cloning into '/home/ubuntu/timg-snap/parts/timg/src'...
remote: Counting objects: 144, done.
remote: Total 144 (delta 0), reused 0 (delta 0), pack-reused 144
Receiving objects: 100% (144/144), 116.00 KiB | 0 bytes/s, done.
Resolving deltas: 100% (89/89), done.
Checking connectivity... done.
Preparing to build timg 
Building timg 
make -j4
make: *** No targets specified and no makefile found.  Stop.
Command '['/bin/sh', '/tmp/tmpem97fh9d', 'make', '-j4']' returned non-zero exit status 2
ubuntu@snaps:~/timg-snap$

snapcraft could not find a Makefile in the source and, as we hinted earlier, the Makefile is located only inside the src/ subdirectory. Can we instruct snapcraft to look into src/ of the source for the Makefile?

Each snapcraft plugin has its own options, and there are general shared options that relate to all plugins. In this case, we want to look into the snapcraft options relating to the source code. Here we go,

 

  • source-subdir: path. Snapcraft will check out the repository or unpack the archive referred to by the source keyword into parts/<part-name>/src/, but it will only copy the specified subdirectory into parts/<part-name>/build/

We have the appropriate option, let’s update the parts:

parts:
  timg:
    source: https://github.com/hzeller/timg.git
    source-subdir: src
    plugin: make

And run snapcraft again!

ubuntu@snaps:~/timg-snap$ snapcraft prime 
The 'pull' step of 'timg' is out of date:

The 'source-subdir' part property appears to have changed.

Please clean that part's 'pull' step in order to continue
ubuntu@snaps:~/timg-snap$ snapcraft clean
Cleaning up priming area
Cleaning up staging area
Cleaning up parts directory
ubuntu@snaps:~/timg-snap$ snapcraft prime 
Skipping pull timg (already ran)
Preparing to build timg 
Building timg 
make -j4
g++ `GraphicsMagick++-config --cppflags --cxxflags` -Wall -O3 -fPIC -c -o timg.o timg.cc
g++ -Wall -O3 -fPIC   -c -o terminal-canvas.o terminal-canvas.cc
/bin/sh: 1: GraphicsMagick++-config: not found
timg.cc:33:22: fatal error: Magick++.h: No such file or directory
compilation terminated.
Makefile:10: recipe for target 'timg.o' failed
make: *** [timg.o] Error 1
make: *** Waiting for unfinished jobs....
Command '['/bin/sh', '/tmp/tmpeeyxj5kw', 'make', '-j4']' returned non-zero exit status 2
ubuntu@snaps:~/timg-snap$

Here it tells us that it cannot find the development library GraphicsMagick++. According to the Snapcraft common keywords, we need to specify this development library so that Snapcraft can install it:

 

  • build-packages: [deb, deb, deb…]. A list of Ubuntu packages to install on the build host before building the part. The files from these packages typically will not go into the final snap unless they contain libraries that are direct dependencies of binaries within the snap (in which case they’ll be discovered via ldd), or they are explicitly described in stage-packages.

Therefore, let’s find the name of the development package,

ubuntu@snaps:~/timg-snap$ apt-cache search graphicsmagick++ | grep dev
graphicsmagick-libmagick-dev-compat/xenial 1.3.23-1build1 all
libgraphicsmagick++1-dev/xenial 1.3.23-1build1 amd64
  format-independent image processing - C++ development files
libgraphicsmagick1-dev/xenial 1.3.23-1build1 amd64
  format-independent image processing - C development files
ubuntu@snaps:~/timg-snap$

The package name is lib + graphicsmagick++, ending in -dev. We got it! Here is the updated parts:

parts:
  timg:
    source: https://github.com/hzeller/timg.git
    source-subdir: src
    plugin: make
    build-packages: 
      - libgraphicsmagick++1-dev

Let’s run snapcraft prime again!

ubuntu@snaps:~/timg-snap$ snapcraft 
Installing build dependencies: libgraphicsmagick++1-dev
[...]
The following NEW packages will be installed:
  libgraphicsmagick++-q16-12 libgraphicsmagick++1-dev libgraphicsmagick-q16-3
  libgraphicsmagick1-dev libwebp5
[...]
Building timg 
make -j4
g++ `GraphicsMagick++-config --cppflags --cxxflags` -Wall -O3 -fPIC -c -o timg.o timg.cc
g++ -Wall -O3 -fPIC   -c -o terminal-canvas.o terminal-canvas.cc
g++ -o timg timg.o terminal-canvas.o `GraphicsMagick++-config --ldflags --libs`
/usr/bin/ld: cannot find -lwebp
collect2: error: ld returned 1 exit status
Makefile:7: recipe for target 'timg' failed
make: *** [timg] Error 1
Command '['/bin/sh', '/tmp/tmptma45jzl', 'make', '-j4']' returned non-zero exit status 2
ubuntu@snaps:~/timg-snap$

Simply by specifying the development library libgraphicsmagick++1-dev, Ubuntu also installs the runtime libraries, including libgraphicsmagick++-q16-12, along with the (dynamic) library libwebp.

We still got an error, and the error is related to the webp library. The development package of the webp library is missing, so the linker cannot find -lwebp. Let’s find it!

ubuntu@snaps:~/timg-snap$ apt-cache search libwebp | grep dev
libwebp-dev - Lossy compression of digital photographic images.
ubuntu@snaps:~/timg-snap$

Bingo! The libwebp5 package that was installed further up provides only the dynamic (.so) library. By requiring the libwebp-dev package, we also get the development files needed for linking, including the static (.a) library. Let’s update the parts: section!

parts:
  timg:
    source: https://github.com/hzeller/timg.git
    source-subdir: src
    plugin: make
    build-packages:
      - libgraphicsmagick++1-dev
      - libwebp-dev

Here is the updated snapcraft.yaml file,

name: timg
version: '20170226'
summary: A terminal image viewer
description: |
  A viewer that uses 24-Bit color capabilities and unicode character blocks 
  to display images in the terminal.
 
grade: stable 
confinement: devmode

parts:
  timg:
    source: https://github.com/hzeller/timg.git
    source-subdir: src
    plugin: make
    build-packages: 
      - libgraphicsmagick++1-dev
      - libwebp-dev

Let’s run snapcraft!

ubuntu@snaps:~/timg-snap$ snapcraft prime
Skipping pull timg (already ran)
Preparing to build timg 
Building timg 
make -j4
g++ `GraphicsMagick++-config --cppflags --cxxflags` -Wall -O3 -fPIC -c -o timg.o timg.cc
g++ -Wall -O3 -fPIC   -c -o terminal-canvas.o terminal-canvas.cc
g++ -o timg timg.o terminal-canvas.o `GraphicsMagick++-config --ldflags --libs`
make install DESTDIR=/home/ubuntu/timg-snap/parts/timg/install
install timg /usr/local/bin
install: cannot create regular file '/usr/local/bin/timg': Permission denied
Makefile:13: recipe for target 'install' failed
make: *** [install] Error 1
Command '['/bin/sh', '/tmp/tmptq_s1itc', 'make', 'install', 'DESTDIR=/home/ubuntu/timg-snap/parts/timg/install']' returned non-zero exit status 2
ubuntu@snaps:~/timg-snap$

We have a new problem. The Makefile was hand-crafted and ignores the DESTDIR parameter that the Snapcraft make plugin passes to make install. Instead, the Makefile tries to install straight into /usr/local/bin/!

We need to instruct the Snapcraft make plugin not to run make install but instead pick the generated executable timg and place it into the prime/ directory. According to the documentation,

- artifacts:
  (list)
  Link/copy the given files from the make output to the snap
  installation directory. If specified, the 'make install'
  step will be skipped.

So, we need to put something in artifacts:. But what?

ubuntu@snaps:~/timg-snap/parts/timg$ ls build/src/
Makefile            terminal-canvas.h  timg*     timg.o
terminal-canvas.cc  terminal-canvas.o  timg.cc
ubuntu@snaps:~/timg-snap/parts/timg$

In the build/ subdirectory we can find the make output. Since we specified source-subdir: src, our base directory for artifacts: is build/src/. In there we find the executable timg, which is exactly what we need to list in artifacts:. With artifacts: we specify the files from the make output that will be copied to the snap installation directory (in prime/).

Here is the updated parts: of snapcraft.yaml,

parts:
  timg:
    source: https://github.com/hzeller/timg.git
    source-subdir: src
    plugin: make
    build-packages: 
      - libgraphicsmagick++1-dev
      - libwebp-dev
    artifacts: [timg]

Let’s run snapcraft prime!

ubuntu@snaps:~/timg-snap$ snapcraft prime
Preparing to pull timg 
Pulling timg 
Cloning into '/home/ubuntu/timg-snap/parts/timg/src'...
remote: Counting objects: 144, done.
remote: Total 144 (delta 0), reused 0 (delta 0), pack-reused 144
Receiving objects: 100% (144/144), 116.00 KiB | 207.00 KiB/s, done.
Resolving deltas: 100% (89/89), done.
Checking connectivity... done.
Preparing to build timg 
Building timg 
make -j4
g++ `GraphicsMagick++-config --cppflags --cxxflags` -Wall -O3 -fPIC -c -o timg.o timg.cc
g++ -Wall -O3 -fPIC   -c -o terminal-canvas.o terminal-canvas.cc
g++ -o timg timg.o terminal-canvas.o `GraphicsMagick++-config --ldflags --libs`
Staging timg 
Priming timg 
ubuntu@snaps:~/timg-snap$

We are rolling!

Exposing the command

Up to now, snapcraft generated the executable but did not expose a command for the users to run. We need to expose a command and this is done in the apps: section.

First of all, where is the command located in the prime/ subdirectory?

ubuntu@snaps:~/timg-snap$ ls prime/
meta/  snap/  timg*  usr/
ubuntu@snaps:~/timg-snap$

It is in the root of the prime/ subdirectory. We are ready to add the apps: section in snapcraft.yaml,

ubuntu@snaps:~/timg-snap$ cat snap/snapcraft.yaml 
name: timg
version: '20170226'
summary: A terminal image viewer
description: |
  A viewer that uses 24-Bit color capabilities and unicode character blocks 
  to display images in the terminal.
 
grade: stable 
confinement: devmode

apps:
  timg: 
    command: timg

parts:
  timg:
    source: https://github.com/hzeller/timg.git
    source-subdir: src
    plugin: make
    build-packages: 
      - libgraphicsmagick++1-dev
      - libwebp-dev
    artifacts: [timg]

Let’s run snapcraft prime again and then try the snap!

ubuntu@snaps:~/timg-snap$ snapcraft prime 
Skipping pull timg (already ran)
Skipping build timg (already ran)
Skipping stage timg (already ran)
Skipping prime timg (already ran)
ubuntu@snaps:~/timg-snap$ snap try --devmode prime/
timg 20170226 mounted from /home/ubuntu/timg-snap/prime
ubuntu@snaps:~/timg-snap$

Image source: https://www.flickr.com/photos/mustangjoe/6091603784/

We used snap try --devmode prime/ as a way to enable the snap and try the command. It is an efficient way of testing, and avoids the alternative of generating .snap files, installing them, and then uninstalling them. snap try prime/ directly uses the given directory (in this case, prime/) as the snap content.
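For comparison, the cycle that snap try saves us from would look roughly like this (a sketch; the actual .snap file name comes from the packaging step shown later):

$ snapcraft                                                       # create the .snap file
$ sudo snap install timg_20170226_amd64.snap --dangerous --devmode
$ timg some-image.jpg                                             # test the command
$ sudo snap remove timg                                           # uninstall before the next iteration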

Restricting the snap

Up to now, the snap has been running in devmode (developer mode), which is unrestricted. Let’s see how it runs under confinement,

ubuntu@snaps:~/timg-snap$ snap list
Name           Version   Rev   Developer  Notes
core           16-2      1337  canonical  -
timg           20170226  x1               devmode,try
ubuntu@snaps:~/timg-snap$ snap try --jailmode prime
timg 20170226 mounted from /home/ubuntu/timg-snap/prime
ubuntu@snaps:~/timg-snap$ snap list
Name           Version   Rev   Developer  Notes
core           16-2      1337  canonical  -
timg           20170226  x2               jailmode,try
ubuntu@snaps:~/timg-snap$ timg pexels-photo-149813.jpeg 
Trouble loading pexels-photo-149813.jpeg (Magick: Unable to open file (pexels-photo-149813.jpeg) reported by magick/blob.c:2828 (OpenBlob))
ubuntu@snaps:~/timg-snap$

Here we quickly switched from devmode to jailmode (the equivalent of confinement: strict) without having to touch the snapcraft.yaml file. As expected, timg could not read the image, because we did not permit any access to the filesystem.

At this stage we need to make a decision. With jailmode, we can easily specify that a command has access to the files in the user’s $HOME directory, and only there. If an image file is located elsewhere, we can always copy it into the $HOME directory and run timg on the copy. If we are happy with this, we can set up snapcraft.yaml as follows:

name: timg
version: '20170226'
summary: A terminal image viewer
description: |
  A viewer that uses 24-Bit color capabilities and unicode character blocks 
  to display images in the terminal.
 
grade: stable 
confinement: strict

apps:
  timg: 
    command: timg
    plugs: [home]

parts:
  timg:
    source: https://github.com/hzeller/timg.git
    source-subdir: src
    plugin: make
    build-packages: 
      - libgraphicsmagick++1-dev
      - libwebp-dev
    artifacts: [timg]

On the other hand, if we want the timg snap to have read-access to all the filesystem, we can use confinement: classic and be done with it. Here is how snapcraft.yaml would look in that case,

name: timg
version: '20170226'
summary: A terminal image viewer
description: |
  A viewer that uses 24-Bit color capabilities and unicode character blocks 
  to display images in the terminal.
 
grade: stable 
confinement: classic

apps:
  timg: 
    command: timg

parts:
  timg:
    source: https://github.com/hzeller/timg.git
    source-subdir: src
    plugin: make
    build-packages: 
      - libgraphicsmagick++1-dev
      - libwebp-dev
    artifacts: [timg]

In what follows, we select the strict confinement. Therefore, images should be located in $HOME only.
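In practice, this means copying any image that lives elsewhere into $HOME first (a hypothetical path for illustration):

$ cp /media/usbdisk/photo.jpg ~/
$ timg ~/photo.jpg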

Packaging and testing

Let’s package the snap, that is, create the .snap file and try it out on a brand-new installation of Ubuntu!

ubuntu@snaps:~/timg-snap$ snapcraft 
Skipping pull timg (already ran)
Skipping build timg (already ran)
Skipping stage timg (already ran)
Skipping prime timg (already ran)
Snapping 'timg' \                                                 
Snapped timg_20170226_amd64.snap
ubuntu@snaps:~/timg-snap$

How do we get a brand new installation of Ubuntu in seconds so that we can test the snap?

See Trying out LXD containers on our Ubuntu to set up LXD on your system. Then, come back here and try the following commands,

$ lxc launch ubuntu:x snaptesting
Creating snaptesting
Starting snaptesting
$ lxc file push timg_20170226_amd64.snap snaptesting/home/ubuntu/
$ lxc exec snaptesting -- sudo su - ubuntu
To run a command as administrator (user "root"), use "sudo <command>".
See "man sudo_root" for details.

ubuntu@snaptesting:~$ ls
timg_20170226_amd64.snap
ubuntu@snaptesting:~$ snap install timg_20170226_amd64.snap 
error: access denied (try with sudo)
ubuntu@snaptesting:~$ sudo snap install timg_20170226_amd64.snap
error: cannot find signatures with metadata for snap "timg_20170226_amd64.snap"
ubuntu@snaptesting:~$ sudo snap install timg_20170226_amd64.snap --dangerous
error: cannot perform the following tasks:
- Mount snap "core" (1337) ([start snap-core-1337.mount] failed with exit status 1: Job for snap-core-1337.mount failed. See "systemctl status snap-core-1337.mount" and "journalctl -xe" for details.
)
ubuntu@snaptesting:~$ sudo apt install squashfuse
[...]
Setting up squashfuse (0.1.100-0ubuntu1~ubuntu16.04.1) ...
ubuntu@snaptesting:~$ sudo snap install timg_20170226_amd64.snap --dangerous
timg 20170226 installed
ubuntu@snaptesting:~$ wget https://farm7.staticflickr.com/6187/6091603784_d6960c8be2_z_d.jpg
[...]
2017-02-26 22:12:18 (636 KB/s) - ‘6091603784_d6960c8be2_z_d.jpg’ saved [240886/240886]
ubuntu@snaptesting:~$ timg 6091603784_d6960c8be2_z_d.jpg 
[it worked!]
ubuntu@snaptesting:~$

So, we launched an LXD container called snaptesting and copied the .snap file into it. Then, we connected to the container as a normal user and tried to install the snap. Initially, the installation failed because we need sudo when installing snaps in unprivileged LXD containers. It failed again because the .snap was unsigned (hence the --dangerous parameter). Then, it failed once more because the squashfuse package (not preinstalled in the Ubuntu 16.04 images) was missing. Eventually, the snap was installed and we managed to view the image.

It is important to test a snap in a brand-new installation of Linux in order to find out whether we need to stage any code libraries inside the snap. In this case, static libraries were used and all went well!

Publishing to the Ubuntu Store

Here are the instructions to publish a snap to the Ubuntu Store. We have already published a few snaps in the previous tutorials. For timg, we have confinement: strict and grade: stable. We are therefore publishing to the stable channel.

$ snapcraft push timg_20170226_amd64.snap 
Pushing 'timg_20170226_amd64.snap' to the store.
Uploading timg_20170226_amd64.snap [                                       ]   0%
Uploading timg_20170226_amd64.snap [=======================================] 100%
Ready to release!|                                                               
Revision 6 of 'timg' created.
$ snapcraft release timg 6 stable
Track    Arch    Series    Channel    Version    Revision
latest   amd64   16        stable     20170226   6
                           candidate  ^          ^
                           beta       0.9.5      5
                           edge       0.9.5      5
The 'stable' channel is now open.

We pushed the .snap file to the Ubuntu Store and it was assigned revision 6. Then, we released timg revision 6 to the stable channel of the Ubuntu Store.

There was no released snap in the candidate channel; therefore, it inherits the package from the stable channel. Hence the ^ characters.

In previous tests I uploaded some older versions of the snap to the beta and edge channels. These older versions refer to the old tag 0.9.5 of the timg source code.

Let’s knock down the old 0.9.5 by releasing the stable version to the beta and edge channels as well.

$ snapcraft release timg 6 beta
Track    Arch    Series    Channel    Version    Revision
latest   amd64   16        stable     20170226   6
                           candidate  ^          ^
                           beta       20170226   6
                           edge       0.9.5      5
$ snapcraft release timg 6 edge
Track    Arch    Series    Channel    Version    Revision
latest   amd64   16        stable     20170226   6
                           candidate  ^          ^
                           beta       20170226   6
                           edge       20170226   6

Playing with timg

Let’s run timg without parameters,

ubuntu@snaptesting:~$ timg
Expected image filename.
usage: /snap/timg/x1/timg [options] <image> [<image>...]
Options:
    -g<w>x<h>  : Output pixel geometry. Default from terminal 80x48
    -s[<ms>]   : Scroll horizontally (optionally: delay ms (60)).
    -d<dx:dy>  : delta x and delta y when scrolling (default: 1:0).
    -w<seconds>: If multiple images given: Wait time between (default: 0.0).
    -t<seconds>: Only animation or scrolling: stop after this time.
    -c<num>    : Only Animation or scrolling: number of runs through a full cycle.
    -C         : Clear screen before showing image.
    -F         : Print filename before showing picture.
    -v         : Print version and exit.
If both -c and -t are given, whatever comes first stops.
If both -w and -t are given for some animation/scroll, -t takes precedence
ubuntu@snaptesting:~$

Here it says that for the current zoom level of our terminal emulator, our resolution is a mere 80×48.
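We could also set the output geometry explicitly with the -g option listed in the usage above, for example:

$ timg -g160x96 6091603784_d6960c8be2_z_d.jpg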

Let’s zoom out a bit and maximize the GNOME Terminal window.

    -g<w>x<h>  : Output pixel geometry. Default from terminal 635x428

It is a better resolution, but I can hardly see the characters because they are too small. Let’s invoke the earlier command to show that car again.

What you are seeing is the resized image (from 1080p). Looks great, even if it is made of colored text characters!

What next? timg can play animated gifs as well!

$ wget https://m.popkey.co/9b7141/QbAV_f-maxage-0.gif -O JonahHillAmazed.gif
$ timg JonahHillAmazed.gif

Try to install the timg snap yourself in order to experience the animated gif! Failing that, watch the asciinema recording (if the video looks choppy, run it a second time), https://asciinema.org/a/dezbe2gpye84e0pjndp8t0pvh

Thanks for reading!

post image

How to make a snap package for lolcat with snapcraft on Ubuntu

In this post we are going to see how to make snap installation packages for a program called lolcat.

What do you need for this tutorial? This tutorial is similar to Snap a python application. If you follow that tutorial first, this one will reinforce what you have learned and expand to dealing with more programming languages than just Python.

What will you learn?

  1. How to deal with different source code repositories and get into the process of snapping them in a snap (pun intended).
  2. How to use the Snapcraft plugins for Python, golang, Rust and C/C++.
  3. How to deal with confinement decisions and when to select strict confinement or classic confinement.
  4. How to test the quality of an app before releasing it to the Ubuntu Store.
  5. What this truecolor terminal emulator thing is, what lolcat is, and why it is cool (at least to a select few).

True-color terminal emulators

Terminal emulators like GNOME Terminal support the facility to display text in different colors.

You know this already, because by default in Ubuntu you see filenames in different colors, depending on whether they are executable (green), a directory (blue), a symbolic link (cyan) and so on. Even going 20+ years into the past, the Linux console supported 256 colors.

What changed recently in terms of colors in terminal emulators is that newer terminal emulators support 16 million colors, which is described as true-color.

Here is an AWK script (of all scripting languages!) that shows a smooth gradient, from red to green to blue. In old terminal emulators and the Linux console, there were escape sequences to specify the colors, and you had to specify them by name. With the recent improvements among terminal emulators, it is now possible to specify the color by RGB value, thus 256*256*256 ~= 16 million different colors. You can read more about this in the article True Colour (16 million colours) support in various terminal applications and terminals, which is also where I found the AWK script. Try it on your Ubuntu as well!
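For a taste of how these truecolor escape sequences work, here is a minimal shell sketch of the same idea (our own example, not the AWK script from the article). The sequence \033[48;2;R;G;Bm sets the background to an arbitrary RGB color:

for i in $(seq 0 5 255); do
    # set the background truecolor with \033[48;2;R;G;Bm: fade from red (255,0,0) to blue (0,0,255)
    printf '\033[48;2;%d;0;%dm ' "$((255-i))" "$i"
done
printf '\033[0m\n'    # reset the attributes and end the line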

Now, there is this Unix command called cat, which is used to print the contents of a text file in the terminal. cat /etc/passwd would show the contents of /etc/passwd. So, some people wrote an improved cat, called lolcat, that shows the content of files in a rainbow of colors.

In the screenshot above, the utility lolcat (a version we made and call lolcat-go) is used as a filter for a command (snapcraft --help), and colorizes the text that it receives with a nice rainbow of colors.

In practice, the lolcat utility merely sets the color for each character being printed. It starts with a random color and circles around the rainbow, heading diagonally towards the bottom-right. In the screenshot above, you can see the text snapcraft and Usage: in strong blue, and then diagonally (heading to bottom-right), the colors shift to green and then to orange.
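Here is a toy sketch of that idea in awk (our own illustration, not the code of any of the lolcat implementations; it assumes a truecolor terminal and a POSIX awk):

$ ls -l | awk 'BEGIN { srand(); offset = rand() * 256 }
{
  for (i = 1; i <= length($0); i++) {
    # step the hue per column and per row, so the rainbow runs diagonally
    h = (offset + NR * 3 + i) % 256
    r = int(127 * sin(h / 40.74) + 128)          # 40.74 ~= 256 / (2 * pi)
    g = int(127 * sin(h / 40.74 + 2.09) + 128)   # phase shift of 2*pi/3
    b = int(127 * sin(h / 40.74 + 4.19) + 128)   # phase shift of 4*pi/3
    printf "\033[38;2;%d;%d;%dm%s", r, g, b, substr($0, i, 1)
  }
  printf "\033[0m\n"
}'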

Select which repositories to snap

The first implementation of the lolcat rainbow utility was probably by busyloop, written in Ruby. Since then, several others re-implemented lolcat into more programming languages. In this post we are going to snap:

  1. The Python lolcat by tehmaze.
  2. The golang lolcat by cezarsa.
  3. The Rust lolcat by ur0 (a version that uses Rust’s concurrency features!)
  4. The C lolcat by jaseg (a version optimized for speed!)

Here we can see a) the URL of the source code, b) the available branches, and c) the tags. This is useful in the next section, when we instruct Snapcraft which version of the source code to use.

In terms of versions among the four implementations, the Python lolcat has a recent tag 0.44, therefore we are using this tag and specify a version 0.44. For the rest, we are using the latest (master) checkout of their repositories, so we use the current date as a version number, in the form YYYYMMDD (for example, 20170226).

Completing the snapcraft.yaml metadata

When creating a snap, we need to write a nice snap summary (less than 80 characters) and a description (less than 100 words). The summary will look like “lolcat-python utility written in Python”, and the description like “lolcat is a utility similar to the Unix cat command. lolcat adds rainbow colors to the text output.”

In addition, we need suitable names for the four snaps. We are going to use lolcat-python, lolcat-go, lolcat-rust, and lolcat-c for each one of the four snaps.

All in all, here is the information we collected so far:

  1. Python: name is “lolcat-python”, version is “0.44”, summary: lolcat-python utility written in Python, description: lolcat-python is a utility similar to the Unix “cat” command. lolcat-python adds rainbow colors to the text output.
  2. Go: name is “lolcat-go”, version is “20170226”, summary: lolcat-go utility written in Go, description: lolcat-go is a utility similar to the Unix “cat” command. lolcat-go adds rainbow colors to the text output.
  3. Rust: name is “lolcat-rust”, version is “20170226”, summary: lolcat-rust utility written in Rust, description: lolcat-rust is a utility similar to the Unix “cat” command. lolcat-rust adds rainbow colors to the text output.
  4. C: name is “lolcat-c”, version is “20170226”, summary: lolcat-c utility written in C, description: lolcat-c is a utility similar to the Unix “cat” command. lolcat-c adds rainbow colors to the text output.

The final part in the metadata is the grade (whether it is devel or stable) and the confinement (whether it is devmode, classic or strict).

We select stable as the grade because we aim to add these snaps to the Ubuntu Store. If the grade is specified as devel, the corresponding snap cannot be added to the stable channel (publicly available to all) of the Ubuntu Store. There is a section below on testing the four lolcat implementations, and we will set the grade accordingly to either stable or devel, depending on the outcome of those tests.

We initially select devmode (DEVeloper MODE) as the confinement, so that our snap is not confined to begin with. If our snap fails to run, we want to be sure it is an issue with our settings and not a byproduct of choosing a stricter confinement. Once the snap builds and runs successfully, we change the confinement to either strict or classic and deal with any issues that appear from there on.

Here is how the metadata look in the snapcraft.yaml configuration file for the Python variant of the snap.

Up to this point, all four snapcraft.yaml files, with metadata filled in, can be found at https://github.com/simos/lolcat-snap/tree/v1.0

Working on the “parts” section in snapcraft.yaml

We completed the easy metadata section; now it is time to work on the parts: section of snapcraft.yaml. (After the parts: section, we just need the apps: section, where we expose the generated executable to the users of the snap.)

Initially, snapcraft init generates a stock snapcraft.yaml file, which has a stub parts: section. Here is how it looks,

parts:
  my-part:
    # See 'snapcraft plugins'
    plugin: nil

It defines the start of the parts: section. Then, a name, my-part: (we choose this name) is defined. Finally, the contents of my-part: are listed, here in two lines. The first line of my-part: is a comment (starts with #) and the second specifies the plugin, which is nil (it is reserved, and does nothing).

Note how these lines are aligned vertically. First comes parts: at the start of the line; two spaces further in is my-part:; two more spaces further in are the comment and the plugin: directive, both at the same indentation level. We use spaces, not tabs. If you get any errors later on, check that you have followed the formatting properly. You can separate the levels by more than two spaces if you wish, but make sure that lines of the same level are aligned at the same column.

Let’s figure out the initial versions of the parts:. Let’s do Python first!

parts:
  lolcat-python:
    source: https://github.com/tehmaze/lolcat.git
    source-tag: '0.44'
    plugin: python

We used the name lolcat-python: (our choice) for this part. In there, we specify the source: for the source code repository, and any branch or tag that may be relevant. As we saw above, we work on the 0.44 tag. Finally, the source is written in Python, so we select the python plugin. (We will figure out later whether we need to specify Python 2 or Python 3, if we get a relevant error.)

Here is the Go,

parts:
  lolcat-go:
    source: https://github.com/cezarsa/glolcat.git
    plugin: go

Fairly expected. We do not specify a version, so it will use the latest snapshot of the source at the time of running snapcraft. We chose the name lolcat-go, and the golang plugin is called go.

Time for Rust,

parts:
  lolcat-rust:
    source: https://github.com/ur0/lolcat.git
    plugin: rust

Again, very similar to the above. We do not specify a source code version (there is no tag or stable branch in the repository). We chose the name lolcat-rust, and the Rust plugin in snapcraft is called rust.

Finally, the version written in C,

parts:
  lolcat-c:
    source: https://github.com/jaseg/lolcat.git
    plugin: make

Again, we chose a name for this part, and it is lolcat-c. Then, we specified the URL for the source code. As for the plugin, we wrote make, although the source is written in C. In fact, the plugin: directive specifies how to build the source code; the make plugin does the make; make install sequence for us, and it is our first choice when we see an existing Makefile in a repository.

At this stage, we have four initial versions of configuration files. We are ready to try them out and deal with any errors that may appear.

Working on the Python version

Here is the configuration (snapcraft.yaml) for the Python variant up to this point,

name: lolcat-python
version: '0.44'
summary: lolcat utility written in Python
description: |
  lolcat-python is a utility similar to the Unix "cat" command. 
  lolcat-python adds rainbow colors to the text output.
  The source code is available at https://github.com/tehmaze/lolcat

grade: stable
confinement: devmode

parts:
  lolcat-python:
    source: https://github.com/tehmaze/lolcat.git
    source-tag: '0.44'
    plugin: python

Let’s run snapcraft prime. The prime parameter performs the necessary compilation and puts the result in the prime/ subdirectory. Since snapcraft prime stops just short of creating the .snap file, it is more convenient for repeated tests. Once it works with snapcraft prime, we are a single extra step away from producing the final .snap file (by running snapcraft with no parameters).
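For reference, snapcraft works through a series of lifecycle steps, each of which can also be invoked on its own (roughly, matching the messages you see in the output below):

$ snapcraft pull     # fetch the source code and build dependencies
$ snapcraft build    # compile the source
$ snapcraft stage    # collect the build results into stage/
$ snapcraft prime    # assemble the final snap contents into prime/
$ snapcraft          # all of the above, plus create the .snap file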

$ snapcraft prime
Preparing to pull lolcat-python 
[...]
Pulling lolcat-python 
[...]
Installing collected packages: pip, six, pyparsing, packaging, appdirs, setuptools, wheel
[...]
Successfully downloaded lolcat
Preparing to build lolcat-python 
Building lolcat-python 
[...]
Successfully built lolcat
[...]
Successfully installed lolcat-0.44
Staging lolcat-python 
Priming lolcat-python 
$ _

It worked beautifully! The results are in the prime/ subdirectory, and we can even identify the executable.

$ ls prime/
bin/  etc/  lib/  meta/  snap/  usr/
$ ls prime/bin
lolcat*
$ _

We are now ready to create the apps: section of the snapcraft.yaml configuration file, in order to expose the executable to the users,

apps:
  lolcat-python:
    command: lolcat

The name: of this snap is lolcat-python, and the name we chose in the apps: section is the same, lolcat-python. Since they are the same, once we install the snap, the newly available command will be called lolcat-python. If there were a difference between the two (for example, name: lolcat-python and an app named lolcat), then the command for the users would end up being something like lolcat-python.lolcat, which may not be desirable.
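To illustrate, this hypothetical mismatching combination (not what we actually use):

name: lolcat-python

apps:
  lolcat:
    command: lolcat

would expose the command as lolcat-python.lolcat, because the app name differs from the snap name.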

Working on the Go version

Here is the configuration (snapcraft.yaml) for the Go variant up to this point,

name: lolcat-go
version: '20170226'
summary: lolcat utility written in golang
description: |
  lolcat-go is a utility similar to the Unix "cat" command.
  lolcat-go adds rainbow colors to the text output.
  The source code is available at https://github.com/cezarsa/glolcat

grade: devel
confinement: devmode

parts:
  lolcat-go:
    source: https://github.com/cezarsa/glolcat.git
    plugin: go

Let’s run snapcraft prime and see how it goes.

$ snapcraft prime
Preparing to pull lolcat-go 
[...]
Building lolcat-go 
[...]
Staging lolcat-go 
Priming lolcat-go 
$ ls prime/bin/
glolcat.git
$ _

Lovely! The go plugin managed to make sense of the repository and compiled the Go version of lolcat. The generated executable is called glolcat.git. Therefore, the apps: section looks like

apps:
  lolcat-go:
    command: glolcat.git

In this way, once we install the Go snap of lolcat, an executable called lolcat-go will be exposed to the users. This executable is the glolcat.git that the Snapcraft go plugin generated in prime/bin/. Snapcraft looks for the final executable in both prime/ and prime/bin/, so we do not need to explicitly specify bin/glolcat.git.

Working on the Rust version

Here is the configuration (snapcraft.yaml) for the Rust variant up to this point,

name: lolcat-rust
version: '20170226'
summary: lolcat utility written in Rust
description: |
  lolcat-rust is a utility similar to the Unix "cat" command.
  lolcat-rust adds rainbow colors to the text output.
  The source code is available at https://github.com/ur0/lolcat

grade: stable
confinement: devmode

parts:
  lolcat-rust:
    source: https://github.com/ur0/lolcat.git
    plugin: rust

Let’s run snapcraft prime and see how it goes.

$ snapcraft prime
Preparing to pull lolcat-rust 
[...]
Downloading 'rustup.sh'[================] 100%
[...]
Preparing to build lolcat-rust 
Building lolcat-rust 
[...]
Staging lolcat-rust 
Priming lolcat-rust 
$ ls prime/bin/
lolcat
$ _

Without a hitch! The rust plugin noticed that we do not have the Rust compiler, and downloaded it for us! Then, it completed the compilation and the final executable is called lolcat. Therefore, the apps: section looks like

apps:
  lolcat-rust:
    command: lolcat

The apps: section exposes the generated lolcat executable under the name lolcat-rust. Since the name: field is also lolcat-rust, the resulting command once we install the snap will be lolcat-rust as well. Again, Snapcraft looks by default into both prime/ and prime/bin/ for the command:, so it is not required to explicitly write command: bin/lolcat.

Working on the C (make) version

Here is the configuration (snapcraft.yaml) for the C (using Makefile) variant up to this point,

name: lolcat-c
version: '20170226'
summary: lolcat utility written in C
description: |
  lolcat-c is a utility similar to the Unix "cat" command.
  lolcat-c adds rainbow colors to the text output.
  The source code is available at https://github.com/jaseg/lolcat

grade: stable
confinement: devmode

parts:
  lolcat-c:
    source: https://github.com/jaseg/lolcat.git
    plugin: make

Let’s run snapcraft prime and see how it goes.

$ snapcraft prime
Preparing to pull lolcat-c 
Pulling lolcat-c 
[...]
Submodule 'memorymapping' (https://github.com/NimbusKit/memorymapping) registered for path 'memorymapping'
Submodule 'musl' (git://git.musl-libc.org/musl) registered for path 'musl'
Cloning into 'memorymapping'...
[...]
Cloning into 'musl'...
[...]
Building lolcat-c 
make -j4
"Using musl at musl"
cd musl; ./configure
[...]sh tools/musl-gcc.specs.sh "/usr/local/musl/include" "/usr/local/musl/lib" "/lib/ld-musl-x86_64.so.1" > lib/musl-gcc.specs
printf '#!/bin/sh\nexec "${REALGCC:-gcc}" "$@" -specs "%s/musl-gcc.specs"\n' "/usr/local/musl/lib" > tools/musl-gcc
chmod +x tools/musl-gcc
make[1]: Leaving directory '/home/myusername/SNAPCRAFT/lolcat-snap/c/parts/lolcat-c/build/musl'
Command '['/bin/sh', '/tmp/tmpetv6irll', 'make', '-j4']' returned non-zero exit status 2
Exit 1
$ _

Well, well. Finally an error!

So, the C version of lolcat was written using the musl libc library. This library was referenced as a git submodule, and snapcraft followed the instructions of the Makefile to pull and compile musl!

Then, for some reason, snapcraft fails to compile lolcat using the musl-gcc wrapper for this newly compiled musl libc. It looks as if the make; make install sequence should instead be make lolcat; make install. Indeed, looking at the instructions at https://github.com/jaseg/lolcat, this appears to be the case.

Therefore, how do we tell snapcraft that for the make plugin, it should run make lolcat instead of the standard make?

By reading the documentation of the Snapcraft make plugin, we find that

- make-parameters:
  (list of strings)
  Pass the given parameters to the make command.

Therefore, the parts: should look like

parts:
  lolcat-c:
    source: https://github.com/jaseg/lolcat.git
    plugin: make
    make-parameters: [lolcat]

We enclose lolcat in brackets ([lolcat]) because make-parameters accepts a list of strings. If it accepted just a string, we would not need those brackets.
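For completeness, the same list can also be written in YAML block form; the two notations are equivalent:

make-parameters: [lolcat]

make-parameters:
  - lolcat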

Let’s run snapcraft prime again!

$ snapcraft prime
Preparing to pull lolcat-c 
[...]
Building lolcat-c 
make lolcat -j4
"Using musl at musl"
[...]
chmod +x tools/musl-gcc
[...]
gcc -c -std=c11 -Wall -g -Imusl/include -o lolcat.o lolcat.c
[...]
Staging lolcat-c 
Priming lolcat-c 
$ ls prime/bin
ls: cannot access 'prime/bin': No such file or directory
Exit 2
$ ls prime/
censor*  command-lolcat-c.wrapper*  lolcat*  meta/  snap/
$ _

Success! The lolcat executable has been generated, and in fact it is located in prime/ (not in prime/bin/ as in the three previous cases). Also, it took 1 minute and 40 seconds to produce the executable on my computer!

Hmm, there is also a censor executable. I wonder, what does it do?

$ ls -l | ./censor 
█▄█▄█ ██
█▄▄▄▄█▄▄█▄ █ ▄▄▄▄ ▄▄▄▄ █████ ███  ██ ██:██ ▄▄▄▄▄▄
█▄▄▄▄█▄▄█▄ █ ▄▄▄▄ ▄▄▄▄   ███ ███  ██ ██:██ ▄▄▄▄▄▄███▄█▄▄██▄.▄▄▄▄▄▄▄
█▄▄▄▄█▄▄█▄ █ ▄▄▄▄ ▄▄▄▄ █████ ███  ██ ██:██ █▄█▄▄█
█▄▄▄▄▄▄▄█▄ █ ▄▄▄▄ ▄▄▄▄  ████ ███  ██ ██:██ ▄▄█▄
█▄▄▄▄▄▄▄█▄ █ ▄▄▄▄ ▄▄▄▄  ████ ███  ██ ██:██ ▄▄▄▄
$ _

This censor executable is a filter, just like lolcat, and what it does is replace any input letters with block characters, as if censoring the text. Let’s expose both lolcat and censor in this snap! Here is the apps: section,

apps:
  lolcat-c:
    command: lolcat
  censor:
    command: censor

For lolcat, since both the name: and the app are named lolcat-c, the final command will be lolcat-c. For censor, the command will be named lolcat-c.censor (composed from the name: and the name we put in apps:).
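Once the snap is installed, the two commands would be used along these lines (filter-style usage, for reasons explained in the next section):

$ echo "make it rainbow" | lolcat-c
$ ls -l | lolcat-c.censor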

Which confinement, strict or classic?

When we are about to publish a snap to the Ubuntu Store, we need to decide which confinement to apply, strict or classic.

The strict confinement will by default fully restrict the snap and it is up to us to specify what is allowed. For example, there will be no network connectivity, and we would need to explicitly specify it if we want to provide network connectivity.

The strict confinement can be tricky if the snap needs to read arbitrary files. If the snap were to read files only from the user’s home directory, that is easy: we can specify plugs: [home] and be done with it.

In our case, lolcat can be used in two ways,

Note the different colors; the lolcat implementations start every time with a random initial rainbow color
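The screenshot is not reproduced here; the two invocations were along the following lines (a reconstruction based on the description below):

$ lolcat-go /etc/lsb-release            # lolcat opens and reads the file itself
$ cat /etc/lsb-release | lolcat-go      # lolcat acts only as a filter on stdin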

In the first command, lolcat-go requires read access for the file /etc/lsb-release. This would be tricky to specify with the strict confinement, and we would need to use the classic confinement instead.

In the second command, lolcat-go is merely used as a filter. The executable is self-contained and does not read any files whatsoever, because the cat command does the reading for it.

Therefore, we can select the strict confinement here, as long as we promise to use lolcat as a filter only. That is, lolcat will run fully confined, and it will fail if we ask it to read a file as in the first command.

Final configuration files

Here are the four final configuration files! Notice how little really changes between them (in the original post, those differences were highlighted in bold+italics).

$ cat python/snap/snapcraft.yaml 
name: lolcat-python
version: '0.44'
summary: lolcat utility written in Python
description: |
  lolcat-python is a utility similar to the Unix "cat" command. 
  lolcat-python adds rainbow colors to the text output.
  The source code is available at https://github.com/tehmaze/lolcat

grade: stable
confinement: strict

apps:
  lolcat-python:
    command: lolcat

parts:
  lolcat-python:
    source: https://github.com/tehmaze/lolcat.git
    source-tag: '0.44'
    plugin: python

$ cat go/snap/snapcraft.yaml 
name: lolcat-go
version: '20170226'
summary: lolcat utility written in golang
description: |
  lolcat-go is a utility similar to the Unix "cat" command.
  lolcat-go adds rainbow colors to the text output.
  The source code is available at https://github.com/cezarsa/glolcat

grade: devel
confinement: strict

apps:
  lolcat-go:
    command: glolcat.git

parts:
  lolcat-go:
    source: https://github.com/cezarsa/glolcat.git
    plugin: go

$ cat rust/snap/snapcraft.yaml 
name: lolcat-rust
version: '20170226'
summary: lolcat utility written in Rust
description: |
  lolcat-rust is a utility similar to the Unix "cat" command.
  lolcat-rust adds rainbow colors to the text output.
  The source code is available at https://github.com/ur0/lolcat

grade: stable
confinement: strict

apps:
  lolcat-rust:
    command: lolcat

parts:
  lolcat-rust:
    source: https://github.com/ur0/lolcat.git
    plugin: rust

$ cat c/snap/snapcraft.yaml 
name: lolcat-c
version: '20170226'
summary: lolcat utility written in C
description: |
  lolcat-c is a utility similar to the Unix "cat" command.
  lolcat-c adds rainbow colors to the text output.
  The source code is available at https://github.com/jaseg/lolcat

grade: stable
confinement: strict

apps:
  lolcat-c:
    command: lolcat
  censor:
    command: censor

parts:
  lolcat-c:
    source: https://github.com/jaseg/lolcat.git
    plugin: make
    make-parameters: [lolcat]
$ _

Quality testing

We have four lolcat snaps available to upload to the Ubuntu Store. Let’s test them first to see which ones are actually good enough to make it today to the Ubuntu Store!

A potential problem with filters like lolcat is that they may not know how to deal with some UTF-8-encoded strings. In the UTF-8 encoding, text in English, like what you are reading now, is encoded just like ASCII. Problems may occur with text in other scripts, where each Unicode character occupies more than one byte. For example, each character in “Γεια σου κόσμε” is encoded as two bytes in UTF-8. Furthermore, there are characters stored in Plane 1 of Unicode (Emojis!) which require the maximum of four bytes to encode in UTF-8. Here is an example of emoji characters, “🐧🐨🐩🐷🐸🐹🐺🐼🏴👏” (if they appear as empty boxes, you do not have the proper fonts installed).
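You can verify these byte counts in the shell (wc -c counts bytes, not characters):

$ printf 'hello' | wc -c     # 5 bytes: ASCII, one byte per character
5
$ printf 'Γεια' | wc -c      # 8 bytes: two bytes per Greek character
8
$ printf '🐧' | wc -c        # 4 bytes: an emoji from Plane 1
4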

What we see here is that the current implementations of lolcat for Python and Go do not work with non-ASCII text. At the moment, they can only colorize text in English.

On the other hand, the Rust and C implementations work fine on all Unicode characters!

There is another issue to watch for. Specifically,

The Unix ls command takes care to detect when you are redirecting its output, and in that case it does not colorize the output by default. This is a very useful default for ls, because in most cases you want clean text files without color escape sequences. To further our testing, let’s see how well the remaining two lolcat implementations deal with content that already contains escape sequences (for example, for color),

We generated a text file (cat-color.txt) with color escape sequences. The Rust implementation choked on them, while the C implementation worked fine!
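The post does not show the exact command; one plausible way to generate such a file is to force ls to keep its color codes even when redirected:

$ ls --color=always > cat-color.txt
$ cat cat-color.txt | lolcat-c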

We have a winner! The C implementation of lolcat will be published in the Ubuntu Store, in the stable channel (available to all). The Rust implementation will be published in the candidate channel (pending a fix when dealing with text that already has escape sequences), while the Python and Go implementations will be published in the edge channels. (Note: if the authors of these lolcat implementations are reading this, my aim here is to demonstrate the different channels. I plan to write a follow-up post on how to fix these issues in the source code and how to re-release all snaps to the stable channel.)

Let’s publish to the Ubuntu Store

The documentation page Publish your snap from snapcraft.io explains the details on publishing a snap on the Ubuntu Store.

In summary, when releasing a snap to the Ubuntu Store, you specify in which channel(s) you want it to be in:

  1. stable channel, for publicly available snaps. These can be searched when running snap find, and will be installed when you run snap install.
  2. candidate channel, for snaps that are very soon to be released to the stable channel. For this and the following channels, you need to specify explicitly the channel when you try to install. Otherwise, you will get the error that the snap is not found.
  3. beta channel, for beta versions of snaps.
  4. edge channel, for testing versions of snaps. This is the furthest away from stable.

If you release a snap in the stable channel, it is implied that it is available in the channels below it as well. Similarly, if you release a snap in the beta channel, it also appears in the channel below it (edge).

Furthermore, when we were filling in the metadata of the snapcraft.yaml configuration files, there was a field called grade, with options for either stable or devel. If the grade is specified as devel, then the snap can only be released to the beta or edge channels.

So, this is what we are going to do:

  1. The C implementation, lolcat-c, will have grade: stable, and get published in the stable channel of the Ubuntu Store.
  2. The Rust implementation, lolcat-rust, will have grade: devel, and get published in the beta channel of the Ubuntu Store.
  3. The Go and Python implementations, lolcat-go and lolcat-python, will have the grade: devel and get published in the edge channel of the Ubuntu Store.

We have already run snapcraft login and logged in with our Ubuntu Single Sign-On (SSO) account.

We successfully registered the names on the Ubuntu Store, and then we pushed the snap files. We note down the revision, which is Revision 1 for each one of them.

The snapcraft release command looks like

  snapcraft [options] release <snap-name> <revision> <channel>

The snapcraft release command instructs the Ubuntu Store to release the recently pushed .snap file into the specified channels.
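Following the plan above, and given that each snap is at Revision 1, the release commands would be:

$ snapcraft release lolcat-c 1 stable
$ snapcraft release lolcat-rust 1 beta
$ snapcraft release lolcat-go 1 edge
$ snapcraft release lolcat-python 1 edge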

How to install them

Here we install lolcat-c.

First, we perform a search for lolcat. The snap lolcat-c is found. Note a quirk of the snap find command: if we were to search for lolcat-c instead, it would show results matching both lolcat and c (that is, irrelevant results).

We then run the snap info command to show the Ubuntu Store info for the snap lolcat-c.

Then, we install lolcat-c, by running snap install lolcat-c.

Finally, we run snap info again, and we can see more information since the snap has just been installed. In particular, we can see that there is the additional command lolcat-c.censor. Oh, and the snap is just 24kB in size.

Let’s install the Rust implementation!

The Rust implementation of lolcat can be found in the beta and edge channels. In order to install it, we need to specify the correct channel (here, either beta or edge). And that’s it!

The Go implementation of lolcat is in the edge channel. We install and test it successfully.

Finally, the Python implementation of lolcat, installed from the edge channel.
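For reference, installing from a non-stable channel requires naming the channel explicitly, along these lines:

$ snap install --beta lolcat-rust
$ snap install --edge lolcat-go
$ snap install --edge lolcat-python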

Hope you enjoyed the post!

post image

Summary of @DellCarePRO Ubuntu Basics Webinar (Feb 2017)

Last week there was a webinar from @DellCarePRO titled Ubuntu Basic Webinar.

Today the webinar video Ubuntu Basics Webinar has been posted online, and here is the summary.

Introduction

Ubuntu Certified hardware page

 

If your Dell laptop comes with Ubuntu, you can get the installation ISO (Recovery Image) from dell.com.

Ubuntu installation as dual-boot

Installing Ubuntu.

Installing Ubuntu.

 

Ubuntu installed.

Ubuntu installed.

Explaining: The Menu Bar

 

Explaining: Dash

 

Explaining: Ubuntu Software Center

 

Explaining: Keyboard shortcuts

 

Explaining: Software and Updates

 

Explaining: Multiple Monitor configuration

Talk by Barton George

Presenting Barton George and Project Sputnik. Barton George headed an internal effort in Dell to get Ubuntu on a high-end laptop, with a budget of just $40,000 and six months to deliver.

 

Funding came from the Dell Innovation Fund, with the aim to establish if an Ubuntu laptop would work.

 

Contrary to other efforts, this one was for a high-end offering. It would involve the community and get feedback from the community in order to change perceptions.

 

Very well-received. arstechnica, o’reilly radar, techcrunch, The Wall Street Journal.

 

Positive feedback from the twitter-sphere.

 

Expansion from the initial XPS 13 with Ubuntu, to a new 6th gen Intel laptop along with a whole line of Latitude Ubuntu laptops. And an All-in-One Ubuntu desktop.

There was emphasis that the initial fund of $40,000 to investigate whether an Ubuntu laptop would be a viable product, delivered multiple times the profits to Dell.

post image

How to create a snap with snapcraft for howdoi (CLI utilility for stackoverflow)

In the tutorial How to create a snap for how2 (stackoverflow from the terminal) in Ubuntu 16.04 we saw how to create a snap with snapcraft for the CLI utility called how2. That was a software based on nodejs.

In this post we will repeat the process for another CLI utility called howdoi by Benjamin Gleitzman, which does a similar task to how2 but is implemented in Python and has a few usability differences as well. howdoi does not yet have a package in the repositories of Ubuntu, either.

Since we already covered the details in How to create a snap for how2 (stackoverflow from the terminal) in Ubuntu 16.04, this post will be more focused, and shorter. 🙂

Planning

Reading through https://github.com/gleitz/howdoi we see that howdoi

  1. is software based on Python (therefore: plugin: python)
  2. requires networking (therefore: plugs: [network])
  3. and has no need to save files (therefore it does not need access to the filesystem)

Crafting with snapcraft

Let’s start with snapcraft.

$ mkdir howdoi
$ cd howdoi/
$ snapcraft init
Created snap/snapcraft.yaml.
Edit the file to your liking or run `snapcraft` to get started

Now we edit snap/snapcraft.yaml; here are our changes (shown in bold in the original post) from the initially generated file.

$ cat snap/snapcraft.yaml 
name: howdoi # you probably want to 'snapcraft register <name>'
version: '20170207' # just for humans, typically '1.2+git' or '1.3.2'
summary: instant coding answers via the command line # 79 char long summary
description: |
  Are you a hack programmer? Do you find yourself constantly Googling 
  for how to do basic programing tasks?
  Suppose you want to know how to format a date in bash. Why open your browser 
  and read through blogs (risking major distraction) when you can simply 
  stay in the console and ask howdoi.

grade: stable # must be 'stable' to release into candidate/stable channels
confinement: strict # use 'strict' once you have the right plugs and slots

apps:
  howdoi:
    command: howdoi
    plugs: [network]

parts:
  howdoi:
    plugin: python
    source: https://github.com/gleitz/howdoi.git

First, we selected the name howdoi because, again, it is not a reserved name :-). Also, we registered it with snapcraft,

$ snapcraft register howdoi
Registering howdoi.
Congratulations! You're now the publisher for 'howdoi'.

Second, we did not notice a particular branch or tag for howdoi, so we use the date of the snap’s creation as the version.

Third, the summary and the description are just pasted from the Readme.md of the howdoi repository.

Fourth, we select the grade stable and enforce the strict confinement.

The apps: howdoi: command: howdoi is the standard sequence to specify the command that will be exposed to the user. The user will be typing howdoi and the command howdoi inside the snap will be invoked.

The parts: howdoi: plugin: python source: … is the standard sequence to specify that the howdoi referenced just earlier is software written in Python, and that the source comes from this github repository.

Let’s craft the snap.

$ snapcraft 
Preparing to pull howdoi 
...                                                                     
Pulling howdoi 
...
Preparing to build howdoi 
Building howdoi 
...
Successfully built howdoi
...
Installing collected packages: howdoi, cssselect, Pygments, requests, lxml, pyquery, requests-cache
Successfully installed Pygments-2.2.0 cssselect-1.0.1 howdoi-1.1.9 lxml-3.7.2 pyquery-1.2.17 requests-2.13.0 requests-cache-0.4.13
Staging howdoi 
Priming howdoi 
Snapping 'howdoi' |                                                                       
Snapped howdoi_20170207_amd64.snap
$ snap install howdoi_20170207_amd64.snap --dangerous
howdoi 20170207 installed
$ howdoi format date bash
DATE=`date +%Y-%m-%d`
$ _

Beautiful! It worked!

Publish to the Ubuntu Store

Let’s publish the snap to the Ubuntu Store. We are going to push the file howdoi_20170207_amd64.snap and then check that it has passed the automatic checking. Once it has done so, we release to the stable channel.

$ snapcraft push howdoi_20170207_amd64.snap 
Pushing 'howdoi_20170207_amd64.snap' to the store.
Uploading howdoi_20170207_amd64.snap [=============================================================] 100%
Ready to release!|                                                                                       
Revision 1 of 'howdoi' created.

Just a reminder: we can release the snap to the stable channel simply by running snapcraft release howdoi 1 stable. The alternative to this command is to do all of the following through the Web.

We log in to https://myapps.developer.ubuntu.com/ to check whether the snap is ready to publish. In the following screenshots, you would click where the arrows are pointing. See the captions for explanations.

Here is the uploaded snap in our account page in the Ubuntu Store. The snap was uploaded using snapcraft, although it is also possible to upload it from the account page as well.

 

The package (the snap) is ready to publish, because it passed the automated tests and was not flagged for manual review.

By default, the package has not been released to a channel. We click on Release in order to select which channels to release it to.

For this specific package, we select the stable channel. It is not necessary to select the other channels, because a higher channel implies those below it by default. Then, click on the Release button.

The package got released, and it is shown as released in stable, candidate, beta and edge (we selected stable, but the rest are implied because “stable” beats them). Note that the Package status has changed to “Published”, and we have the option to Unpublish or even Make private. Ignore the arrow; it was pasted by mistake.

post image

How to create a snap for how2 (stackoverflow from the terminal) in Ubuntu 16.04

Stackoverflow is an invaluable resource for questions related to programming and other subjects.

Normally, the workflow for searching http://stackoverflow.com/, is to search Google using a Web browser. Most probably, the result will be a question from stackoverflow.

A more convenient way to query StackOverflow, is to use the how2 command-line utility.

Here is how it looks:

In this HowTo, we will see:

  1. How to set up snapcraft in order to make the snap
  2. How to write the initial snapcraft.yaml configuration
  3. Build the snap with trial and error
  4. Create the final snap
  5. Make the snap available to the Ubuntu Store

Set up snapcraft

snapcraft is a utility that helps us create snaps. Let’s install snapcraft.

$ sudo apt update
...
Reading state information... Done
All packages are up to date.
$ sudo apt install snapcraft
Reading package lists... Done
Building dependency tree       
Reading state information... Done
The following NEW packages will be installed:
  snapcraft
...
Preparing to unpack .../snapcraft_2.26_all.deb ...
Unpacking snapcraft (2.26) ...
Setting up snapcraft (2.26) ...
$_

In Ubuntu 16.04, snapcraft was updated in early February and has a few differences from the previous version. Make sure you have snapcraft 2.26 or newer.

Let’s create a new directory for the development of the how2 snap, and initialize it with snapcraft so that it creates the necessary initial files.

$ mkdir how2
$ cd how2/
$ snapcraft init
Created snap/snapcraft.yaml.
Edit the file to your liking or run `snapcraft` to get started
$ ls -l
total 4
drwxrwxr-x 2 myusername myusername 4096 Feb   6 14:09 snap
$ ls -l snap/
total 4
-rw-rw-r-- 1 myusername myusername 676 Feb   6 14:09 snapcraft.yaml
$ _

We are in this how2/ directory and from here we run snapcraft in order to create the snap. snapcraft will take the instructions from snap/snapcraft.yaml and do its best to create the snap.

These are the initial contents of snap/snapcraft.yaml:

name: my-snap-name # you probably want to 'snapcraft register <name>'
version: '0.1' # just for humans, typically '1.2+git' or '1.3.2'
summary: Single-line elevator pitch for your amazing snap # 79 char long summary
description: |
  This is my-snap's description. You have a paragraph or two to tell the
  most important story about your snap. Keep it under 100 words though,
  we live in tweetspace and your description wants to look good in the snap
  store.

grade: devel # must be 'stable' to release into candidate/stable channels
confinement: devmode # use 'strict' once you have the right plugs and slots

parts:
  my-part:
    # See 'snapcraft plugins'
    plugin: nil

I have formatted the first chunk of configuration lines of snapcraft.yaml as italics, because this chunk rarely changes while you develop the snap. The other chunk is where the actual action takes place. It is good to distinguish these two chunks.

This snap/snapcraft.yaml configuration file is actually usable and can create an (empty) snap. Let’s create this empty snap, install it, uninstall it and then clean up to the initial pristine state.

$ snapcraft 
Preparing to pull my-part 
Pulling my-part 
Preparing to build my-part 
Building my-part 
Staging my-part 
Priming my-part 
Snapping 'my-snap-name' |                                                                 
Snapped my-snap-name_0.1_amd64.snap
$ snap install my-snap-name_0.1_amd64.snap 
error: cannot find signatures with metadata for snap "my-snap-name_0.1_amd64.snap"
$ snap install my-snap-name_0.1_amd64.snap --dangerous
error: cannot perform the following tasks:
- Mount snap "my-snap-name" (unset) (snap "my-snap-name" requires devmode or confinement override)
Exit 1
$ snap install my-snap-name_0.1_amd64.snap --dangerous --devmode
my-snap-name 0.1 installed
$ snap remove my-snap-name
my-snap-name removed
$ snapcraft clean
Cleaning up priming area
Cleaning up staging area
Cleaning up parts directory
$ ls
my-snap-name_0.1_amd64.snap  snap/
$ rm my-snap-name_0.1_amd64.snap 
rm: remove regular file 'my-snap-name_0.1_amd64.snap'? y
removed 'my-snap-name_0.1_amd64.snap'
$ _

While developing the snap, we will be going through this cycle of creating the snap, testing it, and then removing it. There are ways to optimize this process a bit, and we will learn them soon.

In order to install the snap from a .snap file, we had to use --dangerous because the snap has not been digitally signed. We also had to use --devmode because snapcraft.yaml specifies devmode, which is a relaxed (in terms of permissions) development mode.

Writing the snapcraft.yaml for how2

Here is the first chunk of snapcraft.yaml, the chunk that does not change while developing the snap.

name: how2 # you probably want to 'snapcraft register <name>'
version: '20170206' # just for humans, typically '1.2+git' or '1.3.2'
summary: how2, stackoverflow from the terminal
description: |
  how2 finds the simplest way to do something in a unix shell. 
  It is like the man command, but you can query it in natural language.

grade: stable # must be 'stable' to release into candidate/stable channels
confinement: strict # use 'strict' once you have the right plugs and slots

We specify the name and version of the snap. The name is not already registered, nor is it reserved, because

$ snapcraft register how2
Registering how2.
Congratulations! You're now the publisher for 'how2'.

We add a suitable summary and description, conveniently copied from the development page of how2.

We set the grade to stable so that the snap can make it to the stable channel and be available to anyone.

We set the confinement to strict, which means that by default the snap will have no special access (no filesystem access, no network access, etc) unless we carefully allow what is really needed.

Here goes the other chunk.

apps:
  how2:
    command: how2

parts:
  how2:
    plugin: nodejs
    source: https://github.com/santinic/how2.git

How did we write this other chunk?

The apps: how2: command: how2 is generic. That is, we specify an app that we name how2, and it is invoked as a command with the name how2. The command could also be bin/how2 or node how2; we will figure out later whether we need to change it, because snapcraft will show an error message if so.

The parts: how2: plugin: nodejs is also generic. We know that how2 is built on nodejs; we figured that out from the github page of how2. Then, we looked into the list of plugins for snapcraft and found the nodejs plugin page. At the end of the nodejs plugin page there is a link to examples of the use of nodejs in snapcraft.yaml. This link is actually a github search with the terms filename:snapcraft.yaml “plugin: nodejs” (in all files named snapcraft.yaml, search for “plugin: nodejs”). For this search to work, you need to be logged in to Github first. For the specific case of nodejs, we can try without additional parameters, as most examples do not show any use of special parameters.

Work on the snapcraft.yaml with trial and error

We come up with the following snapcraft.yaml by piecing together the chunks from the previous section:

$ cat snap/snapcraft.yaml
name: how2 # you probably want to 'snapcraft register <name>'
version: '20170206' # just for humans, typically '1.2+git' or '1.3.2'
summary: how2, stackoverflow from the terminal
description: |
  how2 finds the simplest way to do something in a unix shell. 
  It is like the man command, but you can query it in natural language.

grade: devel # must be 'stable' to release into candidate/stable channels
confinement: strict # use 'strict' once you have the right plugs and slots

apps:
  how2:
    command: how2
    plugs:
      - network

parts:
  how2:
    plugin: nodejs
    source: https://github.com/santinic/how2.git

Let’s run snapcraft in order to build the snap.

$ snapcraft clean
Cleaning up priming area
Cleaning up staging area
Cleaning up parts directory
$ snapcraft 
Preparing to pull how2 
Pulling how2 
...
Downloading 'node-v4.4.4-linux-x64.tar.gz'[===============================] 100%
npm --cache-min=Infinity install
...
npm-latest@1.0.2 node_modules/npm-latest
├── vcsurl@0.1.1
├── colors@0.6.2
└── registry-url@3.1.0 (rc@1.1.6)
...
Preparing to build how2 
Building how2 
...
Staging how2 
Priming how2 
Snapping 'how2' |                                                                              
Snapped how2_20170206_amd64.snap
$ _

Wow, it successfully created the snap on the first try!
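Before installing, we can optionally peek inside the generated .snap (it is a squashfs image) to verify where the how2 executable ended up; the unsquashfs tool comes from the squashfs-tools package. This is an optional aside, not part of the original workflow:

$ unsquashfs -l how2_20170206_amd64.snap

Now let's install the snap and test it.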

$ sudo snap install how2_20170206_amd64.snap --dangerous
how2 20170206 installed
$ how2 read file while changing
/Cannot connect to Google.
Error: Error on response:Error: getaddrinfo EAI_AGAIN www.google.com:443 : undefined
$ _

The snap runs, and the only problem is the confinement: the strictly confined snap is not allowed to access the network. We need to allow the snap to access the Internet, and only the Internet.

Add the ability to access the Internet

To be able to access the network, we need to relax the confinement of the snap and allow access to the network interface.

There is a keyword called plugs, which accepts an array of interface names from the list of available interfaces.

In snapcraft.yaml, you can specify such an array in either of the following formats:

plugs: [network]

or

plugs:
  - network
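Once such a snap is installed, we can verify that the network plug actually got connected; the snap interfaces command lists the connections between plugs and slots (hedged: the exact output format varies across snapd versions):

$ snap interfaces how2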

Here is the final version of snapcraft.yaml for how2:

name: how2 # you probably want to 'snapcraft register <name>'
version: '20170206' # just for humans, typically '1.2+git' or '1.3.2'
summary: how2, stackoverflow from the terminal
description: |
  how2 finds the simplest way to do something in a unix shell. 
  It is like the man command, but you can query it in natural language.

grade: devel # must be 'stable' to release into candidate/stable channels
confinement: strict # use 'strict' once you have the right plugs and slots

apps:
  how2:
    command: how2
    plugs: [ network ]

parts:
  how2:
    plugin: nodejs
    source: https://github.com/santinic/how2.git

Let’s create the snap, install and run the test query.

$ snapcraft 
Skipping pull how2 (already ran)
Skipping build how2 (already ran)
Skipping stage how2 (already ran)
Skipping prime how2 (already ran)
Snapping 'how2' |                                                                              
Snapped how2_20170206_amd64.snap
$ sudo snap install how2_20170206_amd64.snap --dangerous
how2 20170206 installed
$ how2 read file while changing
terminal - Output file contents while they change

You can use tail command with -f  :


   tail -f /var/log/syslog 

It's good solution for real time  show.


Press SPACE for more choices, any other key to quit.

That’s it! It works fine!

Make the snap available in the Ubuntu Store

The command snapcraft push uploads the .snap file to the Ubuntu Store. Then, we use the snapcraft release command to release the snap into the beta channel of the Ubuntu Store. Because we specified the grade as devel, we cannot release into the stable (or candidate) channel. When we release a snap into the beta channel, it becomes available through the edge channel as well (shown with ^ in the channel map below), because beta is higher than edge.

$ snapcraft push how2_20170206_amd64.snap 
Pushing 'how2_20170206_amd64.snap' to the store.
Uploading how2_20170206_amd64.snap [====================================================================] 100%
Ready to release!|                                                                                            
Revision 1 of 'how2' created.
$ snapcraft release how2 1 stable
Revision 1 (strict) cannot target a stable channel (stable, grade: devel)
$ snapcraft release how2 1 beta
The 'beta' channel is now open.

Channel    Version    Revision
stable     -          -
candidate  -          -
beta       20170206   1
edge       ^          ^
$ _

Everything looks fine now. Let's remove the manually-installed snap and install it from the Ubuntu Store. Note that, since there is no release in the stable channel yet, a plain snap install will fail; we have to select the beta channel explicitly.

$ snap remove how2
how2 removed
$ snap info how2
name:      how2
summary:   "how2, stackoverflow from the terminal"
publisher: simosx
description: |
  how2 finds the simplest way to do something in a unix shell. 
  It is like the man command, but you can query it in natural language.
  
channels:              
  beta:   20170206 (1) 11MB -
  edge:   20170206 (1) 11MB -

$ snap install how2
error: cannot install "how2": snap not found
$ snap install how2 --channel=beta
how2 (beta) 20170206 from 'simosx' installed
$ how2 how to edit an XML file
How to change values in XML file

Using XMLStarlet (http://xmlstar.sourceforge.net/):
...omitted...
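Later, once we are confident about the snap, the rough path to the stable channel would be to change grade: devel to grade: stable in snapcraft.yaml, rebuild, push the new .snap and release the resulting revision to stable. A sketch, where revision 2 is hypothetical:

$ snapcraft
$ snapcraft push how2_20170206_amd64.snap
$ snapcraft release how2 2 stable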