Jul 23 2016

How to set up multiple secure (SSL/TLS, Qualys SSL Labs A+) websites using LXD containers

In previous posts we saw how to set up LXD on a DigitalOcean VPS, how to set up LXD on a Scaleway VPS, and what the lifecycle of an LXD container looks like.

In this post, we are going to

  1. Create multiple websites, each in a separate LXD container
  2. Install HAProxy as a TLS Termination Proxy, in an LXD container
  3. Configure HAProxy so that each website is only accessible through TLS
  4. Perform the SSL Server Test and verify that our websites really get that A+!

In this post, we are not going to install WordPress (or another CMS) on the websites. We keep this post simple; that is material for the next post.

The requirements are a VPS and a domain name.

Set up a VPS

We are using DigitalOcean in this example.

[Image: the DigitalOcean "Create droplet" page, with Ubuntu 16.04.1 selected]

Ubuntu 16.04.1 LTS was released a few days ago and DigitalOcean changed the Ubuntu default to 16.04.1. This is nice.

We are trying out the smallest droplet in order to figure out how many websites we can squeeze into containers. That is, 512MB RAM, a single virtual CPU core, and only 20GB of disk space!

In this example we are not using the new DigitalOcean block storage as at the moment it is available in only two datacentres.

Let’s click on the Create droplet button and the VPS is created!

Initial configuration

We are using DigitalOcean in this HowTo, and we have covered the initial configuration in this previous post.

Trying out LXD containers on Ubuntu on DigitalOcean

Go through the post and perform the tasks described in section «Set up LXD on DigitalOcean».

Creating the containers

We create three containers for three websites, plus one container for HAProxy.

ubuntu@ubuntu-512mb-ams3-01:~$ lxc init ubuntu:x web1
Creating web1
Retrieving image: 100%
ubuntu@ubuntu-512mb-ams3-01:~$ time lxc init ubuntu:x web2
Creating web2

real    0m6.620s
user    0m0.016s
sys    0m0.004s
ubuntu@ubuntu-512mb-ams3-01:~$ time lxc init ubuntu:x web3
Creating web3

real    1m15.723s
user    0m0.012s
sys    0m0.020s
ubuntu@ubuntu-512mb-ams3-01:~$ time lxc init ubuntu:x haproxy
Creating haproxy

real    0m48.747s
user    0m0.012s
sys    0m0.012s
ubuntu@ubuntu-512mb-ams3-01:~$

Normally it takes a few seconds for a new container to initialize. Remember that we are squeezing here: it’s a 512MB VPS, and the ZFS pool is stored in a file (not on a block device)! We watch the kernel messages of the VPS for lines similar to «Out of memory: Kill process 3829 (unsquashfs) score 524 or sacrifice child», which indicate that we hit the memory limit. While preparing this blog post, there were a couple of Out of memory kills, so I made sure that nothing critical was dying. If this is too much for you, you can select a 1GB RAM (or more) VPS and start over.

Let’s start the containers up!

ubuntu@ubuntu-512mb-ams3-01:~$ lxc start web1 web2 web3 haproxy
ubuntu@ubuntu-512mb-ams3-01:~$ lxc list
+---------+---------+-----------------------+------+------------+-----------+
|  NAME   |  STATE  |         IPV4          | IPV6 |    TYPE    | SNAPSHOTS |
+---------+---------+-----------------------+------+------------+-----------+
| haproxy | RUNNING | 10.234.150.39 (eth0)  |      | PERSISTENT | 0         |
+---------+---------+-----------------------+------+------------+-----------+
| web1    | RUNNING | 10.234.150.169 (eth0) |      | PERSISTENT | 0         |
+---------+---------+-----------------------+------+------------+-----------+
| web2    | RUNNING | 10.234.150.119 (eth0) |      | PERSISTENT | 0         |
+---------+---------+-----------------------+------+------------+-----------+
| web3    | RUNNING | 10.234.150.51 (eth0)  |      | PERSISTENT | 0         |
+---------+---------+-----------------------+------+------------+-----------+
ubuntu@ubuntu-512mb-ams3-01:~$

You may need to run lxc list a few times, until all containers have received an IP address. That means they have all completed their startup.
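If you want to script this wait instead of re-running lxc list by hand, here is a minimal sketch; the helper names are hypothetical, and it simply greps the lxc list output for an eth0 address.

```shell
#!/bin/sh
# has_ip: succeeds once the container's row in `lxc list` shows an
# IPv4 address on eth0 (i.e. the container finished starting up).
has_ip() {
    lxc list "$1" | grep -q '(eth0)'
}

# wait_for_containers: poll each named container until it has an address.
wait_for_containers() {
    for c in "$@"; do
        until has_ip "$c"; do sleep 2; done
        echo "$c is up"
    done
}

# Usage: wait_for_containers web1 web2 web3 haproxy
```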

DNS configuration

The public IP address of this specific VPS is 188.166.10.229. For this test, I am using the domain ubuntugreece.xyz as follows:

  1. Container web1: ubuntugreece.xyz and www.ubuntugreece.xyz have IP 188.166.10.229
  2. Container web2: web2.ubuntugreece.xyz has IP 188.166.10.229
  3. Container web3: web3.ubuntugreece.xyz has IP 188.166.10.229

Here is how it looks when configured on a DNS management console,

[Image: the DNS records for the three hostnames, as configured in the Namecheap management console]

From here on, it is a waiting game until these DNS records propagate across the Internet. We need to wait until those hostnames resolve to their IP address.

ubuntu@ubuntu-512mb-ams3-01:~$ host ubuntugreece.xyz
ubuntugreece.xyz has address 188.166.10.229
ubuntu@ubuntu-512mb-ams3-01:~$ host web2.ubuntugreece.xyz
Host web2.ubuntugreece.xyz not found: 3(NXDOMAIN)
ubuntu@ubuntu-512mb-ams3-01:~$ host web3.ubuntugreece.xyz
web3.ubuntugreece.xyz has address 188.166.10.229
ubuntu@ubuntu-512mb-ams3-01:~$

These are the results after ten minutes: ubuntugreece.xyz and web3.ubuntugreece.xyz resolve fine, while web2.ubuntugreece.xyz needs a bit more time.

We can continue (and ignore web2 for now)!
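The waiting itself can also be automated. A minimal sketch, assuming the host command used above is available; the helper names are mine:

```shell
#!/bin/sh
# resolves: succeeds once the hostname resolves to an address.
resolves() {
    host "$1" > /dev/null 2>&1
}

# wait_for_dns: poll each hostname once a minute until it resolves.
wait_for_dns() {
    for h in "$@"; do
        until resolves "$h"; do sleep 60; done
        echo "$h resolves"
    done
}

# Usage: wait_for_dns ubuntugreece.xyz web2.ubuntugreece.xyz web3.ubuntugreece.xyz
```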

Web server configuration

Let’s see the configuration for web1. You must repeat the following for web2 and web3.

We install the nginx web server,

ubuntu@ubuntu-512mb-ams3-01:~$ lxc exec web1 -- /bin/bash
root@web1:~# apt update
Get:1 http://security.ubuntu.com/ubuntu xenial-security InRelease [94.5 kB]
...
3 packages can be upgraded. Run 'apt list --upgradable' to see them.
root@web1:~# apt upgrade
Reading package lists... Done
...
Processing triggers for initramfs-tools (0.122ubuntu8.1) ...
root@web1:~# apt install nginx
Reading package lists... Done
...
Processing triggers for ufw (0.35-0ubuntu2) ...
root@web1:~#

nginx needs to be configured so that it understands the domain name for web1. Here is the diff,

diff --git a/etc/nginx/sites-available/default b/etc/nginx/sites-available/default
index a761605..b2cea8f 100644
--- a/etc/nginx/sites-available/default
+++ b/etc/nginx/sites-available/default
@@ -38,7 +38,7 @@ server {
        # Add index.php to the list if you are using PHP
        index index.html index.htm index.nginx-debian.html;
 
-       server_name _;
+       server_name ubuntugreece.xyz www.ubuntugreece.xyz;
 
        location / {
                # First attempt to serve request as file, then

and finally we restart nginx and exit the web1 container,

root@web1:/etc/nginx/sites-enabled# systemctl restart nginx
root@web1:/etc/nginx/sites-enabled# exit
exit
ubuntu@ubuntu-512mb-ams3-01:~$
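Repeating these steps for web2 and web3 can be scripted from the host with lxc exec. This is a rough sketch: the helper name and the non-interactive apt-get flags are my assumptions, and the sed expression mirrors the diff shown above.

```shell
#!/bin/sh
# configure_web: install nginx inside a container and set its server_name.
# Usage: configure_web <container> <server_name>
configure_web() {
    lxc exec "$1" -- sh -c "
        apt-get update &&
        DEBIAN_FRONTEND=noninteractive apt-get install -y nginx &&
        sed -i 's/server_name _;/server_name $2;/' \
            /etc/nginx/sites-available/default &&
        systemctl restart nginx
    "
}

# configure_web web2 web2.ubuntugreece.xyz
# configure_web web3 web3.ubuntugreece.xyz
```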

Forwarding connections to the HAProxy container

We are about to set up the HAProxy container. Let’s add iptables rules to forward connections to ports 80 and 443 on the VPS to the HAProxy container.

ubuntu@ubuntu-512mb-ams3-01:~$ ifconfig eth0
eth0      Link encap:Ethernet  HWaddr 04:01:36:50:00:01  
          inet addr:188.166.10.229  Bcast:188.166.63.255  Mask:255.255.192.0
          inet6 addr: fe80::601:36ff:fe50:1/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:40513 errors:0 dropped:0 overruns:0 frame:0
          TX packets:26362 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:360767509 (360.7 MB)  TX bytes:3863846 (3.8 MB)

ubuntu@ubuntu-512mb-ams3-01:~$ lxc list
+---------+---------+-----------------------+------+------------+-----------+
|  NAME   |  STATE  |         IPV4          | IPV6 |    TYPE    | SNAPSHOTS |
+---------+---------+-----------------------+------+------------+-----------+
| haproxy | RUNNING | 10.234.150.39 (eth0)  |      | PERSISTENT | 0         |
+---------+---------+-----------------------+------+------------+-----------+
| web1    | RUNNING | 10.234.150.169 (eth0) |      | PERSISTENT | 0         |
+---------+---------+-----------------------+------+------------+-----------+
| web2    | RUNNING | 10.234.150.119 (eth0) |      | PERSISTENT | 0         |
+---------+---------+-----------------------+------+------------+-----------+
| web3    | RUNNING | 10.234.150.51 (eth0)  |      | PERSISTENT | 0         |
+---------+---------+-----------------------+------+------------+-----------+
ubuntu@ubuntu-512mb-ams3-01:~$ sudo iptables -t nat -I PREROUTING -i eth0 -p TCP -d 188.166.10.229/32 --dport 80 -j DNAT --to-destination 10.234.150.39:80
[sudo] password for ubuntu: 
ubuntu@ubuntu-512mb-ams3-01:~$ sudo iptables -t nat -I PREROUTING -i eth0 -p TCP -d 188.166.10.229/32 --dport 443 -j DNAT --to-destination 10.234.150.39:443
ubuntu@ubuntu-512mb-ams3-01:~$

If you want to make those changes permanent, see Saving Iptables Firewall Rules Permanently (the part about the package iptables-persistent).
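In short, and assuming the iptables-persistent package described in that post, the steps look like this (a sketch, not the full instructions):

```shell
# Install the package; during installation it offers to save the current rules.
sudo apt install iptables-persistent
# After changing rules later, save the current ones so they survive a reboot:
sudo netfilter-persistent save
```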

HAProxy initial configuration

Let’s see how to configure HAProxy in container haproxy. We enter the container, update the software and install the haproxy package.

ubuntu@ubuntu-512mb-ams3-01:~$ lxc exec haproxy -- /bin/bash
root@haproxy:~# apt update
Hit:1 http://archive.ubuntu.com/ubuntu xenial InRelease
...
3 packages can be upgraded. Run 'apt list --upgradable' to see them.
root@haproxy:~# apt upgrade
Reading package lists... Done
...
Processing triggers for initramfs-tools (0.122ubuntu8.1) ...
root@haproxy:~# apt install haproxy
Reading package lists... Done
...
Processing triggers for ureadahead (0.100.0-19) ...
root@haproxy:~#

We add the following configuration to /etc/haproxy/haproxy.cfg. Initially, we do not have any certificates for TLS, but we need the web servers to work over plain HTTP so that Let’s Encrypt can verify that we own the websites. Therefore, here is the complete configuration, with two lines commented out (they start with ###) so that plain HTTP can work. As soon as we deal with Let’s Encrypt, we go full TLS (by uncommenting the two lines that start with ###) and never look back. We mention later in the post when to uncomment them.

diff --git a/etc/haproxy/haproxy.cfg b/etc/haproxy/haproxy.cfg
index 86da67d..f6f2577 100644
--- a/etc/haproxy/haproxy.cfg
+++ b/etc/haproxy/haproxy.cfg
@@ -18,11 +18,17 @@ global
     ssl-default-bind-ciphers ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:ECDH+3DES:DH+3DES:RSA+AESGCM:RSA+AES:RSA+3DES:!aNULL:!MD5:!DSS
     ssl-default-bind-options no-sslv3
 
+        # Minimum DH ephemeral key size. Otherwise, this size would drop to 1024.
+        # @link: https://cbonte.github.io/haproxy-dconv/configuration-1.5.html#3.2-tune.ssl.default-dh-param
+        tune.ssl.default-dh-param 2048
+
 defaults
     log    global
     mode    http
     option    httplog
     option    dontlognull
+        option  forwardfor
+        option  http-server-close
         timeout connect 5000
         timeout client  50000
         timeout server  50000
@@ -33,3 +39,56 @@ defaults
     errorfile 502 /etc/haproxy/errors/502.http
     errorfile 503 /etc/haproxy/errors/503.http
     errorfile 504 /etc/haproxy/errors/504.http
+
+# Configuration of the frontend (HAProxy as a TLS Termination Proxy)
+frontend www_frontend
+    # We bind on port 80 (http) but (see below) get HAProxy to force-switch to HTTPS.
+    bind *:80
+    # We bind on port 443 (https) and specify a directory with the certificates.
+####    bind *:443 ssl crt /etc/haproxy/certs/
+    # We get HAProxy to force-switch to HTTPS, if the connection was just HTTP.
+####    redirect scheme https if !{ ssl_fc }
+    # TLS terminates at HAProxy, the container runs in plain HTTP. Here, HAProxy informs nginx
+    # that there was a TLS Termination Proxy. Required for WordPress and other CMS.
+    reqadd X-Forwarded-Proto:\ https
+
+    # Distinguish between secure and insecure requests (used in next two lines)
+    acl secure dst_port eq 443
+
+    # Mark all cookies as secure if sent over SSL
+    rsprep ^Set-Cookie:\ (.*) Set-Cookie:\ \1;\ Secure if secure
+
+    # Add the HSTS header with a 1 year max-age
+    rspadd Strict-Transport-Security:\ max-age=31536000 if secure
+
+    # Configuration for each virtual host (uses Server Name Indication, SNI)
+    acl host_ubuntugreece_xyz hdr(host) -i ubuntugreece.xyz www.ubuntugreece.xyz
+    acl host_web2_ubuntugreece_xyz hdr(host) -i web2.ubuntugreece.xyz
+    acl host_web3_ubuntugreece_xyz hdr(host) -i web3.ubuntugreece.xyz
+
+    # Directing the connection to the correct LXD container
+    use_backend web1_cluster if host_ubuntugreece_xyz
+    use_backend web2_cluster if host_web2_ubuntugreece_xyz
+    use_backend web3_cluster if host_web3_ubuntugreece_xyz
+
+# Configuration of the backend (HAProxy as a TLS Termination Proxy)
+backend web1_cluster
+    balance leastconn
+    # We set the X-Client-IP HTTP header. This is useful if we want the web server to know the real client IP.
+    http-request set-header X-Client-IP %[src]
+    # This backend, named here "web1", directs to container "web1.lxd" (hostname).
+    server web1 web1.lxd:80 check
+
+backend web2_cluster
+    balance leastconn
+    # We set the X-Client-IP HTTP header. This is useful if we want the web server to know the real client IP.
+    http-request set-header X-Client-IP %[src]
+    # This backend, named here "web2", directs to container "web2.lxd" (hostname).
+    server web2 web2.lxd:80 check
+
+backend web3_cluster
+    balance leastconn
+    # We set the X-Client-IP HTTP header. This is useful if we want the web server to know the real client IP.
+    http-request set-header X-Client-IP %[src]
+    # This backend, named here "web3", directs to container "web3.lxd" (hostname).
+    server web3 web3.lxd:80 check

Let’s restart HAProxy. You can validate the configuration file beforehand with haproxy -c -f /etc/haproxy/haproxy.cfg. If you get any errors, run systemctl status haproxy and try to figure out what went wrong.

root@haproxy:~# systemctl restart haproxy
root@haproxy:~# exit
ubuntu@ubuntu-512mb-ams3-01:~$

Does it work? Let’s visit the website,

[Image: the ubuntugreece.xyz website loading over plain HTTP]

It is working! Let’s Encrypt will be able to access and verify that we own the domain in the next step.

Get certificates from Let’s Encrypt

We exit out to the VPS and install letsencrypt.

ubuntu@ubuntu-512mb-ams3-01:~$ sudo apt install letsencrypt
[sudo] password for ubuntu: 
Reading package lists... Done
...
Setting up python-pyicu (1.9.2-2build1) ...
ubuntu@ubuntu-512mb-ams3-01:~$

We run letsencrypt three times, once for each website. Update: it is also possible to simplify the following by using a multiple-domain (Subject Alternative Names, SAN) certificate. Thanks to @jack who mentioned this in the comments.

ubuntu@ubuntu-512mb-ams3-01:~$ sudo letsencrypt certonly --authenticator webroot --webroot-path=/var/lib/lxd/containers/web1/rootfs/var/www/html -d ubuntugreece.xyz -d www.ubuntugreece.xyz
... they ask for a contact e-mail address and whether we accept the Terms of Service...

IMPORTANT NOTES:
 - If you lose your account credentials, you can recover through
   e-mails sent to xxxxx@gmail.com.
 - Congratulations! Your certificate and chain have been saved at
   /etc/letsencrypt/live/ubuntugreece.xyz/fullchain.pem. Your cert
   will expire on 2016-10-21. To obtain a new version of the
   certificate in the future, simply run Let's Encrypt again.
 - Your account credentials have been saved in your Let's Encrypt
   configuration directory at /etc/letsencrypt. You should make a
   secure backup of this folder now. This configuration directory will
   also contain certificates and private keys obtained by Let's
   Encrypt so making regular backups of this folder is ideal.
 - If you like Let's Encrypt, please consider supporting our work by:

   Donating to ISRG / Let's Encrypt:   https://letsencrypt.org/donate
   Donating to EFF:                    https://eff.org/donate-le

ubuntu@ubuntu-512mb-ams3-01:~$

For completeness, here are the command lines for the other two websites,

ubuntu@ubuntu-512mb-ams3-01:~$ sudo letsencrypt certonly --authenticator webroot --webroot-path=/var/lib/lxd/containers/web2/rootfs/var/www/html -d web2.ubuntugreece.xyz

IMPORTANT NOTES:
 - Congratulations! Your certificate and chain have been saved at
   /etc/letsencrypt/live/web2.ubuntugreece.xyz/fullchain.pem. Your
   cert will expire on 2016-10-21. To obtain a new version of the
   certificate in the future, simply run Let's Encrypt again.
 - If you like Let's Encrypt, please consider supporting our work by:

   Donating to ISRG / Let's Encrypt:   https://letsencrypt.org/donate
   Donating to EFF:                    https://eff.org/donate-le

ubuntu@ubuntu-512mb-ams3-01:~$ time sudo letsencrypt certonly --authenticator webroot --webroot-path=/var/lib/lxd/containers/web3/rootfs/var/www/html -d web3.ubuntugreece.xyz

IMPORTANT NOTES:
 - Congratulations! Your certificate and chain have been saved at
   /etc/letsencrypt/live/web3.ubuntugreece.xyz/fullchain.pem. Your
   cert will expire on 2016-10-21. To obtain a new version of the
   certificate in the future, simply run Let's Encrypt again.
 - If you like Let's Encrypt, please consider supporting our work by:

   Donating to ISRG / Let's Encrypt:   https://letsencrypt.org/donate
   Donating to EFF:                    https://eff.org/donate-le


real    0m18.458s
user    0m0.852s
sys    0m0.172s
ubuntu@ubuntu-512mb-ams3-01:~$

Yeah, it takes only around twenty seconds to get your Let’s Encrypt certificate!

We got the certificates, now we need to prepare them so that HAProxy (our TLS Termination Proxy) can make use of them. We just need to join together the certificate chain and the private key for each certificate, and place them in the haproxy container at the appropriate directory.

ubuntu@ubuntu-512mb-ams3-01:~$ sudo mkdir /var/lib/lxd/containers/haproxy/rootfs/etc/haproxy/certs/
ubuntu@ubuntu-512mb-ams3-01:~$ DOMAIN='ubuntugreece.xyz' sudo -E bash -c 'cat /etc/letsencrypt/live/$DOMAIN/fullchain.pem /etc/letsencrypt/live/$DOMAIN/privkey.pem > /var/lib/lxd/containers/haproxy/rootfs/etc/haproxy/certs/$DOMAIN.pem'
ubuntu@ubuntu-512mb-ams3-01:~$ DOMAIN='web2.ubuntugreece.xyz' sudo -E bash -c 'cat /etc/letsencrypt/live/$DOMAIN/fullchain.pem /etc/letsencrypt/live/$DOMAIN/privkey.pem > /var/lib/lxd/containers/haproxy/rootfs/etc/haproxy/certs/$DOMAIN.pem'
ubuntu@ubuntu-512mb-ams3-01:~$ DOMAIN='web3.ubuntugreece.xyz' sudo -E bash -c 'cat /etc/letsencrypt/live/$DOMAIN/fullchain.pem /etc/letsencrypt/live/$DOMAIN/privkey.pem > /var/lib/lxd/containers/haproxy/rootfs/etc/haproxy/certs/$DOMAIN.pem'
ubuntu@ubuntu-512mb-ams3-01:~$
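Note that Let’s Encrypt certificates expire after 90 days, so after each renewal the combined .pem files have to be rebuilt the same way. Here is a sketch of that re-concatenation step; the helper name is hypothetical, and the paths mirror the commands above:

```shell
#!/bin/sh
# combine_cert: join a certificate chain and its private key into the
# single .pem file that HAProxy expects.
# Usage: combine_cert <letsencrypt-live-dir> <haproxy-certs-dir> <domain>
combine_cert() {
    cat "$1/$3/fullchain.pem" "$1/$3/privkey.pem" > "$2/$3.pem"
}

# for d in ubuntugreece.xyz web2.ubuntugreece.xyz web3.ubuntugreece.xyz; do
#     combine_cert /etc/letsencrypt/live \
#         /var/lib/lxd/containers/haproxy/rootfs/etc/haproxy/certs "$d"
# done
```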

HAProxy final configuration

We are almost there. We need to enter the haproxy container and uncomment those two lines (the ones starting with ###) so that HAProxy works as a TLS Termination Proxy. Then, we restart the haproxy service.

ubuntu@ubuntu-512mb-ams3-01:~$ lxc exec haproxy bash
root@haproxy:~# vi /etc/haproxy/haproxy.cfg 

[Image: the two uncommented bind and redirect lines in /etc/haproxy/haproxy.cfg]
root@haproxy:/etc/haproxy# systemctl restart haproxy
root@haproxy:/etc/haproxy# exit
ubuntu@ubuntu-512mb-ams3-01:~$

Let’s test them!

Here are the three websites, notice the padlocks on all three of them,

The SSL Server Report (Qualys)

Here are the SSL Server Reports for each website,

You can check the cached reports for LXD container web1, LXD container web2 and LXD container web3.

Results

The disk space requirements for those four containers (three static websites plus haproxy) are

ubuntu@ubuntu-512mb-ams3-01:~$ sudo zpool list
[sudo] password for ubuntu: 
NAME         SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
mypool-lxd  14.9G  1.13G  13.7G         -     4%     7%  1.00x  ONLINE  -
ubuntu@ubuntu-512mb-ams3-01:~$

The four containers required a bit over 1GB of disk space.

The biggest concern has been the limited 512MB of RAM. The Out Of Memory (OOM) killer was invoked a few times during the first steps of container creation, but not afterwards when the nginx instances were launched.

ubuntu@ubuntu-512mb-ams3-01:~$ dmesg | grep "Out of memory"
[  181.976117] Out of memory: Kill process 3829 (unsquashfs) score 524 or sacrifice child
[  183.792372] Out of memory: Kill process 3834 (unsquashfs) score 525 or sacrifice child
[  190.332834] Out of memory: Kill process 3831 (unsquashfs) score 525 or sacrifice child
[  848.834570] Out of memory: Kill process 6378 (localedef) score 134 or sacrifice child
[  860.833991] Out of memory: Kill process 6400 (localedef) score 143 or sacrifice child
[  878.837410] Out of memory: Kill process 6436 (localedef) score 151 or sacrifice child
ubuntu@ubuntu-512mb-ams3-01:~$

There was an error while creating one of the containers in the beginning. I repeated the creation command and it completed successfully. That error was probably related to one of these unsquashfs kills.

Summary

We set up a $5 VPS (512MB RAM, 1 CPU core and 20GB SSD disk) with Ubuntu 16.04.1 LTS, then configured LXD to manage containers.

We created three containers for three static websites, and an additional container for HAProxy to work as a TLS Termination Proxy.

We got certificates for those three websites, and verified that they all pass with A+ at the Qualys SSL Server Report.

The 512MB RAM VPS should be OK for a few low traffic websites, especially those generated by static site generators.

 

Permanent link to this article: https://blog.simos.info/how-to-set-up-multiple-secure-ssltls-qualys-ssl-labs-a-websites-using-lxd-containers/

Jul 22 2016

Playing around with LXD containers (LXC) on Ubuntu

We have set up LXD either on our personal computer or on the cloud (with providers like DigitalOcean and Scaleway). Actually, we can even try LXD online for free at https://linuxcontainers.org/lxd/try-it/

What shall we do next?

Commands through “lxc”

Below we see a series of commands that start with lxc, followed by an action, and finally any parameters. Here, lxc is the client program that communicates with the LXD service and performs the actions we request. That is,

lxc action parameters

There are also a series of commands that are specific to a type of object. In that case, we add the object type and continue with the action and the parameters.

lxc object action parameters

List the available containers

Let’s use the list action, which lists the available containers.

ubuntu@myvps:~$ lxc list
Generating a client certificate. This may take a minute...
If this is your first time using LXD, you should also run: sudo lxd init
To start your first container, try: lxc launch ubuntu:16.04
+------+-------+------+------+------+-----------+
| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
+------+-------+------+------+------+-----------+
ubuntu@myvps:~$
  1. The first time you run lxc list, it creates a client certificate (and stores it in ~/.config/lxc/). This takes a few seconds and happens only once.
  2. The command also advises us to run sudo lxd init (note: lxd) if we have not done so already. Consult the configuration posts if in doubt here.
  3. In addition, it suggests how to start (launch) our first container.
  4. Finally, it shows the list of available containers on this computer, which is empty (because we have not created any yet).

List the locally available images for containers

Let’s use the image object, and then the list action, which lists the available (probably cached) images that are hosted by our LXD service.

ubuntu@myvps:~$ lxc image list
+-------+-------------+--------+-------------+------+------+-------------+
| ALIAS | FINGERPRINT | PUBLIC | DESCRIPTION | ARCH | SIZE | UPLOAD DATE |
+-------+-------------+--------+-------------+------+------+-------------+
ubuntu@myvps:~$

There are no locally available images yet, so the list is empty.

List the remotely available images for containers

Let’s use the image object, and then the list action, and finally a remote repository specifier (ubuntu:) in order to list some publicly available images that we can use to create containers.
ubuntu@myvps:~$ lxc image list ubuntu:
+--------------------+--------------+--------+-------------------------------------------------+---------+----------+-------------------------------+
|       ALIAS        | FINGERPRINT  | PUBLIC |                   DESCRIPTION                   |  ARCH   |   SIZE   |          UPLOAD DATE          |        
+--------------------+--------------+--------+-------------------------------------------------+---------+----------+-------------------------------+        
| p (5 more)         | 6b6fa83dacb0 | yes    | ubuntu 12.04 LTS amd64 (release) (20160627)     | x86_64  | 155.43MB | Jun 27, 2016 at 12:00am (UTC) |        
+--------------------+--------------+--------+-------------------------------------------------+---------+----------+-------------------------------+        
| p/armhf (2 more)   | 06604b173b99 | yes    | ubuntu 12.04 LTS armhf (release) (20160627)     | armv7l  | 135.90MB | Jun 27, 2016 at 12:00am (UTC) |        
+--------------------+--------------+--------+-------------------------------------------------+---------+----------+-------------------------------+    
...
+--------------------+--------------+--------+-------------------------------------------------+---------+----------+-------------------------------+        
| x (5 more)         | f452cda3bccb | yes    | ubuntu 16.04 LTS amd64 (release) (20160627)     | x86_64  | 138.23MB | Jun 27, 2016 at 12:00am (UTC) |        
+--------------------+--------------+--------+-------------------------------------------------+---------+----------+-------------------------------+        
| x/arm64 (2 more)   | 46b365e258a0 | yes    | ubuntu 16.04 LTS arm64 (release) (20160627)     | aarch64 | 146.72MB | Jun 27, 2016 at 12:00am (UTC) |        
+--------------------+--------------+--------+-------------------------------------------------+---------+----------+-------------------------------+        
| x/armhf (2 more)   | 22f668affe3d | yes    | ubuntu 16.04 LTS armhf (release) (20160627)     | armv7l  | 148.18MB | Jun 27, 2016 at 12:00am (UTC) |        
+--------------------+--------------+--------+-------------------------------------------------+---------+----------+-------------------------------+        
...
+--------------------+--------------+--------+-------------------------------------------------+---------+----------+-------------------------------+        
|                    | 4c6f7b94e46a | yes    | ubuntu 16.04 LTS s390x (release) (20160516.1)   | s390x   | 131.07MB | May 16, 2016 at 12:00am (UTC) |        
+--------------------+--------------+--------+-------------------------------------------------+---------+----------+-------------------------------+        
|                    | ddfa8f2d4cfb | yes    | ubuntu 16.04 LTS s390x (release) (20160610)     | s390x   | 131.41MB | Jun 10, 2016 at 12:00am (UTC) |        
+--------------------+--------------+--------+-------------------------------------------------+---------+----------+-------------------------------+        
ubuntu@myvps:~$
The repository ubuntu: is a curated list of images from Canonical, with all sorts of Ubuntu versions (from 12.04 onward) and architectures (such as x86_64, ARM, and even s390x).
The first column is the nickname, or alias. Ubuntu 16.04 LTS for x86_64 has the alias x, so we can use that, or we can specify the fingerprint (here: f452cda3bccb).

Show information for a remotely available image for containers

Let’s use the image object, then the info action, and finally a remote image specifier (ubuntu:x) in order to get information about a specific publicly available image that we can use to create containers.
ubuntu@myvps:~$ lxc image info ubuntu:x
    Uploaded: 2016/06/27 00:00 UTC                                                                                                                           
    Expires: 2021/04/21 00:00 UTC                                                                                                                            

Properties:                                                                                                                                                  
    aliases: 16.04,x,xenial                                                                                                                                  
    os: ubuntu                                                                                                                                               
    release: xenial                                                                                                                                          
    version: 16.04                                                                                                                                           
    architecture: amd64                                                                                                                                      
    label: release                                                                                                                                           
    serial: 20160627                                                                                                                                         
    description: ubuntu 16.04 LTS amd64 (release) (20160627)                                                                                                 

Aliases:                                                                                                                                                     
    - 16.04                                                                                                                                                  
    - 16.04/amd64                                                                                                                                            
    - x                                                                                                                                                      
    - x/amd64                                                                                                                                                
    - xenial                                                                                                                                                 
    - xenial/amd64                                                                                                                                           

Auto update: disabled           
ubuntu@myvps:~$

Here we can see the full list of aliases for the 16.04 image (x86_64). The simplest of all, is x.

Life cycle of a container

Here is the life cycle of a container. First you initialise a container from an image, which creates the container in the stopped state. Then you can start and stop it. Finally, while it is stopped, you may choose to delete it.

[Image: diagram of the life cycle of an LXD container]

  1. We initialise a container with Ubuntu 16.04 (ubuntu:x) and give it the name mycontainer. Since we do not yet have any locally cached images, this one is downloaded and cached for us. If we need another Ubuntu 16.04 container later, it will be prepared instantly since the image is already cached locally.
  2. When we initialise a container from an image, it is in the STOPPED state. When we start it, it moves to the RUNNING state.
  3. When we start a container, the runtime (or rootfs) boots up, and it may take a few seconds until the network is up and running. Below we can see that it took a few seconds until the container got its IPv4 address through DHCP from LXD.
  4. We can install web servers and other services in the container. Here, we just execute a bash shell in order to get shell access inside the container and run the uname command.
  5. We promptly exit the container and stop it.
  6. Then, we delete the container and verify that it has been deleted (it is no longer shown in lxc list).
  7. Finally, we also verify that the image is still cached locally in LXD, waiting for the next creation of a container.
Here are the commands,
ubuntu@myvps:~$ lxc init ubuntu:x mycontainer
Creating mycontainer                                                                                                                                         
Retrieving image: 100%                                                                                                                                       
ubuntu@myvps:~$ lxc image list
+-------+--------------+--------+---------------------------------------------+--------+----------+------------------------------+                           
| ALIAS | FINGERPRINT  | PUBLIC |                 DESCRIPTION                 |  ARCH  |   SIZE   |         UPLOAD DATE          |                           
+-------+--------------+--------+---------------------------------------------+--------+----------+------------------------------+                           
|       | f452cda3bccb | no     | ubuntu 16.04 LTS amd64 (release) (20160627) | x86_64 | 138.23MB | Jul 22, 2016 at 2:10pm (UTC) |                           
+-------+--------------+--------+---------------------------------------------+--------+----------+------------------------------+
ubuntu@myvps:~$ lxc list
+-------------+---------+------+------+------------+-----------+                                                                                             
|    NAME     |  STATE  | IPV4 | IPV6 |    TYPE    | SNAPSHOTS |                                                                                             
+-------------+---------+------+------+------------+-----------+                                                                                             
| mycontainer | STOPPED |      |      | PERSISTENT | 0         |                                                                                             
+-------------+---------+------+------+------------+-----------+                                                                                             
ubuntu@myvps:~$ lxc start mycontainer
ubuntu@myvps:~$ lxc list     
+-------------+---------+------+-----------------------------------------------+------------+-----------+                                                    
|    NAME     |  STATE  | IPV4 |                     IPV6                      |    TYPE    | SNAPSHOTS |                                                    
+-------------+---------+------+-----------------------------------------------+------------+-----------+                                                    
| mycontainer | RUNNING |      | 2607:f2c0:f00f:2770:216:3eff:fe4a:ccfd (eth0) | PERSISTENT | 0         |                                                    
+-------------+---------+------+-----------------------------------------------+------------+-----------+
ubuntu@myvps:~$ lxc list
+-------------+---------+-----------------------+-----------------------------------------------+------------+-----------+                                   
|    NAME     |  STATE  |         IPV4          |                     IPV6                      |    TYPE    | SNAPSHOTS |                                   
+-------------+---------+-----------------------+-----------------------------------------------+------------+-----------+                                   
| mycontainer | RUNNING | 10.200.214.147 (eth0) | 2607:f2c0:f00f:2770:216:3eff:fe4a:ccfd (eth0) | PERSISTENT | 0         |                                   
+-------------+---------+-----------------------+-----------------------------------------------+------------+-----------+                                   
ubuntu@myvps:~$ lxc exec mycontainer -- /bin/bash       
root@mycontainer:~# uname -a
Linux mycontainer 4.4.0-31-generic #50~14.04.1-Ubuntu SMP Wed Jul 13 01:07:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux                                        
root@mycontainer:~# exit
exit
ubuntu@myvps:~$ lxc list
+-------------+---------+-----------------------+-----------------------------------------------+------------+-----------+                                   
|    NAME     |  STATE  |         IPV4          |                     IPV6                      |    TYPE    | SNAPSHOTS |                                   
+-------------+---------+-----------------------+-----------------------------------------------+------------+-----------+                                   
| mycontainer | RUNNING | 10.200.214.147 (eth0) | 2607:f2c0:f00f:2770:216:3eff:fe4a:ccfd (eth0) | PERSISTENT | 0         |                                   
+-------------+---------+-----------------------+-----------------------------------------------+------------+-----------+                                   
ubuntu@myvps:~$ lxc stop mycontainer
ubuntu@myvps:~$ lxc list
+-------------+---------+------+------+------------+-----------+                                                                                             
|    NAME     |  STATE  | IPV4 | IPV6 |    TYPE    | SNAPSHOTS |                                                                                             
+-------------+---------+------+------+------------+-----------+                                                                                             
| mycontainer | STOPPED |      |      | PERSISTENT | 0         |                                                                                             
+-------------+---------+------+------+------------+-----------+       
ubuntu@myvps:~$ lxc delete mycontainer
ubuntu@myvps:~$ lxc list
+------+-------+------+------+------+-----------+                                                                                                            
| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |                                                                                                            
+------+-------+------+------+------+-----------+                                                                                                            
ubuntu@myvps:~$ lxc image list
+-------+--------------+--------+---------------------------------------------+--------+----------+------------------------------+                           
| ALIAS | FINGERPRINT  | PUBLIC |                 DESCRIPTION                 |  ARCH  |   SIZE   |         UPLOAD DATE          |                           
+-------+--------------+--------+---------------------------------------------+--------+----------+------------------------------+                           
|       | f452cda3bccb | no     | ubuntu 16.04 LTS amd64 (release) (20160627) | x86_64 | 138.23MB | Jul 22, 2016 at 2:10pm (UTC) |                           
+-------+--------------+--------+---------------------------------------------+--------+----------+------------------------------+                           
ubuntu@myvps:~$

Some tutorials mention the launch action, which performs both init and start. Here is how the command would look,

lxc launch ubuntu:x mycontainer

We are nearing the point where we can start doing interesting things with containers. Let’s see the next blog post!

Permanent link to this article: https://blog.simos.info/playing-around-with-lxd-containers-lxc-on-ubuntu/

Jul 13 2016

How to install LXD containers on Ubuntu on Scaleway

Scaleway, a subsidiary of Online.net, offers affordable VPSes and baremetal ARM servers. They became rather well-known when they first introduced those ARM servers.

When you install Ubuntu 16.04 on a Scaleway VPS, some specific configuration (compiling ZFS as a DKMS module) is required in order to get LXD working. In this post, we go through those additional steps to get LXD up and running on a Scaleway VPS.

An issue with Scaleway is that they heavily modify the configuration of the Linux kernel, so you do not get the stock Ubuntu kernel when you install Ubuntu 16.04. There is a feature request to get ZFS compiled into the kernel at https://community.online.net/t/feature-request-zfs-support/2709/3 but it will most probably take some time to be added.

In this post I do not cover the baremetal ARM or the newer x86 dedicated servers; on those, trying to use LXD fails with an additional error about not being able to create a sparse file.

Creating a VPS on Scaleway

Once we create an account on Scaleway (we also add our SSH public key), we click to create a VC1 server with the default settings.

scaleway-vc1

There are several types of VPS; we select the VC1, which comes with 2 x86 64-bit cores, 2GB memory and 50GB disk space.

scaleway-do-no-block-SMTP

Under Security, there is a default policy that disables «SMTP». This consists of firewall rules that drop packets destined for ports 25, 465 and 587. If you intend to use SMTP at a later date, it makes sense to disable this security policy now. Otherwise, once your VPS is running, it takes about 30+30 minutes of downtime to archive and restart it in order for the change to take effect.
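
If you later want to check whether these ports are actually blocked from your VPS, a quick probe like the following can help. This is just a sketch: smtp.example.com is a placeholder hostname, substitute a real mail server you control.

```shell
# Probe the outbound SMTP ports that Scaleway's default «SMTP» policy drops.
# smtp.example.com is a placeholder host; substitute a real mail server.
for port in 25 465 587; do
    if timeout 3 bash -c "exec 3<>/dev/tcp/smtp.example.com/$port" 2>/dev/null; then
        echo "port $port: reachable"
    else
        echo "port $port: blocked or unreachable"
    fi
done
```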

scaleway-provisioning

Once you click Create, it takes a couple of minutes for the provisioning, the kernel start-up and the booting of the VPS.

After the creation, the administrative page shows the IP address that we need to connect to the VPS.

Initial package updates and upgrades

$ ssh root@163.172.132.19
The authenticity of host '163.172.132.19 (163.172.132.19)' can't be established.
ECDSA key fingerprint is SHA256:Z4LMCnXUyuvwO16HI763r4h5+mURBd8/4u2bFPLETes.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '163.172.132.19' (ECDSA) to the list of known hosts.
 _
 ___ ___ __ _| | _____ ____ _ _ _
/ __|/ __/ _` | |/ _ \ \ /\ / / _` | | | |
\__ \ (_| (_| | | __/\ V V / (_| | |_| |
|___/\___\__,_|_|\___| \_/\_/ \__,_|\__, |
 |___/

Welcome on Ubuntu Xenial (16.04 LTS) (GNU/Linux 4.5.7-std-3 x86_64 )

System information as of: Wed Jul 13 19:46:53 UTC 2016

System load: 0.02 Int IP Address: 10.2.46.19 
Memory usage: 0.0% Pub IP Address: 163.172.132.19
Usage on /: 3% Swap usage: 0.0%
Local Users: 0 Processes: 83
Image build: 2016-05-20 System uptime: 3 min
Disk nbd0: l_ssd 50G

Documentation: https://scaleway.com/docs
Community: https://community.scaleway.com
Image source: https://github.com/scaleway/image-ubuntu


The programs included with the Ubuntu system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Ubuntu comes with ABSOLUTELY NO WARRANTY, to the extent permitted by
applicable law.

root@scw-test:~# apt update
Hit:1 http://archive.ubuntu.com/ubuntu xenial InRelease
Get:2 http://archive.ubuntu.com/ubuntu xenial-updates InRelease [95.7 kB]
Get:3 http://archive.ubuntu.com/ubuntu xenial-security InRelease [94.5 kB]
Get:4 http://archive.ubuntu.com/ubuntu xenial/main Translation-en [568 kB]
...
Reading package lists... Done
Building dependency tree 
Reading state information... Done
51 packages can be upgraded. Run 'apt list --upgradable' to see them.
root@scw-test:~# apt upgrade
Reading package lists... Done
Building dependency tree 
Reading state information... Done
Calculating upgrade... Done
The following NEW packages will be installed:
 libpython3.5
The following packages will be upgraded:
 apt apt-utils base-files bash bash-completion bsdutils dh-python gcc-5-base
 grep init init-system-helpers libapt-inst2.0 libapt-pkg5.0 libblkid1
 libboost-iostreams1.58.0 libboost-random1.58.0 libboost-system1.58.0
 libboost-thread1.58.0 libexpat1 libfdisk1 libgnutls-openssl27 libgnutls30
 libldap-2.4-2 libmount1 libnspr4 libnss3 libnss3-nssdb libpython2.7-minimal
 libpython2.7-stdlib librados2 librbd1 libsmartcols1 libstdc++6 libsystemd0
 libudev1 libuuid1 lsb-base lsb-release mount python2.7 python2.7-minimal
 systemd systemd-sysv tzdata udev util-linux uuid-runtime vim vim-common
 vim-runtime wget
51 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
Need to get 27.6 MB of archives.
After this operation, 5,069 kB of additional disk space will be used.
Do you want to continue? [Y/n] Y
Get:1 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 base-files amd64 9.4ubuntu4.1 [68.4 kB]
Get:2 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 bash amd64 4.3-14ubuntu1.1 [583 kB]
...
Setting up librados2 (10.2.0-0ubuntu0.16.04.2) ...
Setting up librbd1 (10.2.0-0ubuntu0.16.04.2) ...
Processing triggers for libc-bin (2.23-0ubuntu3) ...
root@scw-test:~#

Installing ZFS as a DKMS module

There are instructions on how to install ZFS as a DKMS module at https://github.com/scaleway/kernel-tools#how-to-build-a-custom-kernel-module

First, we install the build-essential package,

root@scw-test:~# apt install build-essential

Second, we run the script that is provided at https://github.com/scaleway/kernel-tools#how-to-build-a-custom-kernel-module. It takes about a minute to run; it downloads the kernel source and prepares the modules for compilation.

Third, we install the zfsutils-linux package as usual. In this case, it takes more time than usual, as the ZFS modules need to be compiled.

root@scw-test:~# apt install zfsutils-linux

This step takes lots of time. Eight and a half minutes!

Installing the LXD package

The final step is to install the LXD package

root@scw-test:~# apt install lxd

Initial configuration of LXD

A VPS at Scaleway does not have access to a separate block device (the dedicated servers do). Therefore, we create the ZFS filesystem on a loop device.

root@scw-test:~# df -h /
Filesystem Size Used Avail Use% Mounted on
/dev/vda 46G 2.1G 42G 5% /

We have 42GB of free space, therefore let’s allocate 36GB for the ZFS filesystem.
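
The 36GB figure is simply the free space minus some headroom for the host. If you prefer to derive such a number rather than eyeball it, here is a small sketch; the 80% factor is our own rule of thumb, not an LXD requirement.

```shell
# Suggest a ZFS pool size: 80% of the free space on /, rounded down to whole GB.
# The 80% headroom factor is an arbitrary choice, not an LXD requirement.
avail_kb=$(df --output=avail -k / | tail -1)
pool_gb=$(( avail_kb * 80 / 100 / 1024 / 1024 ))
echo "Suggested ZFS pool size: ${pool_gb}GB"
```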

root@scw-test:~# lxd init
Name of the storage backend to use (dir or zfs): zfs
Create a new ZFS pool (yes/no)? yes
Name of the new ZFS pool: mylxd-pool
Would you like to use an existing block device (yes/no)? no
Size in GB of the new loop device (1GB minimum): 36
Would you like LXD to be available over the network (yes/no)? no
Do you want to configure the LXD bridge (yes/no)? yes
...we accept the defaults in creating the LXD bridge...
Warning: Stopping lxd.service, but it can still be activated by:
 lxd.socket
LXD has been successfully configured.
root@scw-test:~#

 

Create a user to manage LXD

We create a non-root user to manage LXD. It is advised to create such a user and refrain from using root for such tasks.

root@scw-test:~# adduser ubuntu
Adding user `ubuntu' ...
Adding new group `ubuntu' (1000) ...
Adding new user `ubuntu' (1000) with group `ubuntu' ...
Creating home directory `/home/ubuntu' ...
Copying files from `/etc/skel' ...
Enter new UNIX password: *******
Retype new UNIX password: *******
passwd: password updated successfully
Changing the user information for ubuntu
Enter the new value, or press ENTER for the default
 Full Name []: 
 Room Number []: 
 Work Phone []: 
 Home Phone []: 
 Other []: 
Is the information correct? [Y/n] Y
root@scw-test:~#

Then, let’s add this user ubuntu to the sudo (ability to run sudo) and lxd (manage LXD containers) groups,

root@scw-test:~# adduser ubuntu sudo         # For scaleway. For others, the name might be 'admin'.
root@scw-test:~# adduser ubuntu lxd

Finally, let’s restart the VPS. Although it is not necessary, it is a good practice in order to make sure that lxd starts automatically even with ZFS being compiled through DKMS. A shutdown -r now would suffice to restart the VPS. After about 20 seconds, we can ssh again, as the new user ubuntu.

Let’s start up a container

We log in as this new user ubuntu (or, sudo su - ubuntu).

ubuntu@scw-test:~$ lxc launch ubuntu:x mycontainer
Creating mycontainer
Retrieving image: 100%
Starting mycontainer
ubuntu@scw-test:~$ lxc list
+-------------+---------+------+------+------------+-----------+
| NAME        | STATE   | IPV4 | IPV6 | TYPE       | SNAPSHOTS |
+-------------+---------+------+------+------------+-----------+
| mycontainer | RUNNING |      |      | PERSISTENT |         0 |
+-------------+---------+------+------+------------+-----------+
ubuntu@scw-test:~$ lxc list
+-------------+---------+----------------------+------+------------+-----------+
| NAME        | STATE   | IPV4                 | IPV6 | TYPE       | SNAPSHOTS |
+-------------+---------+----------------------+------+------------+-----------+
| mycontainer | RUNNING | 10.181.132.19 (eth0) |      | PERSISTENT | 0         |
+-------------+---------+----------------------+------+------------+-----------+
ubuntu@scw-test:~$

We launched an Ubuntu 16.04 LTS (Xenial: “x”) container, and then we listed the details. It takes a few moments for the container to boot up. In the second attempt, the container completed the booting up and also got the IP address.

That’s it! LXD is up and running, and we successfully created a container. See these instructions on how to test the container with a Web server.

Permanent link to this article: https://blog.simos.info/how-to-install-lxd-containers-on-ubuntu-on-scaleway/

Jul 13 2016

Trying out LXD containers on Ubuntu on DigitalOcean, with block storage

We have seen how to try out LXD containers on Ubuntu on DigitalOcean. In this post, we will see how to use the new DigitalOcean block storage support (just out of beta!).

This new block storage has the benefit of being additional separate disk space that should be faster to access. Then, software such as LXD would benefit from this. Without block storage, the ZFS pool for LXD is stored as a loopback file on the ext4 root filesystem. With block storage, the ZFS pool for LXD is stored on the block device of the block storage.

When you start a new droplet, you get the ext4 filesystem by default and you cannot change it easily. Some people managed to hack around this issue, https://github.com/fxlv/docs/blob/master/freebsd/freebsd-with-zfs-digitalocean.md though there are no instructions on how to do this with a Linux distribution. The new block storage allows us to get ZFS on an additional block device without hacks.

DO-block-storage-early

Actually, this block storage feature is so new that even the DigitalOcean page still asks you to request early access.

DO-block-storage-ready

When you create a VPS, you now have the option to specify additional block storage. The pricing is quite simple, US$0.10 per GB, and you can specify from 1 GB upwards.

It is also possible to add block storage to an existing VPS. Finally, as shown in the screenshot, block storage is currently available at the NYC1 and SFO2 datacenters.

For our testing, we created an Ubuntu 16.04 $20/month VPS at the SFO2 datacenter. It is a dual-core VPS with 2GB of RAM.

The standard disk is

Disk /dev/vda: 40 GiB, 42949672960 bytes, 83886080 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 4CF812E3-1423-1923-B28E-FDD6817901CA

Device Start End Sectors Size Type
/dev/vda1 2048 83886046 83883999 40G Linux filesystem

While the block device for the block storage is

Disk /dev/sda: 50 GiB, 53687091200 bytes, 104857600 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

 

Here is how to configure LXD to use the new block device,

root@ubuntu-2gb-sfo2-01:~# lxd init
Name of the storage backend to use (dir or zfs): zfs
Create a new ZFS pool (yes/no)? yes
Name of the new ZFS pool: mylxd-pool
Would you like to use an existing block device (yes/no)? yes
Path to the existing block device: /dev/sda
Would you like LXD to be available over the network (yes/no)? no
Do you want to configure the LXD bridge (yes/no)? yes
Warning: Stopping lxd.service, but it can still be activated by:
 lxd.socket

LXD has been successfully configured.

Let’s see some benchmarks! We run bonnie++, first on the standard storage, then on the new block storage,

# bonnie -d /tmp/ -s 4G -n 0 -m STANDARDSTORAGE -f -b -u root

Version 1.97         Sequential Output           Sequential Input   Random Seeks
                     Block        Rewrite        Block
                Size K/sec  %CPU  K/sec   %CPU   K/sec    %CPU      /sec   %CPU
STANDARDSTORAGE   4G 749901   92  611116    80   1200389    76      +++++   +++
Latency              50105us      105ms          7687us             11021us

# bonnie -d /media/blockstorage -s 4G -n 0 -m BLOCKSTORAGE -f -b -u root

Version 1.97         Sequential Output           Sequential Input   Random Seeks
                     Block        Rewrite        Block
                Size K/sec  %CPU  K/sec   %CPU   K/sec    %CPU      /sec   %CPU
BLOCKSTORAGE      4G 193923   23  96283     14   217073     18      2729     58
Latency              546ms        165ms          8882us             35690us

The immediate observation is that CPU usage is much lower with the new block storage, although both throughput and latency are worse than on the standard storage.

Let’s try with dd,

root@ubuntu-2gb-sfo2-01:~# dd if=/dev/zero of=/tmp/standardstorage.img bs=4M count=1024
1024+0 records in
1024+0 records out
4294967296 bytes (4.3 GB, 4.0 GiB) copied, 4.91043 s, 875 MB/s

root@ubuntu-2gb-sfo2-01:~# dd if=/dev/zero of=/media/blockstorage/blockstorage.img bs=4M count=1024
1024+0 records in
1024+0 records out
4294967296 bytes (4.3 GB, 4.0 GiB) copied, 19.8969 s, 216 MB/s

With dd, the standard storage appears about four times faster than the new block storage.

I am not sure how these should be interpreted. I look forward to reading other reports about this.

 

 

Permanent link to this article: https://blog.simos.info/trying-out-lxd-containers-on-ubuntu-on-digitalocean-with-block-storage/

Jun 27 2016

Trying out LXD containers on Ubuntu on DigitalOcean

You can have LXD containers on your home computer, and you can also have them on your Virtual Private Server (VPS). If you have any further questions on LXD, see https://www.stgraber.org/2016/03/11/lxd-2-0-blog-post-series-012/

Here we see how to configure LXD on a VPS at DigitalOcean (yeah, referral link). We go cheap and select the 512MB RAM and 20GB disk VPS for $5/month. Containers are quite lightweight, so it’s interesting to see how many we can squeeze in. We are going to use ZFS for the storage of the containers, stored in a file and not on a block device. Here is what we are doing today,

  1. Set up LXD on a 512MB RAM/20GB diskspace VPS
  2. Create a container with a web server
  3. Expose the container service to the Internet
  4. Visit the webserver from our browser

Set up LXD on DigitalOcean

do-create-droplet

When creating the VPS, it is important to change these two options; we need 16.04 (default is 14.04) so that it has ZFS pre-installed as a kernel module, and we try out the cheapest VPS offering with 512MB RAM.

Once we create the VPS, we connect with

$ ssh root@128.199.41.205    # change with the IP address you get from the DigitalOcean panel
The authenticity of host '128.199.41.205 (128.199.41.205)' can't be established.
ECDSA key fingerprint is SHA256:7I094lF8aeLFQ4WPLr/iIX4bMs91jNiKhlIJw3wuMd4.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '128.199.41.205' (ECDSA) to the list of known hosts.
Welcome to Ubuntu 16.04 LTS (GNU/Linux 4.4.0-24-generic x86_64)

* Documentation: https://help.ubuntu.com/

0 packages can be updated.
0 updates are security updates.

root@ubuntu-512mb-ams3-01:~# apt update
Get:1 http://security.ubuntu.com/ubuntu xenial-security InRelease [94.5 kB]
Hit:2 http://ams2.mirrors.digitalocean.com/ubuntu xenial InRelease 
Get:3 http://security.ubuntu.com/ubuntu xenial-security/main Sources [24.9 kB]
...
Fetched 10.2 MB in 4s (2,492 kB/s)
Reading package lists... Done
Building dependency tree 
Reading state information... Done
13 packages can be upgraded. Run 'apt list --upgradable' to see them.
root@ubuntu-512mb-ams3-01:~# apt upgrade
Reading package lists... Done
Building dependency tree 
Reading state information... Done
Calculating upgrade... Done
The following packages will be upgraded:
 dnsmasq-base initramfs-tools initramfs-tools-bin initramfs-tools-core
 libexpat1 libglib2.0-0 libglib2.0-data lshw python3-software-properties
 shared-mime-info snapd software-properties-common wget
13 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
Need to get 6,979 kB of archives.
After this operation, 78.8 kB of additional disk space will be used.
Do you want to continue? [Y/n] Y
...
Processing triggers for initramfs-tools (0.122ubuntu8.1) ...
update-initramfs: Generating /boot/initrd.img-4.4.0-24-generic
W: mdadm: /etc/mdadm/mdadm.conf defines no arrays.
Processing triggers for libc-bin (2.23-0ubuntu3) ...

We update the package list and then upgrade any packages that need upgrading.

root@ubuntu-512mb-ams3-01:~# apt policy lxd
lxd:
 Installed: 2.0.2-0ubuntu1~16.04.1
 Candidate: 2.0.2-0ubuntu1~16.04.1
 Version table:
 *** 2.0.2-0ubuntu1~16.04.1 500
 500 http://mirrors.digitalocean.com/ubuntu xenial-updates/main amd64 Packages
 500 http://security.ubuntu.com/ubuntu xenial-security/main amd64 Packages
 100 /var/lib/dpkg/status
 2.0.0-0ubuntu4 500
 500 http://mirrors.digitalocean.com/ubuntu xenial/main amd64 Packages

The lxd package is already installed, all the better. Nice touch 🙂

root@ubuntu-512mb-ams3-01:~# apt install zfsutils-linux
Reading package lists... Done
Building dependency tree 
Reading state information... Done
The following additional packages will be installed:
 libnvpair1linux libuutil1linux libzfs2linux libzpool2linux zfs-doc zfs-zed
Suggested packages:
 default-mta | mail-transport-agent samba-common-bin nfs-kernel-server
 zfs-initramfs
The following NEW packages will be installed:
 libnvpair1linux libuutil1linux libzfs2linux libzpool2linux zfs-doc zfs-zed
 zfsutils-linux
0 upgraded, 7 newly installed, 0 to remove and 0 not upgraded.
Need to get 881 kB of archives.
After this operation, 2,820 kB of additional disk space will be used.
Do you want to continue? [Y/n] Y
...
zed.service is a disabled or a static unit, not starting it.
Processing triggers for libc-bin (2.23-0ubuntu3) ...
Processing triggers for systemd (229-4ubuntu6) ...
Processing triggers for ureadahead (0.100.0-19) ...
root@ubuntu-512mb-ams3-01:~# _

We installed zfsutils-linux in order to be able to use ZFS as storage for our containers. In this tutorial we are going to use a file as storage (still, ZFS filesystem) instead of a block device. If you subscribe to the DO Beta for block storage volumes, you can get a proper block device for the storage of the containers. Currently free to beta members, available only on the NYC1 datacenter.

root@ubuntu-512mb-ams3-01:~# df -h /
Filesystem Size Used Avail Use% Mounted on
/dev/vda1  20G  1.1G 18G     6% /
root@ubuntu-512mb-ams3-01:~# _

We have 18GB of free disk space, so let’s allocate 15GB for LXD.

root@ubuntu-512mb-ams3-01:~# lxd init
Name of the storage backend to use (dir or zfs): zfs
Create a new ZFS pool (yes/no)? yes
Name of the new ZFS pool: lxd-pool
Would you like to use an existing block device (yes/no)? no
Size in GB of the new loop device (1GB minimum): 15
Would you like LXD to be available over the network (yes/no)? no
Do you want to configure the LXD bridge (yes/no)? yes
we accept the default settings for the bridge configuration
Warning: Stopping lxd.service, but it can still be activated by:
 lxd.socket
LXD has been successfully configured.
root@ubuntu-512mb-ams3-01:~# _

What we did,

  • we initialized LXD with the ZFS storage backend,
  • we created a new pool and gave it a name (here, lxd-pool),
  • we do not have a block device, so we get a (sparse) image file that contains the ZFS filesystem,
  • we do not want to make LXD available over the network for now,
  • we want to configure the LXD bridge for the inter-networking of the containers
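
The sparse image file mentioned above reports a large apparent size while using almost no actual disk space. We can demonstrate the idea with a scratch file; the path LXD itself uses for its loop file varies by version, so this sketch uses /tmp instead.

```shell
# A sparse file has a large apparent size but uses almost no disk blocks,
# which is what the loop-backed ZFS pool starts out as.
truncate -s 1G /tmp/sparse-demo.img
ls -lh /tmp/sparse-demo.img    # apparent size: 1.0G
du -h /tmp/sparse-demo.img     # actual blocks used: ~0
rm /tmp/sparse-demo.img
```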

Let’s create a new user and add them to the lxd group,

root@ubuntu-512mb-ams3-01:~# adduser ubuntu
Adding user `ubuntu' ...
Adding new group `ubuntu' (1000) ...
Adding new user `ubuntu' (1000) with group `ubuntu' ...
Creating home directory `/home/ubuntu' ...
Copying files from `/etc/skel' ...
Enter new UNIX password: ********
Retype new UNIX password: ********
passwd: password updated successfully
Changing the user information for ubuntu
Enter the new value, or press ENTER for the default
 Full Name []: <ENTER>
 Room Number []: <ENTER>
 Work Phone []: <ENTER>
 Home Phone []: <ENTER>
 Other []: <ENTER>
Is the information correct? [Y/n] Y
root@ubuntu-512mb-ams3-01:~# _

The username is ubuntu. Make sure you set a good password, since we do not deal with best security practices in this tutorial. Many people run scripts against such VPSs that try common usernames and passwords. When you create a VPS, it is instructive to have a look at /var/log/auth.log for those failed attempts to get into your VPS. Here are a few lines from this VPS,

Jun 26 18:36:15 digitalocean sshd[16318]: Failed password for root from 121.18.238.29 port 45863 ssh2
Jun 26 18:36:15 digitalocean sshd[16320]: Connection closed by 123.59.134.76 port 49378 [preauth]
Jun 26 18:36:17 digitalocean sshd[16318]: Failed password for root from 121.18.238.29 port 45863 ssh2
Jun 26 18:36:20 digitalocean sshd[16318]: Failed password for root from 121.18.238.29 port 45863 ssh2

We add the ubuntu user into the lxd group in order to be able to run commands as a non-root user.

root@ubuntu-512mb-ams3-01:~# adduser ubuntu lxd
Adding user `ubuntu' to group `lxd' ...
Adding user ubuntu to group lxd
Done.
root@ubuntu-512mb-ams3-01:~# _

We are now good to go. Log in as user ubuntu and run an LXD command to list images.

do-lxc-list

Create a Web server in a container

We launch (init and start) a container named c1.

do-lxd-launch

The ubuntu:x in the screenshot is an alias for Ubuntu 16.04 (Xenial), that resides in the ubuntu: repository of images. You can find other distributions in the images: repository.

As soon as the launch action completed, I ran the list action. Then, after a few seconds, I ran it again. You can see that it took a few seconds before the container actually booted and got an IP address.

Let’s enter into the container by executing a shell. We update and then upgrade the container.

ubuntu@ubuntu-512mb-ams3-01:~$ lxc exec c1 -- /bin/bash
root@c1:~# apt update
Get:1 http://security.ubuntu.com/ubuntu xenial-security InRelease [94.5 kB]
Hit:2 http://archive.ubuntu.com/ubuntu xenial InRelease
Get:3 http://archive.ubuntu.com/ubuntu xenial-updates InRelease [94.5 kB]
...
Fetched 9819 kB in 2s (3645 kB/s) 
Reading package lists... Done
Building dependency tree 
Reading state information... Done
13 packages can be upgraded. Run 'apt list --upgradable' to see them.
root@c1:~# apt upgrade
Reading package lists... Done
Building dependency tree 
Reading state information... Done
Calculating upgrade... Done
The following packages will be upgraded:
 dnsmasq-base initramfs-tools initramfs-tools-bin initramfs-tools-core libexpat1 libglib2.0-0 libglib2.0-data lshw python3-software-properties shared-mime-info snapd
 software-properties-common wget
13 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
Need to get 6979 kB of archives.
After this operation, 3339 kB of additional disk space will be used.
Do you want to continue? [Y/n] Y
Get:1 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 initramfs-tools all 0.122ubuntu8.1 [8602 B]
...
Processing triggers for initramfs-tools (0.122ubuntu8.1) ...
Processing triggers for libc-bin (2.23-0ubuntu3) ...
root@c1:~#

Let’s install nginx, our Web server.

root@c1:~# apt install nginx
Reading package lists... Done
Building dependency tree 
Reading state information... Done
The following additional packages will be installed:
 fontconfig-config fonts-dejavu-core libfontconfig1 libfreetype6 libgd3 libjbig0 libjpeg-turbo8 libjpeg8 libtiff5 libvpx3 libxpm4 libxslt1.1 nginx-common nginx-core
Suggested packages:
 libgd-tools fcgiwrap nginx-doc ssl-cert
The following NEW packages will be installed:
 fontconfig-config fonts-dejavu-core libfontconfig1 libfreetype6 libgd3 libjbig0 libjpeg-turbo8 libjpeg8 libtiff5 libvpx3 libxpm4 libxslt1.1 nginx nginx-common nginx-core
0 upgraded, 15 newly installed, 0 to remove and 0 not upgraded.
Need to get 3309 kB of archives.
After this operation, 10.7 MB of additional disk space will be used.
Do you want to continue? [Y/n] Y
Get:1 http://archive.ubuntu.com/ubuntu xenial/main amd64 libjpeg-turbo8 amd64 1.4.2-0ubuntu3 [111 kB]
...
Processing triggers for ufw (0.35-0ubuntu2) ...
root@c1:~#

Is the Web server running? Let’s check with the ss command (preinstalled, from package iproute2)

root@c1:~# ss -tula 
Netid  State   Recv-Q  Send-Q  Local Address:Port  Peer Address:Port
udp    UNCONN  0       0       *:bootpc            *:*
tcp    LISTEN  0       128     *:http              *:*
tcp    LISTEN  0       128     *:ssh               *:*
tcp    LISTEN  0       128     :::http             :::*
tcp    LISTEN  0       128     :::ssh              :::*
root@c1:~#

The parameters mean

  • -t: Show only TCP sockets
  • -u: Show only UDP sockets
  • -l: Show listening sockets
  • -a: Show all sockets (makes no difference given the previous options; it just makes an easy word to remember, tula)

Of course, there is also lsof with the parameter -i (IPv4/IPv6).

root@c1:~# lsof -i
COMMAND    PID  USER      FD  TYPE  DEVICE  SIZE/OFF  NODE  NAME
dhclient   240  root      6u  IPv4   45606       0t0   UDP  *:bootpc
sshd       306  root      3u  IPv4   47073       0t0   TCP  *:ssh (LISTEN)
sshd       306  root      4u  IPv6   47081       0t0   TCP  *:ssh (LISTEN)
nginx     2034  root      6u  IPv4   51636       0t0   TCP  *:http (LISTEN)
nginx     2034  root      7u  IPv6   51637       0t0   TCP  *:http (LISTEN)
nginx     2035  www-data  6u  IPv4   51636       0t0   TCP  *:http (LISTEN)
nginx     2035  www-data  7u  IPv6   51637       0t0   TCP  *:http (LISTEN)
root@c1:~#

From both commands we verify that the Web server is indeed running inside the container, along with an SSH server.
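For scripting, the same check can be wrapped in a small helper. This is a sketch that relies only on ss from iproute2; the function name port_listening is our own, not a standard tool.

```shell
# Return 0 if something is listening on TCP port $1, by scanning
# the Local Address:Port column of "ss -tln" for a matching port.
port_listening() {
    ss -tln 2>/dev/null | awk -v p="$1" '
        { n = split($4, a, ":"); if (a[n] == p) found = 1 }
        END { exit !found }'
}

if port_listening 80; then
    echo "port 80: listening"
else
    echo "port 80: closed"
fi
```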

Let’s change the default Web page a bit,

root@c1:~# nano /var/www/html/index.nginx-debian.html

do-lxd-nginx-page

Expose the container service to the Internet

Now, if we try to visit the public IP of our VPS at http://128.199.41.205/ we obviously notice that there is no Web server there. We need to expose the container to the world, since the container only has a private IP address.

The following iptables line exposes the container service at port 80. Note that we run this as root on the VPS (root@ubuntu-512mb-ams3-01:~#), NOT inside the container (root@c1:~#).

iptables -t nat -I PREROUTING -i eth0 -p TCP -d 128.199.41.205/32 --dport 80 -j DNAT --to-destination 10.160.152.184:80

Adapt the public IP of your VPS and the private IP of your container (10.x.x.x) accordingly. Since we are running a web server, the port is 80.

We have not made this firewall rule persistent as it is outside of our scope; see iptables-persistent on how to make it persistent.
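To avoid typos in a long rule like this, it can help to build the command from variables and print it before running it as root on the VPS. A minimal sketch, using the example addresses from above (replace them with your own public and container IPs):

```shell
# Hypothetical values; adapt to your VPS public IP and container private IP.
PUBLIC_IP=128.199.41.205
CONTAINER_IP=10.160.152.184
PORT=80

RULE="iptables -t nat -I PREROUTING -i eth0 -p TCP -d ${PUBLIC_IP}/32 --dport ${PORT} -j DNAT --to-destination ${CONTAINER_IP}:${PORT}"
echo "$RULE"     # dry run: inspect the rule first
# sudo $RULE     # then run it for real, as root on the VPS
```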

Visit our Web server

Here is the URL, http://128.199.41.205/ so let’s visit it.

do-lxd-welcome-nginx

That’s it! We created an LXD container with the nginx Web server, then exposed it to the Internet.


Permanent link to this article: https://blog.simos.info/trying-out-lxd-containers-on-ubuntu-on-digitalocean/

Jun 25 2016

Trying out LXD containers on our Ubuntu

This post is about containers, a construct similar to virtual machines (VM) but so lightweight that you can easily create a dozen on your desktop Ubuntu!

A VM virtualizes a whole computer, and you then install the guest operating system in it. In contrast, a container reuses the host Linux kernel and simply contains the root filesystem (i.e. the runtime) of our choice. The Linux kernel has several features that rigidly separate the running Linux container from our host computer (i.e. our desktop Ubuntu).

On their own, Linux containers would need quite some manual work to manage directly. Fortunately, there is LXD (pronounced Lex-deeh), a service that manages Linux containers for us.

We will see how to

  1. setup our Ubuntu desktop for containers,
  2. create a container,
  3. install a Web server,
  4. test it a bit, and
  5. clear everything up.

Set up your Ubuntu for containers

If you have Ubuntu 16.04, then you are ready to go. Just install a couple of extra packages that we see below. If you have Ubuntu 14.04.x or Ubuntu 15.10, see LXD 2.0: Installing and configuring LXD [2/12] for some extra steps, then come back.

Make sure the package list is up-to-date:

sudo apt update
sudo apt upgrade

Install the lxd package:

sudo apt install lxd

If you have Ubuntu 16.04, you can enable the feature to store your container files in a ZFS filesystem. The Linux kernel in Ubuntu 16.04 includes the necessary kernel modules for ZFS. For LXD to use ZFS for storage, we just need to install a package with ZFS utilities. Without ZFS, the containers would be stored as separate files on the host filesystem. With ZFS, we get features like copy-on-write, which makes tasks such as snapshots and clones much faster.

Install the zfsutils-linux package (if you have Ubuntu 16.04.x):

sudo apt install zfsutils-linux

Once you installed the LXD package on the desktop Ubuntu, the package installation scripts should have added you to the lxd group. If your desktop account is a member of that group, then your account can manage containers with LXD and can avoid adding sudo in front of all commands. The way Linux works, you would need to log out from the desktop session and then log in again to activate the lxd group membership. (If you are an advanced user, you can avoid the re-login by newgrp lxd in your current shell).
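A quick way to check whether the lxd group is already active in the current shell is a sketch like the following (the messages are just our own wording):

```shell
# Print whether the current shell session is a member of the lxd group.
if id -nG | grep -qw lxd; then
    echo "lxd group active"
else
    echo "not yet active: log out and back in, or run: newgrp lxd"
fi
```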

Before use, LXD should be initialized with our storage choice and networking choice.

Initialize lxd for storage and networking by running the following command:

$ sudo lxd init
Name of the storage backend to use (dir or zfs): zfs
Create a new ZFS pool (yes/no)? yes
Name of the new ZFS pool: lxd-pool
Would you like to use an existing block device (yes/no)? no
Size in GB of the new loop device (1GB minimum): 30
Would you like LXD to be available over the network (yes/no)? no
Do you want to configure the LXD bridge (yes/no)? yes 
> You will be asked about the network bridge configuration. Accept all defaults and continue.
Warning: Stopping lxd.service, but it can still be activated by:
 lxd.socket
 LXD has been successfully configured.
$ _

We created the ZFS pool as a filesystem inside a (single) file, not a block device (i.e. in a partition), thus no need for extra partitioning. In the example I specified 30GB, and this space will come from the root (/) filesystem. If you want to look at this file, it is at /var/lib/lxd/zfs.img.
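The "file, not a block device" point is easy to see with a throwaway sparse file: it has a large apparent size but consumes almost no disk space until data is written to it. A sketch (the path and size here are made up; LXD's real file is /var/lib/lxd/zfs.img):

```shell
# Create a 100 MB sparse file and compare apparent size vs actual disk usage.
truncate -s 100M /tmp/sparse-demo.img
stat -c %s /tmp/sparse-demo.img    # apparent size: 104857600 bytes
du -k /tmp/sparse-demo.img         # actual usage: close to 0 KB (sparse)
rm /tmp/sparse-demo.img
```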


That’s it! The initial configuration has been completed. For troubleshooting or background information, see https://www.stgraber.org/2016/03/15/lxd-2-0-installing-and-configuring-lxd-212/

Create your first container

All management commands with LXD are available through the lxc command. We run lxc with some parameters and that’s how we manage containers.

lxc list

to get a list of installed containers. Obviously, the list will be empty, but it verifies that everything is working.

lxc image list

shows the list of (cached) images that we can use to launch a container. Again, the list will be empty, but it verifies that everything is working.

lxc image list ubuntu:

shows the list of available remote images that we can use to download and launch as containers. This specific list shows Ubuntu images.

lxc image list images:

shows the list of available remote images for various distributions that we can use to download and launch as containers. This specific list shows all sort of distributions like Alpine, Debian, Gentoo, Opensuse and Fedora.

Let’s launch a container with Ubuntu 16.04 and call it c1:

$ lxc launch ubuntu:x c1
Creating c1
Starting c1
$ _

We used the launch action, then selected the image ubuntu:x (x is an alias for the Xenial/16.04 image) and lastly we use the name c1 for our container.

Let’s view our first installed container,

$ lxc list

+------+---------+----------------------+------+------------+-----------+
| NAME | STATE   | IPV4                 | IPV6 | TYPE       | SNAPSHOTS |
+------+---------+----------------------+------+------------+-----------+
| c1   | RUNNING | 10.173.82.158 (eth0) |      | PERSISTENT | 0         |
+------+---------+----------------------+------+------------+-----------+

Our first container c1 is running and it has an IP address (accessible locally). It is ready to be used!

Install a Web server

We can run commands in our container. The action for running commands, is exec.

$ lxc exec c1 -- uptime
 11:47:25 up 2 min, 0 users, load average: 0.07, 0.05, 0.04
$ _

After the action exec, we specify the container and finally the command to run inside it. The uptime is just 2 minutes; it’s a fresh container :-).

The -- on the command line has to do with how our shell processes parameters. If our command does not take any parameters of its own, we can safely omit the --.

$ lxc exec c1 -- df -h

This is an example that requires the --, because our command uses the parameter -h. If you omit the --, you get an error.
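The same convention applies to most Unix commands, so we can see it in action with grep instead of lxc exec; everything after -- is treated as an argument, never as an option:

```shell
# Search a file for the literal string "-h". Without the "--", grep would
# consume -h as its own option (suppress filenames) instead of a pattern.
printf 'alpha\n-h\nbeta\n' > /tmp/dashdemo.txt
grep -- -h /tmp/dashdemo.txt    # prints the matching line: -h
rm /tmp/dashdemo.txt
```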

Let’s get a shell in the container, and update the package list already.

$ lxc exec c1 bash
root@c1:~# apt update
Ign http://archive.ubuntu.com trusty InRelease
Get:1 http://archive.ubuntu.com trusty-updates InRelease [65.9 kB]
Get:2 http://security.ubuntu.com trusty-security InRelease [65.9 kB]
...
Hit http://archive.ubuntu.com trusty/universe Translation-en 
Fetched 11.2 MB in 9s (1228 kB/s) 
Reading package lists... Done
root@c1:~# apt upgrade
Reading package lists... Done
Building dependency tree 
...
Processing triggers for man-db (2.6.7.1-1ubuntu1) ...
Setting up dpkg (1.17.5ubuntu5.7) ...
root@c1:~# _

We are going to install nginx as our Web server. nginx is somewhat cooler than the Apache Web server.

root@c1:~# apt install nginx
Reading package lists... Done
Building dependency tree
...
Setting up nginx-core (1.4.6-1ubuntu3.5) ...
Setting up nginx (1.4.6-1ubuntu3.5) ...
Processing triggers for libc-bin (2.19-0ubuntu6.9) ...
root@c1:~# _

Let’s view our Web server with our browser. Remember the IP address you got; mine was 10.173.82.158, so I enter it into my browser.

lxd-nginx

Let’s make a small change in the text of that page. Back inside our container, we enter the directory with the default HTML page.

root@c1:~# cd /var/www/html/
root@c1:/var/www/html# ls -l
total 2
-rw-r--r-- 1 root root 612 Jun 25 12:15 index.nginx-debian.html
root@c1:/var/www/html#

We can edit the file with nano, then save it.

lxd-nginx-nano

Finally, let’s check the page again,

lxd-nginx-modified

Clearing up

Let’s clear up the container by deleting it. We can easily create new ones when we need them.

$ lxc list
+------+---------+----------------------+------+------------+-----------+
| NAME | STATE   | IPV4                 | IPV6 | TYPE       | SNAPSHOTS |
+------+---------+----------------------+------+------------+-----------+
| c1   | RUNNING | 10.173.82.169 (eth0) |      | PERSISTENT | 0         |
+------+---------+----------------------+------+------------+-----------+
$ lxc stop c1
$ lxc delete c1
$ lxc list
+------+---------+----------------------+------+------------+-----------+
| NAME | STATE   | IPV4                 | IPV6 | TYPE       | SNAPSHOTS |
+------+---------+----------------------+------+------------+-----------+
+------+---------+----------------------+------+------------+-----------+

We stopped (shut down) the container, then deleted it.

That’s all. There are many more ideas on what to do with containers. These were the first steps: setting up our desktop Ubuntu and trying out one such container.

Permanent link to this article: https://blog.simos.info/trying-out-lxd-containers-on-our-ubuntu/

Mar 27 2015

How to easily make an app (webapp) for the Ubuntu Phone!

We will see how to easily build a webapp (an application that wraps a mobile website) for Ubuntu Touch. You do not even need the phone itself for these steps.

We start by finding a website that offers a mobile edition (mobile website). For our example, we use www.real.gr.

While viewing www.real.gr, we notice (at the bottom of the page) a reference to RealMobile and the site http://www.realmobile.gr/ Visiting that link, we see that it is a dedicated mobile edition. That is all we need in order to build our webapp!

To create a simple webapp for Ubuntu Touch, we will use the page https://developer.ubuntu.com/webapp-generator/ which generates the click package with the application, and then we go to https://myapps.developer.ubuntu.com to submit the application. Let’s go through the steps in detail.

1.  Go to https://developer.ubuntu.com/webapp-generator/ and sign in with your Ubuntu One account. If you do not already have an account, create one. Once signed in, you will see a page titled «Create your Webapp package».

2. Fill in the page as follows.

Webapp creator for Ubuntu Phone

Specifically,

  1. App name, the name of the application. It is the string the user sees in the application list. Pick something that helps the app’s discoverability; applications are sorted alphabetically, so in our example it makes sense to choose a name starting with Real.
  2. Webapp URL, the URL of the mobile website. In our example, when we visit www.realmobile.gr with the Ubuntu browser, it automatically redirects to www.realmobile.gr/msimple once the page finishes loading, so I chose to enter the second URL. It does not really matter which one you enter, as long as it loads the mobile website.
  3. App Icon, the icon of the application, as a 256×256 PNG file. Here I took the logo from real.gr and edited it in GIMP to produce the image

     Real.gr logo (256×256)

  4. App options, the options that suit the website when used as a webapp. Hold down Ctrl to select more than one with the mouse. The options are:
    1. Store cookies, whether to store cookies so that visited links appear as visited. It is also needed if the website uses cookies to recognize visitors. In general, if there is no state you want the webapp to keep and the website to know about, do not select Store cookies.
    2. Show header, whether to show the name of the website in a bar at the top.
    3. Show back and forward buttons, whether to show back and forward buttons in a bar at the top. If not selected, there is no way to go back or forward while visiting pages in the webapp. You would disable this only if the webapp does not need such buttons, or provides its own.
    4. Run fullscreen, whether the webapp runs in full screen. If not selected, the (top) Ubuntu Touch status bar showing the time, battery, networks, etc. remains visible. If selected, the webapp operates in full screen.
  5. Developer namespace, the username of the Launchpad/Ubuntu One account you created. Your own website can also be entered here, if you have one.
  6. Maintainer full name, your name.
  7. Maintainer e-mail, your e-mail address.

When the page is filled in, we click Submit to generate our webapp. A file with the .click extension is created, which we save to disk.

Next, we go through the process of submitting the application to the Ubuntu Store.

We visit the page https://myapps.developer.ubuntu.com/ and sign in with our Launchpad/Ubuntu One account. The home page with our applications appears. Initially it is empty and only the New application button is shown. We click it and see the form with the basic application details.

Filling in the basic application details

We fill in as follows:

  1. Your application, here we click the Select file button and choose the application file we created earlier.
  2. Changelog, here we describe the changes since the previous version. Since this is the first version, we write something standard such as Initial upload.
  3. Department, the category of the application we built. Since real.gr is a news site, we pick News & Magazines from the list.
  4. Support URL, a website for support. Since we have not contacted the website itself about creating this application, we enter something of our own; our e-mail address is a good choice.
  5. License, the license under which our application is distributed. GNU GPL v3 is a good choice.

Besides the basic fields, there are also optional ones. Specifically,

Filling in the optional application details

  1. Application name, the name of the application, as we entered it earlier.
  2. Tagline, a one-line description of the application.
  3. Description, a description of the application. This is shown when the user finds our application in the Ubuntu Store, so it is good to describe it quite well.
  4. Keywords, various keywords for our application. It is good to add enough keywords so that the application can be found easily in searches.

At the bottom of the page are the final options,

Filling in the final optional application details

  1. Icon 256, the icon we created earlier. It may need to be uploaded again here, even though the click application file already contains it.
  2. Screenshots, screenshots of our application. Empty for now. Once the application is in the Ubuntu Store, we start it and take screenshots (press Volume+ and Volume- together to capture one).
  3. Application website, a website with the source code of our application. We leave it empty here.
  4. Price, the price of our application. We leave it at Make it free.

We click the Submit button and that is it!

The application has been submitted, and awaits review.

This completes the submission of the details, and we wait for the review of the application to finish so that it gets accepted.

For many applications the review completes immediately, and the same happens with ours. So we click the check again link to reload the page.

Our application is available in the Ubuntu Store!

And that is it! Our application is now available in the Ubuntu Store. It shows Published, with the bullet colored green.

If we have a phone with Ubuntu Touch, we can install the application right away.

We can see our application in the application catalogue at https://appstore.bhdouglass.com/apps The catalogue updates every few hours, so our application appears after a short while. For the application above, the details link is https://appstore.bhdouglass.com/app/realmobilegr-bkm.simosx

So anyone can find a website that provides a mobile edition and build a simple webapp for it. Very few Greek websites have a webapp, so it is an opportunity to build one yourself!

 

Permanent link to this article: https://blog.simos.info/%cf%80%cf%89%cf%82-%cf%86%cf%84%ce%b9%ce%ac%cf%87%ce%bd%ce%bf%cf%85%ce%bc%ce%b5-%ce%b5%cf%8d%ce%ba%ce%bf%ce%bb%ce%b1-app-webapp-%ce%b3%ce%b9%ce%b1-%cf%84%ce%bf-ubuntu-phone/

Feb 19 2015

Second flash sale for the bq Aquaris E4.5 (Ubuntu edition): today, Thursday 19 Feb 2015 (10am Greece time)

In a few minutes, at 10am (Greece time), the second flash sale for the bq Aquaris E4.5 (Ubuntu edition) phone begins.

Yesterday, Wednesday, at 10am they ran tests and opened the system for purchases for a very short time, without announcing it. Still, someone (who wrote about it on G+) connected entirely by chance and bought the phone ;-).

The link http://ubuntu.bq.com/ already appears to be struggling. If you cannot connect, also try http://www.bq.com/gb/ubuntu.html

Update: direct link http://store.bqreaders.com/en/ubuntu-edition-e-4-5

ubuntu-phone-buy

 

 

ubuntu-phone-bought

Permanent link to this article: https://blog.simos.info/%ce%b4%ce%b5%cf%8d%cf%84%ce%b5%cf%81%ce%bf-flash-sale-%ce%b3%ce%b9%ce%b1-%cf%84%ce%bf-bq-aquaris-e4-5-ubuntu-edition-%cf%83%ce%ae%ce%bc%ce%b5%cf%81%ce%b1-%cf%80%ce%ad%ce%bc%cf%80%cf%84%ce%b7-19/

Feb 11 2015

New flash sale for the bq Aquaris E4.5 (Ubuntu edition)

bq-flashsale-2

Source: https://plus.google.com/+bqreaders/posts/BStjo9FWMfW

There will be a new flash sale for the bq Aquaris E4.5 (Ubuntu edition) today, Wednesday, at 16:00 Greece time.

Click http://goo.gl/71tueM for a countdown to the flash sale.

Permanent link to this article: https://blog.simos.info/%ce%bd%ce%ad%ce%bf-flash-sale-%ce%b3%ce%b9%ce%b1-%cf%84%ce%bf-bq-aquaris-e4-5-ubuntu-edition/

Feb 11 2015

Flash sale, Ubuntu edition (completed)

ubuntu-phone-flash-sale

The first flash sale of the Aquaris E4.5 (Ubuntu edition) has been completed in less than 90 minutes. The website had some technical issues (too many visitors) during the flash sale.

A new flash sale will take place some time in the future.

Permanent link to this article: https://blog.simos.info/flash-sale-ubuntu-edition-completed/

Feb 11 2015

Flash sale for the bq Aquaris E4.5 (Ubuntu Edition) phone

The first phone with the Ubuntu Touch operating system is available from the Spanish company bq, and it is the bq Aquaris E4.5 (Ubuntu edition).

The phone is sold through a flash sale, which means it is available for online purchase during specific hours only. The flash sale is in progress right now; specifically, the phone is available between 10am and 7pm, today, Wednesday 11 February 2015.

During the purchase, the visitor is asked to first play a game on the Web, which essentially familiarizes them with the use of the phone.


This is the box of the phone.

This is the phone (back side), where the camera with the dual flash is visible. On the side of the phone are the slots for the two SIMs (micro SIM).

At the bottom is the microUSB port, for charging or for accessing our personal files when we connect the phone to our computer. The speakers, which are quite loud, are also visible.

Here is the phone running, with the English interface.

This is a screenshot of the phone’s system settings. The Greek translation is in progress (currently around 30%). I believe it will be completed soon.

The link to purchase the phone is: http://www.bq.com/gb/ubuntu.html

 

Permanent link to this article: https://blog.simos.info/flash-sale-%ce%b3%ce%b9%ce%b1-%cf%84%ce%bf-%ce%ba%ce%b9%ce%bd%ce%b7%cf%84%cf%8c-bq-aquaris-e4-5-ubuntu-edition/

Oct 03 2014

Google Speech API now supports Greek in Speech recognition

With the speech recognition that is available in the Google Speech API, it is possible to get voice converted into text.

Here is an example on how to do this for Greek using Ubuntu.

First, let’s record a short sample of speech.

$ arecord --format=S16_LE --rate=16000 --duration=3 myvoicerecording.wav

We use the arecord command (package: alsa-utils), which records a WAV file from the microphone with the specified format and rate. The duration is set to 3 seconds so that we do not need to press Ctrl+C to stop the recording. The output file is myvoicerecording.wav.

Then, we create an API key in order to access the Google Speech API. We follow the instructions at http://www.chromium.org/developers/how-tos/api-keys in order to enable the Speech API for our Google account. Follow the instructions up to Step 7.

After completing the Step 7, click on Create new Key (in the section Public API Access). You will be prompted to select a type of key.

Among the different types of keys, select Server key. You will be prompted for an IP address from which access to this API will be allowed. For this test, you can add your current IP address (if you do not know it, visit http://www.whatismyip.com/). Then, copy the API key that is generated. We will use it in the next command.

In order to send our myvoicerecording.wav to the Google Speech API, we use curl:

curl -X POST --data-binary @'myvoicerecording.wav' --header 'Content-Type: audio/l16; rate=16000;' 'https://www.google.com/speech-api/v2/recognize?output=json&lang=el&key=mykey'

Note that the language code (per ISO 639) for Greek is el. More than 40 languages are supported, so your language is probably among them. For this to work, replace mykey with your own API key.

The output is

{"result":[]}
{"result":[{"alternative":[{"transcript":"1 2 3 4 5","confidence":0.92287904},{"transcript":"ένα δύο τρία τέσσερα πέντε"},{"transcript":"1 2 3 4 και 5"},{"transcript":"ένα δυο τρία τέσσερα πέντε"}],"final":true}],"result_index":0}

which is the correct output for the voice recording that counted from one to five.
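To use the result programmatically, the transcript can be extracted from the last JSON line. A sketch (it assumes the response was saved to a file, and uses python3 for the JSON parsing; the file path is our own choice):

```shell
# Save the two-line response shown above, then pull out the top transcript.
cat > /tmp/response.json <<'EOF'
{"result":[]}
{"result":[{"alternative":[{"transcript":"1 2 3 4 5","confidence":0.92287904}],"final":true}],"result_index":0}
EOF
tail -n 1 /tmp/response.json | python3 -c \
  'import json,sys; r=json.load(sys.stdin); print(r["result"][0]["alternative"][0]["transcript"])'
rm /tmp/response.json
```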

These steps show how to test the Speech API.

Ideally you would write a program that makes use of the Speech API. The page at http://www.noobslab.com/2014/06/control-your-ubuntulinux-mint-system.html shows a set of scripts that add voice recognition to Ubuntu using the Google Speech API.

 

 

Permanent link to this article: https://blog.simos.info/google-speech-api-now-supports-greek-in-speech-recognition/
