Dec 14 2017

multipass, management of virtual machines running Ubuntu

If you want to run a machine container, you would use LXD. But if you want to run a virtual machine, you would use multipass. multipass is so new that it is still in beta. The name is not well known to Google yet, and you get many unrelated results when you search for it.

You can set up both containers and virtual machines manually, without many additional tools. However, if you want to perform real work, it helps to have a system that supports you. Let’s see what multipass can do for us.

Installing the multipass snap

multipass is available as a snap package, so you need a Linux distribution that has snap support.
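If snap support is not already present, it can usually be added from the distribution’s repositories. A minimal sketch, assuming an Ubuntu or Debian-style system (the package name and package manager may differ on other distributions):

# Assumption: Ubuntu/Debian-style system; adjust the package manager and name elsewhere.
$ sudo apt install snapd

# Verify that the snap command is available
$ snap version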

Check the availability of multipass as a snap,

$ snap info multipass
name: multipass
summary: Ubuntu at your fingertips
publisher: saviq
description: |
 Multipass gives you Ubuntu VMs in seconds. Just run `multipass.ubuntu create`
 and
 it'll do all the setup for you.
snap-id: mA11087v6dR3IEcQLgICQVjuvhUUBUKM
channels: 
 stable: – 
 candidate: – 
 beta: 2017.2.2 (37) 44MB classic
 edge: 2017.2.2-4-g691449f (38) 44MB classic

There is a snap available, and it is currently in the beta channel. It is a classic snap, which means that it has fewer restrictions than your typical snap.

Therefore, install it as follows,

$ sudo snap install multipass --beta --classic
multipass (beta) 2017.2.2 from 'saviq' installed

Trying out multipass

Now what? Let’s run it.

$ multipass
Usage: /snap/multipass/37/bin/multipass [options] <command>
Create, control and connect to Ubuntu instances.

This is a command line utility for multipass, a
service that manages Ubuntu instances.

Options:
 -h, --help Display this help
 -v, --verbose Increase logging verbosity, repeat up to three times for more
 detail

Available commands:
 connect Connect to a running instance
 delete Delete instances
 exec Run a command on an instance
 find Display available images to create instances from
 help Display help about a command
 info Display information about instances
 launch Create and start an Ubuntu instance
 list List all available instances
 mount Mount a local directory in the instance
 purge Purge all deleted instances permanently
 recover Recover deleted instances
 start Start instances
 stop Stop running instances
 umount Unmount a directory from an instance
 version Show version details
Exit 1

Just like with LXD, launch should do something. Let’s try it and see what parameters it takes.

$ multipass launch
Launched: talented-pointer

Oh, no. Just like with LXD, if you do not supply a name for the container/virtual machine, one gets picked for you AND the creation of the container/virtual machine proceeds right away. So, here we are with a virtual machine creatively named talented-pointer.

How do we get some more info about this virtual machine? What defaults were selected?

$ multipass info talented-pointer
Name: talented-pointer
State: RUNNING
IPv4: 10.122.122.2
Release: Ubuntu 16.04.3 LTS
Image hash: a381cee0aae4 (Ubuntu 16.04 LTS)
Load: 0.08 0.12 0.07
Disk usage: 1014M out of 2.1G
Memory usage: 37M out of 992M

The default image is Ubuntu 16.04.3, on a 2GB disk and with 1GB RAM.

How should we have created the virtual machine instead?

$ multipass launch --help
Usage: /snap/multipass/37/bin/multipass launch [options] [<remote:>]<image>
Create and start a new instance.

Options:
 -h, --help Display this help
 -v, --verbose Increase logging verbosity, repeat up to three times for
 more detail
 -c, --cpus <cpus> Number of CPUs to allocate
 -d, --disk <disk> Disk space to allocate in bytes, or with K, M, G suffix
 -m, --mem <mem> Amount of memory to allocate in bytes, or with K, M, G
 suffix
 -n, --name <name> Name for the instance
 --cloud-init <file> Path to a user-data cloud-init configuration

Arguments:
 image Ubuntu image to start

Therefore, the default command to launch a new instance would have looked like

$ multipass launch --disk 2G --mem 1G -n talented-pointer

We still do not know how to specify the image name, whether it will be Ubuntu 16.04 or something else. saviq (the publisher of the snap) replied, and now we know how to get the list of available images for multipass.

$ multipass find
multipass launch … Starts an instance of Image version
----------------------------------------------------------
14.04 Ubuntu 14.04 LTS 20171208
 (or: t, trusty)
16.04 Ubuntu 16.04 LTS 20171208
 (or: default, lts, x, xenial)
17.04 Ubuntu 17.04 20171208
 (or: z, zesty)
17.10 Ubuntu 17.10 20171213
 (or: a, artful)
daily:18.04 Ubuntu 18.04 LTS 20171213
 (or: b, bionic, devel)

multipass merges the CLI semantics of both the lxc and the snap clients :-).

That is, there are five images currently available, and each has several handy aliases. Currently, the default and lts aliases point to Ubuntu 16.04. In spring 2018, they will point to Ubuntu 18.04 when it gets released.

Here is the list of aliases in an inverted table.

Ubuntu 14.04: 14.04, t, trusty

Ubuntu 16.04: 16.04, default, lts, x, xenial (at the end of April 2018, it will lose the default and lts aliases)

Ubuntu 17.04: 17.04, z, zesty

Ubuntu 17.10: 17.10, a, artful

Ubuntu 18.04: daily:18.04, daily:b, daily:bionic, daily:devel (at the end of April 2018, it will gain the default and lts aliases)

Therefore, if we want to launch a virtual machine named myserver with an 8GB disk and 2GB RAM, running, let’s say, the current LTS Ubuntu, we would explicitly run

$ multipass launch --disk 8G --mem 2G -n myserver lts
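The launch help above also lists a --cloud-init option. As a sketch (the file name and package list below are made up for illustration), we could combine it with the command above to pre-configure the new instance with cloud-init user-data:

$ cat cloud-config.yaml
#cloud-config
packages:
 - nginx

$ multipass launch --disk 8G --mem 2G -n myserver --cloud-init cloud-config.yaml lts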

Looking into the lifecycle of a virtual machine

When you first launch a virtual machine for a specific version of Ubuntu, the corresponding image is downloaded from the Internet and then cached locally for any future virtual machines. This happened earlier when we launched talented-pointer. Let’s view it.

$ multipass list
Name State IPv4 Release
talented-pointer RUNNING 10.122.122.2 Ubuntu 16.04 LTS

Now delete it, then purge it.

$ multipass delete talented-pointer
$ multipass list
Name State IPv4 Release
talented-pointer DELETED --
$ multipass purge
$ multipass list
No instances found.

That is, we have a second chance when we delete a virtual machine. A deleted virtual machine can be recovered with multipass recover.
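Here is a minimal sketch of that second chance, assuming a virtual machine named myserver that we deleted by accident:

$ multipass delete myserver
# The instance now shows as DELETED in "multipass list", but it is not gone yet.
$ multipass recover myserver
# It is back; only "multipass purge" would have removed it permanently.
$ multipass list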

Let’s create a new virtual machine and time it.

$ time multipass launch -n myVM default
Launched: myVM


Elapsed time : 0m16.942s
User mode : 0m0.008s
System mode : 0m0.016s
CPU percentage : 0.14

It took about 17 seconds for a virtual machine. In contrast, an LXD container takes significantly less time,

$ time lxc launch ubuntu:x mycontainer
Creating mycontainer
Starting mycontainer


Elapsed time : 0m1.943s
User mode : 0m0.008s
System mode : 0m0.024s
CPU percentage : 1.64

We can stop and start a virtual machine with multipass.

$ multipass list 
Name State IPv4 Release
myVM RUNNING 10.122.122.2 Ubuntu 16.04 LTS

$ multipass stop myVM
 
$ multipass list
Name State IPv4 Release
myVM STOPPED -- Ubuntu 16.04 LTS

$ multipass start
Name argument or --all is required
Exit 1
 
$ time multipass start --all
Elapsed time : 0m11.109s
User mode : 0m0.008s
System mode : 0m0.012s
CPU percentage : 0.18

We can start and stop virtual machines, and if we do not want to specify a name, we can use --all (to perform the action on all of them). Here it took 11 seconds to restart the virtual machine. The time it takes to start a virtual machine is somewhat variable, and on my system it is in the tens of seconds. For LXD containers, it is about two seconds or less.

Running commands in a VM with Multipass

From what we saw earlier in the output of multipass --help, there are two actions for this, connect and exec.

Here is how to connect to a VM.

$ multipass connect myVM
Welcome to Ubuntu 16.04.3 LTS (GNU/Linux 4.4.0-103-generic x86_64)

* Documentation: https://help.ubuntu.com
 * Management: https://landscape.canonical.com
 * Support: https://ubuntu.com/advantage

Get cloud support with Ubuntu Advantage Cloud Guest:
 http://www.ubuntu.com/business/services/cloud

5 packages can be updated.
3 updates are security updates.


Last login: Thu Dec 14 20:19:45 2017 from 10.122.122.1
To run a command as administrator (user "root"), use "sudo <command>".
See "man sudo_root" for details.

ubuntu@myVM:~$

Therefore, with connect we get a shell directly into the virtual machine! Because this is a virtual machine, it booted its own Linux kernel, Linux 4.4.0, in parallel with the one I use on my Ubuntu system. There are 5 packages that can be updated, and 3 of them are security updates. Nowadays in Ubuntu, pending security updates are installed automatically thanks to the unattended-upgrades package and its default configuration: they will be applied sometime within the day, and by default only the security updates are applied automatically.
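If you want to confirm this from inside the VM, you can look at the apt periodic settings (a quick check, assuming the stock Ubuntu 16.04 configuration file):

# Assumes the stock /etc/apt/apt.conf.d/20auto-upgrades file shipped with Ubuntu 16.04
ubuntu@myVM:~$ cat /etc/apt/apt.conf.d/20auto-upgrades

A value of "1" for APT::Periodic::Unattended-Upgrade in that file means the daily unattended (security) upgrades are enabled.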

Let’s view the available updates: five in total, of which three are security updates.

ubuntu@myVM:~$ sudo apt update
Get:1 http://security.ubuntu.com/ubuntu xenial-security InRelease [102 kB]
Hit:2 http://archive.ubuntu.com/ubuntu xenial InRelease 
Get:3 http://archive.ubuntu.com/ubuntu xenial-updates InRelease [102 kB] 
Get:4 http://security.ubuntu.com/ubuntu xenial-security/main Sources [104 kB]
Get:5 http://archive.ubuntu.com/ubuntu xenial-backports InRelease [102 kB] 
Get:6 http://security.ubuntu.com/ubuntu xenial-security/universe Sources [48.9 kB]
Get:7 http://security.ubuntu.com/ubuntu xenial-security/main amd64 Packages [408 kB]
Get:8 http://security.ubuntu.com/ubuntu xenial-security/main Translation-en [179 kB]
Get:9 http://security.ubuntu.com/ubuntu xenial-security/universe Translation-en [98.9 kB]
Fetched 1,145 kB in 0s (1,181 kB/s) 
Reading package lists... Done
Building dependency tree 
Reading state information... Done
5 packages can be upgraded. Run 'apt list --upgradable' to see them.
ubuntu@myVM:~$ apt list --upgradeable
Listing... Done
cloud-init/xenial-updates 17.1-46-g7acc9e68-0ubuntu1~16.04.1 all [upgradable from: 17.1-27-geb292c18-0ubuntu1~16.04.1]
grub-legacy-ec2/xenial-updates 17.1-46-g7acc9e68-0ubuntu1~16.04.1 all [upgradable from: 17.1-27-geb292c18-0ubuntu1~16.04.1]
libssl1.0.0/xenial-updates,xenial-security 1.0.2g-1ubuntu4.10 amd64 [upgradable from: 1.0.2g-1ubuntu4.9]
libxml2/xenial-updates,xenial-security 2.9.3+dfsg1-1ubuntu0.5 amd64 [upgradable from: 2.9.3+dfsg1-1ubuntu0.4]
openssl/xenial-updates,xenial-security 1.0.2g-1ubuntu4.10 amd64 [upgradable from: 1.0.2g-1ubuntu4.9]
ubuntu@myVM:~$

Let’s upgrade them all and be done with it.

ubuntu@myVM:~$ sudo apt upgrade
Reading package lists... Done
...ubuntu@myVM:~$

Can we reboot the virtual machine with the shutdown command?

ubuntu@myVM:~$ sudo shutdown -r now

$ multipass connect myVM
terminate called after throwing an instance of 'std::runtime_error'
 what(): ssh: Connection refused
Aborted (core dumped)
Exit 134

$ multipass connect myVM
Welcome to Ubuntu 16.04.3 LTS (GNU/Linux 4.4.0-104-generic x86_64)

* Documentation: https://help.ubuntu.com
 * Management: https://landscape.canonical.com
 * Support: https://ubuntu.com/advantage

Get cloud support with Ubuntu Advantage Cloud Guest:
 http://www.ubuntu.com/business/services/cloud

0 packages can be updated.
0 updates are security updates.


Last login: Thu Dec 14 20:40:10 2017 from 10.122.122.1
ubuntu@myVM:~$ exit

Yes, we can. It takes a few seconds for the virtual machine to boot again. When we try to connect too early, we get an error. We try again and manage to connect.

There is the exec action as well. Let’s see how it works.

$ multipass exec myVM pwd
/home/ubuntu

$ multipass exec myVM id
uid=1000(ubuntu) gid=1000(ubuntu) groups=1000(ubuntu),4(adm),20(dialout),24(cdrom),25(floppy),27(sudo),29(audio),30(dip),44(video),46(plugdev),109(netdev),110(lxd)

We specify the VM name, then the command to run. The default user is the ubuntu user (non-root, can sudo without passwords). In contrast, with LXD the default user is root.
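Since the ubuntu user has passwordless sudo, we can also run commands as root through exec. A quick sketch:

# Runs as the default ubuntu user
$ multipass exec myVM whoami
# Runs as root, because the ubuntu user can sudo without a password
$ multipass exec myVM sudo whoami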

Let’s try something else, uname -a.

$ multipass exec myVM uname -a
Unknown option 'a'.
Exit 1

It is a common command-line parsing issue: the -a gets interpreted as an option of multipass itself, instead of being passed unprocessed to the command that runs in the virtual machine. The solution is to add -- at the point where we want option processing to stop, as in

$ multipass exec myVM -- uname -a
Linux myVM 4.4.0-104-generic #127-Ubuntu SMP Mon Dec 11 12:16:42 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux

If you exec commands a few times, you may encounter a case where the command hangs. It does not return, and you cannot Ctrl-C it either. It is a bug, and the workaround is to open another shell and run multipass stop myVM followed by multipass start myVM.

Conclusion

It is cool to have multipass as a complement to LXD; between the two tools, it is easy to create both virtual machines and machine containers. There are some bugs and usability issues that can be reported at the Issues page. Overall, multipass makes running Ubuntu virtual machines quite usable and easy.

 

Permanent link to this article: https://blog.simos.info/multipass-management-of-virtual-machines-running-ubuntu/

Dec 05 2017

How to migrate LXD from DEB/PPA package to Snap package

You are using LXD from a Linux distribution package and you would like to migrate your existing installation to the Snap LXD package. Let’s do the migration together!

This post is not about live container migration in LXD. Live container migration is about moving a running container from one LXD server to another.

If you do not have LXD installed already, then look for another guide about the installation and set up of LXD from a snap package. A fresh installation of LXD as a snap package is easy.

Note that from the end of 2017, LXD will be generally distributed as a Snap package. If you run LXD 2.0.x from Ubuntu 16.04, you are not affected by this.

Prerequisites

Let’s check the version of LXD (Linux distribution package).

$ lxd --version
2.20

$ apt policy lxd
lxd:
 Installed: 2.20-0ubuntu4~16.04.1~ppa1
 Candidate: 2.20-0ubuntu4~16.04.1~ppa1
 Version table:
*** 2.20-0ubuntu4~16.04.1~ppa1 500
      500 http://ppa.launchpad.net/ubuntu-lxc/lxd-stable/ubuntu xenial/main amd64 Packages
      100 /var/lib/dpkg/status
    2.0.11-0ubuntu1~16.04.2 500
      500 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 Packages
    2.0.2-0ubuntu1~16.04.1 500
      500 http://archive.ubuntu.com/ubuntu xenial-security/main amd64 Packages
    2.0.0-0ubuntu4 500
      500 http://archive.ubuntu.com/ubuntu xenial/main amd64 Packages

In this case, we run LXD version 2.20, and it was installed from the LXD PPA repository.

If you had not enabled the LXD PPA repository, you would have an LXD version 2.0.x, the series that was released with Ubuntu 16.04 (shown in the version table above). LXD version 2.0.11 is currently the default version for Ubuntu 16.04.3 and will be supported in that form until 2016 + 5 = 2021. LXD version 2.0.0 is the original LXD version in Ubuntu 16.04 (when it was originally released), and LXD version 2.0.2 is the security update of that LXD 2.0.0.

We are migrating to the LXD snap package. Let’s see how many containers will be migrated.

$ lxc list | grep RUNNING | wc -l
6

Counting the running containers now gives us a quick sanity check that we can repeat after the migration, in case something goes horribly wrong.
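If you want something more detailed to compare against after the migration, you could also keep a copy of the full container list; a simple sketch:

# Keep a copy of the current container list to compare after the migration
$ lxc list > /tmp/lxd-containers-before-migration.txt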

Let’s check the available incoming LXD snap packages.

$ snap info lxd
name: lxd
summary: System container manager and API
publisher: canonical
contact: https://github.com/lxc/lxd/issues
description: |
 LXD is a container manager for system containers.
 
 It offers a REST API to remotely manage containers over the network, using an
 image based workflow and with support for live migration.
 
 Images are available for all Ubuntu releases and architectures as well as for
 a wide number of other Linux distributions.
 
 LXD containers are lightweight, secure by default and a great alternative to
 virtual machines.
snap-id: J60k4JY0HppjwOjW8dZdYc8obXKxujRu
channels: 
 stable: 2.20 (5182) 44MB -
 candidate: 2.20 (5182) 44MB -
 beta: ↑ 
 edge: git-b165982 (5192) 44MB -
 2.0/stable: 2.0.11 (4689) 20MB -
 2.0/candidate: 2.0.11 (4770) 20MB -
 2.0/beta: ↑ 
 2.0/edge: git-03e9048 (5131) 19MB -

There are several channels to choose from. The stable channel has LXD 2.20, just like the candidate channel. When the LXD 2.21 snap is ready, it will first be released in the candidate channel and stay there for 24 hours. If everything goes well, it will get propagated to the stable channel. LXD 2.20 was released some time ago, that’s why both channels have the same version (at the time of writing this blog post).

There is also the edge channel, which has the auto-compiled version from the git source code repository. It is handy to use this channel if you know that a specific fix (that affects you) has been added to the source code and you want to verify that it actually fixed the issue. Note that the beta channel is not used, therefore it inherits whatever is found in the channel below it, the edge channel.

Finally, there are the 2.0/ tracks, which correspond to the stock 2.0.x LXD versions in Ubuntu 16.04. It looks like those who use the 5-year supported LXD (because of Ubuntu 16.04) have the option to switch to a snap version after all.
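If you wanted to follow one of those tracks instead of the default stable channel, you could select it explicitly at install time, or switch later; a sketch:

# Install LXD from a specific track/channel instead of the default stable channel
$ sudo snap install lxd --channel=2.0/stable

# Or switch an existing installation to another channel later
$ sudo snap refresh lxd --channel=candidate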

Installing the LXD snap

Install the LXD snap.

$ snap install lxd
lxd 2.20 from 'canonical' installed

Migrating to the LXD snap

Now, the LXD snap is installed, but the DEB/PPA package LXD is the one that is still running. We need to run the migration script lxd.migrate, which will move the data from the DEB/PPA version over to the snap version of LXD. In practical terms, it will move files from /var/lib/lxd (the old DEB/PPA LXD location) to the snap’s data directory under /var/snap/lxd/.

$ sudo lxd.migrate 
=> Connecting to source server
=> Connecting to destination server
=> Running sanity checks

=== Source server
LXD version: 2.20
LXD PID: 4414
Resources:
 Containers: 6
 Images: 3
 Networks: 1
 Storage pools: 1

=== Destination server
LXD version: 2.20
LXD PID: 30329
Resources:
 Containers: 0
 Images: 0
 Networks: 0
 Storage pools: 0

The migration process will shut down all your containers then move your data to the destination LXD.
Once the data is moved, the destination LXD will start and apply any needed updates.
And finally your containers will be brought back to their previous state, completing the migration.

Are you ready to proceed (yes/no) [default=no]? yes
=> Shutting down the source LXD
=> Stopping the source LXD units
=> Stopping the destination LXD unit
=> Unmounting source LXD paths
=> Unmounting destination LXD paths
=> Wiping destination LXD clean
=> Moving the data
=> Moving the database
=> Backing up the database
=> Opening the database
=> Updating the storage backends
=> Starting the destination LXD
=> Waiting for LXD to come online

=== Destination server
LXD version: 2.20
LXD PID: 2812
Resources:
 Containers: 6
 Images: 3
 Networks: 1
 Storage pools: 1

The migration is now complete and your containers should be back online.
Do you want to uninstall the old LXD (yes/no) [default=no]? yes

All done. You may need to close your current shell and open a new one to have the "lxc" command work.

Testing the migration to the LXD snap

Let’s check that the containers managed to start successfully,

$ lxc list | grep RUNNING | wc -l
6

But let’s check that we can still run Firefox from an LXD container, according to the following post,

How to run graphics-accelerated GUI apps in LXD containers on your Ubuntu desktop

Yep, all good. The artifact in the middle (over the c in packaged) is the mouse cursor in wait mode, while GNOME Screenshot is about to take the screenshot. I did not find a report about that in the GNOME Screenshot bugzilla. It is a minor issue and there are several workarounds (1. try one more time, 2. use timer screenshot).

Let’s do some actual testing,

Yep, works as well.

Exploring the LXD snap commands

Let’s type lxd and press Tab.

$ lxd<Tab>
lxd lxd.check-kernel lxd.migrate 
lxd.benchmark lxd.lxc

There are two commands left to try out, lxd.check-kernel and lxd.benchmark. The snap package is called lxd, therefore any additional commands are prefixed with lxd.. lxd itself is the actual LXD server executable. lxd.lxc is the lxc command that we use for all LXD actions; the LXD snap package makes the appropriate symbolic link so that we just need to write lxc instead of lxd.lxc.
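You can see this on the host; the paths below assume a standard snapd setup:

# The snap-provided commands live in /snap/bin (assuming a standard snapd setup)
$ ls -l /snap/bin/lxc /snap/bin/lxd.lxc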

Trying out lxd.check-kernel

Let’s run lxd.check-kernel.

$ sudo lxd.check-kernel
Kernel configuration not found at /proc/config.gz; searching...
Kernel configuration found at /lib/modules/4.10.0-40-generic/build/.config
--- Namespaces ---
Namespaces: enabled
Utsname namespace: enabled
Ipc namespace: enabled
Pid namespace: enabled
User namespace: enabled
newuidmap is not installed
newgidmap is not installed
Network namespace: enabled

--- Control groups ---
Cgroups: enabled

Cgroup v1 mount points: 
/sys/fs/cgroup/systemd
/sys/fs/cgroup/net_cls,net_prio
/sys/fs/cgroup/freezer
/sys/fs/cgroup/cpu,cpuacct
/sys/fs/cgroup/memory
/sys/fs/cgroup/devices
/sys/fs/cgroup/perf_event
/sys/fs/cgroup/cpuset
/sys/fs/cgroup/hugetlb
/sys/fs/cgroup/pids
/sys/fs/cgroup/blkio

Cgroup v2 mount points:


Cgroup v1 clone_children flag: enabled
Cgroup device: enabled
Cgroup sched: enabled
Cgroup cpu account: enabled
Cgroup memory controller: enabled
Cgroup cpuset: enabled

--- Misc ---
Veth pair device: enabledmodprobe: ERROR: missing parameters. See -h.
, not loaded
Macvlan: enabledmodprobe: ERROR: missing parameters. See -h.
, not loaded
Vlan: enabledmodprobe: ERROR: missing parameters. See -h.
, not loaded
Bridges: enabledmodprobe: ERROR: missing parameters. See -h.
, not loaded
Advanced netfilter: enabledmodprobe: ERROR: missing parameters. See -h.
, not loaded
CONFIG_NF_NAT_IPV4: enabledmodprobe: ERROR: missing parameters. See -h.
, not loaded
CONFIG_NF_NAT_IPV6: enabledmodprobe: ERROR: missing parameters. See -h.
, not loaded
CONFIG_IP_NF_TARGET_MASQUERADE: enabledmodprobe: ERROR: missing parameters. See -h.
, not loaded
CONFIG_IP6_NF_TARGET_MASQUERADE: enabledmodprobe: ERROR: missing parameters. See -h.
, not loaded
CONFIG_NETFILTER_XT_TARGET_CHECKSUM: enabledmodprobe: ERROR: missing parameters. See -h.
, not loaded
CONFIG_NETFILTER_XT_MATCH_COMMENT: enabledmodprobe: ERROR: missing parameters. See -h.
, not loaded
FUSE (for use with lxcfs): enabledmodprobe: ERROR: missing parameters. See -h.
, not loaded

--- Checkpoint/Restore ---
checkpoint restore: enabled
CONFIG_FHANDLE: enabled
CONFIG_EVENTFD: enabled
CONFIG_EPOLL: enabled
CONFIG_UNIX_DIAG: enabled
CONFIG_INET_DIAG: enabled
CONFIG_PACKET_DIAG: enabled
CONFIG_NETLINK_DIAG: enabled
File capabilities: enabled

Note : Before booting a new kernel, you can check its configuration
usage : CONFIG=/path/to/config /snap/lxd/5182/bin/lxc-checkconfig

This is an important tool if you have issues getting LXD to run. In this example, the Misc section shows some errors about missing modprobe parameters. I suppose they are issues with the tool itself, as the appropriate kernel modules are indeed loaded. My installation of the LXD snap works okay.

Trying out lxd.benchmark

Let’s try out the command without parameters.

$ lxd.benchmark 
Usage: lxd-benchmark launch [--count=COUNT] [--image=IMAGE] [--privileged=BOOL] [--start=BOOL] [--freeze=BOOL] [--parallel=COUNT]
 lxd-benchmark start [--parallel=COUNT]
 lxd-benchmark stop [--parallel=COUNT]
 lxd-benchmark delete [--parallel=COUNT]

--count (= 100)
 Number of containers to create
 --freeze (= false)
 Freeze the container right after start
 --image (= "ubuntu:")
 Image to use for the test
 --parallel (= -1)
 Number of threads to use
 --privileged (= false)
 Use privileged containers
 --report-file (= "")
 A CSV file to write test file to. If the file is present, it will be appended to.
 --report-label (= "")
 A label for the report entry. By default, the action is used.
 --start (= true)
 Start the container after creation

error: A valid action (launch, start, stop, delete) must be passed.
Exit 1

It is a benchmark tool that lets you create many containers, and then remove them again. There is an issue with the default number of containers, 100, which is too high. If you run lxd-benchmark launch without specifying a smaller count, you can mess up your LXD installation, because you will run out of memory and maybe disk space. I looked for a bug report about this; it got buried in the pull request https://github.com/lxc/lxd/pull/3857 and needs to be reopened. Ideally, the default count should be 1, and the user should knowingly select a bigger number. Here is the new pull request, https://github.com/lxc/lxd/pull/4074
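Until that default changes, it is safer to always pass a small --count explicitly; the help above also lists a --report-file option, so a cautious invocation could look like this:

# Launch only 2 containers, one at a time, and record the results in a CSV report
$ lxd.benchmark launch --count 2 --parallel 1 --report-file /tmp/lxd-benchmark.csv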

Let’s carefully try lxd-benchmark.

$ lxd.benchmark launch --count 3
Test environment:
 Server backend: lxd
 Server version: 2.20
 Kernel: Linux
 Kernel architecture: x86_64
 Kernel version: 4.10.0-40-generic
 Storage backend: zfs
 Storage version: 0.6.5.9-2
 Container backend: lxc
 Container version: 2.1.1

Test variables:
 Container count: 3
 Container mode: unprivileged
 Startup mode: normal startup
 Image: ubuntu:
 Batches: 0
 Batch size: 4
 Remainder: 3

[Dec 5 13:24:26.044] Found image in local store: 5f364e2e3f460773a79e9bec2edb5e993d236f035f70267923d43ab22ae3bb62
[Dec 5 13:24:26.044] Batch processing start
[Dec 5 13:24:28.817] Batch processing completed in 2.773s

It took just 2.8s to launch them on this computer. lxd-benchmark launched 3 containers, with names of the form benchmark-%d. Obviously, refrain from using the word benchmark as a name for your own containers. Let’s see these containers.

$ lxc list --columns ns4
+---------------+---------+----------------------+
| NAME          | STATE   | IPV4                 |
+---------------+---------+----------------------+
| benchmark-1   | RUNNING | 10.52.251.121 (eth0) |
+---------------+---------+----------------------+
| benchmark-2   | RUNNING | 10.52.251.20 (eth0)  |
+---------------+---------+----------------------+
| benchmark-3   | RUNNING | 10.52.251.221 (eth0) |
+---------------+---------+----------------------+
...

Let’s stop them, and finally remove them.

$ lxd.benchmark stop
Test environment:
 Server backend: lxd
 Server version: 2.20
 Kernel: Linux
 Kernel architecture: x86_64
 Kernel version: 4.10.0-40-generic
 Storage backend: zfs
 Storage version: 0.6.5.9-2
 Container backend: lxc
 Container version: 2.1.1

[Dec 5 13:31:16.517] Stopping 3 containers
[Dec 5 13:31:16.517] Batch processing start
[Dec 5 13:31:20.159] Batch processing completed in 3.642s

$ lxd.benchmark delete
Test environment:
 Server backend: lxd
 Server version: 2.20
 Kernel: Linux
 Kernel architecture: x86_64
 Kernel version: 4.10.0-40-generic
 Storage backend: zfs
 Storage version: 0.6.5.9-2
 Container backend: lxc
 Container version: 2.1.1

[Dec 5 13:31:24.902] Deleting 3 containers
[Dec 5 13:31:24.902] Batch processing start
[Dec 5 13:31:25.007] Batch processing completed in 0.105s

Note that the lxd-benchmark actions follow the naming of the lxc actions (launch, start, stop and delete).

Troubleshooting

Error “Target LXD already has images”

$ sudo lxd.migrate 
=> Connecting to source server
=> Connecting to destination server
=> Running sanity checks
error: Target LXD already has images, aborting.
Exit 1

This means that the snap version of LXD has some images and it is not clean. lxd.migrate requires the snap version of LXD to be clean. Solution: remove the LXD snap and install again.

$ snap remove lxd
lxd removed

$ snap install lxd
lxd 2.20 from 'canonical' installed

Which “lxc” command am I running?

This is the lxc command of the DEB/PPA package,

$ which lxc
/usr/bin/lxc

This is the lxc command from the LXD snap package.

$ which lxc
/snap/bin/lxc

If you installed the LXD snap but you do not see the /snap/bin/lxc executable, it could be an artifact of your Unix shell. You may have to close that shell window and open a new one.

Error “bash: /usr/bin/lxc: No such file or directory”

If you get the following,

$ which lxc
/snap/bin/lxc

but the lxc command is not found,

$ lxc
bash: /usr/bin/lxc: No such file or directory
Exit 127

then you must close the terminal window and open a new one.

Note: if you loudly refuse to close the current terminal window, you can just type

$ hash -r

which will refresh the list of executables from the $PATH. Applies to bash, zsh. Use rehash if on *csh.

 

Permanent link to this article: https://blog.simos.info/how-to-migrate-lxd-from-deb-ppa-package-to-snap-package/

Dec 02 2017

How to set the timezone in LXD containers

See https://blog.simos.info/trying-out-lxd-containers-on-our-ubuntu/ on how to set up and test LXD on Ubuntu (or another Linux distribution).

In this post we see how to set up the timezone in a newly created container.

The problem

The default timezone for a newly created container is Etc/UTC, which is what we used to call Greenwich Mean Time.

Let’s observe.

$ lxc launch ubuntu:16.04 mycontainer
Creating mycontainer
Starting mycontainer

$ lxc exec mycontainer -- date
Sat Dec 2 11:40:57 UTC 2017

$ lxc exec mycontainer -- cat /etc/timezone 
Etc/UTC

That is, the time observed in a container follows a timezone that is different from what the vast majority of our computers are set to. When we connect with a shell inside the container, the time and date do not match those of our computer.

The time is recorded correctly inside the container; it is just the way it is presented that is off by a few hours.

Depending on our use of the container, this might or might not be an issue to pursue.

The workaround

We can set the environment variable TZ (for timezone) of each container to our preferred timezone setting.

$ lxc exec mycontainer -- date
Sat Dec 2 11:50:37 UTC 2017

$ lxc config set mycontainer environment.TZ Europe/London

$ lxc exec mycontainer -- date
Sat Dec 2 11:50:50 GMT 2017

That is, we use the lxc config set action to set, for mycontainer,  the environment variable TZ to the proper timezone (here, Europe/London). UTC time and Europe/London time happen to be the same during the winter.

How do we unset the container timezone and return back to Etc/UTC?

$ lxc config unset mycontainer environment.TZ

Here we used the lxc config unset action to unset the environment variable TZ.

The solution

LXD supports profiles and you can edit the default profile in order to get the timezone setting automatically applied to any containers that follow this profile. Let’s get a list of the profiles.

$ lxc profile list
+---------+---------+
| NAME    | USED BY |
+---------+---------+
| default |       7 |
+---------+---------+

Only one profile, called default. It is used by 7 containers already on this LXD installation.

We set the environment variable TZ in the profile with the following,

$ lxc exec mycontainer -- date
Sat Dec 2 12:02:37 UTC 2017

$ lxc profile set default environment.TZ Europe/London

$ lxc exec mycontainer -- date
Sat Dec 2 12:02:43 GMT 2017

How do we unset the profile timezone and get back to Etc/UTC?

$ lxc profile unset default environment.TZ

Here we used the lxc profile unset action to unset the environment variable TZ.
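To verify what ended up in the profile (or that the key was removed again), you can display it; a quick check:

# With the TZ setting applied, an environment.TZ key appears under "config"
$ lxc profile show default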

 

Permanent link to this article: https://blog.simos.info/how-to-set-the-timezone-in-lxd-containers/

Nov 15 2017

New Firefox stats, how will they change?

Firefox 57 has been released and it has quite a few features that make it a very compelling browser to use. Most importantly, Firefox 57 has a new browser engine that makes the best use of a multi-core/multi-threaded CPU.

Therefore, let’s write down the browser market share stats of today, and compare again in November 2018.

Source: NetMarketShare, Desktop Browser Market Share, October 2017

 

Source: StatCounter, Desktop Browser Market Share Worldwide, Oct 2016 – Oct 2017

Chrome is at 63.6%, Firefox is at 13.04%.

Source: Wikipedia, Usage share of web browsers Summary Table

 

Source: Statista, Market share held by leading desktop internet browsers in the United States from January 2015 to November 2017

Permanent link to this article: https://blog.simos.info/new-firefox-stats-how-will-they-change/

Nov 07 2017

How to use Sysdig and Falco with LXD containers

Sysdig (.org) is an open-source container troubleshooting tool and it works by capturing system calls and events directly from the Linux kernel.

When you install Sysdig, it adds a new kernel module that it uses to collect all those system calls and events. That is, compared to other tools like strace, lsof and htop, it gets the data directly from the kernel and not from /proc. In terms of functionality, it is a single tool that can do what strace + tcpdump + htop + iftop + lsof + wireshark do together.

An added benefit of Sysdig is that it understands Linux Containers (since 2015). Therefore, it is quite useful when we want to figure out what is going on in our LXD containers.

Once we get used to Sysdig, we can venture to the companion tool called Falco, a tool for container security. Both are GPL v2 licensed, though you need to sign a CLA in order to contribute to the projects (hosted on Github).

Installing Sysdig

We are installing Sysdig on the host (where LXD is running) and not in any of our containers. In this way, we have full visibility of the whole system.

You can get the installation instructions at https://www.sysdig.org/install/ which essentially amount to a single curl | sh command:

curl -s https://s3.amazonaws.com/download.draios.com/stable/install-sysdig | sudo bash

Ubuntu and Debian already have a version of Sysdig, however it is a bit older than the one you get from the command above. Currently, the version in the universe repository is 0.8, while from the command above you get 0.19.

Running Sysdig

If we run sysdig without any filters, it will show us all system messages and events. It’s a never-ending waterfall, and you need to Ctrl+C to stop it.
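The stream can be narrowed down with sysdig filters, or saved to a trace file for later inspection. A small sketch, using the guiapps container that appears later in this post:

# Show only events from the container named guiapps
$ sudo sysdig container.name=guiapps

# Capture events to a trace file, then read it back later with the same filter
$ sudo sysdig -w trace.scap
$ sudo sysdig -r trace.scap container.name=guiapps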

We can run instead the Curses version called csysdig:

You can go through the examples at https://github.com/draios/sysdig/wiki/Sysdig-Examples In this post, we look into the section that relates to containers.

Specifically,

View the list of containers running on the machine and their resource usage

sudo csysdig -vcontainers

There are six LXD containers, though we do not see their names. The LXD container names are shown under a column further to the right, therefore we would need to use the arrow keys to move to the right. Let’s select the container we are interested in, and press Enter.

We selected the container guiapps and here are the processes inside this container.

View the list of processes with container context

sudo csysdig -pc

This command shows all the container processes together.  That is, they have the container context.

View the CPU usage of the processes running inside the guiapps container

sudo sysdig -pc -c topprocs_cpu container.name=guiapps

Here we switch from csysdig to sysdig. An issue is that these two tools do not have the same parameters.

We have a container called guiapps and we asked sysdig to show the CPU usage of the processes, sorted. The container is idle, therefore all are 0%.

View the network bandwidth usage of the processes running inside the guiapps container

sudo sysdig -pc -c topprocs_net container.name=guiapps

Here it shows the current network traffic inside the container, sorted by traffic. If there is no traffic, the list is empty. Therefore, it is just good to give you an indication of what is happening.

 

View the top files in terms of I/O bytes inside the guiapps container

sudo sysdig -pc -c topfiles_bytes container.name=guiapps

View the top network connections inside the guiapps container

sudo sysdig -pc -c topconns container.name=guiapps

The output is similar to tcpdump, showing the IP addresses of source and destination.

Show all the interactive commands executed inside the guiapps container

sudo sysdig -pc -c spy_users container.name=guiapps

The output looks like this,

29756 17:10:57 root@guiapps) groups 
29756 17:10:57 root@guiapps) /bin/sh /usr/bin/lesspipe
 29756 17:10:57 root@guiapps) basename /usr/bin/lesspipe
29756 17:10:57 root@guiapps) dirname /usr/bin/lesspipe
29756 17:10:57 root@guiapps) dircolors -b
29756 17:11:07 root@guiapps) ls --color=auto
29756 17:11:24 root@guiapps) ping 8.8.8.8
29756 17:11:38 root@guiapps) ifconfig

The commands in italics are the commands that were recorded when running lxc exec. The rest are the commands I typed in the container (ls, ping 8.8.8.8, ifconfig and finally exit, which does not get shown). Commands that are built into the shell (like pwd, exit) are not visible, since they do not execv an external command.

Installing Falco

We have already installed Sysdig using the curl | sh method that added their repository. Therefore, to install Falco, we just need to

sudo apt-get install falco

Falco needs its own kernel module,

$ sudo dkms status
falco, 0.8.1, 4.10.0-38-generic, x86_64: installed
sysdig, 0.19.1, 4.10.0-38-generic, x86_64: installed

Upon installation, it adds some default rules in /etc/falco. These rules describe application behaviour that Falco will be inspecting and reporting to us, so Falco is ready to go out of the box. If we need something specific, we add our own rules in /etc/falco/falco_rules.local.yaml.
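As an illustration of what such a local rule could look like (a hypothetical sketch; the rule name and output text are made up and it is not one of the shipped rules), a rule that flags ping being run inside our guiapps container might be written as:

$ cat /etc/falco/falco_rules.local.yaml
- rule: ping_in_guiapps
  desc: Detect the ping command being run inside the guiapps container (hypothetical example)
  condition: container.name = guiapps and proc.name = ping
  output: "ping was run in guiapps (user=%user.name command=%proc.cmdline)"
  priority: WARNING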

Running Falco

Let’s run falco.

$ sudo falco container.name = guiapps
Tue Nov 7 17:41:41 2017: Falco initialized with configuration file /etc/falco/falco.yaml
Tue Nov 7 17:41:41 2017: Parsed rules from file /etc/falco/falco_rules.yaml
Tue Nov 7 17:41:41 2017: Parsed rules from file /etc/falco/falco_rules.local.yaml
17:41:52.933145895: Notice Unexpected setuid call by non-sudo, non-root program (user=nobody parent=<NA> command=lxd forkexec guiapps /var/lib/lxd/containers /var/log/lxd/guiapps/lxc.conf -- env USER=root HOME=/root TERM=xterm-256color PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin LANG=C.UTF-8 -- cmd sudo --user ubuntu --login uid=root)
17:41:52.938956110: Notice A shell was spawned in a container with an attached terminal (user=user guiapps (id=guiapps) shell=bash parent=sudo cmdline=bash terminal=34842)
17:41:58.583366422: Notice A shell was spawned in a container with an attached terminal (user=root guiapps (id=guiapps) shell=bash parent=su cmdline=bash terminal=34842)

We specified that we want to focus only on the container with the name guiapps.

The lines in italics are the startup lines. The two lines on 17:41:52 are the result of the lxc exec guiapps -- sudo --user ubuntu --login. The next line is the result of sudo su.

Conclusion

Both Sysdig (troubleshooting) and Falco (monitoring) are useful tools that are aware of containers. Even their default use is quite handy for troubleshooting and monitoring containers. These tools have many more features, plus the ability to add scripts to them (called chisels) to do even more advanced things.

For more resources, check their respective home page.

Permanent link to this article: https://blog.simos.info/how-to-use-sysdig-and-falco-with-lxd-containers/

Nov 06 2017

How to install a Node.js app in a LXD container

Update #1: Added working screenshot and some instructions.

Update #2: Added instructions on how to get the app to autostart through systemd.

Installing a Node.js app on your desktop Linux computer is a messy affair, since you need to add a new repository and install lots of additional packages.

The alternative to messing up your desktop Linux is to create a new LXD (LexDee) container and install the app into that. Once you are done with the app, you can simply delete the container and that’s it. No trace whatsoever, and you keep your clean Linux installation.

First, see how to setup LXD on your Ubuntu desktop. There are extra resources if you want to install LXD on other distributions.

Second, for this post we are installing chalktalk, a Node.js app that turns your browser into an interactive blackboard. Nothing particular about Chalktalk, it just appeared on HN and it looks interesting.

Here is what we are going to see today,

  1. Create a LXD container
  2. Install Node.js in the LXD container
  3. Install Chalktalk
  4. Testing Chalktalk

Creating a LXD container

Let’s create a new LXD container with Ubuntu 16.04 (Xenial, therefore ubuntu:x), called mynodejs. Feel free to use something more descriptive, like chalktalk.

$ lxc launch ubuntu:x mynodejs
Creating mynodejs
Starting mynodejs
$ lxc list -c ns4
+----------+---------+----------------------+
| NAME     | STATE   | IPV4                 |
+----------+---------+----------------------+
| mynodejs | RUNNING | 10.52.252.246 (eth0) |
+----------+---------+----------------------+

Note down the IP address of the container. We need it when we test Chalktalk at the end of this howto.
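If you prefer not to copy the IP address by hand, you can also ask LXD for just those columns (the csv format flag exists only in more recent LXD 2.x releases, so the plain table output is the safe bet):

# Print only the name and IPv4 columns for this container
$ lxc list mynodejs -c n4
# On LXD versions that support it, machine-readable output is also possible
$ lxc list mynodejs -c 4 --format csv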

Then, we get  a shell into the LXD container.

$ lxc exec mynodejs -- sudo --login --user ubuntu
To run a command as administrator (user "root"), use "sudo <command>".
See "man sudo_root" for details.

ubuntu@mynodejs:~$

We executed, in the mynodejs container, the command sudo --login --user ubuntu. It gives us a login shell for the non-root default user ubuntu, which is always found in Ubuntu container images.

Installing Node.js in the LXD container

Here are the instructions to install Node.js 8 on Ubuntu.

ubuntu@mynodejs:~$ curl -sL https://deb.nodesource.com/setup_8.x | sudo -E bash -
## Installing the NodeSource Node.js v8.x repo...
...
## Run `apt-get install nodejs` (as root) to install Node.js v8.x and npm

ubuntu@mynodejs:~$ sudo apt-get install -y nodejs
...
Setting up python (2.7.11-1) ...
Setting up nodejs (8.9.0-1nodesource1) ...
ubuntu@mynodejs:~$ sudo apt-get install -y build-essential
...
ubuntu@mynodejs:~$

The curl | sh command makes you want to install in a container rather than on your desktop. Just saying. We also install the build-essential meta-package because it is needed when npm packages compile native add-ons during installation.

Installing Chalktalk

We follow the installation instructions for Chalktalk to clone the repository. We use depth=1 to get a shallow copy (18MB) instead of the full repository (100MB).

ubuntu@mynodejs:~$ git clone https://github.com/kenperlin/chalktalk.git --depth=1
Cloning into 'chalktalk'...
remote: Counting objects: 195, done.
remote: Compressing objects: 100% (190/190), done.
remote: Total 195 (delta 5), reused 51 (delta 2), pack-reused 0
Receiving objects: 100% (195/195), 8.46 MiB | 8.34 MiB/s, done.
Resolving deltas: 100% (5/5), done.
Checking connectivity... done.
ubuntu@mynodejs:~$ cd chalktalk/server/
ubuntu@mynodejs:~/chalktalk/server$ npm install
> bufferutil@1.2.1 install /home/ubuntu/chalktalk/server/node_modules/bufferutil
> node-gyp rebuild

make: Entering directory '/home/ubuntu/chalktalk/server/node_modules/bufferutil/build'
 CXX(target) Release/obj.target/bufferutil/src/bufferutil.o
 SOLINK_MODULE(target) Release/obj.target/bufferutil.node
 COPY Release/bufferutil.node
make: Leaving directory '/home/ubuntu/chalktalk/server/node_modules/bufferutil/build'

> utf-8-validate@1.2.2 install /home/ubuntu/chalktalk/server/node_modules/utf-8-validate
> node-gyp rebuild

make: Entering directory '/home/ubuntu/chalktalk/server/node_modules/utf-8-validate/build'
 CXX(target) Release/obj.target/validation/src/validation.o
 SOLINK_MODULE(target) Release/obj.target/validation.node
 COPY Release/validation.node
make: Leaving directory '/home/ubuntu/chalktalk/server/node_modules/utf-8-validate/build'
npm WARN chalktalk@0.0.1 No description
npm WARN chalktalk@0.0.1 No repository field.
npm WARN chalktalk@0.0.1 No license field.

added 5 packages in 3.091s
ubuntu@mynodejs:~/chalktalk/server$ cd ..
ubuntu@mynodejs:~/chalktalk$

Trying out Chalktalk

Let’s run the app, using the following command (you need to be in the chalktalk directory):

ubuntu@mynodejs:~/chalktalk$ node server/main.js 
HTTP server listening on port 11235

Now we are ready to try out Chalktalk! Use your favorite browser and visit http://10.52.252.246:11235 (adjust according to the IP address of your container).

You are presented with a blackboard! You use the mouse to sketch objects, then click on your sketch to get Chalktalk to try to identify it and create the actual interactive object.

It makes more sense if you watch the following video,

And this is how you can cleanly install Node.js into a LXD container. Once you are done testing, you can delete the container and it’s gone.

 

Update #1:

Here is an actual example. The pendulum responds to the mouse and we can nudge it.

The number can be incremented or decremented using the mouse; do an UP gesture to increment, and a DOWN gesture to decrement. You can also multiply/divide by 10 if you do a LEFT/RIGHT gesture.

Each type of object has a corresponding sketch in Chalktalk. In the source there is a directory with the sketches, with the ability to add new sketches.

 

Update #2:

Let’s see how to get this Node.js app to autostart when the LXD container is started. We are going to use systemd to control the autostart feature.

First, let’s create a script, called chalktalk-service.sh, that starts the Node.js app:

ubuntu@mynodejs:~$ pwd
/home/ubuntu
ubuntu@mynodejs:~$ cat chalktalk-service.sh 
#!/bin/sh
cd /home/ubuntu/chalktalk
/usr/bin/node /home/ubuntu/chalktalk/server/main.js
ubuntu@mynodejs:~$

We have created a script instead of running the command directly. The reason is that Chalktalk uses relative paths, so we need to chdir to the appropriate directory first for it to work. You may want to contact the author to attend to this.
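An alternative to the wrapper script would be to let systemd change into that directory itself, using the WorkingDirectory= directive in the service file we create next. A sketch of how the [Service] section could then look:

[Service]
Type=simple
User=ubuntu
Group=ubuntu
# Let systemd chdir into the Chalktalk checkout before starting the app,
# so the chalktalk-service.sh wrapper is not needed
WorkingDirectory=/home/ubuntu/chalktalk
ExecStart=/usr/bin/node /home/ubuntu/chalktalk/server/main.js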

Then, we create a service file for Chalktalk.

ubuntu@mynodejs:~$ cat /lib/systemd/system/chalktalk.service 
[Unit]
Description=Chalktalk - your live blackboard
Documentation=https://github.com/kenperlin/chalktalk/wiki
After=network.target
After=network-online.target

[Service]
Type=simple
User=ubuntu
Group=ubuntu
ExecStart=/home/ubuntu/chalktalk-service.sh
Restart=on-failure
StandardOutput=syslog
StandardError=syslog
SyslogIdentifier=chalktalk

[Install]
WantedBy=multi-user.target

ubuntu@mynodejs:~$

We have configured Chalktalk to autostart once the network is up and online. The service runs as the user ubuntu and then executes the command. Any output or error goes to syslog, using the chalktalk syslog identifier.
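That means we can follow the app’s output from inside the container with the usual tools, for example:

# Follow the service's output through journald...
ubuntu@mynodejs:~$ sudo journalctl -u chalktalk.service -f
# ...or look for the chalktalk syslog identifier in syslog
ubuntu@mynodejs:~$ grep chalktalk /var/log/syslog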

Let’s get Systemd to learn about this new service file.

ubuntu@mynodejs:~$ sudo systemctl daemon-reload
ubuntu@mynodejs:~$

Let’s enable the Chalktalk service, then start it.

ubuntu@mynodejs:~$ sudo systemctl enable chalktalk.service
ubuntu@mynodejs:~$ sudo systemctl start chalktalk.service
ubuntu@mynodejs:~$

Now we verify whether it works.

Let’s restart the container and test whether Chalktalk actually autostarts!

ubuntu@mynodejs:~$ logout

myusername@mycomputer /home/myusername:~$ lxc restart mynodejs
myusername@mycomputer /home/myusername:~$
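A quick way to verify, without opening a shell in the container, is to ask systemd for the service state and to poke the port from the host (a sketch; the IP address is the one we noted earlier, yours will differ):

# Check that the service came up after the restart
$ lxc exec mynodejs -- systemctl is-active chalktalk.service
# And check that something answers on port 11235 (use your container's IP address)
$ curl -I http://10.52.252.246:11235/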

That’s it!

Permanent link to this article: https://blog.simos.info/how-to-install-a-node-js-app-in-a-lxd-container/

Nov 01 2017

How to run Docker in a LXD container

We are running Ubuntu 16.04 and LXD 2.18 (from the lxd-stable PPA). Let’s get Docker (docker-ce) to run in a container!

General instructions on running Docker (docker.io, from the Ubuntu repositories) in an LXD container can be found at LXD 2.0: Docker in LXD [7/12].

 

First, let’s launch a LXD container in a way that will make it suitable to run Docker in it.

$ lxc launch ubuntu:x docker -c security.nesting=true
Creating docker
Starting docker
$

Here, docker is just the name of the LXD container. The security.nesting feature is needed because our Docker installation will be a container inside the LXD container.
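If you had already created the container without this setting, you could also add it afterwards and restart the container for it to take effect; a quick sketch:

# Enable nesting on an existing container, then restart it so the setting applies
$ lxc config set docker security.nesting true
$ lxc restart docker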

 

Second, let’s install Docker CE on Ubuntu.

ubuntu@docker:~$ sudo apt-get update
Get:1 http://security.ubuntu.com/ubuntu xenial-security InRelease [102 kB]
...
Reading package lists... Done
ubuntu@docker:~$ sudo apt-get install \
> apt-transport-https \
> ca-certificates \
> curl \
> software-properties-common
Reading package lists... Done
...
The following additional packages will be installed:
 libcurl3-gnutls
The following packages will be upgraded:
 curl libcurl3-gnutls
2 upgraded, 0 newly installed, 0 to remove and 20 not upgraded.
...
ubuntu@docker:~$ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
OK
ubuntu@docker:~$ sudo apt-key fingerprint 0EBFCD88
pub 4096R/0EBFCD88 2017-02-22
 Key fingerprint = 9DC8 5822 9FC7 DD38 854A E2D8 8D81 803C 0EBF CD88
uid Docker Release (CE deb) <docker@docker.com>
sub 4096R/F273FCD8 2017-02-22

ubuntu@docker:~$ sudo add-apt-repository \
> "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
> $(lsb_release -cs) \
> stable"
ubuntu@docker:~$ sudo apt-get update
...
ubuntu@docker:~$ sudo apt-get install docker-ce
...
The following NEW packages will be installed:
 aufs-tools cgroupfs-mount docker-ce libltdl7
0 upgraded, 4 newly installed, 0 to remove and 20 not upgraded.
Need to get 21.2 MB of archives.
After this operation, 100 MB of additional disk space will be used.
...
ubuntu@docker:~$

 

Third, let’s test that Docker is working, by running the hello-world image.

ubuntu@docker:~$ sudo docker run hello-world
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
5b0f327be733: Pull complete 
Digest: sha256:07d5f7800dfe37b8c2196c7b1c524c33808ce2e0f74e7aa00e603295ca9a0972
Status: Downloaded newer image for hello-world:latest

Hello from Docker!
This message shows that your installation appears to be working correctly.

To generate this message, Docker took the following steps:
 1. The Docker client contacted the Docker daemon.
 2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
 3. The Docker daemon created a new container from that image which runs the
 executable that produces the output you are currently reading.
 4. The Docker daemon streamed that output to the Docker client, which sent it
 to your terminal.

To try something more ambitious, you can run an Ubuntu container with:
 $ docker run -it ubuntu bash

Share images, automate workflows, and more with a free Docker ID:
 https://cloud.docker.com/

For more examples and ideas, visit:
 https://docs.docker.com/engine/userguide/

ubuntu@docker:~$

 

Finally, let’s go full inception by running Ubuntu in a Docker container inside an Ubuntu LXD container on Ubuntu 16.04.

ubuntu@docker:~$ sudo docker run -it ubuntu bash
Unable to find image 'ubuntu:latest' locally
latest: Pulling from library/ubuntu
ae79f2514705: Pull complete 
5ad56d5fc149: Pull complete 
170e558760e8: Pull complete 
395460e233f5: Pull complete 
6f01dc62e444: Pull complete 
Digest: sha256:506e2d5852de1d7c90d538c5332bd3cc33b9cbd26f6ca653875899c505c82687
Status: Downloaded newer image for ubuntu:latest
root@cc7fe5598b2e:/# exit

 

Before we finish, let’s check what docker info says.

ubuntu@docker:~$ sudo docker info
Containers: 2
 Running: 0
 Paused: 0
 Stopped: 2
Images: 2
Server Version: 17.09.0-ce
Storage Driver: vfs
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: bridge host macvlan null overlay
 Log: awslogs fluentd gcplogs gelf journald json-file logentries splunk syslog
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 06b9cb35161009dcb7123345749fef02f7cea8e0
runc version: 3f2f8b84a77f73d38244dd690525642a72156c64
init version: 949e6fa
Security Options:
 apparmor
 seccomp
 Profile: default
Kernel Version: 4.10.0-37-generic
Operating System: Ubuntu 16.04.3 LTS
OSType: linux
Architecture: x86_64
CPUs: 4
Total Memory: 4.053 GiB
Name: docker
ID: KEA5:KU1N:MM6U:2ROA:UHJ5:3VRR:3B6C:ZWF4:KTOB:V6OT:ROLI:CAT3
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Experimental: false
Insecure Registries:
 127.0.0.0/8
Live Restore Enabled: false

WARNING: No swap limit support
WARNING: bridge-nf-call-iptables is disabled
WARNING: bridge-nf-call-ip6tables is disabled
ubuntu@docker:~$

An issue here is that the Docker storage driver is vfs, instead of aufs or overlay2. The Docker package in the Ubuntu repositories (name: docker.io) has modifications to make it work better with LXD. There are a few questions here on how to get a different storage driver and a few things are still unclear to me.

Permanent link to this article: https://blog.simos.info/how-to-run-docker-in-a-lxd-container/

Oct 31 2017

Online course about LXD containers

If you want to learn about LXD, there is a good online course at LinuxAcademy (warning: referral).

It is the LXC/LXD Deep Dive online course by Chad Miller.

Some of the course details of the LXC/LXD Deep Dive course on LinuxAcademy (total time: 3h)

Here is a review of this LXC/LXD online course by bmullan.

There is a 7-day unlimited trial on LinuxAcademy, then $29 per month (bank card or Paypal). If you plan it well, you can complete the course within the trial period, for free!

Permanent link to this article: https://blog.simos.info/online-course-about-lxd-containers/
