LXD is the pure-container hypervisor that is pre-installed in Ubuntu 16.04 (or newer) and also available in other GNU/Linux distributions.
When you first configure LXD, you need to make some important decisions: where the containers will be stored, how big that space will be, and how to set up networking.
In this post we are going to see how to properly clean up LXD, with the aim of initializing it again (lxd init).
If you have not used LXD at all yet, have a look at how to set up LXD on your desktop and then come back so that we can reinitialize it together.
Before initializing again, let's have a look at what is going on in our system.
What LXD packages have we got installed?
LXD comes in two packages, the lxd package for the hypervisor and the lxd-client for the client utility. There is an extra package, lxd-tools, however this one is not essential at all.
Let’s check which versions we have installed.
$ apt policy lxd lxd-client
lxd:
  Installed: 2.0.9-0ubuntu1~16.04.2
  Candidate: 2.0.9-0ubuntu1~16.04.2
  Version table:
 *** 2.0.9-0ubuntu1~16.04.2 500
        500 http://gb.archive.ubuntu.com/ubuntu xenial-updates/main amd64 Packages
        100 /var/lib/dpkg/status
     2.0.2-0ubuntu1~16.04.1 500
        500 http://security.ubuntu.com/ubuntu xenial-security/main amd64 Packages
     2.0.0-0ubuntu4 500
        500 http://gb.archive.ubuntu.com/ubuntu xenial/main amd64 Packages
lxd-client:
  Installed: 2.0.9-0ubuntu1~16.04.2
  Candidate: 2.0.9-0ubuntu1~16.04.2
  Version table:
 *** 2.0.9-0ubuntu1~16.04.2 500
        500 http://gb.archive.ubuntu.com/ubuntu xenial-updates/main amd64 Packages
        100 /var/lib/dpkg/status
     2.0.2-0ubuntu1~16.04.1 500
        500 http://security.ubuntu.com/ubuntu xenial-security/main amd64 Packages
     2.0.0-0ubuntu4 500
        500 http://gb.archive.ubuntu.com/ubuntu xenial/main amd64 Packages
$ _
I am running Ubuntu 16.04 LTS, currently updated to 16.04.2. The current version of the LXD package is 2.0.9-0ubuntu1~16.04.2. You can see that there is an older version, 2.0.2, which was a security update, and an even older one, 2.0.0, which was the version that Ubuntu 16.04 was originally released with.
There is a PPA with even more recent versions of LXD (currently at version 2.11); however, as shown above, we do not have it enabled here.
In a bit, we will be uninstalling those two packages. We can either simply uninstall them, or uninstall them with --purge. To decide whether to purge or not, we first need to figure out which files on the system belong to LXD.
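As a reminder of the two flavours, here is a small sketch; the helper function is just for illustration, while the echoed apt commands are the standard ones:

```shell
# Print the uninstall command for each flavour; "purge" additionally
# deletes the packages' configuration files.
uninstall_cmd() {
    if [ "$1" = "purge" ]; then
        echo "sudo apt remove --purge lxd lxd-client"
    else
        echo "sudo apt remove lxd lxd-client"
    fi
}

uninstall_cmd keep    # plain uninstall, keeps configuration files
uninstall_cmd purge   # uninstall and delete configuration files
```

Note that purging may also remove state under /var/lib/lxd, so only purge once you are sure you no longer need any of it.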
How are the containers stored and where are they located?
The containers can be stored either

- in subdirectories on the root (/) filesystem, located at /var/lib/lxd/containers/. You get this when you configure LXD to use the dir storage backend.
- in a loop file that is internally formatted with the ZFS filesystem, located at /var/lib/lxd/zfs.img (or under /var/lib/lxd/disks/ in newer versions). You get this when you configure LXD to use the zfs storage backend on a loop file rather than a block device.
- in a block device (partition) that is formatted with ZFS (or btrfs). You get this when you configure LXD to use the zfs storage backend on a block device rather than a loop file.
Let’s see all three cases!
In the following we assume we have a container called mytest, which is running.
$ lxc list
+--------+---------+----------------------+------+------------+-----------+
|  NAME  |  STATE  |         IPV4         | IPV6 |    TYPE    | SNAPSHOTS |
+--------+---------+----------------------+------+------------+-----------+
| mytest | RUNNING | 10.177.65.166 (eth0) |      | PERSISTENT | 0         |
+--------+---------+----------------------+------+------------+-----------+
$ _
Let’s see how it looks depending on the type of the storage backend.
Storage backend: dir
Let’s see the config!
$ lxc config show
config: {}
$ _
We are looking for configuration that refers to storage. We do not see any, therefore, this installation uses the dir storage backend.
Where are the files for the mytest container stored?
$ sudo ls -l /var/lib/lxd/containers/
total 8
drwxr-xr-x+ 4 165536 165536 4096 Mar 15 23:28 mytest
$ sudo ls -l /var/lib/lxd/containers/mytest/
total 12
-rw-r--r--  1 root   root   1566 Mar  8 05:16 metadata.yaml
drwxr-xr-x 22 165536 165536 4096 Mar 15 23:28 rootfs
drwxr-xr-x  2 root   root   4096 Mar  8 05:16 templates
$ _
Each container can be found in /var/lib/lxd/containers/, in a subdirectory with the same name as the container.
Inside it, the rootfs/ directory holds the filesystem of the container.
Storage backend: zfs
Let’s see what the config looks like!
$ lxc config show
config:
  storage.zfs_pool_name: lxd
$ _
Okay, we are using ZFS for the storage backend. It is not clear yet whether we are using a loop file or a block device. How do we find that? With zpool status.
$ sudo zpool status
  pool: lxd
 state: ONLINE
  scan: none requested
config:

        NAME                    STATE     READ WRITE CKSUM
        lxd                     ONLINE       0     0     0
          /var/lib/lxd/zfs.img  ONLINE       0     0     0

errors: No known data errors
In the above example, the ZFS filesystem is stored in a loop file, located at /var/lib/lxd/zfs.img
However, in the following example,
$ sudo zpool status
  pool: lxd
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        lxd         ONLINE       0     0     0
          sda8      ONLINE       0     0     0

errors: No known data errors
the ZFS filesystem is located in a block device, in /dev/sda8.
Here is how the container files look with ZFS (either on a loop file or on a block device):
$ sudo ls -l /var/lib/lxd/containers/
total 5
lrwxrwxrwx 1 root   root   34 Mar 15 23:43 mytest -> /var/lib/lxd/containers/mytest.zfs
drwxr-xr-x 4 165536 165536  5 Mar 15 23:43 mytest.zfs
$ sudo ls -l /var/lib/lxd/containers/mytest/
total 4
-rw-r--r--  1 root   root   1566 Mar  8 05:16 metadata.yaml
drwxr-xr-x 22 165536 165536   22 Mar 15 23:43 rootfs
drwxr-xr-x  2 root   root      8 Mar  8 05:16 templates
$ mount | grep mytest.zfs
lxd/containers/mytest on /var/lib/lxd/containers/mytest.zfs type zfs (rw,relatime,xattr,noacl)
$ _
How to clean up the storage backend
When we try to run lxd init without cleaning up our storage, we get the following error,
$ lxd init
LXD init cannot be used at this time.
However if all you want to do is reconfigure the network,
you can still do so by running "sudo dpkg-reconfigure -p medium lxd"

error: You have existing containers or images. lxd init requires an empty LXD.
$ _
Yep, we need to clean up both the containers and any cached images.
Cleaning up the containers
We are going to list the containers, then stop them, and finally delete them. Until the list is empty.
$ lxc list
+--------+---------+----------------------+------+------------+-----------+
|  NAME  |  STATE  |         IPV4         | IPV6 |    TYPE    | SNAPSHOTS |
+--------+---------+----------------------+------+------------+-----------+
| mytest | RUNNING | 10.177.65.205 (eth0) |      | PERSISTENT | 0         |
+--------+---------+----------------------+------+------------+-----------+
$ lxc stop mytest
$ lxc delete mytest
$ lxc list
+------+-------+------+------+------+-----------+
| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
+------+-------+------+------+------+-----------+
$ _
It’s empty now!
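The stop-and-delete steps can also be scripted. Here is a sketch that only prints the commands (a dry run), so nothing is deleted by accident; the sample name is the mytest container from above:

```shell
# Print the cleanup commands for each container name read from stdin.
print_cleanup() {
    while read -r name; do
        printf 'lxc stop %s\nlxc delete %s\n' "$name" "$name"
    done
}

# Sample input; on a real system, feed it the names from `lxc list`.
printf 'mytest\n' | print_cleanup
```

Piping the printed commands into sh would execute them. With a newer client (LXD 2.3 or later), lxc list -c n --format csv produces the bare container names directly.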
Cleaning up the images
We are going to list the cached images, then delete them. Until the list is empty!
$ lxc image list
+-------+--------------+--------+---------------------------------------------+--------+----------+-------------------------------+
| ALIAS | FINGERPRINT  | PUBLIC | DESCRIPTION                                 | ARCH   | SIZE     | UPLOAD DATE                   |
+-------+--------------+--------+---------------------------------------------+--------+----------+-------------------------------+
|       | 2cab90c0c342 | no     | ubuntu 16.04 LTS amd64 (release) (20170307) | x86_64 | 146.32MB | Mar 15, 2017 at 10:02pm (UTC) |
+-------+--------------+--------+---------------------------------------------+--------+----------+-------------------------------+
$ lxc image delete 2cab90c0c342
$ lxc image list
+-------+-------------+--------+-------------+------+------+-------------+
| ALIAS | FINGERPRINT | PUBLIC | DESCRIPTION | ARCH | SIZE | UPLOAD DATE |
+-------+-------------+--------+-------------+------+------+-------------+
$ _
Cleaning up the ZFS storage
If we are using ZFS, here is how we clear up the ZFS pool.
First, we need to remove any reference of the ZFS pool from LXD. We just need to unset the configuration directive storage.zfs_pool_name.
$ lxc config show
config:
  storage.zfs_pool_name: lxd
$ lxc config unset storage.zfs_pool_name
$ lxc config show
config: {}
$ _
Then, we can destroy the ZFS pool.
$ sudo zpool list
NAME   SIZE  ALLOC   FREE  EXPANDSZ  FRAG  CAP  DEDUP  HEALTH  ALTROOT
lxd   2,78G   664K  2,78G         -    7%   0%  1.00x  ONLINE  -
$ sudo zfs list
NAME             USED  AVAIL  REFER  MOUNTPOINT
lxd              544K  2,69G    19K  none
lxd/containers    19K  2,69G    19K  none
lxd/images        19K  2,69G    19K  none
$ sudo zpool destroy lxd
$ sudo zpool list
no pools available
$ sudo zfs list
no datasets available
$ _
Running “lxd init” again
At this point we are able to run lxd init again in order to initialize LXD again.
Common errors
Here is a collection of errors that I encountered when running lxd init. These errors should appear if we did not clean up properly as described earlier in this post.
I had been trying lots of variations, including different versions of LXD. You probably need to try hard to get these errors.
error: Provided ZFS pool (or dataset) isn’t empty
Here is how it looks:
$ sudo lxd init
Name of the storage backend to use (dir or zfs) [default=zfs]: zfs
Create a new ZFS pool (yes/no) [default=yes]? no
Name of the existing ZFS pool or dataset: lxd
Would you like LXD to be available over the network (yes/no) [default=no]? no
Do you want to configure the LXD bridge (yes/no) [default=yes]? no
error: Provided ZFS pool (or dataset) isn't empty
Exit 1
Whaaaat??? Something is wrong. The ZFS pool is not empty? What’s inside the ZFS pool?
$ sudo zfs list
NAME             USED  AVAIL  REFER  MOUNTPOINT
lxd              642K  14,4G    19K  none
lxd/containers    19K  14,4G    19K  none
lxd/images        19K  14,4G    19K  none
Okay, it’s just the two volumes that are left over. Let’s erase them!
$ sudo zfs destroy lxd/containers
$ sudo zfs destroy lxd/images
$ sudo zfs list
NAME   USED  AVAIL  REFER  MOUNTPOINT
lxd    349K  14,4G    19K  none
$ _
Nice! Let’s run now lxd init.
$ sudo lxd init
Name of the storage backend to use (dir or zfs) [default=zfs]: zfs
Create a new ZFS pool (yes/no) [default=yes]? no
Name of the existing ZFS pool or dataset: lxd
Would you like LXD to be available over the network (yes/no) [default=no]? no
Do you want to configure the LXD bridge (yes/no) [default=yes]? yes
Warning: Stopping lxd.service, but it can still be activated by:
  lxd.socket
LXD has been successfully configured.
$ _
That’s it! LXD is freshly configured!
error: Failed to create the ZFS pool: cannot create ‘lxd’: pool already exists
Here is how it looks,
$ sudo lxd init
Name of the storage backend to use (dir or zfs) [default=zfs]:
Create a new ZFS pool (yes/no) [default=yes]?
Name of the new ZFS pool [default=lxd]:
Would you like to use an existing block device (yes/no) [default=no]? yes
Path to the existing block device: /dev/sdb9
Would you like LXD to be available over the network (yes/no) [default=no]?
Do you want to configure the LXD bridge (yes/no) [default=yes]?
error: Failed to create the ZFS pool: cannot create 'lxd': pool already exists
$ _
Here we forgot to destroy the ZFS pool called lxd. See earlier in this post on how to destroy the pool so that lxd init can recreate it.
Permission denied, are you in the lxd group?
This is a common error when you first install the lxd package because your non-root account needs to log out and log in again in order to enable the membership to the lxd Unix group.
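A quick way to check whether your current session has actually picked up the group (the group list only refreshes after logging out and back in, or via newgrp lxd) is this small sketch:

```shell
# Does the current login session already have the lxd group?
if id -nG | grep -qw lxd; then
    echo "this session is in the lxd group"
else
    echo "not yet in the lxd group; log out and back in, or run: newgrp lxd"
fi
```

Note that `groups myusername` reads /etc/group and can say yes even when the running session has not refreshed its group list yet; `id -nG` reports what the current session really has.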
However, we got this error when we were casually uninstalling and reinstalling the lxd package, and doing nasty tests. Let’s see more details.
$ lxc list
Permission denied, are you in the lxd group?
Exit 1
$ groups myusername
myusername : myusername adm cdrom sudo plugdev lpadmin lxd
$ newgrp lxd
$ lxc list
Permission denied, are you in the lxd group?
Exit 1
$ _
Whaaat!?! Permission denied and we are asked whether we are in the lxd group? We are members of the lxd group!
Well, the problem is that the Unix socket that allows non-root users (members of the lxd Unix group) to access LXD does not have the proper ownership.
$ ls -l /var/lib/lxd/unix.socket
srw-rw---- 1 root root 0 Mar 15 23:20 /var/lib/lxd/unix.socket
$ sudo chown :lxd /var/lib/lxd/unix.socket
$ ls -l /var/lib/lxd/unix.socket
srw-rw---- 1 root lxd 0 Mar 15 23:20 /var/lib/lxd/unix.socket
$ lxc list
+------+-------+------+------+------+-----------+
| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
+------+-------+------+------+------+-----------+
$ _
The group of the Unix socket /var/lib/lxd/unix.socket was not set to the proper value lxd, therefore we set it ourselves. And then the LXD commands work just fine with our non-root user account!
error: Error checking if a pool is already in use: Failed to list ZFS filesystems: cannot open ‘lxd’: dataset does not exist
Here is a tricky error.
$ sudo lxd init
Name of the storage backend to use (dir or zfs) [default=zfs]:
Create a new ZFS pool (yes/no) [default=yes]?
Name of the new ZFS pool [default=lxd]: lxd2
Would you like to use an existing block device (yes/no) [default=no]?
Size in GB of the new loop device (1GB minimum) [default=15]:
Would you like LXD to be available over the network (yes/no) [default=no]?
Do you want to configure the LXD bridge (yes/no) [default=yes]?
error: Error checking if a pool is already in use: Failed to list ZFS filesystems: cannot open 'lxd': dataset does not exist
$ _
We cleaned up the ZFS pool just fine and we are running lxd init. But we got an error relating to the lxd pool that is already gone. Whaat?!?
What happened is that, in this case, we forgot to first unset the LXD configuration option for the ZFS pool; we never ran lxc config unset storage.zfs_pool_name.
It’s fine then, let’s unset it now and go on with life.
$ lxc config unset storage.zfs_pool_name
error: Error checking if a pool is already in use: Failed to list ZFS filesystems: cannot open 'lxd': dataset does not exist
Exit 1
$ _
Alright, we really messed up!
There are two ways to move forward. One is to rm -fr /var/lib/lxd/ and start over, losing any remaining LXD state.
The other is to edit the /var/lib/lxd/lxd.db SQLite database and change the configuration setting there. Here is how that works.
First, install the sqlitebrowser package and run sudo sqlitebrowser /var/lib/lxd/lxd.db
Second, open the config table in sqlitebrowser.
Third, double-click on the value field (the one that says lxd) and clear it so that it is empty.
Fourth, click on File→Close Database and choose to save the database. Let’s see now!
$ lxc config show
config:
  storage.zfs_pool_name: lxd
What?
Fifth, we need to restart the LXD service so that LXD reads the configuration again.
$ sudo systemctl restart lxd.service
$ lxc config show
config: {}
$ _
That’s it! We are good to go!
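If you prefer the command line over sqlitebrowser, the sqlite3 tool can make the same change. Here is a sketch against a throwaway database with a minimal config table similar in shape to LXD's (it assumes the sqlite3 package is installed; on a real system, stop LXD first, back up /var/lib/lxd/lxd.db, and point sqlite3 at that file instead):

```shell
# Build a throwaway database with a minimal config table for the demo.
db=$(mktemp)
sqlite3 "$db" "CREATE TABLE config (id INTEGER PRIMARY KEY, key TEXT UNIQUE, value TEXT);
               INSERT INTO config (key, value) VALUES ('storage.zfs_pool_name', 'lxd');"

# The actual fix: drop the stale storage setting.
sqlite3 "$db" "DELETE FROM config WHERE key = 'storage.zfs_pool_name';"

# Verify it is gone (prints 0).
sqlite3 "$db" "SELECT COUNT(*) FROM config WHERE key = 'storage.zfs_pool_name';"
rm -f "$db"
```

As with the sqlitebrowser route, restart the LXD service afterwards so that it rereads the configuration.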
Hi,
For example, I have LXD with ZFS running on system A. If system A crashes or has a hardware failure, can I move the ZFS pool from system A to a new system B, get LXD running using that ZFS pool, and continue using all the containers in it? Thanks.
Author
The full state of a LXD installation is stored in /var/lib/lxd.
Specifically, /var/lib/lxd/lxd.db is an SQLite database with information about each container along with the information about the storage pool (ZFS).
The easy way would be to
1. stop the LXD service on A
2. stop the (empty) LXD server on B
3. copy over /var/lib/lxd/ from A to the same location on B.
4. start the LXD service on B.
If you need to make individual changes, you may edit /var/lib/lxd/lxd.db using the program “sqlitebrowser”.
Excellent article, thanks! Trying to do this on Ubuntu 20.04 server, will give you a report on success or modifications necessary.
Author
Thanks!
This article was written for LXD 2.0; we now have LXD 4.x and also snap packages.
The big issue with removing the snap package is that the snapd service will, by default, try to back up your data. So you need extra effort to expunge any containers that need to go.

Author
This post is one of my oldest. It was about LXD 2.0!
is there an updated post?