How to view the files of your LXD container from the host

You create LXD containers and enter them (with lxc shell, lxc exec, or lxc console) in order to view or modify their files. But can you access the filesystem of a container directly from the host?

If you use the LXD snap package, LXD mounts the filesystem of each container in a subdirectory under /var/snap/lxd/common/lxd/storage-pools/lxd/containers/. If you run the following, you will see a list of your containers, each in a subdirectory named after the container.

$ sudo -i
# ls -l /var/snap/lxd/common/lxd/storage-pools/lxd/containers/

But the container directories are empty

Unless you use a dir storage pool, the container subdirectories will most likely appear empty. The container is running, yet its subdirectory is empty?

This happens because of how Linux namespaces work, and LXD uses them. To view the container files from the host, you need to enter the mount namespace of the LXD service. Here is how it is done: with -t we specify the target, the process ID of the LXD service, and with -m we specify that we want to enter the mount namespace of that process.

$ sudo nsenter -t $(cat /var/snap/lxd/common/lxd.pid) -m
[sudo] password for myusername:

In another terminal, let's launch a container. This is the container you will be investigating.

$ lxc launch ubuntu:18.04 mycontainer
Creating mycontainer
Starting mycontainer

Now, let's look at the files of this container. There is a backup.yaml, which is similar to the output of the command lxc config show mycontainer --expanded but has additional keys for the pool, volume, and snapshots. This file is important if you ever lose your LXD database. The metadata.yaml, together with the templates/ directory, describes how the container was parameterized. In an Ubuntu container the defaults are used, except for the networking in templates/cloud-init-network.tpl, which sets up a minimal default configuration for eth0 to obtain a DHCP lease from the network. And last is rootfs/, which is the location of the filesystem of the container.

# cd /var/snap/lxd/common/lxd/storage-pools/lxd/containers/
# ls -l mycontainer/
total 6
-r--------  1 root    root    2952 Feb  3 17:07 backup.yaml
-rw-r--r--  1 root    root    1050 Jan 29 23:55 metadata.yaml
drwxr-xr-x 22 1000000 1000000   22 Jan 29 23:19 rootfs
drwxr-xr-x  2 root    root       7 Jan 29 23:55 templates

The rootfs/ directory has UID/GID 1000000/1000000. The files inside the root filesystem of the container have IDs that are shifted by 1000000 from the typical range 0-65534. That is, the files inside the container have IDs that range from 1000000 to 1065534. The root account in the container has real UID 1000000 on the host but appears as UID 0 inside the container. Here is the listing of the root directory of the container, as seen from the host.

# ls -l /var/snap/lxd/common/lxd/storage-pools/lxd/containers/mycontainer/rootfs/
total 41
drwxr-xr-x  2 1000000 1000000 172 Jan 29 23:17 bin
drwxr-xr-x  2 1000000 1000000   2 Jan 29 23:19 boot
drwxr-xr-x  4 1000000 1000000  15 Jan 29 23:17 dev
drwxr-xr-x 88 1000000 1000000 176 Feb  3 17:07 etc
drwxr-xr-x  3 1000000 1000000   3 Feb  3 17:07 home
drwxr-xr-x 20 1000000 1000000  23 Jan 29 23:16 lib
drwxr-xr-x  2 1000000 1000000   3 Jan 29 23:15 lib64
drwxr-xr-x  2 1000000 1000000   2 Jan 29 23:15 media
drwxr-xr-x  2 1000000 1000000   2 Jan 29 23:15 mnt
drwxr-xr-x  2 1000000 1000000   2 Jan 29 23:15 opt
drwxr-xr-x  2 1000000 1000000   2 Apr 24  2018 proc
drwx------  3 1000000 1000000   5 Feb  3 17:07 root
drwxr-xr-x  4 1000000 1000000   4 Jan 29 23:19 run
drwxr-xr-x  2 1000000 1000000 221 Jan 29 23:17 sbin
drwxr-xr-x  2 1000000 1000000   3 Feb  3 17:07 snap
drwxr-xr-x  2 1000000 1000000   2 Jan 29 23:15 srv
drwxr-xr-x  2 1000000 1000000   2 Apr 24  2018 sys
drwxrwxrwt  8 1000000 1000000   8 Feb  3 17:08 tmp
drwxr-xr-x 10 1000000 1000000  10 Jan 29 23:15 usr
drwxr-xr-x 13 1000000 1000000  15 Jan 29 23:17 var
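The ID shift above is simple arithmetic. Here is a minimal sketch, assuming the default shift base of 1000000 used by the LXD snap (the helper names are my own; check your container's volatile.last_state.idmap if your base differs):

```shell
# Assumed default idmap base for the LXD snap.
shift_base=1000000

# Illustrative helpers: map a container UID to the host UID, and back.
to_host_uid()      { echo $(( shift_base + $1 )); }
to_container_uid() { echo $(( $1 - shift_base )); }

to_host_uid 0            # container root appears as 1000000 on the host
to_container_uid 1000033 # host UID 1000033 is UID 33 inside the container
```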

If we create a file in the container’s rootfs from the host, how will it look from within the container? Let’s try.

root@mycomputer:/var/snap/lxd/common/lxd/storage-pools/lxd/containers/mycontainer/rootfs# touch mytest.txt

Then, from within the container, we run the following. The file, whose UID and GID are not valid inside the container, appears with owner nobody and group nogroup. That is, if you notice too many files owned by nobody in a container, there is a chance that something went wrong with the IDs and it requires investigation.

$ lxc shell mycontainer
mesg: ttyname failed: No such device
root@mycontainer:~# ls -l /mytest.txt 
-rw-r--r-- 1 nobody nogroup 0 Feb  3 15:32 /mytest.txt
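The nobody/nogroup pair comes from the kernel's overflow IDs: when a file's owner has no mapping in the container's idmap, the kernel reports the overflow UID/GID, typically 65534. This is a general Linux mechanism, not LXD-specific, and you can check the values in use:

```shell
# Owners that cannot be mapped into the current user namespace are
# reported with these overflow IDs (usually 65534, i.e. nobody/nogroup).
cat /proc/sys/kernel/overflowuid
cat /proc/sys/kernel/overflowgid
```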


Error: I ran nsenter but still cannot see any files?

If you rebooted your LXD host and the container is set not to autostart after boot, then LXD, as an optimization, does not mount the container's rootfs. You can either start the container (so that LXD performs the mount for you), or mount it manually.

To manually mount a container called mycontainer2 (here on a ZFS pool named lxd), you would run the following:

# mount -t zfs lxd/containers/mycontainer2 /var/snap/lxd/common/lxd/storage-pools/lxd/containers/mycontainer2 
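For scripting, a guarded variant can help. The following sketch (the pool name lxd and the container name are assumptions) only prints the mount command when the target is not already mounted; drop the echo to actually perform the mount, as root inside the LXD mount namespace:

```shell
pool=lxd
name=mycontainer2
target=/var/snap/lxd/common/lxd/storage-pools/$pool/containers/$name

# Print the command instead of running it; remove "echo" to really mount.
if ! mountpoint -q "$target" 2>/dev/null; then
    echo mount -t zfs "$pool/containers/$name" "$target"
fi
```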


We have seen how to enter the mount namespace of the LXD service, have a look at the files of the containers, and, if needed, perform this mount manually.


    • Brudi on May 9, 2020 at 13:32

    is there a way to automate such a process (except crontab ofc)? so that every time I have to reboot either my container or my LXD host I don't have to

    1. To view the files of the container from the host, you end up switching namespaces.
      Therefore, it is a per-process operation: the process switches its mount namespace into that of the LXD service.

      That is, if you are writing a script on the host to look into a container's files, you would need to adapt the script to first run the sudo nsenter command shown in the post.
      If you want to change namespaces from your shell, you can also create a simple shell alias for this purpose.
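      For example, a possible alias (a sketch; the name lxdns is my choice, and it assumes the pid file the LXD snap maintains at /var/snap/lxd/common/lxd.pid):

```shell
# Hypothetical convenience alias: enter the mount namespace of the LXD
# snap daemon in one step. Assumes the snap keeps its pid file at
# /var/snap/lxd/common/lxd.pid.
alias lxdns='sudo nsenter -t "$(cat /var/snap/lxd/common/lxd.pid)" -m'
```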

    • matt on January 29, 2021 at 08:49

    Is it possible to copy all the files of a container with rsync? If I run the sudo nsenter command, I can't execute rsync because the binary is missing.

    1. Hi!

      When you use nsenter, you enter the namespace of the LXD snap package. In this namespace, the /usr/bin/ and other system directories are from the snapd core image. For example,

      # df -h /usr/bin
      Filesystem      Size  Used Avail Use% Mounted on
      /dev/loop21      56M   56M     0 100% /

      Therefore, the only executables available are the small set that ships with the Ubuntu core runtime.

      So, what do you do?

      The LXD snap has a copy of rsync for its own purposes, /snap/lxd/current/bin/rsync.

      If you try to run it, it will complain about missing libraries. Run it as follows:

      LD_LIBRARY_PATH=/snap/lxd/current/lib/x86_64-linux-gnu/ /snap/lxd/current/bin/rsync
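      A hedged usage sketch: copying a container's rootfs with that bundled rsync. The destination under /var/lib/snapd/hostfs/ (which exposes the host's real filesystem inside the snap namespace) is my choice of backup path; the echo keeps this a dry run, so remove it to actually copy, as root inside the namespace:

```shell
src=/var/snap/lxd/common/lxd/storage-pools/lxd/containers/mycontainer/rootfs/
dst=/var/lib/snapd/hostfs/root/mycontainer-backup/   # hypothetical target

# Remove "echo" to perform the copy (as root, inside the LXD namespace).
echo LD_LIBRARY_PATH=/snap/lxd/current/lib/x86_64-linux-gnu/ \
     /snap/lxd/current/bin/rsync -a "$src" "$dst"
```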
    • matt on January 29, 2021 at 14:51


    Bonus tip for other users: If you want to copy the files into another ZFS filesystem, you can find your host's ZFS pools at /var/lib/snapd/hostfs/

    • Kamzar on February 20, 2021 at 11:57

    Thanks for this helpful insight.

    In a legacy LXD installation (apt install), the container root folder was in plain sight and accessible at /var/lib/lxd/storage-pools/pool/container/root

    But the LXD snap package, when utilizing a ZFS pool, creates for each container a ZFS dataset with these properties:

    mountpoint: /var/snap/lxd/common/lxd/storage-pools/pool/containers/x
    mounted: no

    Just mount it:

    zfs mount yourzfspool/containername

    and access to container:

    cd /var/snap/lxd/common/lxd/storage-pools/pool/containers/x

    Keeping the container mounted does not disturb any further container operations, though when deleting the container, LXD has trouble purging the mounted ZFS dataset.
    So it is better to manually unmount:

    zfs unmount yourzfspool/containername

    This process can be used for automated tasks:

    zfs mount container
    .. process
    zfs unmount container
    1. Thanks Kamzar.

      Indeed, you can access the storage pool files directly using the tools of the storage pool.
      You can also use these commands for recovery purposes when the snap package for LXD fails to start.
