Update February 2021: See the Notes below for extra steps when trying this on the Raspberry Pi 4.
Updated instructions for LXD 4.5 (September 2020)
LXD 4.5 added features that make proxy devices more secure: they are now under AppArmor confinement, so if something goes wrong in a proxy device, your system is safer. In doing so, however, something broke and it was no longer possible to start GUI/X11 LXD containers. Among the possible workarounds, the real solution is to move forward to LXD 4.6, which fixes the AppArmor confinement issue.
Run snap info lxd to verify which channel you are tracking. In my case, I am tracking the latest/stable channel, which currently has LXD 4.5 (proxy devices do not work). However, the latest/candidate channel has LXD 4.6, and Stéphane Graber said it has the fix for proxy devices. We are switching to this channel for now, until LXD 4.6 is released as a stable version next week. That is, if you do the following, make a note to come back here (around next Thursday) to switch back from the candidate channel to a stable channel (latest/stable or 4.6/stable).
$ snap info lxd
name: lxd
summary: System container manager and API
publisher: Canonical✓
...
tracking: latest/stable
...
channels:
latest/stable: 4.5 2020-09-18 (17299) 71MB -
latest/candidate: 4.6 2020-09-19 (17320) 71MB -
latest/beta: ↑
latest/edge: git-e1fa47b 2020-09-19 (17324) 71MB -
4.6/stable: –
4.6/candidate: 4.6 2020-09-19 (17320) 71MB -
4.6/beta: ↑
4.6/edge: ↑
...
$
Now, we refresh the LXD snap package to the latest/candidate channel.
$ snap refresh lxd --channel=latest/candidate
lxd (candidate) 4.6 from Canonical✓ refreshed
$
And that’s it.

NOTE: If you set up the latest/candidate channel, you should now switch back to the latest/stable channel. LXD 4.6 has now been released into the stable channel. Use the following command:
sudo snap refresh lxd --channel=latest/stable
The post continues…
With LXD you can run system containers, which are similar to virtual machines. Normally, you would use a system container to run network services. But you can also run X11 applications. See the following discussion and come back here. In this post, we further refine and simplify the instructions for the second way to run X applications. Previously I have written several tutorials on this.
LXD GUI profile
Here is the updated LXD profile to set up a LXD container that runs X11 applications on the host's X server. Copy the following text and put it in a file, x11.profile. Note that the X1 value below should be adapted for your case; the number is derived from the environment variable $DISPLAY on the host. If the value is :1, use X1 (as it already is below). If the value is :0, change the profile to use X0 instead.
config:
  environment.DISPLAY: :0
  environment.PULSE_SERVER: unix:/home/ubuntu/pulse-native
  nvidia.driver.capabilities: all
  nvidia.runtime: "true"
  user.user-data: |
    #cloud-config
    runcmd:
      - 'sed -i "s/; enable-shm = yes/enable-shm = no/g" /etc/pulse/client.conf'
    packages:
      - x11-apps
      - mesa-utils
      - pulseaudio
description: GUI LXD profile
devices:
  PASocket1:
    bind: container
    connect: unix:/run/user/1000/pulse/native
    listen: unix:/home/ubuntu/pulse-native
    security.gid: "1000"
    security.uid: "1000"
    uid: "1000"
    gid: "1000"
    mode: "0777"
    type: proxy
  X0:
    bind: container
    connect: unix:@/tmp/.X11-unix/X1
    listen: unix:@/tmp/.X11-unix/X0
    security.gid: "1000"
    security.uid: "1000"
    type: proxy
  mygpu:
    type: gpu
name: x11
used_by: []
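If your host's $DISPLAY is not :1, here is a quick way to adapt the file; a small sketch, assuming $DISPLAY has the usual form :N or :N.0.

$ # Extract the display number from $DISPLAY and patch the connect line.
$ num=$(echo "$DISPLAY" | sed 's/^:\([0-9]*\).*/\1/')
$ sed -i "s|X11-unix/X1|X11-unix/X${num}|" x11.profile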
Then, create the profile with the following commands. This creates a profile called x11.
$ lxc profile create x11
Profile x11 created
$ cat x11.profile | lxc profile edit x11
$
To create a container, run the following.
lxc launch ubuntu:18.04 --profile default --profile x11 mycontainer
To get a shell in the container, run the following.
lxc exec mycontainer -- sudo --user ubuntu --login
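Note that on the very first start, cloud-init needs a bit of time to install the packages listed in the profile. Before running the diagnostics, you can wait for it to finish from within the container; cloud-init status --wait blocks until the first-boot tasks complete.

$ cloud-init status --wait
.....
status: done
$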
Once we have a shell inside the container, we can run some diagnostic commands.
$ glxinfo -B
name of display: :0
display: :0  screen: 0
direct rendering: Yes
OpenGL vendor string: NVIDIA Corporation
...
$ nvidia-smi
Mon Dec  9 00:00:00 2019
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 430.50       Driver Version: 430.50       CUDA Version: 10.1     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
...
$ pactl info
Server String: unix:/home/ubuntu/pulse-native
Library Protocol Version: 32
Server Protocol Version: 32
Is Local: yes
Client Index: 43
Tile Size: 65472
User Name: myusername
Host Name: mycomputer
Server Name: pulseaudio
Server Version: 11.1
Default Sample Specification: s16le 2ch 44100Hz
Default Channel Map: front-left,front-right
Default Sink: alsa_output.pci-0000_01_00.1.hdmi-stereo-extra1
Default Source: alsa_output.pci-0000_01_00.1.hdmi-stereo-extra1.monitor
Cookie: f228:e515
$
You can run xclock, which is an Xlib application. If it runs, it means that unaccelerated (standard X11) applications are able to run successfully.
You can run glxgears, which requires OpenGL. If it runs, it means that you can run GPU-accelerated software.
You can run paplay to play audio files. This is the PulseAudio audio player.
If you want to test with ALSA, install alsa-utils and use aplay to play audio files.
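Putting these tests together, here is a minimal smoke test to run from a shell in the container. The WAV file path is an assumption (it is shipped by the alsa-utils package); substitute any audio file you have at hand.

$ xclock &          # plain Xlib application, no GPU acceleration involved
$ glxgears &        # OpenGL; verifies GPU-accelerated rendering
$ paplay /usr/share/sounds/alsa/Front_Center.wav    # PulseAudio playback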
Explanation
We dissect the LXD profile piece by piece.
We set two environment variables in the container: $DISPLAY for X and PULSE_SERVER for PulseAudio. Irrespective of the DISPLAY on the host, the DISPLAY in the container is always mapped to :0. While the PulseAudio Unix socket is often located under /var, in this case we put it in the home directory of the non-root account of the container. This makes PulseAudio accessible to snap packages in the container, as long as they support the home interface.
config:
environment.DISPLAY: :0
environment.PULSE_SERVER: unix:/home/ubuntu/pulse-native
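To verify that both variables were applied, check from a shell in the container; with this profile you should see the following.

$ echo $DISPLAY
:0
$ echo $PULSE_SERVER
unix:/home/ubuntu/pulse-native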
This enables the NVidia runtime with all the capabilities, if such a GPU is available. The value all for the capabilities means that it enables all of compute, display, graphics, utility and video. If you would rather restrict the capabilities, graphics is for running OpenGL applications and compute is for CUDA applications. If you do not have an NVidia GPU, these directives will silently fail.
nvidia.driver.capabilities: all
nvidia.runtime: "true"
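For example, if you would rather grant only OpenGL and CUDA, the fragment could read as follows; a sketch, where the value is a comma-separated list of the capability names above.

nvidia.driver.capabilities: graphics,compute
nvidia.runtime: "true"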
Here we use cloud-init to get the container to perform the following tasks the first time it starts. The sed command disables shm support in PulseAudio, which in turn enables the Unix socket support. Additionally, the three listed packages are installed; they provide utilities to test X11 applications, X11 OpenGL applications and audio applications.
user.user-data: |
#cloud-config
runcmd:
- 'sed -i "s/; enable-shm = yes/enable-shm = no/g" /etc/pulse/client.conf'
packages:
- x11-apps
- mesa-utils
- pulseaudio
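You can confirm that the runcmd did its job by inspecting the PulseAudio client configuration in the container after the first boot; it should now read as follows.

$ grep enable-shm /etc/pulse/client.conf
enable-shm = no
$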
This device shares the Unix socket of the PulseAudio server on the host with the container. In the container it appears as /home/ubuntu/pulse-native. The security.uid and security.gid configuration refers to the host. The uid, gid and mode refer to the Unix socket in the container. This is an LXD proxy device that binds into the container, meaning that it makes the host's Unix socket appear inside the container.
devices:
PASocket1:
bind: container
connect: unix:/run/user/1000/pulse/native
listen: unix:/home/ubuntu/pulse-native
security.gid: "1000"
security.uid: "1000"
uid: "1000"
gid: "1000"
mode: "0777"
type: proxy
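Once the container is up, a quick check from within the container should show the socket with the requested ownership and mode, along these lines.

$ ls -l /home/ubuntu/pulse-native
srwxrwxrwx 1 ubuntu ubuntu 0 Sep 19 10:00 /home/ubuntu/pulse-native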
This part shares the Unix socket of the X server on the host with the container. If $DISPLAY on your host is also :1, then keep the default shown below, X1. Otherwise, adjust the number accordingly. The @ character means that we are using abstract Unix sockets, i.e. there is no actual file on the filesystem. Although /tmp/.X11-unix/X0 looks like an absolute path, it is just a name; we could have used myx11socket instead, for example. We use an abstract Unix socket so that it is also accessible by snap packages. We would have used an abstract Unix socket for PulseAudio as well, but PulseAudio does not support them. The security.uid and security.gid refer to the host.
X0:
bind: container
connect: unix:@/tmp/.X11-unix/X1
listen: unix:@/tmp/.X11-unix/X0
security.gid: "1000"
security.uid: "1000"
type: proxy
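If you are unsure which X sockets exist on your host, you can list the listening Unix sockets with ss (from iproute2); abstract sockets appear with a leading @. Output trimmed for brevity.

$ ss -lx | grep X11-unix
u_str LISTEN 0 4096  @/tmp/.X11-unix/X1 ...
u_str LISTEN 0 4096   /tmp/.X11-unix/X1 ...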
We make the host's GPU available to the container. We do not need to specify explicitly which GPU we are using if we only have a single GPU.
mygpu:
type: gpu
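If the system has more than one GPU, the gpu device accepts additional properties to select a specific one, such as the PCI address. A sketch of what the device stanza could look like; the address below is an example, find yours with lspci.

mygpu:
  type: gpu
  pci: "0000:01:00.0"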
Installing software
You can install any graphical software. For example,
sudo apt-get install -y firefox
Then, run as usual.
firefox
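You can also launch it straight from the host without first opening a shell in the container; this is the same pattern used in the desktop shortcut in the next section.

$ lxc exec mycontainer -- sudo --user ubuntu --login firefox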

Creating a shortcut for an X11 application in a LXD container
When we are running X11 applications from inside a LXD container, it is handy to have a .desktop file on the host and use it to launch the X11 application in the container. We set this up below, performing the following steps on the host.
First, select an icon for the X11 application and place it into a sensible directory. In our case, we place it into ~/.local/share/icons/. You can find an appropriate icon among the resource files of the X11 application installed in the container. If we assume that the container is called steam and the appropriate icon is /home/ubuntu/.local/share/Steam/tenfoot/resource/images/steam_home.png, then we can copy this icon to the ~/.local/share/icons/ folder on the host with the following command.
lxc file pull steam/home/ubuntu/.local/share/Steam/tenfoot/resource/images/steam_home.png ~/.local/share/icons/
Then, paste the following into a text editor and save it as a file with the .desktop extension. For this example, we choose steam.desktop. Fill in the Name, Comment, Exec command line and Icon appropriately.
[Desktop Entry]
Name=Steam
Comment=Play games on Steam
Exec=lxc exec steam -- sudo --user ubuntu --login steam
Icon=/home/user/.local/share/icons/steam_home.png
Terminal=false
Type=Application
Categories=Game;
Finally, move the desktop file into the ~/.local/share/applications directory.
mv steam.desktop ~/.local/share/applications/
We can then look for the application on the host and place the icon on the launcher.
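Optionally, you can check the file for mistakes with desktop-file-validate, from the desktop-file-utils package; it prints nothing if the file is well-formed.

$ desktop-file-validate ~/.local/share/applications/steam.desktop
$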
Notes
It has been reported that on the Raspberry Pi 4, using Ubuntu for both the host and the container, you get the following error. It is a permission-denied error that appears when trying to run GPU-accelerated applications in an (unprivileged) container as user ubuntu.
libGL error: failed to create dri screen
libGL error: failed to load driver: vc4
libGL error: failed to open /dev/dri/card1: Permission denied
libGL error: failed to open /dev/dri/card1: Permission denied
libGL error: failed to load driver: vc4
The device files in /dev/dri/ are owned by the Unix group video. On a PC, the corresponding files inside the container are owned by root, and everything works. However, on the RPi4 you apparently need to change the ownership of those files so that, inside the container, they belong to the Unix group video. LXD has a field on the gpu device to set the group.
Therefore, change the following fragment of the LXD profile x11 from this
mygpu:
type: gpu
to include the gid property with value 44 (the numeric value of the video Unix group, as shown in /etc/group).
mygpu:
type: gpu
gid: 44
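To verify the numeric value of the video group on your own system, use getent; then apply the change with lxc profile edit x11 as before.

$ getent group video
video:x:44:
$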
Conclusion
This is the latest iteration of instructions on running GUI or X11 applications and having them appear on the host’s X server.
Note that the applications in the container have full access to the X server; due to how the X server works, there are no access controls between clients. Do not run malicious or untrusted software in the container.
Comments
Wanted to leave a quick comment and express my appreciation for your work on running GUI apps in LXD. For me, this is the apex web development environment.
Previously I'd use sshfs or NFS and tried various sync apps and auto ftp apps so I could easily access source code files in the container from GUI apps on the host. Primarily my editor (atom or visual studio code), GUI diff (meld), a browser, and the occasional Postman or filezilla needed access to the files on the container. Always a hassle to set up and get working, even with a few scripts that did most of the work.
With your guide I created two scripts (one for the host and one for the container) that team up to launch a LXD container with a profile that has the GUI configuration, run apt on the container to install a bunch of packages needed for development (node, npm, yarn, FireFox developer edition, vscodium, gnome, etc), then from the host it execs firefox developer edition, vscodium, and a couple gnome terminals.
GUI apps executing in the container, displaying on the host, and file operations (like File->Open) access the container filesystem and not the host filesystem. So, not only can this launch fully isolated dev environments, but launches fully isolated GUI apps too!
Here is the unanticipated bonus: the container name is automatically added to the title bar of each GUI app, so there is no confusion as to which editor or terminal is running on which container.
Example, a host running VSCodium on test and dev containers, in the title bars you’ll have: “VSCodium (on container-dev)” for the dev container and “VSCodium (on container-test)” for the testing container.
Author
Many thanks Bob for the feedback!
Note that while you get filesystem isolation and full GPU acceleration, you do not get X11 isolation. This is a remnant of how X11 works.
So what if I'm running a headless server? Should I just install an X server on the host, or should I just install it in every container?
Author
This post shows how to make GUI applications in containers piggy-back on the X server of the host. The X server on the host thinks that those GUI applications are running on the host, but you have tricked it.
Now, if you have a headless server, how do you plan to view the output of a GUI application that runs in a container? I assume you will be using something based on VNC and viewing the output on your own computer. Such a task has always been possible, even if you SSH with ssh -X to that remote system. But you do not get any hardware acceleration. In that case, you would need to use something like https://www.virtualgl.org/ so that the application runs in the container, gets hardware acceleration from the host's GPU, and you get the output on your remote computer.

Love it – though I'm running Proxmox, which comes with its own LXD / LXC implementation. I believe PVEAM is the CLI tool they use, but its parameters differ from the LXC command… any idea how to make this work?
Author
Thanks!
I am not familiar with Proxmox. If you can run Docker in Proxmox, you can have a look at https://github.com/mviereck/x11docker
Author
Hi!
The instructions here show how to trick your host's X11 server so that GUI applications that run in the container appear as if they were running on the host.
The reason for doing this is to be able to use full GPU acceleration, so that we can play games or run other software that requires said acceleration.
If you do not require GPU acceleration, there are other solutions that are even better. I suggest looking into X2Go. It's so good that it gives you a full and proper Linux desktop of your favorite Linux distribution. The developers of Proxmox use X2Go.
Have you heard of anyone having problems with this on LXD version 4.18? I keep getting an error from existing containers and new ones that I make.
lxc gui-test 20210907205512.320 WARN conf - conf.c:lxc_map_ids:3389 - newuidmap binary is missing
lxc gui-test 20210907205512.321 WARN conf - conf.c:lxc_map_ids:3395 - newgidmap binary is missing
lxc gui-test 20210907205512.323 WARN conf - conf.c:lxc_map_ids:3389 - newuidmap binary is missing
lxc gui-test 20210907205512.323 WARN conf - conf.c:lxc_map_ids:3395 - newgidmap binary is missing
lxc gui-test 20210907205512.325 WARN cgfsng - cgroups/cgfsng.c:fchowmodat:1293 - No such file or directory - Failed to fchownat(43, memory.oom.group, 1000000000, 0, AT_EMPTY_PATH | AT_SYMLINK_NOFOLLOW )
lxc gui-test 20210907205512.734 ERROR conf - conf.c:run_buffer:323 - Script exited with status 1
lxc gui-test 20210907205512.734 ERROR conf - conf.c:lxc_setup:4129 - Failed to run mount hooks
lxc gui-test 20210907205512.734 ERROR start - start.c:do_start:1291 - Failed to setup container "gui-test"
lxc gui-test 20210907205512.734 ERROR sync - sync.c:sync_wait:36 - An error occurred in another process (expected sequence number 4)
lxc gui-test 20210907205512.748 WARN network - network.c:lxc_delete_network_priv:3622 - Failed to rename interface with index 0 from "eth0" to its initial name "veth5f7c4ac0"
lxc gui-test 20210907205512.748 ERROR start - start.c:__lxc_start:2053 - Failed to spawn container "gui-test"
lxc gui-test 20210907205512.748 ERROR lxccontainer - lxccontainer.c:wait_on_daemonized_start:868 - Received container state "ABORTING" instead of "RUNNING"
lxc gui-test 20210907205512.748 WARN start - start.c:lxc_abort:1050 - No such process - Failed to send SIGKILL via pidfd 44 for process 511727
lxc gui-test 20210907205517.878 WARN conf - conf.c:lxc_map_ids:3389 - newuidmap binary is missing
lxc gui-test 20210907205517.878 WARN conf - conf.c:lxc_map_ids:3395 - newgidmap binary is missing
lxc 20210907205517.930 ERROR af_unix - af_unix.c:lxc_abstract_unix_recv_fds_iov:220 - Connection reset by peer - Failed to receive response
lxc 20210907205517.930 ERROR commands - commands.c:lxc_cmd_rsp_recv_fds:129 - Failed to receive file descriptors
How do I create this as a privileged container? It fails to start when I use security.privileged=true
Also I’m having issues getting vulkan to work, does it not work with nvidia?
Thank you for your article. I have a couple of issues with the retroarch snap that I was wondering if you could help me solve. First, if I run the snap, I get a permission error for /run/user/1000:
$ /snap/bin/retroarch
mkdir: cannot create directory '/run/user/1000': Permission denied
I managed to work around this issue by manually creating this path in the container:
$ sudo mkdir -p /run/user/1000/snap.retroarch
But afterwards, when I try to run, I get an X Error:
X Error: BadValue
Request Major code 151 (GLX)
Request Minor code 24 ()
Value 0x0
Error Serial #46
Current Serial #47
Oddly, if I run the absolute binary, then retroarch starts successfully:
$ /snap/retroarch/current/usr/local/bin/retroarch
So I think the snap confinement is not receiving something properly. Any ideas?
Thanks.
Hello Simos,
Thank you very much for all you do. My question is very simple: I have an LXD 4.21 server running on a Dell laptop, no special Nvidia card or GPU, nothing like that. I created an LXD VM and installed xubuntu minimal, which gave me the essential GUI on the VM, and I see xfce running through the NoMachine application, similar to x2go. However, when I check the resolution of the VM, the maximum I can set is 800×600. Is there a way for me to improve this? Thank you and I look forward to any suggestions.
Sincerely,
I had the same issue which I resolved by removing the nvidia lines:
nvidia.driver.capabilities: all
nvidia.runtime: "true"
(I don’t have the Nvidia hardware)
Hi Simos,
Just wanted to leave a testimonial along with a humble request:
I’ve almost completely switched to using LXD containers for all of my GUI apps. The containment is life-changing, providing not only enhanced security and recovery, but simple and easy portability and redundancy. I cannot thank you enough for setting me out on this path, which I could not have done without your exceptional tutorials.
My request is dictated by the steady evolution of Ubuntu. It is now clear that Ubuntu will deprecate X.org and Pulseaudio in favour of Wayland and Pipewire very soon. I would hate to be left in the dark having no idea how to deal with these new platforms. Would it be possible for you to do a write‑up detailing the new configs needed?