With LXD you can run system containers, which are similar to virtual machines. Typically you would use a system container to run network services, but you can also run X11 applications. There are several ways to do this, and I have previously written tutorials on them. In this post, we further refine and simplify the instructions for the second way: running X applications on the host's X server.
LXD GUI profile
Here is the updated LXD profile that sets up an LXD container to run X11 applications on the host's X server. Copy the following text and put it in a file, x11.profile. Note that the X1 in the profile should be adapted to your case; the number is derived from the environment variable $DISPLAY on the host. If the value is :1, use X1 (as it already is below). If the value is :0, change the profile to X0 instead.
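As a quick sanity check, here is a minimal sketch of the mapping from $DISPLAY to the socket name (the X1 part of the profile). Note that $DISPLAY may carry a screen suffix such as :0.0, which must be stripped.

```shell
#!/bin/sh
# Map the host's $DISPLAY to the X socket name used in the profile.
display="${DISPLAY#:}"        # ":1"   -> "1"
display="${display%%.*}"      # "1.0"  -> "1" (strip any screen suffix)
echo "X${display}"            # e.g. "X1"
```

If this prints X1, keep the profile as shown below; otherwise adjust the X1 in the profile accordingly.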
config:
  environment.DISPLAY: :0
  environment.PULSE_SERVER: unix:/home/ubuntu/pulse-native
  nvidia.driver.capabilities: all
  nvidia.runtime: "true"
  user.user-data: |
    #cloud-config
    runcmd:
      - 'sed -i "s/; enable-shm = yes/enable-shm = no/g" /etc/pulse/client.conf'
    packages:
      - x11-apps
      - mesa-utils
      - pulseaudio
description: GUI LXD profile
devices:
  PASocket1:
    bind: container
    connect: unix:/run/user/1000/pulse/native
    listen: unix:/home/ubuntu/pulse-native
    security.gid: "1000"
    security.uid: "1000"
    uid: "1000"
    gid: "1000"
    mode: "0777"
    type: proxy
  X0:
    bind: container
    connect: unix:@/tmp/.X11-unix/X1
    listen: unix:@/tmp/.X11-unix/X0
    security.gid: "1000"
    security.uid: "1000"
    type: proxy
  mygpu:
    type: gpu
name: x11
used_by:
Then, create the profile with the following commands. This creates a profile called x11.
$ lxc profile create x11
Profile x11 created
$ cat x11.profile | lxc profile edit x11
$
To create a container, run the following.
lxc launch ubuntu:18.04 --profile default --profile x11 mycontainer
To get a shell in the container, run the following.
lxc exec mycontainer -- sudo --user ubuntu --login
Once you get a shell inside the container, run the following diagnostic commands.
$ glxinfo -B
name of display: :0
display: :0  screen: 0
direct rendering: Yes
OpenGL vendor string: NVIDIA Corporation
...
$ nvidia-smi
Mon Dec  9 00:00:00 2019
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 430.50       Driver Version: 430.50       CUDA Version: 10.1     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
...
$ pactl info
Server String: unix:/home/ubuntu/pulse-native
Library Protocol Version: 32
Server Protocol Version: 32
Is Local: yes
Client Index: 43
Tile Size: 65472
User Name: myusername
Host Name: mycomputer
Server Name: pulseaudio
Server Version: 11.1
Default Sample Specification: s16le 2ch 44100Hz
Default Channel Map: front-left,front-right
Default Sink: alsa_output.pci-0000_01_00.1.hdmi-stereo-extra1
Default Source: alsa_output.pci-0000_01_00.1.hdmi-stereo-extra1.monitor
Cookie: f228:e515
$
You can run xclock, which is a plain Xlib application. If it runs, it means that unaccelerated (standard X11) applications are able to run successfully.

You can run glxgears, which requires OpenGL. If it runs, it means that you can run GPU-accelerated software.

You can run paplay, the PulseAudio audio player, to play audio files.

If you want to test with ALSA, install alsa-utils and use aplay to play audio files.
Let's dissect the LXD profile piece by piece.
We set two environment variables in the container: $DISPLAY for X and $PULSE_SERVER for PulseAudio. Irrespective of the value of $DISPLAY on the host, the $DISPLAY in the container is always mapped to :0. While the PulseAudio Unix socket is usually located under /run/user/, in this case we put it into the home directory of the non-root account of the container. This makes PulseAudio accessible to snap packages in the container, as long as they support the home interface.
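The relevant lines of the profile are just these:

```yaml
config:
  environment.DISPLAY: :0
  environment.PULSE_SERVER: unix:/home/ubuntu/pulse-native
```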
This enables the NVIDIA runtime with all the capabilities, if such a GPU is available. The value all for the capabilities enables all of compute, display, graphics, utility and video. If you would rather restrict the capabilities, graphics is for running OpenGL applications and compute is for CUDA applications. If you do not have an NVIDIA GPU, these directives will silently fail.
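These are the relevant profile lines; the restricted comma-separated form in the comment is a sketch based on the capability names above, not something taken from the original profile:

```yaml
  nvidia.driver.capabilities: all
  nvidia.runtime: "true"
  # or, restricted to OpenGL and CUDA applications only:
  # nvidia.driver.capabilities: graphics,compute
```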
Here we use cloud-init to get the container to perform the following tasks the first time it starts. The sed command disables shm support in PulseAudio, which forces it to use the Unix socket instead. Additionally, the three listed packages are installed, providing utilities to test plain X11 applications, X11 OpenGL applications and audio applications.
- 'sed -i "s/; enable-shm = yes/enable-shm = no/g" /etc/pulse/client.conf'
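To see what that sed invocation does without touching a real container, here is a minimal sketch that applies it to a throwaway copy of the line it targets in /etc/pulse/client.conf (GNU sed assumed, as in the Ubuntu container):

```shell
#!/bin/sh
# Demonstrate the cloud-init sed command on a temporary file containing
# the commented-out default from /etc/pulse/client.conf.
tmp=$(mktemp)
printf '; enable-shm = yes\n' > "$tmp"
sed -i "s/; enable-shm = yes/enable-shm = no/g" "$tmp"
cat "$tmp"    # prints: enable-shm = no
rm -f "$tmp"
```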
This device shares the Unix socket of the PulseAudio server on the host with the container. In the container, the socket appears as /home/ubuntu/pulse-native. The security.uid and security.gid refer to the host; the uid, gid and mode refer to the Unix socket in the container. This is an LXD proxy device, and it binds into the container, meaning that it makes the host's Unix socket appear inside the container.
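For reference, here is the device stanza in question, annotated (the comments are mine):

```yaml
  PASocket1:
    bind: container                            # the socket appears in the container
    connect: unix:/run/user/1000/pulse/native  # host side: PulseAudio's socket
    listen: unix:/home/ubuntu/pulse-native     # container side: where it appears
    security.uid: "1000"                       # host uid/gid to connect as
    security.gid: "1000"
    uid: "1000"                                # ownership and mode of the
    gid: "1000"                                # socket in the container
    mode: "0777"
    type: proxy
```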
This part shares the Unix socket of the X server on the host with the container. If $DISPLAY on your host is :1, then keep the X1 shown in the profile; otherwise, adjust the number accordingly. The @ character means that we are using abstract Unix sockets, for which there is no actual file on the filesystem. Although /tmp/.X11-unix/X0 looks like an absolute path, it is just a name; we could have used myx11socket instead, for example. We use an abstract Unix socket so that it is also accessible by snap packages. We would have used an abstract Unix socket for PulseAudio as well, but PulseAudio does not support them. Here, security.uid and security.gid refer to the host.
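Again for reference, the annotated stanza (the comments are mine):

```yaml
  X0:
    bind: container
    connect: unix:@/tmp/.X11-unix/X1   # host side: abstract socket of host's :1
    listen: unix:@/tmp/.X11-unix/X0    # container side: abstract socket for :0
    security.uid: "1000"               # host uid/gid to connect as
    security.gid: "1000"
    type: proxy
```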
We make the host's GPU available to the container. We do not need to specify explicitly which GPU to use if we only have a single GPU.
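The device stanza itself is minimal. If your host has more than one GPU, LXD's gpu device accepts additional keys to pin a specific card; the key shown in the comment is an example to verify against the documentation of your LXD version, and the PCI address is hypothetical:

```yaml
  mygpu:
    type: gpu
    # pci: "0000:01:00.0"   # hypothetical: pin a specific GPU on multi-GPU hosts
```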
You can install any graphical software. For example,
sudo apt-get install -y firefox
Then, run it as usual.
Creating a shortcut for an X11 application in a LXD container
When we run X11 applications from inside an LXD container, it is handy to have a .desktop file on the host and use it to launch the X11 application in the container. Below we do exactly that. We perform the following steps on the host.
First, select an icon for the X11 application and place it in a sensible directory; in our case, ~/.local/share/icons/. You can find an appropriate icon among the resource files of the X11 application installed in the container. If we assume that the container is called steam and the appropriate icon is /home/ubuntu/.local/share/Steam/tenfoot/resource/images/steam_home.png, then we can copy this icon to the ~/.local/share/icons/ folder on the host with the following command.
lxc file pull steam/home/ubuntu/.local/share/Steam/tenfoot/resource/images/steam_home.png ~/.local/share/icons/
Then, paste the following into a text editor and save it as a file with the .desktop extension; for this example, steam.desktop. Fill in the Name, Comment, Exec command line and Icon appropriately.
[Desktop Entry]
Name=Steam
Comment=Play games on Steam
Exec=lxc exec steam -- sudo --user ubuntu --login steam
Icon=/home/user/.local/share/icons/steam_home.png
Terminal=false
Type=Application
Categories=Game;
Finally, move the desktop file into the ~/.local/share/applications directory.
mv steam.desktop ~/.local/share/applications/
We can then look for the application on the host and place the icon on the launcher.
This is the latest iteration of instructions on running GUI or X11 applications and having them appear on the host’s X server.
Note that the applications in the container have full access to the X server (due to how the X server works; there are no per-application access controls). Do not run malicious or untrusted software in the container.