Running X11 software in LXD containers

Update February 2021: See the Notes below for extra steps when trying this on the Raspberry Pi 4.

Updated instructions for LXD 4.5 (September 2020)

LXD 4.5 added features that make proxy devices more secure, in the sense that if something goes wrong with a proxy device, your system is safer. Specifically, proxy devices are now under AppArmor confinement. In doing so, however, something broke, and it was no longer possible to start GUI/X11 LXD containers. Among the possible workarounds, the real solution is to move forward to LXD 4.6, which fixes the AppArmor confinement issue.

Run snap info lxd to verify which channel you are tracking. In my case, I am tracking the latest/stable channel, which currently has LXD 4.5 (where proxy devices do not work). The latest/candidate channel, however, has LXD 4.6, and Stéphane Graber has said that it includes the fix for proxy devices. We are switching to this channel for now, until LXD 4.6 is released as a stable version next week. That is, if you do the following, make a note to come back here (around next Thursday) so that you can switch back from the candidate channel to a stable channel (latest/stable or 4.6/stable).

$ snap info lxd
name:      lxd
summary:   System container manager and API
publisher: Canonical✓
...
tracking:     latest/stable
...
channels:
  latest/stable:    4.5         2020-09-18 (17299) 71MB -
  latest/candidate: 4.6         2020-09-19 (17320) 71MB -
  latest/beta:      ↑                                   
  latest/edge:      git-e1fa47b 2020-09-19 (17324) 71MB -
  4.6/stable:       –                                   
  4.6/candidate:    4.6         2020-09-19 (17320) 71MB -
  4.6/beta:         ↑                                   
  4.6/edge:         ↑                                   
...
$ 

Now, we refresh the LXD snap package to the latest/candidate channel.

$ snap refresh lxd --channel=latest/candidate
lxd (candidate) 4.6 from Canonical✓ refreshed
$ 

And that’s it. Oh no, it’s updating again.

NOTE: If you have set up the latest/candidate channel, you should now switch back to the latest/stable channel. LXD 4.6 has now been released into the stable channel. Use the following command:

sudo snap refresh lxd --channel=latest/stable

The post continues…

With LXD you can run system containers, which are similar to virtual machines. Normally, you would use a system container to run network services, but you can also run X11 applications. See the following discussion and come back here. In this post, we further refine and simplify the instructions for the second way of running X applications. I have previously written several tutorials on this.

LXD GUI profile

Here is the updated LXD profile that sets up a LXD container to run X11 applications on the host's X server. Copy the following text and put it in a file, x11.profile. Note that the X1 in the connect line below should be adapted to your case; the number is derived from the environment variable $DISPLAY on the host. If the value is :1, use X1 (as it already is below). If the value is :0, change the profile to use X0 instead.
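
To find the number, print the value of $DISPLAY on the host; the digit after the colon is the one you need.

$ echo $DISPLAY
:1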

config:
  environment.DISPLAY: :0
  environment.PULSE_SERVER: unix:/home/ubuntu/pulse-native
  nvidia.driver.capabilities: all
  nvidia.runtime: "true"
  user.user-data: |
    #cloud-config
    runcmd:
      - 'sed -i "s/; enable-shm = yes/enable-shm = no/g" /etc/pulse/client.conf'
    packages:
      - x11-apps
      - mesa-utils
      - pulseaudio
description: GUI LXD profile
devices:
  PASocket1:
    bind: container
    connect: unix:/run/user/1000/pulse/native
    listen: unix:/home/ubuntu/pulse-native
    security.gid: "1000"
    security.uid: "1000"
    uid: "1000"
    gid: "1000"
    mode: "0777"
    type: proxy
  X0:
    bind: container
    connect: unix:@/tmp/.X11-unix/X1
    listen: unix:@/tmp/.X11-unix/X0
    security.gid: "1000"
    security.uid: "1000"
    type: proxy
  mygpu:
    type: gpu
name: x11
used_by: []

Then, create the profile with the following commands. This creates a profile called x11.

$ lxc profile create x11
Profile x11 created
$ cat x11.profile | lxc profile edit x11
$ 
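
You can print the profile back to verify that it was stored correctly:

$ lxc profile show x11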

To create a container, run the following.

lxc launch ubuntu:18.04 --profile default --profile x11 mycontainer

To get a shell in the container, run the following.

lxc exec mycontainer -- sudo --user ubuntu --login

Once we get a shell inside the container, we can run the following diagnostic commands.

$ glxinfo -B
name of display: :0
display: :0  screen: 0
direct rendering: Yes
OpenGL vendor string: NVIDIA Corporation
...
$ nvidia-smi 
 Mon Dec  9 00:00:00 2019       
+-------------------------------------------------------------------------+
| NVIDIA-SMI 430.50       Driver Version: 430.50       CUDA Version: 10.1 |
|---------------------------+----------------------+----------------------+
| GPU  Name    Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|     Memory-Usage | GPU-Util  Compute M. |
|===========================+======================+======================|
...
$ pactl info
 Server String: unix:/home/ubuntu/pulse-native
 Library Protocol Version: 32
 Server Protocol Version: 32
 Is Local: yes
 Client Index: 43
 Tile Size: 65472
 User Name: myusername
 Host Name: mycomputer
 Server Name: pulseaudio
 Server Version: 11.1
 Default Sample Specification: s16le 2ch 44100Hz
 Default Channel Map: front-left,front-right
 Default Sink: alsa_output.pci-0000_01_00.1.hdmi-stereo-extra1
 Default Source: alsa_output.pci-0000_01_00.1.hdmi-stereo-extra1.monitor
 Cookie: f228:e515
$

You can run xclock, which is an Xlib application. If it runs, it means that unaccelerated (standard X11) applications are able to run successfully.
You can run glxgears, which requires OpenGL. If it runs, it means that you can run GPU-accelerated software.
You can run paplay to play audio files. This is the PulseAudio audio player.
If you want to test with ALSA, install alsa-utils and use aplay to play audio files. A sample test session is shown below.
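
Here is a minimal test session inside the container. The sample WAV file is shipped by the alsa-utils package; substitute any audio file you have.

$ xclock &
$ glxgears
$ sudo apt-get install -y alsa-utils
$ paplay /usr/share/sounds/alsa/Front_Center.wav
$ aplay /usr/share/sounds/alsa/Front_Center.wav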

Explanation

We dissect the LXD profile piece by piece.

We set two environment variables in the container: $DISPLAY for X and PULSE_SERVER for PulseAudio. Irrespective of the DISPLAY on the host, the DISPLAY in the container is always mapped to :0. While the PulseAudio Unix socket on the host is located under /run (here, /run/user/1000/pulse/native), in this case we put it into the home directory of the non-root account of the container. This makes PulseAudio accessible to snap packages in the container, as long as they support the home interface.

config:
  environment.DISPLAY: :0
  environment.PULSE_SERVER: unix:/home/ubuntu/pulse-native
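
From the shell we obtained in the container, you can confirm that both variables are set:

$ env | grep -E 'DISPLAY|PULSE_SERVER'
DISPLAY=:0
PULSE_SERVER=unix:/home/ubuntu/pulse-native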

This enables the NVidia runtime with all capabilities, if such a GPU is available. The value all for the capabilities means that all of compute, display, graphics, utility and video are enabled. If you would rather restrict the capabilities, graphics is for running OpenGL applications and compute is for CUDA applications. If you do not have an NVidia GPU, these directives will silently fail.

  nvidia.driver.capabilities: all
  nvidia.runtime: "true"
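
If you prefer to restrict the capabilities, you can change the value after creating the profile. A sketch, assuming OpenGL-only use (adjust the capability list to your needs):

$ lxc profile set x11 nvidia.driver.capabilities graphics,utility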

Here we use cloud-init to get the container to perform the following tasks the first time it starts. The sed command disables shm support in PulseAudio, which means that the Unix socket support is enabled instead. Additionally, the three listed packages are installed; they provide utilities to test X11 applications, OpenGL applications and audio applications.

  user.user-data: |
    #cloud-config
    runcmd:
      - 'sed -i "s/; enable-shm = yes/enable-shm = no/g" /etc/pulse/client.conf'
    packages:
      - x11-apps
      - mesa-utils
      - pulseaudio
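
cloud-init performs these tasks on the first boot only. Before testing, you can check from inside the container that it has finished; the command blocks until the first-boot processing completes.

$ cloud-init status --wait
status: done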

This proxy device shares the Unix socket of the PulseAudio server on the host with the container. In the container, the socket appears as /home/ubuntu/pulse-native. The security.uid and security.gid refer to the host, while uid, gid and mode refer to the Unix socket in the container. The device binds into the container, meaning that it makes the host's Unix socket appear inside the container.

devices:
  PASocket1:
    bind: container
    connect: unix:/run/user/1000/pulse/native
    listen: unix:/home/ubuntu/pulse-native
    security.gid: "1000"
    security.uid: "1000"
    uid: "1000"
    gid: "1000"
    mode: "0777"
    type: proxy
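
You can inspect both ends of the proxy. On the host, the socket resides in your user's XDG runtime directory; in the container, it should appear in the home directory of the ubuntu account (paths assume uid 1000 on both sides):

$ ls -l /run/user/1000/pulse/native
srw-rw-rw- 1 myusername myusername 0 ... /run/user/1000/pulse/native
$ lxc exec mycontainer -- ls -l /home/ubuntu/pulse-native
srwxrwxrwx 1 ubuntu ubuntu 0 ... /home/ubuntu/pulse-native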

This part shares the Unix socket of the X server on the host with the container. If $DISPLAY on your host is :1, keep the X1 in the connect line as shown below; otherwise, adjust the number accordingly. The @ character means that we are using abstract Unix sockets, which means that there is no actual file on the filesystem. Although /tmp/.X11-unix/X0 looks like an absolute path, it is just a name; we could have used myx11socket instead, for example. We use an abstract Unix socket so that it is also accessible by snap packages. We would have used an abstract Unix socket for PulseAudio as well, but PulseAudio does not support them. The security.uid and security.gid refer to the host.

  X0:
    bind: container
    connect: unix:@/tmp/.X11-unix/X1
    listen: unix:@/tmp/.X11-unix/X0
    security.gid: "1000"
    security.uid: "1000"
    type: proxy
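
Abstract sockets do not appear on the filesystem, but you can list them with ss; the leading @ marks the abstract namespace. Inside the container you should see the listening X socket:

$ lxc exec mycontainer -- ss -lx | grep X11
u_str LISTEN 0 ... @/tmp/.X11-unix/X0 ...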

We make the host's GPU available to the container. If there is only a single GPU, we do not need to specify explicitly which one we are sharing.

  mygpu:
    type: gpu
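
If the host has more than one GPU, you can pin the device to a specific card. A sketch with a hypothetical PCI address; find yours with lspci:

$ lxc profile device set x11 mygpu pci 0000:01:00.0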

Installing software

You can install any graphical software. For example,

sudo apt-get install -y firefox

Then, run as usual.

firefox
Firefox running in a container.

Creating a shortcut for an X11 application in a LXD container

When running X11 applications from inside a LXD container, it is handy to have a .desktop file on the host and use it to launch the X11 application in the container. Below we do exactly that. We perform the following steps on the host.

First, select an icon for the X11 application and place it into a sensible directory. In our case, we place it into ~/.local/share/icons/. You can find an appropriate icon from the resource files of the installed X11 application in the container. If we assume that the container is called steam and the appropriate icon is /home/ubuntu/.local/share/Steam/tenfoot/resource/images/steam_home.png, then we can copy this icon to the ~/.local/share/icons/ folder on the host with the following command.

lxc file pull steam/home/ubuntu/.local/share/Steam/tenfoot/resource/images/steam_home.png ~/.local/share/icons/

Then, paste the following in a text editor and save it as a file with the .desktop extension. For this example, we select steam.desktop. Fill in the Name, Comment, Exec command line and Icon appropriately.

[Desktop Entry]
Name=Steam
Comment=Play games on Steam
Exec=lxc exec steam -- sudo --user ubuntu --login steam
Icon=/home/user/.local/share/icons/steam_home.png
Terminal=false
Type=Application
Categories=Game;

Finally, move the desktop file into the ~/.local/share/applications directory.

mv steam.desktop ~/.local/share/applications/
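
If the shortcut does not appear, you can check the file for syntax errors with desktop-file-validate (from the desktop-file-utils package):

$ desktop-file-validate ~/.local/share/applications/steam.desktop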

We can then look for the application on the host and place the icon on the launcher.

Notes

It has been reported that on the Raspberry Pi 4, using Ubuntu for both the host and the container, you get the following permission denied error when trying to run GPU-accelerated applications in an (unprivileged) container as the user ubuntu.

libGL error: failed to create dri screen
libGL error: failed to load driver: vc4
libGL error: failed to open /dev/dri/card1: Permission denied
libGL error: failed to open /dev/dri/card1: Permission denied
libGL error: failed to load driver: vc4

The device files in /dev/dri/ are owned by the Unix group video. On a PC, the corresponding files inside the container are owned by root and everything works. On the RPi4, however, you apparently need to change the permissions of those files so that, inside the container, they are owned by the Unix group video. LXD's gpu device has a field to set the group.
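
You can confirm the numeric id of the video group on the host; it is typically 44 on Ubuntu, but check yours:

$ getent group video
video:x:44: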

Therefore, change the following fragment of the LXD profile x11 from this

  mygpu:
    type: gpu

to include the gid property with value 44 (the numeric value of the video Unix group, as shown in /etc/group).

  mygpu:
    type: gpu
    gid: 44
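
If you have already created the profile, you can apply the same change without editing the YAML:

$ lxc profile device set x11 mygpu gid 44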

Conclusion

This is the latest iteration of the instructions for running GUI/X11 applications in LXD containers and having them appear on the host's X server.

Note that the applications in the container have full access to the X server (due to how the X server works, as there are no access controls). Do not run malicious or untrusted software in the container.

Comments

  1. Wanted to leave a quick comment and express my appreciation for your work on running GUI apps in LXD. For me, this is the apex web development environment.

    Previously I’d use sshfs or NFS and tried various sync apps and auto ftp apps so I could easily access source code files in the container from GUI apps on the host. Primarily my editor (atom or visual studio code), GUI diff (meld), a browser, and the occasional Postman or filezilla needed access to the files on the container. Always a hassle to setup and get working even with a few scripts that did most of the work.

    With your guide I created two scripts (one for the host and one for the container) that team up to launch a LXD container with a profile that has the GUI configuration, run apt on the container to install a bunch of packages needed for development (node, npm, yarn, FireFox developer edition, vscodium, gnome, etc), then from the host it execs firefox developer edition, vscodium, and a couple gnome terminals.

    GUI apps executing in the container, displaying on the host, and file operations (like File->Open) access the container filesystem and not the host filesystem. So, not only can this launch fully isolated dev environments, but launches fully isolated GUI apps too!

    Here is an unanticipated bonus: the container name is automatically added to the title bar of each GUI app, so there is no confusion as to which editor or terminal is running on which container.

    For example, on a host running VSCodium in test and dev containers, the title bars will show “VSCodium (on container-dev)” for the dev container and “VSCodium (on container-test)” for the testing container.

    1. Many thanks Bob for the feedback!

      Note that while you get filesystem isolation and full GPU acceleration, you do not get X11 isolation. This is a consequence of how X11 works.

