Update #1 (26 December 2019): There is a newer overall post that describes the different ways to run a GUI program in a LXD container. And there is a fresh and simpler post that replaces this one.
Original post continues below…
I like to take care of my desktop Linux and I do so by not installing 32-bit libraries. If there are any old 32-bit applications, I prefer to install them in a LXD container. Because in a LXD container you can install anything, and once you are done with it, you delete it and poof it is gone forever!
In the following I will show the actual commands to set up a LXD container on a system with an NVidia GPU so that we can run graphical programs. Someone could take these and make some sort of easy-to-use GUI utility. Note that you can write a GUI utility that uses the LXD API to interface with the system container.
Prerequisites
You are running Ubuntu 19.10.
You are using the snap package of LXD.
You have an NVidia GPU.
Setting up LXD (performed once)
sudo snap install lxd
Set up LXD. Accept all defaults. Add your non-root account to the lxd group. Replace myusername with your own username.
sudo lxd init
sudo usermod -G lxd -a myusername
newgrp lxd
You have set up LXD. Now you can create containers.
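As a quick sanity check that your account can talk to LXD (a hypothetical session; the exact output will vary on your system):

```shell
# List containers; an empty table means LXD is running and your user is in the lxd group.
lxc list

# Show the client and server versions.
lxc version
```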
Creating the system container
Launch a system container. You can create as many as you wish. This one we will call steam and will put Steam in it.
lxc launch ubuntu:18.04 steam
Create a GPU passthrough device for your GPU.
lxc config device add steam gt2060 gpu
Create a proxy device for the X11 Unix socket of the host to this container. The proxy device is called X0. The abstract Unix socket @/tmp/.X11-unix/X0 of the host is proxied into the container. The 1000/1000 is the UID and GID of your desktop user on the host.
lxc config device add steam X0 proxy listen=unix:@/tmp/.X11-unix/X0 connect=unix:@/tmp/.X11-unix/X0 bind=container security.uid=1000 security.gid=1000
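You can confirm that both devices were attached with `lxc config device show` (the device names `gt2060` and `X0` come from the commands above):

```shell
# Show the devices attached to the "steam" container;
# you should see the "gpu" and "proxy" entries added earlier.
lxc config device show steam
```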
Get a shell into the system container.
lxc exec steam -- sudo --user ubuntu --login
Add the NVidia 430 driver to this Ubuntu 18.04 LTS container, using the PPA. The driver in the container has to match the driver on the host. This is an NVidia requirement.
sudo add-apt-repository ppa:graphics-drivers/ppa
Install the NVidia library, both the 64-bit and 32-bit versions. Also install utilities to test X11, OpenGL and Vulkan. If the :i386 package cannot be found, enable the i386 architecture in the container first with sudo dpkg --add-architecture i386 followed by sudo apt update.
sudo apt install -y libnvidia-gl-430
sudo apt install -y libnvidia-gl-430:i386
sudo apt install -y x11-apps mesa-utils vulkan-utils
Set the $DISPLAY. You can add this into ~/.profile as well.
export DISPLAY=:0
echo "export DISPLAY=:0" >> ~/.profile
Enjoy by testing X11, OpenGL and Vulkan.
ubuntu@steam:~$ glxinfo
name of display: :0
display: :0  screen: 0
direct rendering: Yes
server glx vendor string: NVIDIA Corporation
server glx version string: 1.4
server glx extensions:
    GLX_ARB_context_flush_control, GLX_ARB_create_context, ...
ubuntu@steam:~$ vulkaninfo
===========
VULKANINFO
===========

Vulkan Instance Version: 1.1.101

Instance Extensions:
====================
Instance Extensions count = 16
VK_EXT_acquire_xlib_display         : extension revision  1
...
The system is now ready to install Steam, and also Wine!
We grab the deb package of Steam and install it.
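If you do not have the package yet, you can download it from Valve first; the URL below was the official installer location at the time of writing and may change:

```shell
# Download the official Steam .deb installer package (URL may change over time).
wget https://steamcdn-a.akamaihd.net/client/installer/steam.deb
```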
sudo dpkg -i steam.deb
sudo apt install -f
Then, we run it.
Here is some sample output.
ubuntu@steam:~$ steam
Running Steam on ubuntu 18.04 64-bit
STEAM_RUNTIME is enabled automatically
Pins up-to-date!
Installing breakpad exception handler for appid(steam)/version(0)
Installing breakpad exception handler for appid(steam)/version(1.0)
Installing breakpad exception handler for appid(steam)/version(1.0)
...
Here is how you install Wine in the container.
sudo dpkg --add-architecture i386
wget -nc https://dl.winehq.org/wine-builds/winehq.key
sudo apt-key add winehq.key
sudo apt update
sudo apt install --install-recommends winehq-stable
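Note that the winehq-stable package comes from the WineHQ repository, so you also need to add that repository before running apt update. For an Ubuntu 18.04 container the line would be the following (assuming the bionic branch of the WineHQ archive):

```shell
# Add the WineHQ package repository for Ubuntu 18.04 (bionic), then refresh the package lists.
sudo add-apt-repository 'deb https://dl.winehq.org/wine-builds/ubuntu/ bionic main'
sudo apt update
```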
There are several options for running legacy 32-bit software, and here we showed how to do so using LXD containers. We covered the NVidia case (closed-source drivers), which entails a bit of extra difficulty. You can create many system containers and put all sorts of legacy software in them. Your desktop (host) remains clean, and when you are done with a legacy app, you can easily remove the container and it is gone!
Ubuntu should no longer be considered a desktop distro. Rather, see it in the same light as CentOS: a solid server distro, but not a distro that caters to desktop users.
They screwed up big time with desktop users. I, for one, no longer trust their decision making. They dropped Unity, dropped Ubuntu Touch/phone, and now the last straw: they dropped i386 support (and then reworded their statements about dropping i386 support).
I will be moving to manjaro!
Using a distro can either make you a passive consumer or an active contributor.
All distros, from bigger to smaller, are in need of contributors.
Become a constructive contributor to the distribution of your choice!
Canonical have made a pretty clear statement now that they are no longer dropping i386, so you no longer need to move! https://blog.ubuntu.com/2019/06/24/statement-on-32-bit-i386-packages-for-ubuntu-19-10-and-20-04-lts
I think the statement is more nuanced. i386 packages are on the way out, it is just some core i386 libraries that will get a reprieve for now. Until the alternatives are put fully in place.
Thanks for your effort to evangelise lxd.
Right now, I use Crossover (the paid version of Wine) to run MS Office. Wine and Crossover are on track to drop support for Ubuntu due to the lack of multiarch, although this has not been confirmed. Will lxd be a way for an end-user to install Crossover? Can lxd be integrated into the desktop so that it provides a seamless way of associating a file type with an application hosted inside an lxd container (in other words, can lxd provide the same user experience as Ubuntu provides now: easy installation and desktop integration)?
LXD (specifically, system containers) was initially created for use on servers, as a way to virtualize a whole server using containers rather than VMs (that is, to make them very lightweight).
As LXD evolved, it was easy to add the ability to share Unix sockets between the host and the containers, which makes it rather easy to run GUI applications in a system container and get the output on the desktop.
Some people have commented that GUI on LXD is some sort of afterthought, and that such containers are therefore not suitable for desktop use. That looks like a continuation of the claim that “snaps are for servers, not GUIs”, which is silly as well.
In technological terms, you can pivot and it is fine to do so.
Do you know that the Linux support in Chromebooks is performed using LXD system containers?
Here is the post, https://blog.simos.info/a-closer-look-at-chrome-os-using-lxd-to-run-linux-gui-apps-project-crostini/
Can LXD get integrated in the desktop? Yes it can. The Chromebook people have done so with great success.
With my blog posts I discuss the primitives of getting LXD containers on the desktop. If you can do it by hand, then you can automate.
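For example, the by-hand steps from this post could be wrapped in a small script. This is a rough sketch only; the container name, GPU device name and UID/GID below are assumptions you would adapt to your own system:

```shell
#!/bin/sh
# Sketch: automate the container setup described in this post.
# Adapt the names and IDs below to your own system.
set -e

container=steam          # assumed container name
uid=1000                 # your desktop user's UID on the host
gid=1000                 # your desktop user's GID on the host

lxc launch ubuntu:18.04 "$container"
lxc config device add "$container" mygpu gpu
lxc config device add "$container" X0 proxy \
    listen=unix:@/tmp/.X11-unix/X0 connect=unix:@/tmp/.X11-unix/X0 \
    bind=container security.uid="$uid" security.gid="$gid"
```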
With apologies for being dim, how critical is the first prerequisite?
What would happen if you tried this on 18.04 LTS, for example?
Similarly, where is all this documented? It’s not at linuxcontainers.org, which barely explains what it can do. If you can find anyone who is new to this who can answer any of these questions based on the official documentation page (https://linuxcontainers.org/lxc/documentation/) I will be astounded. It’s very sparse documentation for people interested in the API.
Specifically, where is the documentation for the “lxc config device add steam gt2060 gpu” line – presumably it varies according to what Nvidia GPU the host has? What if I have a GPU that’s not from NVidia (sic)? Do AMD/ATI/Intel ones ‘just work’ or…?
If I look at the man page for lxc.config (https://linuxcontainers.org/lxc/manpages//man1/lxc-config.1.html) I am told that “lxc-config queries the lxc system configuration and lets you list all valid keys or query individual keys for their value.” Gosh.
The first prerequisite is a soft requirement just for this post. It makes it easier not to go deeper into LXD features and possibly lose the reader. You can do all of this with Ubuntu 16.04, Ubuntu 18.04 or Ubuntu 19.04 as well, as long as you install the LXD snap package (channel: stable, which is the default).
This post, https://blog.simos.info/how-to-easily-run-graphics-accelerated-gui-apps-in-lxd-containers-on-your-ubuntu-desktop/, is longish and much broader in scope.
Regarding the documentation for “LXD devices”, see https://github.com/lxc/lxd/blob/master/doc/containers.md#device-types. In this post I discuss using the “proxy” device (for the Unix socket). In other posts on this blog I also discuss using the “disk” device (bind-mounting the socket).
If the GPU is not NVidia (closed-source), then it just works. ;-]
There is “LXC” and there is “LXD”. The plain “LXC” is the initial implementation of Linux containers, and the commands are in the form of “lxc-config”, “lxc-create”, etc. That’s the old way. New users are suggested to try the new way, with LXD, because it offers many easy features. The commands look like “lxc list”, “lxc launch …”, etc. For more, see https://discuss.linuxcontainers.org/t/comparing-lxd-vs-lxc/24
Thanks for the great guides! I used this and https://blog.simos.info/running-steam-in-a-lxd-system-container/ to try and get i386 graphic apps working with the nvidia card on an Optimus system (ie with the Intel card running as the main card) with no i386 libraries on the host. I couldn’t find anything specific about lxd and Optimus, but after a lot of experimentation, I got it working (on Ubuntu 19.10, using bumblebee 3.2.1+git20181231-103~bionicppa1, lxd 3.14 and nvidia-driver-430 430.26-0ubuntu1).
The key is to run an nvidia X server on the host using optirun and to share the nvidia card and X socket before you start the container. For example, you can achieve this by launching “optirun nvidia-settings” and leaving it running. Then you need to passthrough /dev/nvidiactl, /dev/nvidia0, /tmp/.X11-unix/X8 and /lib/modules to the container (I added these to the gui profile the container is using, all as type disk). I found /lib/modules is needed because the container builds the nvidia module for an older version of the kernel than my host’s kernel.
Then in the container I installed both bumblebee and the same nvidia driver as on the host and I can now use optirun in the container to run graphics apps like wine on the host using the nvidia card (using DISPLAY=:0, ie the Intel display on the host). [Note: I actually installed the nvidia driver first on the container, then shared /lib/modules from the host and restarted the container. I don’t know if it would fail to install the nvidia driver on the container with /lib/modules shared if the container can’t write to this location.]
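Based on the description above, the passthrough the commenter describes would look roughly like the following. These device names and paths are assumptions for illustration; :8 is the display that optirun creates on the host:

```shell
# Sketch of the Optimus setup described above: share the NVidia device nodes,
# the X8 socket and the host's kernel modules into the container,
# all as LXD "disk" (bind-mount) devices.
lxc config device add steam nvidiactl disk source=/dev/nvidiactl    path=/dev/nvidiactl
lxc config device add steam nvidia0   disk source=/dev/nvidia0      path=/dev/nvidia0
lxc config device add steam X8        disk source=/tmp/.X11-unix/X8 path=/tmp/.X11-unix/X8
lxc config device add steam modules   disk source=/lib/modules      path=/lib/modules
```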
I also tried running apps directly on the host nvidia server, but there’s no graphics output. “DISPLAY=:8 glxinfo -B” shows it has DRI and is running on the nvidia card, but “DISPLAY=:8 glxgears” doesn’t show anything, even though it reports its frame rates back. Maybe this is because “DISPLAY=:8” doesn’t report any physical monitors or modelines, only a screen set to 640×480.
Generally it works well, but occasionally when I try to run something I get a X “Bad Match” error. glxgears almost always gives this error, but xclock works fine, and wine works near to 100% of the time.
Another note: I also found I have to start lxd manually using “sudo snap start lxd” after I reboot the host (Ubuntu 19.10 only lets you install lxd from a snap package).
Thanks for posting this feedback!
When you install the NVidia libraries in the container, you can use “libnvidia-gl-430” instead of “nvidia-driver-430”. The latter tries to setup the kernel modules in the container which are not used anyway in containers. Depending on the applications you will be running, you may need both “libnvidia-gl-430” and “libnvidia-gl-430:i386”.
When your system boots up and LXD starts the containers, your GUI container will probably not autostart, even if you instruct LXD to autostart it. This is because LXD cannot find the X11 Unix socket when it starts the container, since you have not logged in to X yet.
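A workaround, then, is to disable autostart for the GUI container and start it by hand once you have logged in to your desktop:

```shell
# Tell LXD not to autostart this container at boot.
lxc config set steam boot.autostart false

# After logging in to your X session, start the container manually.
lxc start steam
```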
When you run glxgears and you get errors about “Bad match” or “swrast”, it is likely that you are running, for example, the 32-bit glxgears while the video driver libraries are 64-bit, or the other way around.
You should always be able to run “xclock” because it does not require hardware acceleration, just X11 support. Therefore, for testing, you start off with “xclock”, and if that works, you continue with “glxgears”.
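One way to check which flavour of a test binary you are actually running is with file (a diagnostic sketch; the binary's location may differ on your system):

```shell
# Print whether the glxgears binary on the PATH is a 32-bit or 64-bit ELF executable.
file "$(command -v glxgears)"
```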
Great, thanks for the tip about libnvidia-gl-430.
I found what was causing glxgears to fail to run: I need to export __GLVND_DISALLOW_PATCHING=1 before calling optirun for 64 bit apps. I need to do this in the host as well, but I’d forgotten because I set it in my .bashrc file and forgot about it.
I can’t seem to get this to work, I skipped the nvidia-specific stuff because I’m using an amdgpu compatible RX570 instead on a desktop system. Xclock and the glx and vulkan testing programs fail, saying they can’t open display :0.
I updated this post (see the text at the top) with a newer version that should be simpler to follow.
The big issue is to verify what value you get with $DISPLAY and adapt the profile accordingly.
Another issue to watch for is that it takes several minutes for a newly created GUI container to load all its packages. If you get a shell in the container too early, that session will not work and you will need to reconnect.
One more issue with the instructions in this post is that when you restart the computer, you need to restart the container manually as well. This is because LXD starts your GUI container much faster than the X11 session on your computer, so the container starts without having access to the X11 socket on the host. The new instructions do away with this issue.