How to run an Incus VM inside an Incus VM (nested virtualization)

Incus is a manager for virtual machines (VMs) and system containers. There is also an Incus support forum.

A virtual machine (VM) is an instance of an operating system that runs on a computer alongside the main operating system. A virtual machine uses the hardware virtualization features of the CPU to keep itself separate from the main operating system. In a virtual machine, a full operating system, with its own kernel, boots up.

A system container is also an instance of an operating system that runs on a computer alongside the main operating system. A system container, however, uses security primitives of the Linux kernel for the separation from the main operating system. You can think of system containers as software virtual machines. Because system containers reuse the running Linux kernel of the host, you can only have Linux system containers, though of any Linux distribution.

In this post we see how to create a VM with Incus, install Incus into that VM, and then create a VM through that inner Incus installation. This is called nested virtualization. Incus works fine with nested virtualization; any pitfalls arise from the settings of the host (BIOS/UEFI settings, host Linux kernel, etc.). We'll go through these together, step by step.


Configuring your hardware for virtualization

You need to enter the BIOS/UEFI settings and enable the option for VT-x (for Intel CPUs) or AMD-V (for AMD CPUs) virtualization. If you are unsure, you can just follow the instructions in the next step, which will complain if you have not enabled the appropriate BIOS/UEFI settings.
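Before rebooting into the firmware settings, you can also check from a running Linux system whether the virtualization extensions are already visible to the kernel. This is a minimal check; the count below is illustrative (one per CPU thread), and a count of 0 means the extensions are disabled or unsupported:

$ grep -cE 'vmx|svm' /proc/cpuinfo
8
$ 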

As a side note, there is another setting, Intel VT-d (for Intel CPUs) or AMD-Vi (for AMD CPUs), that allows you to pass a supported hardware device (like a GPU, if you have more than one) through to the VM. It is not essential for what we are testing, but keep it in mind if you get deep into virtualization.
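If you are curious whether the IOMMU (the kernel-side view of VT-d/AMD-Vi) is active, the boot log usually mentions it. Treat this as a quick heuristic rather than an authoritative test, since the exact messages vary by vendor and kernel version:

$ sudo dmesg | grep -i -e DMAR -e IOMMU
...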

There are also some optional settings that help with performance: Nested Page Tables (NPT), also marketed as Rapid Virtualization Indexing (RVI), for AMD CPUs, and Extended Page Tables (EPT) for Intel CPUs.
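These also show up as CPU flags in /proc/cpuinfo, as ept on Intel and npt on AMD, so you can check for them in the same way as above. Again, the example count is illustrative:

$ grep -cE 'ept|npt' /proc/cpuinfo
8
$ 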

Testing your host for virtualization

The Linux kernel that is available in most Linux distributions supports the KVM hypervisor for virtualization.

Applications use the libvirt toolkit to access the virtualization features.

In order to test whether our host supports virtualization, we install the cpu-checker and libvirt-clients packages on the host and then run kvm-ok and virt-host-validate respectively to verify our system. Of the two utilities, the latter is the more thorough. However, I am including cpu-checker as well, as it is covered in lots of documentation.

$ sudo apt install -y cpu-checker libvirt-clients
...
$ kvm-ok
INFO: /dev/kvm exists
KVM acceleration can be used
$ sudo virt-host-validate
  QEMU: Checking for hardware virtualization                                 : PASS
  QEMU: Checking if device /dev/kvm exists                                   : PASS
  QEMU: Checking if device /dev/kvm is accessible                            : PASS
  QEMU: Checking if device /dev/vhost-net exists                             : PASS
  QEMU: Checking if device /dev/net/tun exists                               : PASS
  QEMU: Checking for cgroup 'cpu' controller support                         : PASS
  QEMU: Checking for cgroup 'cpuacct' controller support                     : PASS
  QEMU: Checking for cgroup 'cpuset' controller support                      : PASS
  QEMU: Checking for cgroup 'memory' controller support                      : PASS
  QEMU: Checking for cgroup 'devices' controller support                     : PASS
  QEMU: Checking for cgroup 'blkio' controller support                       : PASS
  QEMU: Checking for device assignment IOMMU support                         : PASS
  QEMU: Checking if IOMMU is enabled by kernel                               : PASS
  QEMU: Checking for secure guest support                                    : WARN (Unknown if this platform has Secure Guest support)
   LXC: Checking for Linux >= 2.6.26                                         : PASS
   LXC: Checking for namespace ipc                                           : PASS
   LXC: Checking for namespace mnt                                           : PASS
   LXC: Checking for namespace pid                                           : PASS
   LXC: Checking for namespace uts                                           : PASS
   LXC: Checking for namespace net                                           : PASS
   LXC: Checking for namespace user                                          : PASS
   LXC: Checking for cgroup 'cpu' controller support                         : PASS
   LXC: Checking for cgroup 'cpuacct' controller support                     : PASS
   LXC: Checking for cgroup 'cpuset' controller support                      : PASS
   LXC: Checking for cgroup 'memory' controller support                      : PASS
   LXC: Checking for cgroup 'devices' controller support                     : PASS
   LXC: Checking for cgroup 'freezer' controller support                     : FAIL (Enable 'freezer' in kernel Kconfig file or mount/enable cgroup controller in your system)
   LXC: Checking for cgroup 'blkio' controller support                       : PASS
   LXC: Checking if device /sys/fs/fuse/connections exists                   : PASS
$ 

If you get a failure, try to identify whether the issue is with your computer's firmware or with the Linux kernel of your host. If in doubt, post the output in the comments below.
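A quick first check is whether the KVM kernel modules are loaded at all. On an Intel system you should see kvm_intel, and on an AMD system kvm_amd. The sizes and use counts below are illustrative, from an Intel system:

$ lsmod | grep kvm
kvm_intel             487424  0
kvm                  1409024  1 kvm_intel
$ 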

Why does the output mention both QEMU and LXC? By default, the command checks all the virtualization drivers that libvirt supports, unless you specify one. If you want only the QEMU output, run sudo virt-host-validate qemu. Note that the LXC here is not the Linux Containers LXC; it is the libvirt LXC driver.
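For example, restricting the check to QEMU produces just the QEMU section of the output shown above:

$ sudo virt-host-validate qemu
  QEMU: Checking for hardware virtualization                                 : PASS
...
$ 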

Testing your host for nested virtualization

I have not noticed any mention of nested virtualization in the output of virt-host-validate. If you know a tool that shows that information, write it in the comments. In the absence of such a tool, let's check manually.

If you have an AMD CPU, run the following. If you get 1, then nested virtualization through KVM works.

$ cat /sys/module/kvm_amd/parameters/nested 
1
$ 

If instead you have an Intel CPU, run the following. If you get Y (instead of 1), then nested virtualization through KVM works.

$ cat /sys/module/kvm_intel/parameters/nested
Y
$ 

If instead you get an error (such as the following), then something is wrong. Report back your CPU model and motherboard, along with the Linux kernel version and Linux distribution.

cat: /sys/module/kvm_intel/parameters/nested: No such file or directory
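If instead the file exists but contains 0 or N, the KVM module is loaded with nested virtualization disabled. On most distributions you can enable it by reloading the module with the nested parameter. The commands below show the Intel case and assume no VMs are currently running, since a module in use cannot be unloaded; for AMD, replace kvm_intel with kvm_amd.

$ sudo modprobe -r kvm_intel
$ sudo modprobe kvm_intel nested=1

To make the change persist across reboots, you can add a modprobe options file:

$ echo 'options kvm_intel nested=1' | sudo tee /etc/modprobe.d/kvm-nested.conf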

Launching the outer Incus VM

We launch the outer VM, get a shell into it, and install Incus along with the utilities that show whether KVM virtualization works. Then, we launch an Alpine VM inside the outer VM. We get an error regarding Secure Boot (the Alpine Linux kernel is not signed), so we remove the stuck VM and launch again with Secure Boot disabled. Finally, we get a shell into the inner VM.

$ incus launch images:debian/12 outervm --vm
Launching outervm
$ incus shell outervm
root@outervm:~#

      # Install Incus according to the documentation.
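      # For example, on Debian 12 the incus package is available from
      # bookworm-backports. The lines below are one possible route and assume
      # that the backports repository is enabled in your APT sources; the
      # Zabbly package repository is another documented option.

root@outervm:~# apt install -y incus/bookworm-backports
...
root@outervm:~# incus admin init --minimal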

root@outervm:~# sudo apt install -y cpu-checker libvirt-clients
...
root@outervm:~# virt-host-validate 
  QEMU: Checking for hardware virtualization                                 : PASS
  QEMU: Checking if device /dev/kvm exists                                   : PASS
  QEMU: Checking if device /dev/kvm is accessible                            : PASS
  QEMU: Checking if device /dev/vhost-net exists                             : PASS
  QEMU: Checking if device /dev/net/tun exists                               : PASS
...
root@outervm:~# incus launch images:alpine/edge innervm --vm
Launching innervm
Error: Failed instance creation: The image used by this instance is incompatible with secureboot. Please set security.secureboot=false on the instance
root@outervm:~# incus delete innervm
root@outervm:~# incus launch images:alpine/edge innervm --vm --config security.secureboot=false
Launching innervm
root@outervm:~# incus list -c ns4t
+---------+---------+-----------------------+-----------------+
|  NAME   |  STATE  |         IPV4          |      TYPE       |
+---------+---------+-----------------------+-----------------+
| innervm | RUNNING | 10.227.169.165 (eth0) | VIRTUAL-MACHINE |
+---------+---------+-----------------------+-----------------+
root@outervm:~# uname -a
Linux outervm 6.1.0-18-amd64 #1 SMP PREEMPT_DYNAMIC Debian 6.1.76-1 (2024-02-01) x86_64 GNU/Linux
root@outervm:~# incus shell innervm
innervm:~# uname -a
Linux innervm 6.6.22-1-virt #2-Alpine SMP PREEMPT_DYNAMIC Thu, 14 Mar 2024 02:12:52 +0000 x86_64 Linux
innervm:~# 

Conclusion

We saw how to verify whether our host can use hardware virtualization. This involves checking both the computer firmware settings (BIOS/UEFI) and the host Linux kernel.

Then, we created an outer VM with Incus, got a shell into there, installed Incus, and launched an inner (nested) VM.

I wonder whether we can go further and create a VM inside the inner VM. If you go through these steps and try to create an inner inner VM, post the error message in the comments. It does not feel like it should be possible.
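If you do try, an untested first step would be to check from inside the inner VM whether a /dev/kvm device is exposed at all:

innervm:~# ls -l /dev/kvm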

Permanent link to this article: https://blog.simos.info/how-to-run-an-incus-vm-inside-an-incus-vm-nested-virtualization/

7 comments


    • Muslu Yüksektepe on March 27, 2024 at 15:44

    Thanks for sharing Simos.

    1. Thank you Muslu!

    • Uli Heller on March 27, 2024 at 23:00

    uli@ulicsl:~$ cat /sys/module/kvm_amd/parameters/nested
    1
    uli@ulicsl:~$ ssh helsinki cat /sys/module/kvm_intel/parameters/nested
    Y

    … so for Intel, I get a “Y” instead of a “1”

    1. Thanks Uli! I updated the post to reflect this.

    • Luis on April 2, 2024 at 14:37

    Hi Simus, always posting great content!
    I’m just starting on this LXD/Incus world and I’ve got a basic conceptual question… Does Incus leverage KVM (alongside QEMU) to virtualize servers? I’m always seeing QEMU references but knowing KVM is part of Linux kernel I’m a bit confused about this.

    Thanks.

      • Luis on April 2, 2024 at 14:38

      *** Misspelled your name in the previous comment, sorry! Simos

    1. Hi Luis!

      Yes, Incus uses KVM plus QEMU to virtualize servers. KVM is a Type 1 hypervisor and QEMU is a Type 2 hypervisor but also an emulator. In Incus, they are used together, which is a common way to use them.

      KVM by itself does not have the ability to emulate most devices apart from the CPU and the RAM. You would use QEMU over KVM for that.

      Incus comes with its own packaged QEMU, which is separate from your system’s QEMU (if you installed those packages).

      Here is the current version of QEMU from Incus.

      $ /opt/incus/bin/qemu-system-x86_64 --version
      QEMU emulator version 8.2.2 (v8.2.2-dirty)
      Copyright (c) 2003-2023 Fabrice Bellard and the QEMU Project developers
      $
