How to repartition a Hetzner VPS disk for ZFS on its own partition for LXD

In a previous post (A closer look at the new Hetzner cloud servers, by running LXD), we saw how to set up LXD on a Hetzner VPS (Hetzner Cloud). That VPS comes with a single fixed-size partition, so we had to put the LXD containers in a loopback device file rather than on their own partition. The reason is that once the VPS has booted into its operating system, you cannot repartition the disk.

In this post, we see how to repartition the disk and carve a separate partition out of the free space. We then give that partition to LXD to create a ZFS storage pool and store the containers there.

The benefits of putting ZFS on its own partition are twofold: first, the performance is better than with a loopback file, and second, we get more of the good features of ZFS (like data safety).

Prerequisites

Perform the following:

  1. Set up a VPS according to the instructions at A closer look at the new Hetzner cloud servers, by running LXD. As the operating system, select Ubuntu 16.04 LTS. You can select the entry-level CX11 VPS (20GB of disk space) or something bigger. The CX11 VPS type does not give you many options for a production LXD installation, but it is a good option for gaining familiarity with the whole process.
    You just need to reach the point where you click Create & Buy Now, then come back to continue here.

That’s it, we are good to go. Next, we boot into rescue mode. From there, we will do the repartitioning and reinstall Ubuntu 16.04 LTS into a smaller partition, so that there is space left over for a dedicated ZFS partition.

Booting the Hetzner VPS into rescue mode

The VPS is already started. You can reboot it and get it to boot into the rescue system. The rescue system gives you several options, one of which is the ability to repartition the disk. To boot into the rescue system, go to the Hetzner cloud management website, click on this server and select RESCUE.

In RESCUE, click on ENABLE RESCUE & POWER CYCLE.

You can also open the console so that you can view the boot messages while the VPS is booting. It is a good way to tell when the VPS has finished booting, so that you know when to connect with SSH. In addition, you can log in to the VPS through the console itself. In that case, you would need to click on RESET ROOT PASSWORD in order to get a password for root, because by default the Ubuntu images come with a locked root password; you are normally expected to connect with SSH public-key authentication (no SSH password authentication).
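If you are curious, you can verify the locked root password from an SSH session on the VPS before rebooting; passwd -S prints the account status, and an L in the second field means the password is locked. This check is entirely optional.

passwd -S root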

As soon as you click on ENABLE RESCUE & POWER CYCLE, you are asked for two details: which image to boot into and which SSH key to use. As the Rescue OS, select linux64. For SSH Key, select one of your SSH keys. Then click on the red bar that says ENABLE RESCUE & POWER CYCLE to boot into the rescue system.

As your VPS is booting into the rescue system, you have a few tens of seconds to take a copy of your VPS root password. It looks like this. You do not need the password unless you are connecting through the console; the SSH connection with public-key authentication is good enough.

Here is what the console looks like when the VPS has booted into the rescue system. At the end, you get the prompt to log in as root, using the password that was given earlier.

The rescue system has started and we can connect with SSH to perform the repartitioning tasks.

Connecting to the rescue system with SSH

You can now connect with SSH to the rescue system. Type the appropriate IP address of your own VPS.

$ ssh root@195.201.40.9
The authenticity of host '195.201.40.9 (195.201.40.9)' can't be established.
ECDSA key fingerprint is SHA256:ztD75Y6c6juxgPm+0wW+eZrOd5CpXwKqtB/j6TohlaU.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '195.201.40.9' (ECDSA) to the list of known hosts.

-------------------------------------------------------------------

Welcome to the Hetzner Rescue System.

This Rescue System is based on Debian 8.0 (jessie) with a newer
 kernel. You can install software as in a normal system.

To install a new operating system from one of our prebuilt
 images, run 'installimage' and follow the instructions.

More information at http://wiki.hetzner.de

-------------------------------------------------------------------

Hardware data:

CPU1: Intel Xeon Processor (Skylake)
 Memory: 1894 MB
 Disk /dev/sda: 20 GB (=> 19 GiB) 
 Total capacity 19 GiB with 1 Disk

Network data:
 eth0 LINK: yes
 MAC: 96:00:00:07:ff:bb
 IP: 195.201.40.9
 IPv6: 2a01:4f8:1c0c:601a::2/64
 Virtio network driver

root@rescue ~ #

You are in, and you are informed of a command, installimage, that can be used to install prebuilt images.

The next step is to look into the rescue system and do that repartitioning.

Troubleshooting

Help! “WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!”

I got the following!

$ ssh root@195.201.40.9
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@ WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED! @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
Someone could be eavesdropping on you right now (man-in-the-middle attack)!
It is also possible that a host key has just been changed.
The fingerprint for the ECDSA key sent by the remote host is
SHA256:ztD75Y6c6juxgPm+0wW+eZrOd5CpXwKqtB/j6TohlaU.
Please contact your system administrator.
Add correct host key in /home/myusername/.ssh/known_hosts to get rid of this message.
Offending ECDSA key in /home/myusername/.ssh/known_hosts:5
 remove with:
 ssh-keygen -f "/home/myusername/.ssh/known_hosts" -R 195.201.40.9
ECDSA host key for 195.201.40.9 has changed and you have requested strict checking.
Host key verification failed.
Exit 255

When you connect with SSH to a server for the first time, SSH asks you to confirm the server’s host key fingerprint, and then saves the server’s IP address along with that fingerprint. When you connect again to the same IP address, SSH expects to find the same fingerprint; if it does not, SSH complains and does not let you connect. In this case, the rescue system generated its own SSH host key (and thus a different fingerprint).

You can get past this issue by running the suggested command, which removes the saved old fingerprint. Then you can connect again with SSH and get prompted to accept the fresh fingerprint; type yes to accept it.

$ ssh-keygen -f "/home/myusername/.ssh/known_hosts" -R 195.201.40.9
# Host 195.201.40.9 found: line 5
/home/myusername/.ssh/known_hosts updated.
Original contents retained as /home/myusername/.ssh/known_hosts.old

Help! Why should I accept the new host key? MITM, no?

Hmm, potentially, yes. If in doubt, log in through the console instead and verify the host key from there.
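From the console you can compute the host key fingerprints of the rescue system locally and compare them against what your SSH client reported; a quick check, assuming the usual key locations under /etc/ssh:

root@rescue ~ # for f in /etc/ssh/ssh_host_*_key.pub; do ssh-keygen -lf "$f"; done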

Looking into the rescue system

What files are in the rescue system? Some configs and some images.

root@rescue ~ # ls -l
total 0
lrwxrwxrwx 1 root root 28 May 17 2013 configs -> .oldroot/nfs/install/configs
drwxr-xr-x 2 root root 80 Feb 21 00:34 hwcheck-logs
lrwxrwxrwx 1 root root 19 May 17 2013 images -> .oldroot/nfs/images
root@rescue ~ #

What is in the configs?

root@rescue ~ # cd configs/
root@rescue ~/configs # ls -l
total 36K
-rw-r--r-- 1 root root 851 Jan 12 2015 hsa-baculadir
-rw-r--r-- 1 root root 957 Sep 28 2015 hsa-managed
-rw-r--r-- 1 root root 596 Jan 31 2014 hsa-minimal64
-rw-r--r-- 1 root root 747 Jan 31 2014 hsa-sql
-rw-r--r-- 1 root root 437 Dec 14 10:12 proxmox4
-rw-r--r-- 1 root root 437 Dec 14 10:12 proxmox5
-rw-r--r-- 1 root root 243 Dec 14 10:12 simple-debian64-noraid
-rw-r--r-- 1 root root 230 Dec 14 10:12 simple-debian64-raid
-rw-r--r-- 1 root root 267 Dec 14 10:12 simple-debian64-raid-lvm
lrwxrwxrwx 1 root root 20 Feb 8 2017 standard -> simple-debian64-raid
root@rescue ~/configs # cat standard

DRIVE1 /dev/sda
DRIVE2 /dev/sdb

SWRAID 1
SWRAIDLEVEL 1

BOOTLOADER grub

HOSTNAME Debian-93-stretch-64-minimal

PART swap swap 2G
PART /boot ext3 512M
PART / ext4 all

IMAGE /root/images/Debian-93-stretch--64-minimal.tar.gz
root@rescue ~/configs #

Those config files describe how to set up a system: which partitions to create and which image to use for the operating system. You are going to use something like this to repartition the VPS disk.

What is in the images/ directory?

root@rescue ~/configs # cd
root@rescue ~ # cd images
root@rescue ~/images # ls -l
total 3.9G
-rw-r--r-- 1 root root 0 Dec 14 17:31 Archlinux-2017-64-minimal.tar.gz
-rw-r--r-- 1 root root 294M Jan 17 14:38 CentOS-69-64-minimal.tar.gz
-rw-r--r-- 1 1000 1000 473 Jan 17 15:29 CentOS-69-64-minimal.tar.gz.sig
-rw-r--r-- 1 root root 371M Jan 17 14:57 CentOS-74-64-minimal.tar.gz
-rw-r--r-- 1 1000 1000 473 Jan 17 15:29 CentOS-74-64-minimal.tar.gz.sig
-rw-r--r-- 1 1000 1000 268M Feb 28 2017 CoreOS-1298-64-production.bin.bz2
-rw-r--r-- 1 1000 1000 564 Feb 28 2017 CoreOS-1298-64-production.bin.bz2.sig
-rw-r--r-- 1 root root 287M Jan 17 15:00 Debian-810-jessie-64-minimal.tar.gz
-rw-r--r-- 1 1000 1000 473 Jan 17 15:30 Debian-810-jessie-64-minimal.tar.gz.sig
-rw-r--r-- 1 root root 223M Oct 13 13:56 Debian-89-jessie-64-minimal.tar.gz
-rw-r--r-- 1 1000 1000 473 Oct 13 14:14 Debian-89-jessie-64-minimal.tar.gz.sig
-rw-r--r-- 1 root root 446M Jan 17 14:57 Debian-93-stretch-64-LAMP.tar.gz
-rw-r--r-- 1 1000 1000 473 Jan 17 15:31 Debian-93-stretch-64-LAMP.tar.gz.sig
-rw-r--r-- 1 root root 320M Jan 17 14:50 Debian-93-stretch-64-minimal.tar.gz
-rw-r--r-- 1 1000 1000 473 Jan 17 15:30 Debian-93-stretch-64-minimal.tar.gz.sig
-rw-r--r-- 1 1000 1000 80 Sep 29 2011 README
-rw-r--r-- 1 root root 364M Jan 17 14:42 Ubuntu-1604-xenial-64-minimal-no-hwe.tar.gz
-rw-r--r-- 1 1000 1000 473 Jan 17 15:33 Ubuntu-1604-xenial-64-minimal-no-hwe.tar.gz.sig
-rw-r--r-- 1 root root 368M Jan 17 11:26 Ubuntu-1604-xenial-64-minimal.tar.gz
-rw-r--r-- 1 1000 1000 473 Jan 17 15:32 Ubuntu-1604-xenial-64-minimal.tar.gz.sig
-rw-r--r-- 1 root root 551M Jan 17 15:15 Ubuntu-1604-xenial-64-nextcloud.tar.gz
-rw-r--r-- 1 1000 1000 473 Jan 17 15:34 Ubuntu-1604-xenial-64-nextcloud.tar.gz.sig
-rw-r--r-- 1 root root 409M Jan 17 11:27 Ubuntu-1710-artful-64-minimal.tar.gz
-rw-r--r-- 1 1000 1000 473 Jan 17 15:34 Ubuntu-1710-artful-64-minimal.tar.gz.sig
-rw-r--r-- 1 root root 0 Dec 14 17:06 archlinux-latest-64-minimal.tar.gz
root@rescue ~/images # cd
root@rescue ~ #

There are several images. We note down the Ubuntu 16.04 images. They are called minimal, so they may be somewhat smaller than the standard Ubuntu image; their size is still substantial, though, so not much is likely to be missing.
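Each image comes with a detached GnuPG signature (the .sig files). If you want to verify an image by hand, the check looks like the following; it assumes you have already imported Hetzner’s public signing key into your GnuPG keyring.

root@rescue ~/images # gpg --verify Ubuntu-1604-xenial-64-minimal.tar.gz.sig Ubuntu-1604-xenial-64-minimal.tar.gz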

We are ready to run installimage and see where it takes us. The disk size is 20GB (19.1GiB in practice), which is what we have to work with for the repartitioning.

Running installimage

Run installimage.

root@rescue ~ # installimage

You are presented with the following.

Select Ubuntu (using the arrow keys) and press OK.

Then, select the image shown below. It is the Ubuntu 16.04 image, called minimal. The first image mentions no hwe, which refers to the HardWare Enablement stack: no HWE means the original (non-updated) kernel, Linux 4.4. The choice shown below is for Linux kernel 4.13, the current HWE kernel for Ubuntu 16.04. Press OK.

As the notice says, a text editor will open and ask us to edit a config file. Let’s do this!

Here is the configuration file, with PART commands (for partitions) and an IMAGE command to specify the image. The default values for these two are as follows: there is one partition, the root ("/") partition, formatted as ext4 and using all of the space on the disk, plus the Ubuntu image we selected earlier.

PART / ext4 all

IMAGE /root/.oldroot/nfs/install/../images/Ubuntu-1604-xenial-64-minimal.tar.gz

Edit the file so that the PART line is replaced with the following: 1GB for swap and 4GB for the root partition. The rest is left unallocated and will be turned into the ZFS partition later.

## our partitioning:
## -> 1GB swapspace
## -> 4GB /
## -> the rest is unpartitioned

PART swap swap 1G
PART / ext4 4G

Press F10 and accept to save.
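For reference, after the edit the relevant part of the configuration file should look roughly like this; the DRIVE, BOOTLOADER and HOSTNAME lines are whatever installimage pre-filled for your VPS and can stay as they are.

DRIVE1 /dev/sda

BOOTLOADER grub

HOSTNAME Ubuntu-1604-xenial-64-minimal

PART swap swap 1G
PART / ext4 4G

IMAGE /root/.oldroot/nfs/install/../images/Ubuntu-1604-xenial-64-minimal.tar.gz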

Here is installimage working, performing all the tasks to rebuild the system. It created the two partitions /dev/sda1 and /dev/sda2, and left the rest of the disk unallocated.
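Before rebooting, you can optionally double-check the new layout from the rescue system:

root@rescue ~ # lsblk /dev/sda
root@rescue ~ # fdisk -l /dev/sda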

Time to reboot. Run shutdown -r now.

root@rescue ~ # shutdown -r now

Broadcast message from root@Ubuntu-1604-xenial-64-minimal on pts/1 (Wed 2018-02-21 01:51:56 CET):

The system is going down for reboot NOW!

root@rescue ~ # Connection to 195.201.40.9 closed by remote host.
Connection to 195.201.40.9 closed.
Exit 255

Connecting to the repartitioned VPS

Connect again to the VPS using SSH. You get the host key warning once more, because the freshly installed system has generated yet another SSH host key.

$ ssh root@195.201.40.9
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@ WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED! @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
Someone could be eavesdropping on you right now (man-in-the-middle attack)!
It is also possible that a host key has just been changed.
The fingerprint for the ECDSA key sent by the remote host is
SHA256:ztD75Y6c6juxgPm+0wW+eZrOd5CpXwKqtB/j6TohlaU.
Please contact your system administrator.
Add correct host key in /home/myusername/.ssh/known_hosts to get rid of this message.
Offending ECDSA key in /home/myusername/.ssh/known_hosts:5
 remove with:
 ssh-keygen -f "/home/myusername/.ssh/known_hosts" -R 195.201.40.9
ECDSA host key for 195.201.40.9 has changed and you have requested strict checking.
Host key verification failed.
Exit 255

This is the same situation as in the Troubleshooting section earlier: the saved fingerprint no longer matches the one the server now presents. Run the suggested command to remove the old fingerprint, then connect again and accept the new one.

$ ssh-keygen -f "/home/myusername/.ssh/known_hosts" -R 195.201.40.9
# Host 195.201.40.9 found: line 5
/home/myusername/.ssh/known_hosts updated.
Original contents retained as /home/myusername/.ssh/known_hosts.old

Now, you can connect with SSH.

$ ssh root@195.201.40.9
The authenticity of host '195.201.40.9 (195.201.40.9)' can't be established.
ECDSA key fingerprint is SHA256:8YcC+u3vc2s+sDEDYxJL5W8f4tH92USHiSHVmRVLcGw.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '195.201.40.9' (ECDSA) to the list of known hosts.
Welcome to Ubuntu 16.04.3 LTS (GNU/Linux 4.13.0-26-generic x86_64)

* Documentation: https://help.ubuntu.com
 * Management: https://landscape.canonical.com
 * Support: https://ubuntu.com/advantage
root@Ubuntu-1604-xenial-64-minimal ~ # uname -a
Linux Ubuntu-1604-xenial-64-minimal 4.13.0-26-generic #29~16.04.2-Ubuntu SMP Tue Jan 9 22:00:44 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
root@Ubuntu-1604-xenial-64-minimal ~ #

You have connected, and the VPS is running Linux kernel 4.13.

Creating a partition for the free unallocated space

How much space is there now in the root partition? What do the partitions look like?

root@Ubuntu-1604-xenial-64-minimal ~ # df -h /
Filesystem Size Used Avail Use% Mounted on
/dev/sda2  3.9G 1.1G  2.6G  30% /
root@Ubuntu-1604-xenial-64-minimal ~ # fdisk -l
Disk /dev/sda: 19.1 GiB, 20480786432 bytes, 40001536 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0xfcbaf0fd

Device    Boot Start   End      Sectors Size Id Type
/dev/sda1      2048    2099199  2097152   1G 82 Linux swap / Solaris
/dev/sda2      2099200 10487807 8388608   4G 83 Linux
root@Ubuntu-1604-xenial-64-minimal ~ #
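The allocated partitions end at sector 10487807 and the disk has 40001536 sectors in total, so the unallocated tail is 40001536 − 10487808 = 29513728 sectors of 512 bytes, roughly 14.1GiB. You can let the shell do the arithmetic:

root@Ubuntu-1604-xenial-64-minimal ~ # echo $(( (40001536 - 10487808) * 512 / 1024 / 1024 / 1024 ))
14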

Create a partition from the unallocated free space. The fdisk command can create the partition.

root@Ubuntu-1604-xenial-64-minimal ~ # fdisk /dev/sda

Welcome to fdisk (util-linux 2.27.1).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.


Command (m for help): n
Partition type
 p primary (2 primary, 0 extended, 2 free)
 e extended (container for logical partitions)
Select (default p): p
Partition number (3,4, default 3): 3
First sector (10487808-40001535, default 10487808): <press enter to accept the default>
Last sector, +sectors or +size{K,M,G,T,P} (10487808-40001535, default 40001535): <ditto>

Created a new partition 3 of type 'Linux' and of size 14.1 GiB.

Command (m for help): p
Disk /dev/sda: 19.1 GiB, 20480786432 bytes, 40001536 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0xfcbaf0fd

Device    Boot Start    End      Sectors   Size Id Type
/dev/sda1      2048     2099199  2097152     1G 82 Linux swap / Solaris
/dev/sda2      2099200  10487807 8388608     4G 83 Linux
/dev/sda3      10487808 40001535 29513728 14.1G 83 Linux

Command (m for help): w
The partition table has been altered.
Calling ioctl() to re-read partition table.
Re-reading the partition table failed.: Device or resource busy

The kernel still uses the old table. The new table will be used at the next reboot or after you run partprobe(8) or kpartx(8).

root@Ubuntu-1604-xenial-64-minimal ~ #

The new partition /dev/sda3 has been created, but we still need to reboot so that the Linux kernel picks up the new partition table.
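As the fdisk output suggests, partprobe(8) can ask the kernel to re-read the partition table without a reboot; on the minimal image it comes from the parted package, which may need to be installed first. In this walkthrough we simply reboot, which is the safest option.

root@Ubuntu-1604-xenial-64-minimal ~ # apt install parted
root@Ubuntu-1604-xenial-64-minimal ~ # partprobe /dev/sda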

root@Ubuntu-1604-xenial-64-minimal ~ # shutdown -r now
Connection to 195.201.40.9 closed by remote host.
Connection to 195.201.40.9 closed.
Exit 255

$ ssh root@195.201.40.9
Welcome to Ubuntu 16.04.3 LTS (GNU/Linux 4.13.0-26-generic x86_64)

* Documentation: https://help.ubuntu.com
 * Management: https://landscape.canonical.com
 * Support: https://ubuntu.com/advantage
root@Ubuntu-1604-xenial-64-minimal ~ #

Now you can set things up for LXD, so that you can then run lxd init to initialize it.

Setting up for LXD

First, update the package index. Then, install lxd and zfsutils-linux (for ZFS support).

root@Ubuntu-1604-xenial-64-minimal ~ # apt update
root@Ubuntu-1604-xenial-64-minimal ~ # apt install lxd zfsutils-linux
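Optionally, check which versions were installed; on Ubuntu 16.04 these are typically LXD 2.0.x and ZFS on Linux 0.6.5.x, though the exact versions depend on the archive at the time.

root@Ubuntu-1604-xenial-64-minimal ~ # lxd --version
root@Ubuntu-1604-xenial-64-minimal ~ # apt policy lxd zfsutils-linux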

That’s it. Time for lxd init.

Initializing LXD

Run lxd init to initialize LXD. Compared to a default initialization, here you specify that you want to use an existing block device (the new /dev/sda3 partition) for LXD.

root@Ubuntu-1604-xenial-64-minimal ~ # lxd init
Do you want to configure a new storage pool (yes/no) [default=yes]? <press enter = default>
Name of the storage backend to use (dir or zfs) [default=zfs]: <press enter = use default>
Create a new ZFS pool (yes/no) [default=yes]? <press enter = use default>
Name of the new ZFS pool or dataset [default=lxd]: <press enter = use default>
Would you like to use an existing block device (yes/no) [default=no]? yes
Path to the existing block device: /dev/sda3
Would you like LXD to be available over the network (yes/no) [default=no]? <press enter>
Do you want to configure the LXD bridge (yes/no) [default=yes]? <press enter>

LXD has been successfully configured.
root@Ubuntu-1604-xenial-64-minimal ~ #

And that’s it.

At this stage, you can create a non-root user and configure them to use LXD; a short sketch follows. After that, it is time for some testing of LXD!
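A minimal sketch, assuming you call the new user ubuntu (any username will do); membership in the lxd group is what grants access to LXD, and it takes effect the next time that user logs in.

root@Ubuntu-1604-xenial-64-minimal ~ # adduser ubuntu
root@Ubuntu-1604-xenial-64-minimal ~ # adduser ubuntu lxd

After logging in as that user, lxc list should work without sudo.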

Testing LXD

Create a few containers, c1, c2 and c3.

root@Ubuntu-1604-xenial-64-minimal ~ # lxc launch ubuntu: c1
Creating c1
Starting c1 
root@Ubuntu-1604-xenial-64-minimal ~ # time lxc launch ubuntu: c2
Creating c2
Starting c2

real 0m1.686s
user 0m0.006s
sys 0m0.011s
root@Ubuntu-1604-xenial-64-minimal ~ # time lxc launch ubuntu: c3
Creating c3
Starting c3

real 0m1.722s
user 0m0.013s
sys 0m0.004s
root@Ubuntu-1604-xenial-64-minimal ~ #

You can time the creation of the containers. In this case, it took about 1.7 seconds to create and start a container. Note that this VPS has just a single vCPU.

How much space did the three containers use? The three containers, including the container image, used 322MB in total! How is this possible?

root@Ubuntu-1604-xenial-64-minimal ~ # zpool list
NAME SIZE ALLOC  FREE EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
lxd   14G  322M 13,7G -          1%  2% 1.00x ONLINE -
root@Ubuntu-1604-xenial-64-minimal ~ #

With zfs list, you get an itemized bill of the space that is used. The total is 321MB. This is made up primarily of the container image at 303MB, plus just 17.6MB for the three containers together! Each container currently uses only about 5.8MB. The reason for this is that ZFS uses copy-on-write: initially, each container is almost identical to the container image, so it inherits all the files of the image and needs no extra allocation beyond its own tiny 5.8MB. You can see this in the listing below, and try it out for yourself right after.

root@Ubuntu-1604-xenial-64-minimal ~ # zfs list
NAME                   USED    AVAIL REFER MOUNTPOINT
lxd                    321M    13,2G   19K none
lxd/containers          17,6M  13,2G   19K none
lxd/containers/c1        5,85M 13,2G  303M /var/lib/lxd/containers/c1.zfs
lxd/containers/c2        5,85M 13,2G  303M /var/lib/lxd/containers/c2.zfs
lxd/containers/c3        5,84M 13,2G  303M /var/lib/lxd/containers/c3.zfs
lxd/images             303M    13,2G   19K none
lxd/images/069b9..2cf1 303M    13,2G  303M /var/lib/lxd/images/069b9..2cf1.zfs
root@Ubuntu-1604-xenial-64-minimal ~ #
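To see the copy-on-write behaviour for yourself, write some data inside one of the containers and run zfs list again; the USED column of that container should grow by roughly the amount written, while the other containers stay tiny. A quick experiment (the 100MB of random data is an arbitrary choice):

root@Ubuntu-1604-xenial-64-minimal ~ # lxc exec c1 -- dd if=/dev/urandom of=/root/testfile bs=1M count=100
root@Ubuntu-1604-xenial-64-minimal ~ # zfs list -r lxd/containers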

 

Conclusion

You have seen how to repartition the disk of a VPS at Hetzner. You would normally work with a 40GB or bigger disk, though 20GB is sufficient for a few websites.

Repartitioning the VPS disk at Hetzner involves a bit of work. The easiest is Linode, which lets you repartition straight from the management screens. Other VPS companies make it quite difficult, and you need to use the command line all the way.

With network storage (often called block storage), it is possible to add as much additional storage to a VPS as needed. Most companies are adding such facilities, though it is not universal yet.
