In previous posts we saw how to set up LXD on a DigitalOcean VPS, how to set up LXD on a Scaleway VPS, and what the lifecycle of an LXD container looks like.
In this post, we are going to
- Create multiple websites, each in a separate LXD container
- Install HAProxy as a TLS Termination Proxy, in an LXD container
- Configure HAProxy so that each website is only accessible through TLS
- Run the Qualys SSL Server Test to verify that our websites really get an A+!
In this post, we are not going to install WordPress (or other CMS) on the websites. We keep this post simple as that is material for our next post.
The requirements are
- We own at least one domain and we configure a few hostnames to resolve to the IP address of the new VPS. This is required in order to get those free TLS certificates from Let’s Encrypt.
Set up a VPS
We are using DigitalOcean in this example.
Ubuntu 16.04.1 LTS was released a few days ago and DigitalOcean changed the Ubuntu default to 16.04.1. This is nice.
We are trying out the smallest droplet in order to figure out how many websites we can squeeze in containers. That is, 512MB RAM on a single virtual CPU core, with only 20GB of disk space!
In this example we are not using the new DigitalOcean block storage as at the moment it is available in only two datacentres.
Let’s click on the Create droplet button and the VPS is created!
Initial configuration
We are using DigitalOcean in this HowTo, and we have covered the initial configuration in this previous post.
Go through the post and perform the tasks described in section «Set up LXD on DigitalOcean».
Creating the containers
We create three containers for three websites, plus one container for HAProxy.
ubuntu@ubuntu-512mb-ams3-01:~$ lxc init ubuntu:x web1
Creating web1
Retrieving image: 100%
ubuntu@ubuntu-512mb-ams3-01:~$ time lxc init ubuntu:x web2
Creating web2

real	0m6.620s
user	0m0.016s
sys	0m0.004s
ubuntu@ubuntu-512mb-ams3-01:~$ time lxc init ubuntu:x web3
Creating web3

real	1m15.723s
user	0m0.012s
sys	0m0.020s
ubuntu@ubuntu-512mb-ams3-01:~$ time lxc init ubuntu:x haproxy
Creating haproxy

real	0m48.747s
user	0m0.012s
sys	0m0.012s
ubuntu@ubuntu-512mb-ams3-01:~$
Normally it takes a few seconds for a new container to initialize. Remember that we are squeezing here, it’s a 512MB VPS, and the ZFS pool is stored on a file (not a block device)! We are looking into the kernel messages of the VPS for lines similar to «Out of memory: Kill process 3829 (unsquashfs) score 524 or sacrifice child», which indicate that we reached the memory limit. While preparing this blog post, there were a couple of Out of memory kills, so I made sure that nothing critical was dying. If this is too much for you, you can select a 1GB RAM (or more) VPS and start over.
Let’s start the containers up!
ubuntu@ubuntu-512mb-ams3-01:~$ lxc start web1 web2 web3 haproxy
ubuntu@ubuntu-512mb-ams3-01:~$ lxc list
+---------+---------+-----------------------+------+------------+-----------+
|  NAME   |  STATE  |         IPV4          | IPV6 |    TYPE    | SNAPSHOTS |
+---------+---------+-----------------------+------+------------+-----------+
| haproxy | RUNNING | 10.234.150.39 (eth0)  |      | PERSISTENT | 0         |
+---------+---------+-----------------------+------+------------+-----------+
| web1    | RUNNING | 10.234.150.169 (eth0) |      | PERSISTENT | 0         |
+---------+---------+-----------------------+------+------------+-----------+
| web2    | RUNNING | 10.234.150.119 (eth0) |      | PERSISTENT | 0         |
+---------+---------+-----------------------+------+------------+-----------+
| web3    | RUNNING | 10.234.150.51 (eth0)  |      | PERSISTENT | 0         |
+---------+---------+-----------------------+------+------------+-----------+
ubuntu@ubuntu-512mb-ams3-01:~$
You may need to run lxc list a few times, until all containers have received an IP address. That means that they have all completed their startup.
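If you prefer to script this instead of re-running lxc list by hand, a small helper can poll until every container reports an IPv4 address. This is a sketch, not from the original post; it assumes an LXD version where lxc list supports the -c 4 --format csv options, and uses this post's four container names.

```shell
# Sketch: poll `lxc list` until every container reports an IPv4 address.
wait_for_ips() {
  local containers="web1 web2 web3 haproxy"
  local c
  for c in $containers; do
    # `lxc list NAME -c 4 --format csv` prints only the IPv4 column
    until lxc list "$c" -c 4 --format csv | grep -q '[0-9]'; do
      sleep 2
    done
  done
  echo "all containers have an IPv4 address"
}
```

Run wait_for_ips on the VPS before continuing to the DNS configuration.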
DNS configuration
The public IP address of this specific VPS is 188.166.10.229. For this test, I am using the domain ubuntugreece.xyz as follows:
- Container web1: ubuntugreece.xyz and www.ubuntugreece.xyz have IP 188.166.10.229
- Container web2: web2.ubuntugreece.xyz has IP 188.166.10.229
- Container web3: web3.ubuntugreece.xyz has IP 188.166.10.229
Here is how it looks when configured on a DNS management console,
From here on, it is a waiting game until these DNS configurations propagate to the rest of the Internet. We need to wait until those hostnames resolve to their IP address.
ubuntu@ubuntu-512mb-ams3-01:~$ host ubuntugreece.xyz
ubuntugreece.xyz has address 188.166.10.229
ubuntu@ubuntu-512mb-ams3-01:~$ host web2.ubuntugreece.xyz
Host web2.ubuntugreece.xyz not found: 3(NXDOMAIN)
ubuntu@ubuntu-512mb-ams3-01:~$ host web3.ubuntugreece.xyz
web3.ubuntugreece.xyz has address 188.166.10.229
ubuntu@ubuntu-512mb-ams3-01:~$
These are the results after ten minutes. ubuntugreece.xyz and web3.ubuntugreece.xyz are resolving fine, while web2.ubuntugreece.xyz needs a bit more time.
We can continue! (and ignore web2 for now)
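If you would rather wait for propagation than re-run host manually, a small loop can poll until a hostname resolves to the expected address. This is a hypothetical sketch using this post's example hostname and IP:

```shell
# Sketch: wait until a hostname resolves to the expected IP address,
# checking once per minute with the `host` utility.
wait_for_dns() {
  local name="$1" expected="$2"
  until host "$name" | grep -q "has address $expected"; do
    sleep 60
  done
  echo "$name resolves to $expected"
}
```

For example, `wait_for_dns web2.ubuntugreece.xyz 188.166.10.229` would block until web2 is resolvable.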
Web server configuration
Let’s see the configuration for web1. You must repeat the following for web2 and web3.
We install the nginx web server,
ubuntu@ubuntu-512mb-ams3-01:~$ lxc exec web1 -- /bin/bash
root@web1:~# apt update
Get:1 http://security.ubuntu.com/ubuntu xenial-security InRelease [94.5 kB]
…
3 packages can be upgraded. Run 'apt list --upgradable' to see them.
root@web1:~# apt upgrade
Reading package lists… Done
…
Processing triggers for initramfs-tools (0.122ubuntu8.1) …
root@web1:~# apt install nginx
Reading package lists… Done
…
Processing triggers for ufw (0.35-0ubuntu2) …
root@web1:~#
nginx needs to be configured so that it understands the domain name for web1. Here is the diff,
diff --git a/etc/nginx/sites-available/default b/etc/nginx/sites-available/default
index a761605..b2cea8f 100644
--- a/etc/nginx/sites-available/default
+++ b/etc/nginx/sites-available/default
@@ -38,7 +38,7 @@ server {
 	# Add index.php to the list if you are using PHP
 	index index.html index.htm index.nginx-debian.html;
 
-	server_name _;
+	server_name ubuntugreece.xyz www.ubuntugreece.xyz;
 
 	location / {
 		# First attempt to serve request as file, then
and finally we restart nginx and exit the web1 container,
root@web1:/etc/nginx/sites-enabled# systemctl restart nginx
root@web1:/etc/nginx/sites-enabled# exit
exit
ubuntu@ubuntu-512mb-ams3-01:~$
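For web2 and web3, the same steps can be scripted from the host with lxc exec instead of entering each container. This is a sketch, not from the original post; the sed expression assumes the stock Xenial default site that contains the line server_name _;.

```shell
# Sketch: install and configure nginx in web2 and web3 from the host.
configure_webs() {
  local c
  for c in web2 web3; do
    lxc exec "$c" -- apt install -y nginx
    # Set the server_name to the container's hostname under the example domain
    lxc exec "$c" -- sed -i \
      "s/server_name _;/server_name $c.ubuntugreece.xyz;/" \
      /etc/nginx/sites-available/default
    lxc exec "$c" -- systemctl restart nginx
  done
}
```

Calling configure_webs performs the apt install, the server_name edit, and the nginx restart in both containers.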
Forwarding connections to the HAProxy container
We are about to set up the HAProxy container. Let’s add iptables rules to forward connections to ports 80 and 443 on the VPS to the HAProxy container.
ubuntu@ubuntu-512mb-ams3-01:~$ ifconfig eth0
eth0      Link encap:Ethernet  HWaddr 04:01:36:50:00:01
          inet addr:188.166.10.229  Bcast:188.166.63.255  Mask:255.255.192.0
          inet6 addr: fe80::601:36ff:fe50:1/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:40513 errors:0 dropped:0 overruns:0 frame:0
          TX packets:26362 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:360767509 (360.7 MB)  TX bytes:3863846 (3.8 MB)
ubuntu@ubuntu-512mb-ams3-01:~$ lxc list
+---------+---------+-----------------------+------+------------+-----------+
|  NAME   |  STATE  |         IPV4          | IPV6 |    TYPE    | SNAPSHOTS |
+---------+---------+-----------------------+------+------------+-----------+
| haproxy | RUNNING | 10.234.150.39 (eth0)  |      | PERSISTENT | 0         |
+---------+---------+-----------------------+------+------------+-----------+
| web1    | RUNNING | 10.234.150.169 (eth0) |      | PERSISTENT | 0         |
+---------+---------+-----------------------+------+------------+-----------+
| web2    | RUNNING | 10.234.150.119 (eth0) |      | PERSISTENT | 0         |
+---------+---------+-----------------------+------+------------+-----------+
| web3    | RUNNING | 10.234.150.51 (eth0)  |      | PERSISTENT | 0         |
+---------+---------+-----------------------+------+------------+-----------+
ubuntu@ubuntu-512mb-ams3-01:~$ sudo iptables -t nat -I PREROUTING -i eth0 -p TCP -d 188.166.10.229/32 --dport 80 -j DNAT --to-destination 10.234.150.39:80
[sudo] password for ubuntu:
ubuntu@ubuntu-512mb-ams3-01:~$ sudo iptables -t nat -I PREROUTING -i eth0 -p TCP -d 188.166.10.229/32 --dport 443 -j DNAT --to-destination 10.234.150.39:443
ubuntu@ubuntu-512mb-ams3-01:~$
If you want to make those changes permanent, see Saving Iptables Firewall Rules Permanently (the part about the package iptables-persistent).
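As a sketch of that approach (assuming the iptables-persistent package, whose netfilter-persistent helper saves the active ruleset):

```shell
# Sketch: persist the two DNAT rules across reboots with iptables-persistent.
persist_rules() {
  # During installation, debconf asks whether to save the current rules
  sudo apt install -y iptables-persistent
  # Save the active ruleset (including the PREROUTING DNAT rules above)
  sudo netfilter-persistent save
}
```

After this, the rules are reloaded automatically at boot.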
HAProxy initial configuration
Let’s see how to configure HAProxy in container haproxy. We enter the container, update the software and install the haproxy package.
ubuntu@ubuntu-512mb-ams3-01:~$ lxc exec haproxy -- /bin/bash
root@haproxy:~# apt update
Hit:1 http://archive.ubuntu.com/ubuntu xenial InRelease
...
3 packages can be upgraded. Run 'apt list --upgradable' to see them.
root@haproxy:~# apt upgrade
Reading package lists... Done
...
Processing triggers for initramfs-tools (0.122ubuntu8.1) ...
root@haproxy:~# apt install haproxy
Reading package lists... Done
...
Processing triggers for ureadahead (0.100.0-19) ...
root@haproxy:~#
We add the following configuration to /etc/haproxy/haproxy.cfg. Initially, we do not have any certificates for TLS, but we need the Web servers to work over plain HTTP so that Let’s Encrypt can verify we own the websites. Therefore, here is the complete configuration, with two lines commented out (they start with ###) so that HTTP can work. As soon as we deal with Let’s Encrypt, we go full TLS (by uncommenting the two lines that start with ###) and never look back. We mention when to uncomment later in the post.
diff --git a/etc/haproxy/haproxy.cfg b/etc/haproxy/haproxy.cfg
index 86da67d..f6f2577 100644
--- a/etc/haproxy/haproxy.cfg
+++ b/etc/haproxy/haproxy.cfg
@@ -18,11 +18,17 @@ global
 	ssl-default-bind-ciphers ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:ECDH+3DES:DH+3DES:RSA+AESGCM:RSA+AES:RSA+3DES:!aNULL:!MD5:!DSS
 	ssl-default-bind-options no-sslv3
 
+	# Minimum DH ephemeral key size. Otherwise, this size would drop to 1024.
+	# @link: https://cbonte.github.io/haproxy-dconv/configuration-1.5.html#3.2-tune.ssl.default-dh-param
+	tune.ssl.default-dh-param 2048
+
 defaults
 	log	global
 	mode	http
 	option	httplog
 	option	dontlognull
+	option	forwardfor
+	option	http-server-close
 	timeout connect 5000
 	timeout client  50000
 	timeout server  50000
@@ -33,3 +39,56 @@ defaults
 	errorfile 502 /etc/haproxy/errors/502.http
 	errorfile 503 /etc/haproxy/errors/503.http
 	errorfile 504 /etc/haproxy/errors/504.http
+
+# Configuration of the frontend (HAProxy as a TLS Termination Proxy)
+frontend www_frontend
+	# We bind on port 80 (http) but (see below) get HAProxy to force-switch to HTTPS.
+	bind *:80
+	# We bind on port 443 (https) and specify a directory with the certificates.
+###	bind *:443 ssl crt /etc/haproxy/certs/
+	# We get HAProxy to force-switch to HTTPS, if the connection was just HTTP.
+###	redirect scheme https if !{ ssl_fc }
+	# TLS terminates at HAProxy, the container runs in plain HTTP. Here, HAProxy informs nginx
+	# that there was a TLS Termination Proxy. Required for WordPress and other CMS.
+	reqadd X-Forwarded-Proto:\ https
+
+	# Distinguish between secure and insecure requests (used in the next two lines)
+	acl secure dst_port eq 443
+
+	# Mark all cookies as secure if sent over SSL
+	rsprep ^Set-Cookie:\ (.*) Set-Cookie:\ \1;\ Secure if secure
+
+	# Add the HSTS header with a 1 year max-age
+	rspadd Strict-Transport-Security:\ max-age=31536000 if secure
+
+	# Configuration for each virtual host (uses Server Name Indication, SNI)
+	acl host_ubuntugreece_xyz hdr(host) -i ubuntugreece.xyz www.ubuntugreece.xyz
+	acl host_web2_ubuntugreece_xyz hdr(host) -i web2.ubuntugreece.xyz
+	acl host_web3_ubuntugreece_xyz hdr(host) -i web3.ubuntugreece.xyz
+
+	# Directing the connection to the correct LXD container
+	use_backend web1_cluster if host_ubuntugreece_xyz
+	use_backend web2_cluster if host_web2_ubuntugreece_xyz
+	use_backend web3_cluster if host_web3_ubuntugreece_xyz
+
+# Configuration of the backend (HAProxy as a TLS Termination Proxy)
+backend web1_cluster
+	balance leastconn
+	# We set the X-Client-IP HTTP header. This is useful if we want the web server to know the real client IP.
+	http-request set-header X-Client-IP %[src]
+	# This backend, named here "web1", directs to container "web1.lxd" (hostname).
+	server web1 web1.lxd:80 check
+
+backend web2_cluster
+	balance leastconn
+	# We set the X-Client-IP HTTP header. This is useful if we want the web server to know the real client IP.
+	http-request set-header X-Client-IP %[src]
+	# This backend, named here "web2", directs to container "web2.lxd" (hostname).
+	server web2 web2.lxd:80 check
+
+backend web3_cluster
+	balance leastconn
+	# We set the X-Client-IP HTTP header. This is useful if we want the web server to know the real client IP.
+	http-request set-header X-Client-IP %[src]
+	# This backend, named here "web3", directs to container "web3.lxd" (hostname).
+	server web3 web3.lxd:80 check
Let’s restart HAProxy. If you get any errors, run systemctl status haproxy and try to figure out what went wrong.
root@haproxy:~# systemctl restart haproxy
root@haproxy:~# exit
ubuntu@ubuntu-512mb-ams3-01:~$
Does it work? Let’s visit the website,
It is working! Let’s Encrypt will be able to access and verify that we own the domain in the next step.
Get certificates from Let’s Encrypt
We exit out to the VPS and install letsencrypt.
ubuntu@ubuntu-512mb-ams3-01:~$ sudo apt install letsencrypt
[sudo] password for ubuntu:
Reading package lists... Done
...
Setting up python-pyicu (1.9.2-2build1) ...
ubuntu@ubuntu-512mb-ams3-01:~$
We run letsencrypt three times, once for each website. Update: it is also possible to simplify the following by using a multiple-domain (Subject Alternative Names, SAN) certificate. Thanks to @jack who mentioned this in the comments.
ubuntu@ubuntu-512mb-ams3-01:~$ sudo letsencrypt certonly --authenticator webroot --webroot-path=/var/lib/lxd/containers/web1/rootfs/var/www/html -d ubuntugreece.xyz -d www.ubuntugreece.xyz
... they ask for a contact e-mail address and whether we accept the Terms of Service ...

IMPORTANT NOTES:
 - If you lose your account credentials, you can recover through
   e-mails sent to xxxxx@gmail.com.
 - Congratulations! Your certificate and chain have been saved at
   /etc/letsencrypt/live/ubuntugreece.xyz/fullchain.pem. Your cert will
   expire on 2016-10-21. To obtain a new version of the certificate in
   the future, simply run Let's Encrypt again.
 - Your account credentials have been saved in your Let's Encrypt
   configuration directory at /etc/letsencrypt. You should make a secure
   backup of this folder now. This configuration directory will also
   contain certificates and private keys obtained by Let's Encrypt so
   making regular backups of this folder is ideal.
 - If you like Let's Encrypt, please consider supporting our work by:

   Donating to ISRG / Let's Encrypt:   https://letsencrypt.org/donate
   Donating to EFF:                    https://eff.org/donate-le

ubuntu@ubuntu-512mb-ams3-01:~$
For completeness, here are the command lines for the other two websites,
ubuntu@ubuntu-512mb-ams3-01:~$ sudo letsencrypt certonly --authenticator webroot --webroot-path=/var/lib/lxd/containers/web2/rootfs/var/www/html -d web2.ubuntugreece.xyz

IMPORTANT NOTES:
 - Congratulations! Your certificate and chain have been saved at
   /etc/letsencrypt/live/web2.ubuntugreece.xyz/fullchain.pem. Your cert
   will expire on 2016-10-21. To obtain a new version of the certificate
   in the future, simply run Let's Encrypt again.
 - If you like Let's Encrypt, please consider supporting our work by:

   Donating to ISRG / Let's Encrypt:   https://letsencrypt.org/donate
   Donating to EFF:                    https://eff.org/donate-le

ubuntu@ubuntu-512mb-ams3-01:~$ time sudo letsencrypt certonly --authenticator webroot --webroot-path=/var/lib/lxd/containers/web3/rootfs/var/www/html -d web3.ubuntugreece.xyz

IMPORTANT NOTES:
 - Congratulations! Your certificate and chain have been saved at
   /etc/letsencrypt/live/web3.ubuntugreece.xyz/fullchain.pem. Your cert
   will expire on 2016-10-21. To obtain a new version of the certificate
   in the future, simply run Let's Encrypt again.
 - If you like Let's Encrypt, please consider supporting our work by:

   Donating to ISRG / Let's Encrypt:   https://letsencrypt.org/donate
   Donating to EFF:                    https://eff.org/donate-le

real	0m18.458s
user	0m0.852s
sys	0m0.172s
ubuntu@ubuntu-512mb-ams3-01:~$
Yeah, it takes only around twenty seconds to get your Let’s Encrypt certificate!
We got the certificates, now we need to prepare them so that HAProxy (our TLS Termination Proxy) can make use of them. We just need to join together the certificate chain and the private key for each certificate, and place them in the haproxy container at the appropriate directory.
ubuntu@ubuntu-512mb-ams3-01:~$ sudo mkdir /var/lib/lxd/containers/haproxy/rootfs/etc/haproxy/certs/
ubuntu@ubuntu-512mb-ams3-01:~$ DOMAIN='ubuntugreece.xyz' sudo -E bash -c 'cat /etc/letsencrypt/live/$DOMAIN/fullchain.pem /etc/letsencrypt/live/$DOMAIN/privkey.pem > /var/lib/lxd/containers/haproxy/rootfs/etc/haproxy/certs/$DOMAIN.pem'
ubuntu@ubuntu-512mb-ams3-01:~$ DOMAIN='web2.ubuntugreece.xyz' sudo -E bash -c 'cat /etc/letsencrypt/live/$DOMAIN/fullchain.pem /etc/letsencrypt/live/$DOMAIN/privkey.pem > /var/lib/lxd/containers/haproxy/rootfs/etc/haproxy/certs/$DOMAIN.pem'
ubuntu@ubuntu-512mb-ams3-01:~$ DOMAIN='web3.ubuntugreece.xyz' sudo -E bash -c 'cat /etc/letsencrypt/live/$DOMAIN/fullchain.pem /etc/letsencrypt/live/$DOMAIN/privkey.pem > /var/lib/lxd/containers/haproxy/rootfs/etc/haproxy/certs/$DOMAIN.pem'
ubuntu@ubuntu-512mb-ams3-01:~$
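The three near-identical commands can also be collapsed into one loop. This is a sketch; the source and destination directories are parameters whose defaults match this post's paths (run it with sudo, since the Let's Encrypt directory is root-only):

```shell
# Sketch: concatenate fullchain + private key for each domain into the
# haproxy container's certs directory.
bundle_certs() {
  local src="${1:-/etc/letsencrypt/live}"
  local dst="${2:-/var/lib/lxd/containers/haproxy/rootfs/etc/haproxy/certs}"
  local d
  for d in ubuntugreece.xyz web2.ubuntugreece.xyz web3.ubuntugreece.xyz; do
    cat "$src/$d/fullchain.pem" "$src/$d/privkey.pem" > "$dst/$d.pem"
  done
}
```

HAProxy expects each .pem file in the crt directory to contain the certificate chain followed by the private key, which is exactly what the concatenation produces.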
HAProxy final configuration
We are almost there. We need to enter the haproxy container and uncomment those two lines (those that started with ###) that will enable HAProxy to work as a TLS Termination Proxy. Then, restart the haproxy service.
ubuntu@ubuntu-512mb-ams3-01:~$ lxc exec haproxy bash
root@haproxy:~# vi /etc/haproxy/haproxy.cfg
root@haproxy:/etc/haproxy# systemctl restart haproxy
root@haproxy:/etc/haproxy# exit
ubuntu@ubuntu-512mb-ams3-01:~$
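To avoid taking the sites down with a typo, you can validate the configuration with haproxy -c before restarting. A minimal sketch, run inside the haproxy container:

```shell
# Sketch: only restart HAProxy if the configuration file parses cleanly.
check_and_restart() {
  haproxy -c -f /etc/haproxy/haproxy.cfg && systemctl restart haproxy
}
```

If the check fails, HAProxy prints the offending line and the running instance is left untouched.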
Let’s test them!
Here are the three websites, notice the padlocks on all three of them,
The SSL Server Report (Qualys)
Here are the SSL Server Reports for each website,
You can check the cached reports for LXD container web1, LXD container web2 and LXD container web3.
Results
The disk space requirements for those four containers (three static websites plus haproxy) are
ubuntu@ubuntu-512mb-ams3-01:~$ sudo zpool list
[sudo] password for ubuntu:
NAME         SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
mypool-lxd  14.9G  1.13G  13.7G         -     4%     7%  1.00x  ONLINE  -
ubuntu@ubuntu-512mb-ams3-01:~$
The four containers required a bit over 1GB of disk space.
The biggest concern has been the limited RAM memory of 512MB. The Out Of Memory (OOM) handler was invoked a few times during the first steps of container creation, but not afterwards during the launching of the nginx instances.
ubuntu@ubuntu-512mb-ams3-01:~$ dmesg | grep "Out of memory"
[  181.976117] Out of memory: Kill process 3829 (unsquashfs) score 524 or sacrifice child
[  183.792372] Out of memory: Kill process 3834 (unsquashfs) score 525 or sacrifice child
[  190.332834] Out of memory: Kill process 3831 (unsquashfs) score 525 or sacrifice child
[  848.834570] Out of memory: Kill process 6378 (localedef) score 134 or sacrifice child
[  860.833991] Out of memory: Kill process 6400 (localedef) score 143 or sacrifice child
[  878.837410] Out of memory: Kill process 6436 (localedef) score 151 or sacrifice child
ubuntu@ubuntu-512mb-ams3-01:~$
There was an error while creating one of the containers in the beginning. I repeated the creation command and it completed successfully. That error was probably related to this unsquashfs kill.
Summary
We set up a $5 VPS (512MB RAM, 1 CPU core and 20GB SSD disk) with Ubuntu 16.04.1 LTS, then configured LXD to handle containers.
We created three containers for three static websites, and an additional container for HAProxy to work as a TLS Termination Proxy.
We got certificates for those three websites, and verified that they all pass with A+ at the Qualys SSL Server Report.
The 512MB RAM VPS should be OK for a few low traffic websites, especially those generated by static site generators.
21 comments
2 pings
It looks like you have all three web servers on one machine. In this case there is no need to create three Let’s Encrypt certificates. You could create one certificate and have the other domains as Subject Alternative Names. Like:
sudo letsencrypt certonly --authenticator webroot --webroot-path=/var/lib/lxd/containers/web3/rootfs/var/www/html -d www.ubuntugreece.xyz -d web3.ubuntugreece.xyz -d web2.ubuntugreece.xyz
In above case one certificate is created which is valid for the following three domains:
www.ubuntugreece.xyz
web2.ubuntugreece.xyz
web3.ubuntugreece.xyz
Author
@jack:
Indeed, you can create a certificate with up to 100 domains with Let’s Encrypt.
My attempt has been to show the process of creating certificates with different domains; I did not have different domains handy, so I resorted to showing this with subdomains only.
@Simos, I don’t want to go too deep, because this certificate section was not the main point of the article. But if someone wants, it is possible to have one certificate for entirely different domains, and my suggestion above is not restricted to subdomains.
Author
@jack, Thanks for persisting. I did not know that LetsEncrypt had that feature.
They use the terminology “multi-domain/multiple domain” certificates or “Subject Alternative Names (SAN)” certificates, the term you mentioned earlier. As in https://www.digicert.com/subject-alternative-name.htm
There is a reference for those “multi-domain” certificates at some community threads like https://community.letsencrypt.org/t/please-support-multi-domain-ssl-certificates-like-the-ones-on-positivessl/867/2
Apparently, they are easy to confuse with wildcard certificates.
The LetsEncrypt FAQ actually mentions them:
“Can I get a certificate for multiple domain names (SAN certificates)?
Yes, the same certificate can apply to several different names using the Subject Alternative Name (SAN) mechanism. The resulting certificates will be accepted by browsers for any of the domain names listed in them.”
Source: https://letsencrypt.org/docs/faq/
You could use cgroups (through lxc) to limit the resources each site has – this might help protect you from the OOMing on the host – or at least stop one container from taking down the server.
Alternatively see if you can add a large swap file?
Author
@Davis Goodwin:
In this specific installation, the OOM handler was invoked for processes (unsquashfs, localedef) running in the VPS and not in a container.
The containers did not run something memory-intensive like mysql, so once the containers were started, they were running fine for sharing static content.
I did not run any benchmarks for these web servers and it’s something that I should do for a future post.
I think it would be better to let them all share the VPS memory as needed, without restrictions, so that any spike in traffic to one site can be handled better (by using up more memory).
An alternative could be to try out another rootfs instead of “ubuntu:x” (something from “images:”). A few MB should be shaved off from each running container. It would be interesting to measure the memory usage in these scenarios.
The addition of a swap file should help. There is an article about that at https://www.digitalocean.com/community/tutorials/how-to-add-swap-space-on-ubuntu-16-04 with a note about deciding whether to actually use it.
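For reference, the swap file steps from that DigitalOcean article look roughly like this (a sketch; run on the VPS host, with /swapfile as the conventional path):

```shell
# Sketch: create and enable a 1GB swap file, then persist it in /etc/fstab.
add_swap() {
  local f="${1:-/swapfile}"
  sudo fallocate -l 1G "$f"   # allocate the file
  sudo chmod 600 "$f"         # restrict permissions to root
  sudo mkswap "$f"            # format it as swap
  sudo swapon "$f"            # enable it immediately
  # persist across reboots
  echo "$f none swap sw 0 0" | sudo tee -a /etc/fstab
}
```

On a 512MB droplet, even a modest swap file gives the OOM handler some breathing room during container creation.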
Simon
Nice post & fyi I have added your post to the Reddit LXC subreddit.
Author
Thanks Brian!
Great guide!
Can you give us some pointers on how to set up to auto-renew the certs? I think they expire after 90 days.
Author
@drenright
You can use the “cron” facility to run the “letsencrypt renew” command and autorenew the certificates.
Here is an example, https://www.digitalocean.com/community/tutorials/how-to-secure-apache-with-let-s-encrypt-on-ubuntu-16-04
Do note, however, that you also need a script that places the renewed certificates into /etc/haproxy/certs/.
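A sketch of such a renewal routine, combining "letsencrypt renew" with the certificate bundling from this post (the script path in the cron example is hypothetical; the source and destination directories default to this post's paths):

```shell
# Sketch: renew all certificates, re-bundle them for HAProxy, and reload
# HAProxy inside its container.
renew_certs() {
  local src="${1:-/etc/letsencrypt/live}"
  local dst="${2:-/var/lib/lxd/containers/haproxy/rootfs/etc/haproxy/certs}"
  letsencrypt renew
  local d
  for d in ubuntugreece.xyz web2.ubuntugreece.xyz web3.ubuntugreece.xyz; do
    cat "$src/$d/fullchain.pem" "$src/$d/privkey.pem" > "$dst/$d.pem"
  done
  # reload (not restart) so existing connections are not dropped
  lxc exec haproxy -- systemctl reload haproxy
}
# Example cron entry (hypothetical script path), e.g. in /etc/cron.d/renew-certs:
# 0 3 * * 1 root /usr/local/bin/renew-certs
```

Running it weekly is fine; letsencrypt only renews certificates that are close to expiry.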
Impressive write-up, along with the DO one! Why not just have haproxy run on the host? The security (or other) advantages gained by containerizing haproxy don’t seem obvious to me.
Would the host be able to resolve the .lxd domains? If not then that would be a big reason to definitely containerize haproxy…
Author
Indeed, one of the benefits of having *haproxy* as a container, is to avoid having to put IP addresses and use *.lxd hostnames. It is possible though to set up the host to see those *.lxd hostnames (set up dnsmasq in the host to also consult LXD’s dnsmasq).
The security advantage of having HAProxy in a container is that if something goes bad with HAProxy (buffer overflow, etc), then the risk is localised to the haproxy container. If HAProxy was in the host, then an attacker would have full access to everything.
For me, it is intuitive to put HAProxy in a container. It keeps the host clean, works, and allows us to easily switch between reverse proxies if we want to. We would create another container with some reverse proxy, and activate it just by switching the iptables rules to the new proxy container.
Hi Simos, thanks again for this tutorial. I will try this as soon as I can, but..
This is a 2016 tutorial. Do you still advise following it?
I was also following this one, HAProxy + LXD: https://www.digitalocean.com/community/tutorials/how-to-host-multiple-web-sites-with-nginx-and-haproxy-using-lxd-on-ubuntu-16-04
So I think this is compatible.
But do you have any additional advice on whether I should proceed with this or just use an nginx reverse proxy instead of HAProxy?
If so, do you have a tutorial for applying Let’s Encrypt to an nginx reverse proxy (hosting multiple sites with LXD)?
Author
Hi John!
I suggest using nginx as the reverse proxy instead, because nginx has the facility to update the certificates automatically for you, so you do not need to do this task manually.
https://www.linode.com/docs/applications/containers/beginners-guide-to-lxd-reverse-proxy/
Of course, if you are more familiar, you can still use HAProxy, following the DO guide.
You could also use Caddy for the proxy. As far as ease of use and simplicity of configuration, hard to beat it. Certs get updated by default. Set and forget.
Author
Caddy is a good choice indeed. I suggest to use the snap package so that it gets updated as soon as a new version is made available.
Nice tutorial, but I’m confused about the diff and diff --git config files for nginx and HAProxy. I’m not familiar with this. Where can I find documentation about diff and diff --git?
Author
Thanks!
Have a look at https://git-scm.com/docs/git-diff and the example about the combined diff format (towards the end of that page).
A combined diff format starts with information about the file. Here, the file is /etc/haproxy/haproxy.cfg. Then, there are one or more lines that start with @@. This is the information about the differences in one fragment of the file, starting at line number 18. After that come the actual differences of the fragment. The lines that start with + are lines that are added; if they started with -, they would be removed. The rest of the lines (without + or -) are just context lines. Therefore, in the diff above, we are adding two sets of lines (not removing anything) at those specific locations.
Having said that, if you would rather use nginx instead of HAProxy, I suggest following this guide: https://www.linode.com/docs/guides/beginners-guide-to-lxd-reverse-proxy/
Thanks a lot, Simos! It’s clear now. Despite the fact that the initial tutorial was written in 2016, it’s still working, even for my actual setup of Ubuntu 20.04 and HAProxy 2.0.13.
I have another question concerning security. What is the best practice for installing the firewall? Since all traffic is forwarded to the lxc containers, it seems to me that the containers should be secured with a firewall. Is this correct?
The host is listening on ports 22, 80 and 443 (plus name resolution), and most likely no other ones. But check with lsof and ss to make sure.
If something could get compromised, it would likely be a Web server container. Still, an attacker would be confined to the container, unable to escape to the host. They would try to move to other containers; there are no known escapes, therefore an attacker would look for configuration issues to force an escape.
You could setup a firewall on the host so that it does not accept incoming and outgoing packets for any other ports. You would need to balance the complexity of configuring appropriate rules and the inconvenience to an attacker to tunnel all network connections through the allowed ports.
[…] How To Set Up Multiple Secure (Ssl/tls, Qualys Ssl Labs A+) Websites Using Lxd Containers – Mi Blo… […]
[…] Author: indigodaddy Go to Link: https://blog.simos.info/how-to-set-up-multiple-secure-ssltls-qualys-ssl-labs-a-websites-using-lxd-co… […]