Update #3 – 27 January 2021: Fedora requires special instructions for `routed` to work. See the section below named "The routed LXD profile for Fedora" for more.
Update #2 – 26 January 2021: Debian requires special instructions for `routed` to work. See the section below named "The routed LXD profile for Debian" for more.
Update #1 – 11 August 2020: Ubuntu 20.04 as a container did not work with the original instructions in this post. An addition to the profile, `on-link: true` in the cloud-init section, is needed to make it work for both older and newer versions of Ubuntu (18.04 and 20.04). This post has been updated to cover that addition.
You are using LXD containers and you want a container (or more) to get an IP address from the LAN (or, get an IP address just like the host does).
LXD currently supports four ways to do that, and depending on your needs, you select the appropriate way.
- Using `macvlan`. See https://blog.simos.info/how-to-make-your-lxd-container-get-ip-addresses-from-your-lan/
- Using `bridged`. See https://blog.simos.info/how-to-make-your-lxd-containers-get-ip-addresses-from-your-lan-using-a-bridge/
- Using `routed`. That is this post; read on.
- Using `ipvlan`. This tutorial is pending.
For more on the `routed` network option, see the LXD documentation on `routed` and this `routed` thread on the LXD discussion forum.
Why use the routed network?
You would use the `routed` network if you want to expose containers to the local network (the LAN, or the Internet if you are on an Internet server and have been allocated several public IPs).

Any container with `routed` will appear on the network to have the MAC address of the host. Therefore, this works even on a laptop that connects to the network over WiFi (or through any router with port security). That is, you can use `routed` where `macvlan` and `bridged` cannot work.
You have to use static network configuration for these containers. This means:

- You need to make sure that the IP address on the network that you give to the `routed` container will not be assigned by the router to some other device in the future. Otherwise, there will be an IP conflict. You can do so by going into the configuration of the router and marking that IP address as in use.
- The container (i.e. the services running in the container) should not make changes to the network interface, as that may break the setup.
Requirements for Ubuntu containers
The default network configuration in Ubuntu 18.04 or newer is to use `netplan` and have `eth0` configured over DHCP. The way `netplan` does this interferes with `routed`, so we use a workaround. This workaround is required only for the Ubuntu container images; other distributions, such as CentOS, do not require it. The workaround is based on `cloud-init`, and it is the whole `cloud-init` section in the profile below.
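The `cloud-init` workaround ends up as netplan configuration inside the container. As a sketch, this is roughly what cloud-init renders into `/etc/netplan/50-cloud-init.yaml` in a launched Ubuntu container; the addresses are the placeholders used in the profile below, and the exact rendering may differ between images:

```yaml
# Sketch of the rendered /etc/netplan/50-cloud-init.yaml inside the container
network:
  version: 2
  ethernets:
    eth0:
      addresses:
        - 192.168.1.200/32
      nameservers:
        addresses:
          - 8.8.8.8
        search: []
      routes:
        - to: 0.0.0.0/0
          via: 169.254.0.1
          on-link: true   # the line required for Ubuntu 20.04 containers
```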
The routed LXD profile
Here is the `routed` profile, which has been tested on Ubuntu. Create a profile with this name. Then, for each container that uses the `routed` network, we will create a new individual profile based on this initial one. The reason we create such individual profiles is that we need to hard-code the IP address in them. In the profile below, the values you can change are the IP address (in two locations; replace with your own addresses), the parent interface (on the host), and the nameserver IP address (here a public DNS server from Google). You can create an empty profile, then edit it and replace the existing content with the following (`lxc profile create routed`, then `lxc profile edit routed`).
```yaml
config:
  user.network-config: |
    version: 2
    ethernets:
      eth0:
        addresses:
          - 192.168.1.200/32
        nameservers:
          addresses:
            - 8.8.8.8
          search: []
        routes:
          - to: 0.0.0.0/0
            via: 169.254.0.1
            on-link: true
description: Default LXD profile
devices:
  eth0:
    ipv4.address: 192.168.1.200
    nictype: routed
    parent: enp6s0
    type: nic
name: routed_192.168.1.200
used_by:
```
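If you prefer not to paste into an interactive editor, the same profile can be kept in a file and loaded non-interactively. A sketch, using the same placeholder values as above (the file name `routed.yaml` is an arbitrary choice of mine):

```shell
# Keep the profile in a file; all values are the placeholders from this post,
# so substitute your own IP addresses and parent interface.
cat > routed.yaml <<'EOF'
config:
  user.network-config: |
    version: 2
    ethernets:
      eth0:
        addresses:
          - 192.168.1.200/32
        nameservers:
          addresses:
            - 8.8.8.8
          search: []
        routes:
          - to: 0.0.0.0/0
            via: 169.254.0.1
            on-link: true
description: Default LXD profile
devices:
  eth0:
    ipv4.address: 192.168.1.200
    nictype: routed
    parent: enp6s0
    type: nic
EOF
# Quick sanity check that the Ubuntu 20.04 workaround line made it in.
grep -c 'on-link: true' routed.yaml   # prints 1
```

You could then load it with `lxc profile create routed` followed by `lxc profile edit routed < routed.yaml` (those two commands are not run here).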
We are going to make copies of the `routed` profile into individual new ones, one for each IP address. Therefore, let's create the LXD profiles for `192.168.1.200` and `192.168.1.201`. When you edit them, set the corresponding IP address in each.
```
$ lxc profile copy routed routed_192.168.1.200
$ EDITOR=nano lxc profile edit routed_192.168.1.200
$ lxc profile copy routed routed_192.168.1.201
$ EDITOR=nano lxc profile edit routed_192.168.1.201
```
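The two copies differ from the original `routed` profile only in the two IP address fields, so instead of hand-editing each copy you could stamp the address in with `sed`. A sketch of the substitution itself, shown on a stand-in line; in practice you would pipe `lxc profile show routed` through the same `sed` and into `lxc profile edit routed_<ip>` (the template address 192.168.1.200 is the placeholder from this post):

```shell
# Replace every occurrence of the template IP with the per-profile IP.
# Stand-in input line; in practice this would be the full profile YAML.
new_ip=192.168.1.201
echo 'ipv4.address: 192.168.1.200' | sed "s/192\.168\.1\.200/${new_ip}/g"
# prints: ipv4.address: 192.168.1.201
```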
We are ready to test the profiles.
(alternative) The routed LXD profile for Debian
The following is an alternative LXD `routed` profile that can be used on Debian; it was created by tomp. It might be useful for other Linux distributions as well. If this profile works for a distribution other than Debian, please report it below so that I can update the post. It explicitly makes the container not configure its network through DHCP. It further uses `cloud-init` instructions to manually create an `/etc/resolv.conf`, because without DHCP there would not be such a file in the container. The suggested DNS server is 8.8.8.8 (Google); you may change it if you like. The two items you need to update for your setup are the IP address for the container and the network interface of the host that this container will attach to (through `routed`).
```yaml
config:
  user.network-config: |
    #cloud-config
    version: 2
    ethernets:
      eth0:
        dhcp4: false
        dhcp6: false
        routes:
          - to: 0.0.0.0/0
            via: 169.254.0.1
            on-link: true
  user.user-data: |
    #cloud-config
    bootcmd:
      - echo 'nameserver 8.8.8.8' > /etc/resolvconf/resolv.conf.d/tail
      - systemctl restart resolvconf
description: Default LXD profile
devices:
  eth0:
    ipv4.address: 192.168.1.201
    name: eth0
    nictype: routed
    parent: enp3s0
    type: nic
name: routed_debian
```
(alternative) The routed LXD profile for Fedora
The following is an alternative LXD `routed` profile that can be used on Fedora; it was created by tomp. It might be useful for other Linux distributions as well. If this profile works for a distribution other than Fedora, please report it below so that I can update the post. The profile has two sections: the `cloud-init` section, which configures the networking in the container once, using NetworkManager; and the LXD network configuration, which directs LXD on how to set up the `routed` networking on the host. The suggested DNS server is 8.8.8.8 (Google); you may change it to another free public DNS server if you like. The two items you need to update for your setup are the IP address for the container and the network interface of the host that this container will attach to (through `routed`).
Note that you would launch the container with a command line such as the following.

```
lxc launch images:fedora/33/cloud fedora --profile default --profile routed_for_fedora
```
```yaml
config:
  user.user-data: |
    #cloud-config
    bootcmd:
      - nmcli connection modify "System eth0" ipv4.addresses 192.168.1.201/32
      - nmcli connection modify "System eth0" ipv4.gateway 169.254.0.1
      - nmcli connection modify "System eth0" ipv4.dns 8.8.8.8
      - nmcli connection modify "System eth0" ipv4.method manual
      - nmcli connection down "System eth0"
      - nmcli connection up "System eth0"
description: Default LXD profile
devices:
  eth0:
    ipv4.address: 192.168.1.201
    name: eth0
    nictype: routed
    parent: enp3s0
    type: nic
name: routed_for_fedora
```
Using the routed network in LXD
We create a container called `myrouted`, using the `default` profile and, on top of that, the `routed_192.168.1.200` profile.
```
$ lxc launch ubuntu:18.04 myrouted --profile default --profile routed_192.168.1.200
Creating myrouted
Starting myrouted
$ lxc list -c ns4t
+------+---------+----------------------+-----------+
| NAME | STATE   |         IPV4         |   TYPE    |
+------+---------+----------------------+-----------+
| myr..| RUNNING | 192.168.1.200 (eth0) | CONTAINER |
+------+---------+----------------------+-----------+
$
```
According to LXD, the container has been configured with the IP address that was packaged into the cloud-init configuration.
Get a shell into the container and `ping`:

- your host
- your router
- an Internet host such as www.google.com
All of the above should work. Finally, ping from the host to the IP address of the container. It should work as well.
Conclusion
You have configured `routed` in LXD so that one or more containers can get IP addresses from the LAN. Using a profile helps to automate the process. Still, if you want to set things up manually, see the references above for instructions.
53 comments
2 pings
lmao I'm pretty sure your blog is far more useful and comprehensive than the LXD documentation itself
This does not work with Ubuntu 20.04. It does work with Ubuntu 18.04, but is very picky or sensitive, meaning that I do get the required IP but cannot ping until I delete the container and launch it again with a different routed profile (with a different IP). I have not tested further than ping on 18.04.
By the way, I am using archlinux-based Manjaro Linux as the host and ubuntu 20.04 as the lxc container. It did work with ubuntu 18.04, even though a bit unpredictably.
After some digging, I found that in Ubuntu 20.04 something has changed. Here are the results of the relevant network commands after creating and launching the 20.04 and 18.04 containers:
Ubuntu 20.04:

```
$ ip route
default via 192.168.0.1 dev wlp2s0 proto static metric 600
10.10.10.0/24 dev lxdbr0 proto kernel scope link src 10.10.10.1 linkdown
192.168.0.0/24 dev wlp2s0 proto kernel scope link src 192.168.0.2 metric 600
192.168.0.51 dev vethe1bcc11c scope link
192.168.122.0/24 dev virbr0 proto kernel scope link src 192.168.122.1 linkdown
$ ip neigh show proxy
169.254.0.1 dev vethe1bcc11c proxy
192.168.0.51 dev wlp2s0 proxy
$ lxc exec focalheadless-1 ip r
```
Now on Ubuntu 18.04:

```
$ ip route
default via 192.168.0.1 dev wlp2s0 proto static metric 600
10.10.10.0/24 dev lxdbr0 proto kernel scope link src 10.10.10.1 linkdown
192.168.0.0/24 dev wlp2s0 proto kernel scope link src 192.168.0.2 metric 600
192.168.0.51 dev veth8b310893 scope link
192.168.122.0/24 dev virbr0 proto kernel scope link src 192.168.122.1 linkdown
$ ip neigh show proxy
169.254.0.1 dev veth8b310893 proxy
192.168.0.51 dev wlp2s0 proxy
$ lxc exec bionic-1 ip r
default via 169.254.0.1 dev eth0    ----> we receive a result here inside the container
```
Also you may notice a slight difference in vethe1bcc11c and veth8b310893. Though I am not sure about this.
For years I've been coming to your blog for guidance on this LXD exposing-to-LAN issue. Thank you so much
Author
Thank you for your kind words!
I am new to lxc and I am trying to set up a routed profile for an lxc container in order to make the container reachable from the host as well as from other servers on the local network, over a wifi interface.
I am following the steps in this page and although the profile created successfully, when I add it to the lxc container and restart the container, no IP address is ever assigned to the container.
Where would I need to look for logs to get an idea of where the setup is going wrong?
I am trying this on Ubuntu 20.04 using lxc version 4.0.2.
Thanks in advance,
PM
Author
Hi!
You mention the version 4.0.2. Do you use the `lxd` snap package and are you tracking the `4.0/stable` channel?

If instead you are using LXC (see https://blog.simos.info/comparison-between-lxc-and-lxd/), then this post does not apply there.

Can you show the full LXD profile for the routed network, plus the command you use to launch the LXD container?

You can view any container errors by running the command `lxc info --show-log myrouted`.

Hello Simos,
Sorry, yes. I am using lxd snap package 4.0/stable.
Profile for routed network:
Command to add the created routed network profile:
Command to launch the LXD container:
Command to show container log:
Log:
```
lxc test-server 20200809165847.736 WARN cgfsng - cgroups/cgfsng.c:mkdir_eexist_on_last:1152 - File exists - Failed to create directory "/sys/fs/cgroup/cpuset//lxc.monitor.test-server"
lxc test-server 20200809165847.738 WARN cgfsng - cgroups/cgfsng.c:mkdir_eexist_on_last:1152 - File exists - Failed to create directory "/sys/fs/cgroup/cpuset//lxc.payload.test-server"
lxc test-server 20200809165847.741 WARN cgfsng - cgroups/cgfsng.c:fchowmodat:1573 - No such file or directory - Failed to fchownat(17, memory.oom.group, 1000000000, 0, AT_EMPTY_PATH | AT_SYMLINK_NOFOLLOW )
```
Command showing container list:
Command showing profiles used:
There isn’t much shown in the log in terms of networking errors as far as I can see.
Thanks in advance,
PM
Author
The difference that I notice is that in your profile the `via:` has a different IP address from the one I give above. The 169.x.x.x address that I give in the post is a workaround for an issue in the configuration of the Linux distribution in the container. Perhaps because 192.168.1.1 is a valid IP address, it causes an issue.
When I first configured the routed profile, I wasn't clear on the meaning of the address `169.254.0.1`, so I was incorrectly guessing it maybe needed to be my network's gateway address. I have since corrected it and it is now set back to `via: 169.254.0.1`. I am learning about networking in general at the same time as I am learning all about lxc.

Author
I have updated this post with information about the issue with Ubuntu 20.04 LTS in the `routed` container. There is a need for an additional line in the LXD profile to make it work with Ubuntu 20.04 LTS: the addition of the line `on-link: true`.

@simos
In your routed config there is a line:
via: 169.254.0.1
But that IP is not highlighted, and nowhere does it say what it is or where it's from?
Thanks for any info
Brian Mullan
@simos
Never mind, I searched and found out that 169.254.0.1 gets assigned by lxd for the routed interface
Simos
I am on Ubuntu 20.04 system with my Wifi Interface on Host = wlp3s0
whose IP is 192.168.1.81
I tried your Profile but changed the Parent to wlp3s0
```yaml
config:
  user.network-config: |
    version: 2
    ethernets:
      eth0:
        addresses:
          - 192.168.1.200/32
        nameservers:
          addresses:
            - 8.8.8.8
          search: []
        routes:
          - to: 0.0.0.0/0
            via: 169.254.0.1
description: Default LXD profile
devices:
  eth0:
    ipv4.address: 192.168.1.200
    nictype: routed
    parent: wlp3s0
    type: nic
name: routed
used_by:
```
Created a test Ubuntu 20.04 container named "test":

```
$ lxc launch ubuntu:18.04 test --profile default --profile routed_200
$ lxc list
```
shows Container test has IP address 192.168.1.200
From HOST I can PING container’s IP 192.168.1.200
From Container TEST I can Ping my Host: 192.168.1.81
From Container TEST I can Ping my Router: 192.168.1.1
From my Container I CANNOT ping www.google.com or any other Internet site or IP.
Any ideas?
Thanks
Brian Mullan
Author
I also have tested this post on Ubuntu 18.04 with a WiFi interface. And it worked for me.
Have a look at `/etc/resolv.conf` in the container. It should mention the stub resolver IP address, 127.0.0.53.

Then, run `systemd-resolve --status` and verify there that the actual DNS server is 8.8.8.8.

Author
Apparently, the failure to work was related to the newer container images, based on Ubuntu 20.04 LTS. The host being 20.04 LTS does not appear to matter here (correct me if I am wrong).
I have updated the post per discussion on DLO (discuss.linuxcontainers.org): Ubuntu 20.04 (in a container) requires `on-link: true`.

@simos, @bmullan,
Having re-read this blog and having read bmullan's comments, I have now been able to get the container networked over wifi using the routed profile. I can list two things that I changed that made things work, but I don't know what it is in particular about each of them that was causing the issues beforehand.

Issue 1 – My container existed before creating the routed profile, so after creating the routed profile I was simply stopping the container (`lxc stop test-server`), adding the routed profile to the container (`lxc profile add test-server testRoutedProfile`) and thereafter starting the container (`lxc start test-server`). This was causing the container to never pick up the routed profile's configuration for some reason, so I then stopped and deleted the container altogether (`lxc stop test-server`, `lxc delete test-server`) and recreated it using the command included in this blog page (`lxc launch ubuntu:20.04 test-server --profile default --profile testRoutedProfile`). Using the launch command, the container then did get the IP address assigned but, exec-ing into the container, I could still not ping the host from the container. But it was progress at least…

Issue 2 – As bmullan pointed out, something was not quite right with his setup using ubuntu 20.04, so I followed the advice given and recreated the container using ubuntu 18.04 instead (`lxc launch ubuntu:18.04 test-server --profile default --profile testRoutedProfile`). I can confirm that with ubuntu 18.04 the networking is set up correctly and I can now ping the host, my gateway and the web (e.g. www.google.com) from inside the container. Any ideas why the ubuntu 18.04 container works with the routed profile but not ubuntu 20.04?
Cheers,
PM
Author
Regarding the first issue: The configuration in the cloud-init section only runs when the container starts for the first time. Therefore, it is expected not to run when you apply the profile to an existing container. There are some instructions on how to reset a container/VM so that cloud-init will run again on the next restart.
Regarding the second issue: I launched a routed 20.04 container on Ubuntu 18.04, and got the same issue. There is no default route, hence the problem. The DNS server is configured properly, so if the default route were to be set, DNS would work fine.
Still investigating this.
Author
I have updated this post with the fix for Ubuntu 20.04 LTS in the container. You would need to update your `routed` profile by adding the line shown in the post, then recreate the container.

If you already have a container and you do not want to recreate it, you can edit `/etc/netplan/50-cloud-init.yaml` and add the line there. Finally, restart the container.

Hello @simos,
Updated the profiles to include `on-link: true` as you suggested, and I can confirm that Ubuntu 20.04 containers using routed profiles are now networked as expected.

Cheers,
PM
Author
@Ponder Muse: Thanks for verifying!
@Ponder Muse does your host use wifi or eth ?
@simos Please advise how this should be configured for ipv6? I tried the following, but I suspect the routes for ipv6 are not correct:
Hi @bmullan, only just reading this now. My LXC containers are all networked over wifi.
This blog is really helpful! And very well explained!
I just had one technical question to improve my understanding of how things work: in the documentation (https://linuxcontainers.org/lxd/docs/master/instances#nic-routed) it is mentioned that:
“It requires the following sysctls to be set:
If using IPv4 addresses:
`net.ipv4.conf.<parent>.forwarding=1`
”
But in the blog this is never set and still everything is working perfectly fine.
Author
In Ubuntu and derivatives, forwarding is enabled by default.
I did not see complaints on this, which probably means that it is a default setting in most mainstream Linux distributions.
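A quick way to check this on a given host is to read the forwarding toggles from `/proc`. A sketch; for `routed` the per-interface value under `net.ipv4.conf.<parent>.forwarding` is the one that matters, while the global toggle is shown here:

```shell
# Read the global IPv4 forwarding toggle; prints 1 if enabled, 0 if not.
cat /proc/sys/net/ipv4/ip_forward
# The per-interface value for a given parent interface, e.g.:
# cat /proc/sys/net/ipv4/conf/enp6s0/forwarding
```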
Hello, first of all, great blog; I am learning how to use LXD from your posts.

I am trying to follow your approach. So far it works on my rpi 4 using the wired interface, but it is not working at all over WiFi. I have modified the profile as you pointed out, using on-link, but it does not make any difference; in my case I can't even ping 8.8.8.8 from inside the container. My profile is just like this:

I also checked the post on the linux containers forum where they ask if there is any firewall running; there isn't in my case. Any idea what could be wrong?
I tried the routed network and it worked, but I was not able to reach the Internet from the container (e.g. ping google.com), although other hosts are reachable from the container.

My host machine is not a VM; it is a bare-metal box running Ubuntu 20.04.
My profile config
Author
This is an issue with the name resolutions (DNS configuration) in the container.
Can you verify that you have tried the above with the container images `ubuntu:18.04` or `ubuntu:20.04`?

There are more container images in the `images:` repository, including Ubuntu images. If you have selected a Debian or Fedora container image, you would need to look above in this post for the separate LXD profiles for them.

You can `ping` the Internet (the IP address `1.1.1.1`) but not hostnames (DNS issues). The `cloud-config` section above has a setting to enable DNS for you, using one of the public DNS servers (8.8.8.8 is provided by Google).

First, can you run the following command in the container? It will try to make a name resolution using specifically the Google DNS server. If that works, then DNS resolutions are not blocked somehow. You should get the IP address of www.google.com in the output, both IPv4 and IPv6.
Second, if the above works, verify that the cloud-init information managed to get parsed correctly, and can be found in the container.
Third, verify that the network manager (in the case of Ubuntu it is NetworkManager) is aware of the DNS server. The last lines should mention the public Google DNS server, `8.8.8.8`.

I am using an ubuntu:20.04 container. Let me try the other steps explained above, tx Simos once again.
Tried these steps
Author
You get the following, which does not make sense. But earlier, you could `ping 1.1.1.1`.

Can you try to ping 8.8.8.8?

If you can ping `8.8.8.8` but cannot make name resolutions, then there is some weird filtering between you and Google's public DNS server. You can try with another one, like `1.1.1.1`. You need to be able to make name resolutions with one of those name servers. If your ISP is being weird and only allows name resolutions with their own name server, then find out which one it is and use it instead.

```
root@u1:~# ping 1.1.1.1
PING 1.1.1.1 (1.1.1.1) 56(84) bytes of data.
64 bytes from 1.1.1.1: icmp_seq=1 ttl=56 time=880 ms
64 bytes from 1.1.1.1: icmp_seq=2 ttl=56 time=85.3 ms
64 bytes from 1.1.1.1: icmp_seq=3 ttl=56 time=28.6 ms
root@u1:~# ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_seq=1 ttl=116 time=44.6 ms
64 bytes from 8.8.8.8: icmp_seq=2 ttl=116 time=66.7 ms
64 bytes from 8.8.8.8: icmp_seq=3 ttl=116 time=88.7 ms
```
Author
Try then to run `host` with `1.1.1.1` as well, as in `host www.google.com 1.1.1.1`.

If you still get an error that the name server does not answer name resolutions, then you can use `tshark` or `tcpdump` to figure out whether you get any ICMP replies at all. You can run `tshark` or `tcpdump` in the container and also on the host.

Also, on the host, try `host www.google.com 8.8.8.8`. If you still get an error that no servers could be reached, then your ISP is blocking name resolutions.

It did not work for me. The container can't be started at all with such a profile:
```
Name: net-test-01
Location: none
Remote: unix://
Architecture: x86_64
Created: 2021/04/03 21:41 UTC
Status: Stopped
Type: container
Profiles: default, net-01-ramesses
Log:

lxc net-test-01 20210403232713.311 WARN cgfsng - cgroups/cgfsng.c:mkdir_eexist_on_last:1129 - File exists - Failed to create directory "/sys/fs/cgroup/cpuset//lxc.monitor.net-test-01"
lxc net-test-01 20210403232713.312 WARN cgfsng - cgroups/cgfsng.c:mkdir_eexist_on_last:1129 - File exists - Failed to create directory "/sys/fs/cgroup/cpuset//lxc.payload.net-test-01"
lxc net-test-01 20210403232713.317 ERROR network - network.c:lxc_setup_l2proxy:2924 - File exists - Failed to add ipv4 dest "192.168.1.200" for network device "lo"
lxc net-test-01 20210403232713.317 ERROR network - network.c:lxc_create_network_priv:3064 - File exists - Failed to setup l2proxy
lxc net-test-01 20210403232713.317 ERROR start - start.c:lxc_spawn:1786 - Failed to create the network
lxc net-test-01 20210403232713.317 ERROR lxccontainer - lxccontainer.c:wait_on_daemonized_start:860 - Received container state "ABORTING" instead of "RUNNING"
lxc net-test-01 20210403232713.318 ERROR start - start.c:__lxc_start:1999 - Failed to spawn container "net-test-01"
lxc net-test-01 20210403232713.318 WARN start - start.c:lxc_abort:1013 - No such process - Failed to send SIGKILL via pidfd 31 for process 3348791
lxc 20210403232713.795 WARN commands - commands.c:lxc_cmd_rsp_recv:126 - Connection reset by peer - Failed to receive response for command "get_state"
```
Author
I got this error as well on one of my computers. It talks about the loopback interface (`lo`) and giving it the IP address `192.168.1.200`. This is weird.

In my case, I had a bridged interface on the host and was trying to use `routed` on this bridged interface. You need to check your host's network configuration.

I tried Debian and Fedora. They also didn't work.
Unlike Ubuntu 20.04, Debian 10 and Fedora 33 at least booted, but got "network unreachable".

Can you help me with this problem? Your ipvlan tutorial works fine, but I have this problem with the routed network 🙁
Author
It talks here about an openvswitch bridge `wlan0`. Do you use openvswitch at all? (If not, then it's a bug in LXD.)

Truth be told, this is my first time hearing about openvswitch. If I use it, I don't know about it. At least:
Author
First of all, I see that you are running on the `aarch64` architecture and that the host's main interface is `wlan0`. Being `wlan0` (and not some name like `wlx00112233..`), it means that you do not have Ubuntu. Is that the case?

I mention Ubuntu as a way to deduce whether you have installed the snap package of LXD or not.

Also, the IP address you try to set, `192.168.100.200`, implies that `wlan0` has an IP address of the sort `192.168.100.x`. Is that the case? If not, it would complain, but the error would not talk about openvswitch.

For some reason I can't see my comment, so I'll post it again. If this turns out to be a duplicate, you can delete it.
Comment:
Oh, sorry, the main interface is actually wlxXXXXXXXXXXXX (I copied the logs to a shared file, where I accidentally replaced it with wlan0).
The profile I provided above actually contains the wlx… interface, not wlan0.
However, with regard to IP, everything is correct: in my local network, devices have addresses starting with 192.168.100.xxx
DHCP on my router works in the range 192.168.100.20 – 192.168.100.199 .
However, if it matters, my system also has a wlan0 interface, but it is disabled (state DOWN). wlan0 is a WiFi module built into the device.
The wlx… interface on my system is an external USB device.
The host device itself is a Raspberry Pi 4 Model B (8 GB).
The operating system installed on it is Ubuntu 21.04 Server (64 bit).
Author
Indeed there was a comment that was held back by the filter of WordPress.
Since you are running the snap package of `lxd`, those openvswitch programs come from within the LXD package.

That is, even if you have not installed any openvswitch programs yourself, the LXD package has them available for the LXD server, in case you happen to make an LXD configuration with openvswitch.
When you run `snap info lxd`, you will get the following. These are the publicly exposed commands for LXD. As you can see, these are LXD commands.

```
...
commands:
  - lxd.benchmark
  - lxd.buginfo
  - lxd.check-kernel
  - lxd.lxc
  - lxd.lxc-to-lxd
  - lxd
  - lxd.migrate
services:
  lxd.activate: oneshot, enabled, inactive
  lxd.daemon: simple, enabled, active
...
```
The same executables can be found in `/snap/bin/`.

But if you go deeper into the `bin` directory of LXD, you will see many more programs, in `/snap/lxd/current/bin/`.

Anything you see there can be used by the LXD programs.

How can we get a view of what the LXD programs can see? With `snap run`. Use the following; you get a shell into the innards of LXD, and can look around.

```
$ snap run --shell lxd
bash-4.4$ ovsdb-client list-dbs
ovsdb-client: failed to connect to "unix:/var/run/openvswitch/db.sock" (No such file or directory)
```
This means that there is no configuration for openvswitch. You have not been using openvswitch, and your current task does not require openvswitch.

Therefore, the message about openvswitch missing is weird, and most likely some bug.

You can report this issue at https://github.com/lxc/lxd/issues

Mention that you are trying to use `routed` networking, and that it fails with an error message about `openvswitch`.

Provide the exact command outputs, and remove any privacy-related information.
Thank you for such a detailed explanation! Everything (about snap/LXD) is exactly as you said.
Regarding the problem itself: it turned out to be a Raspberry Pi kernel config problem.
https://github.com/lxc/lxd/issues/8735#issuecomment-831968950
Author
Thanks for writing the bug report and getting this solved.
Hi, thank you for the wonderful guides. I am having a hard time figuring out what option to use when configuring the networking for my lxd container.
My host machine has 2 physical network adapters: an ethernet and separate wireless one. My isp also provides two separate public IP addresses. My host is already set up in such a way that if I connect with the eth connection I get one IP, and if I connect with the wifi I get the other one.
I have an lxd container already set up and working perfectly with macvlan using your guide and the ethernet adapter.
I want a second container that uses the wifi adapter to get the second ip address. I tried using the bridge networking but it seems that I can’t because “Device does not allow enslaving to a bridge.”
Is routed the way to go? What about the “physical” nic type?
Thanks!
Author
Thanks!
From what I understand, your Internet connection is not one of ADSL/VDSL/FTTH which would require a router but something else. And that something else is likely your ISP providing you with an Ethernet port at your location, and having a WiFi near you with some authentication method. Is that the case? I am asking this because in most cases your ISP would give you a router with NAT networking (local network), and all your home devices would get a private IP address of the sort 192.168.x.y or 10.x.y.z.
When you use `macvlan`, the upstream router sees two (or more) MAC addresses. If you configured the `macvlan` container to automatically get an IP address, then the network configuration for the container came from the ISP.

The importance of using bridged, macvlan, ipvlan, or routed is that you get access to additional IP addresses from the network. If you have an appropriate router (such as those that can run OpenWRT), then you can set it up to act as a wireless client router: the router itself would be a client to the ISP's WiFi network and would then set up its own WiFi network for any clients.
Having said that, you can deal with the WiFi connection separately from the wired Internet setup.
You can use `nic=physical` so that the WiFi adapter will vanish from the host and appear in a specific container of your choosing. Only one container can use the WiFi adapter in such a setup.

See more at https://linuxcontainers.org/lxd/docs/master/instances#nic-physical
See an example of `nic=physical` at https://blog.simos.info/using-the-lxd-kali-container-image/

The reason why you cannot use `bridged` or `macvlan` on a WiFi interface is that WPA/WPA2 (i.e. any WiFi security) enforces a restriction that only a single MAC address may appear on such a connection. But with macvlan or bridged, both the MAC address of the host and that of the container would appear.

Thank you for the in-depth reply, Simos! My apologies for not seeing it earlier.
My setup is a bit weird, so I’ll try to make it more clear for those who might stumble upon this post later. My goal was to use the 2 public IP addresses provided by my ISP. I wanted two containers, each using a separate IP address.
I have a fibre-to-the-home connections. The optical fibre cable comes into my house and plugs into the Optical Network Terminal (ONT). This device has 4 ethernet ports. Usually the ISP provided router is plugged into the first port, and then I had my own router plugged into the ISP router, and then all my devices connect to that personal router via ethernet or wifi.
To get the second public IP address, I had to get a switch and plug this switch into the ONT. Then I plug the ISP router and my own router into two ports of the switch. This gives my personal router and the ISP router two separate IP addresses.
Ideally my computer would have two network cards and I’d connect each card to one router to get the containers on my machine access to the two separate IP addresses. Unfortunately I couldn’t do this because my computer is in a different room and I only have one cable leading to it.
I figured an alternate solution would be to use my motherboard’s onboard wifi to connect to the ISP router and the motherboard’s ethernet port to connect to my own router.
That's what I was having trouble with: I think in the end I set up my second container with the bridged mode for the wifi. I think that I could have also used the physical type in this particular configuration.
Anyway, thank you again for the wonderful posts!
I have set up things on a Ubuntu 20.04 host per the above, and can ping the LXD container by IP but not by name. Likewise the containers can ping the host by IP but not by name. Ping to external sites by name works for both containers and the host. Is this expected behaviour?
What if my Ethernet device is named “eth0” (not enp6s0, for example)?
What should I change in my LXD profile?
I tried to change only “parent: enp6s0” to “parent: eth0” (in the suggested profile for Debian from the post) but something strange has happened: after the first ping test of the container from the host, the container lost its IPv4 and I was not able to ping it again.
My host has “Armbian 21.08.5 Focal” and my container has “Ubuntu 20.04”.
Impressive tutorials.
I have followed this tutorial in a ubuntu 18.04 server VM (in virtualbox windows), I could create debian and ubuntu containers easily using the profiles in this tutorial.
I recently have switched to kubuntu on my laptop. I have installed lxc 4.22.
I couldn’t get dns working in debian container. I can ping ip address but can’t ping domain names. So can’t update or install anything.
I'm connected using wireless (wlp2s0) with an adsl router (192.168.1.1), so 192.168.1.xxx addresses are available; I could use 192.168.1.200 to 210 while using it in the VM.
Thanks a lot
This and your similar post on LXD macvlan are proving to be amazingly helpful to me. I need to assign static Internet IPs to a handful of containers. For context, my Ubuntu server with LXD is running on bare metal in a datacenter. It seems like both macvlan and routed could meet my needs; however, I'd like to go for the simplest to manage and troubleshoot. There will be no wifi situation, so virtual NICs with different MACs are not a concern. I imagine macvlan and routed have similar overheads and latency, but if you think there is a difference there, please let me know.
[…] Using routed. See https://blog.simos.info/how-to-get-lxd-containers-get-ip-from-the-lan-with-routed-network/ […]