How to make your LXD containers get IP addresses from your LAN using a bridge

Background: LXD is a hypervisor that manages machine containers on Linux distributions. You install LXD on your Linux distribution, and then you can launch machine containers running all sorts of (other) Linux distributions.

In this post, we are going to see how to use a bridge so that our containers get an IP address from the local network. Specifically, we are going to do this using NetworkManager. If you have several public IP addresses, you can use this method (or the macvlan method from the other post) to expose your LXD containers directly to the Internet.

Creating the bridge with NetworkManager

See this post, How to Configure and Use Network Bridge in Ubuntu Linux (new link, thanks Samuel), on how to create the bridge with NetworkManager. In summary, you:

  1. Use NetworkManager to Add a New Connection, a Bridge.
  2. When configuring the bridge, specify the real network connection (the device, like eth0 or enp3s12) that will be the slave of the bridge. You can verify which device carries your default route by running ip route list 0.0.0.0/0.
  3. Then, you can remove the old network connection and keep just the bridge with its slave. The bridge device (bridge0) will now be the device that gets your LAN IP address.
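The three steps above can be sketched with nmcli on the command line. This is only a sketch: the device name enp3s12 and the old connection name "Wired connection 1" are assumptions, so substitute your own (check with nmcli connection show).

```shell
# Create the bridge, then enslave the physical NIC to it (names are examples).
nmcli connection add type bridge ifname bridge0 con-name bridge0
nmcli connection add type ethernet ifname enp3s12 master bridge0 con-name bridge0-slave
nmcli connection down "Wired connection 1"   # deactivate the old profile; its name varies
nmcli connection up bridge0                   # the bridge now requests the DHCP lease
```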

At this point you should have network connectivity again. Here is the new device, bridge0.

$ ifconfig bridge0
bridge0 Link encap:Ethernet HWaddr 00:e0:4b:e0:a8:c2 
 inet addr:192.168.1.64 Bcast:192.168.1.255 Mask:255.255.255.0
 inet6 addr: fe80::d3ca:7a11:f34:fc76/64 Scope:Link
 UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
 RX packets:9143 errors:0 dropped:0 overruns:0 frame:0
 TX packets:7711 errors:0 dropped:0 overruns:0 carrier:0
 collisions:0 txqueuelen:1000 
 RX bytes:7982653 (7.9 MB) TX bytes:1056263 (1.0 MB)
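If your system no longer ships ifconfig, the same information is available from the iproute2 tools:

```shell
ip addr show bridge0   # addresses assigned to the bridge
bridge link show       # which ports are enslaved to the bridge
```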

Creating a new profile in LXD for bridge networking

In LXD, there is a default profile, and you can create additional profiles that are either independent of the default (as in the macvlan post) or chained with the default profile. Here we use the latter.

First, get a list of the existing profiles. There is a single profile, the default one from LXD, and it is used by 11 LXD containers. This means that this LXD installation has 11 containers.

$ lxc profile list
+---------------+---------+
| NAME          | USED BY |
+---------------+---------+
| default       | 11      |
+---------------+---------+

Then, create a new and empty LXD profile, called bridgeprofile.

$ lxc profile create bridgeprofile

Here is the fragment to add to the new profile. eth0 is the interface name inside the container, so for Ubuntu containers it does not change. bridge0 is the interface that was created by NetworkManager; if you created the bridge in some other way, put the appropriate interface name here. The EOF at the end is just the heredoc terminator for when we copy and paste the fragment into the profile.

description: Bridged networking LXD profile
devices:
  eth0:
    name: eth0
    nictype: bridged
    parent: bridge0
    type: nic
EOF

Paste the fragment into the new profile.

$ cat <<EOF | lxc profile edit bridgeprofile
(paste here the full fragment from earlier, ending with the EOF line)

The end result should look like the following.

$ lxc profile show bridgeprofile
config: {}
description: Bridged networking LXD profile
devices:
  eth0:
    name: eth0
    nictype: bridged
    parent: bridge0
    type: nic
name: bridgeprofile
used_by:

Now, list the profiles again to verify the newly created profile, bridgeprofile. It is there, and it is not yet used by any LXD (lex-dee) container.

$ lxc profile list
+---------------+---------+
| NAME          | USED BY |
+---------------+---------+
| bridgeprofile | 0       |
+---------------+---------+
| default       | 11      |
+---------------+---------+

If it got messed up, delete the profile and start over again. Here is the command.

$ lxc profile delete profile_name_to_delete

Creating containers with the bridge profile

Now we are ready to create a new container that will use the bridge. We specify the default profile first, then the new profile, because the new profile must override the network settings of the default profile.

$ lxc launch -p default -p bridgeprofile ubuntu:x mybridge
Creating mybridge
Starting mybridge

Here is the result.

$ lxc list
+----------+---------+---------------------+------+
| NAME     | STATE   | IPV4                | IPV6 |
+----------+---------+---------------------+------+
| mybridge | RUNNING | 192.168.1.72 (eth0) |      |
+----------+---------+---------------------+------+
| ...      | ...     | ...                 | ...  |
+----------+---------+---------------------+------+

The container mybridge is accessible from the local network.
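To verify, from another computer on the LAN you could try to reach the container. The address below comes from the lxc list output above; yours will differ.

```shell
ping -c 3 192.168.1.72   # the container's LAN address from lxc list
```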

Changing existing containers to use the bridge profile

Suppose we have an existing container that was created with the default profile, and got the LXD NAT network. Can we switch it to use the bridge profile?

Here is the existing container.

$ lxc launch ubuntu:x mycontainer
Creating mycontainer
Starting mycontainer

Let’s assign mycontainer to use the new profile chain, “default,bridgeprofile”.

$ lxc profile assign mycontainer default,bridgeprofile

Now we just need to restart the networking in the container.

$ lxc exec mycontainer -- systemctl restart networking.service

This can take quite some time, 10 to 20 seconds, so be patient. Obviously, we could simply restart the container instead. However, since it can take a while to get the IP address, restarting just the networking lets us know exactly when the new IP address has arrived.
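Note that newer Ubuntu containers (17.10 and later) manage networking with netplan and systemd-networkd, so networking.service does not exist in them. On those, one of the following should have the same effect (a sketch; mycontainer is the container from this post):

```shell
lxc exec mycontainer -- netplan apply
# or, restart the network renderer directly:
lxc exec mycontainer -- systemctl restart systemd-networkd
```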

Let’s see how it looks!

$ lxc list ^mycontainer$
+----------------+-------------+---------------------+------+
| NAME           | STATE       | IPV4                | IPV6 |
+----------------+-------------+---------------------+------+
| mycontainer    | RUNNING     | 192.168.1.76 (eth0) |      |
+----------------+-------------+---------------------+------+

It worked! The container got a LAN IP address! In the lxc list command, we used the filter ^mycontainer$, which shows only the container with the exact name mycontainer. By default, lxc list does a substring match when it compares against container names. The ^ and $ characters come from regular expressions, where ^ anchors the start of the string and $ the end. Therefore, ^mycontainer$ matches the exact string mycontainer!

Changing bridged containers to use the LXD NAT

Let’s switch back from using the bridge, to using the LXD NAT network. We stop the container, then assign just the default profile and finally start the container.

$ lxc stop mycontainer
$ lxc profile assign mycontainer default
Profiles default applied to mycontainer
$ lxc start mycontainer

Let’s have a look at it,

$ lxc list ^mycontainer$
+-------------+---------+----------------------+--------------------------------+
| NAME        | STATE   | IPV4                 | IPV6                           |
+-------------+---------+----------------------+--------------------------------+
| mycontainer | RUNNING | 10.52.252.101 (eth0) | fd42:cba6:...:fe10:3f14 (eth0) |
+-------------+---------+----------------------+--------------------------------+

NOTE: I tried to assign the default profile while the container was running in bridged mode. It made a mess of the networking, and the container could no longer get an IPv4 address (it could still get an IPv6 address, though). Therefore, as a rule of thumb, stop a container before assigning a different profile.

NOTE #2: If your container has a LAN IP address, it is important to stop the container cleanly so that your router’s DHCP server is notified to release the DHCP lease. Most routers remember the MAC address of each new computer, and every new container gets a new random MAC address. Therefore, do not delete or kill containers that hold a LAN IP address; stop them first. Your router’s DHCP lease table is only so big.
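One way to keep the lease table tidy is to pin a fixed MAC address on the container's NIC, so a recreated container reuses the same lease. A sketch for recent LXD versions that provide lxc config device override; the MAC below is a made-up example in LXD's 00:16:3e range:

```shell
lxc stop mycontainer
# Override the hwaddr of the profile-inherited eth0 device (example MAC).
lxc config device override mycontainer eth0 hwaddr=00:16:3e:aa:bb:cc
lxc start mycontainer
```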

Conclusion

In this post we saw how to selectively get our containers to receive a LAN IP address. This requires setting the host's network interface to be the slave of a bridge. It is a bit more invasive than using macvlan, but it offers the ability for the containers and the host to communicate with each other over the LAN.

Permanent link to this article: https://blog.simos.info/how-to-make-your-lxd-containers-get-ip-addresses-from-your-lan-using-a-bridge/

24 comments

5 pings

    • Jair Bolivar on March 10, 2018 at 13:54
    • Reply

    Just noticed that for the LXD ver 2.21 the command to create the new bridge profile is:

    root@lxc1:~# lxc profile list
    +---------+---------+
    | NAME    | USED BY |
    +---------+---------+
    | default | 0       |
    +---------+---------+
    root@lxc1:~# lxc profile create bridgeprofile
    Profile bridgeprofile created

    root@lxc1:~# lxc profile list
    +---------------+---------+
    | NAME          | USED BY |
    +---------------+---------+
    | bridgeprofile | 0       |
    +---------------+---------+
    | default       | 0       |
    +---------------+---------+

    1. Thanks for pointing this out. It is an omission in this post: it should first show any existing profiles, then create the new one, and finally list all profiles again (to show the newly created one).

      I’ll update the post.

    • Hristian on September 10, 2018 at 16:21
    • Reply

    Hello. There is no lxc create. Ubuntu 18.04.

    1. Thanks for pointing this out.

      Indeed, there is no lxc create profile subcommand because the subcommand is lxc profile create.

      I have corrected the post on this.

    • nva on September 20, 2018 at 22:36
    • Reply

    Hey thank you for useful posts. In your opinion which method is best to achieve dedicated LAN IP address, macvlan vs bridge? Was there anything changed since LXD 3.0?

    My ubuntu 18.04 host is wired to a switch and in that switch this host has its own VLAN with another computer. Given that physical port doesn’t change, is it safe to assume that all my containers will be sitting on that VLAN without being accessible by devices from other VLANs?

    Do you recommend additional security practices for the host and containers, given that they are now on the LAN/VLAN and other computers can SSH into them? Would changing the default passwords for the root and ubuntu users inside each of the containers be enough?

    1. Personally, I would prefer macvlan because it has the feature of isolating the host from the containers.

      Is there a performance issue between the two? There might be. Unless measured, I would say that both are OK.

      macvlan over vlan should work. You can easily verify that the container is indeed isolated in the vlan.

      On SSH, it is important not to enable password authentication. The default with the Ubuntu images is public-key authentication, which is good. By doing so, you avoid setting passwords for the default root and ubuntu accounts. By default, the root and ubuntu accounts are locked (they have no password).

  1. I’d love to see this guide include networkd configuration via Netplan. I have a hate/hate relationship with netplan right now, although that’s mostly due to a lack of clarity in Netplan’s docs, and limited examples online. In principle Netplan’s great, and I’m all over yaml standardization.

    FWIW, this works for me, but I’m still not super happy with it:
    network:
      version: 2
      renderer: networkd
      ethernets:
        eno1:
          dhcp4: false
          dhcp6: false
      bridges:
        bridge0:
          dhcp4: true
          dhcp6: false
          interfaces:
            - eno1
          parameters:
            stp: false

    In this config I disable STP on the bridge (it’s on by default) so that my containers get an IP quickly; STP’s cleverness delays DHCP by 30+ seconds and wreaks havoc on cloud-init scripts that depend on a link.
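For what it's worth, the equivalent toggle on a NetworkManager-created bridge would be the following (a sketch; bridge0 is the connection name used earlier in this post):

```shell
nmcli connection modify bridge0 bridge.stp no
nmcli connection up bridge0   # re-activate the connection to apply the change
```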

  2. Thank you for taking the time to prepare this, Simos. It did not work for me on Ubuntu 18.10; not sure if it’s just me. I have been trying for days to get this working, with no joy! Using macvlan, I get an IP from my router, but the container is not accessible from the host. Using a bridge, the host can access the container, but it is not accessible over the LAN.

    Configured a bridge on my host as per your post:

    ifconfig

    br0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
    inet 192.168.1.64 netmask 255.255.255.0 broadcast 192.168.1.255
    inet6 fe80::ecf2:6e7e:218e:4809 prefixlen 64 scopeid 0x20
    ether 70:85:c2:72:83:25 txqueuelen 1000 (Ethernet)
    RX packets 2724 bytes 799344 (799.3 KB)
    RX errors 0 dropped 0 overruns 0 frame 0
    TX packets 1302 bytes 183962 (183.9 KB)
    TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

    lxc profile show bridgeprofile

    config: {}
    description: Bridged networking LXD profile
    devices:
      eth0:
        name: eth0
        nictype: bridged
        parent: br0
        type: nic
      root:
        path: /
        pool: default
        type: disk
    name: bridgeprofile

    lxc list shows no IP and container has no network access.
    In container:
    ifconfig
    eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
    inet6 fe80::216:3eff:fe8d:9837 prefixlen 64 scopeid 0x20
    ether 00:16:3e:8d:98:37 txqueuelen 1000 (Ethernet)
    RX packets 11 bytes 580 (580.0 B)
    RX errors 0 dropped 2 overruns 0 frame 0
    TX packets 13 bytes 2316 (2.3 KB)
    TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

    If I change the nictype to macvlan, then I get network access and an IP.

    • Andrew Berry on September 6, 2019 at 02:19
    • Reply

    I had exactly the same problem. It turns out that iptables was dropping the packets, not allowing them to be forwarded. What was odd was that some ARP packets and IPv6 packets passed through fine!

    https://superuser.com/questions/1211852/why-linux-bridge-doesnt-work

    1. Thanks for reporting back. Which distribution are you running? I do not see such rules in a default setup in Ubuntu. Do you have ufw configured?

        • Andrew Berry on September 7, 2019 at 11:43
        • Reply

        I’m on Ubuntu 18.04, running without ufw. Also, this system has been upgraded over time over the past decade, so it’s not running NetworkManager or netplan – just configs in /etc/network. Perhaps NetworkManager creates the rules automatically when you create the bridge.

    • Alfred Certain on April 16, 2020 at 01:30
    • Reply

    How can one use static IP addresses on Ubuntu 18.04 with several LXD containers? Why is it so difficult to find good LXD networking documentation?

    1. ¡Hola!

      When you use an unmanaged network (such as this case using a bridge, or with macvlan), then the container is exposed to the LAN, and it is up to the LAN (your LAN’s router) to respond with a static DHCP lease.

      If you use a managed network (the default private bridge, i.e. lxdbr0), then you can set the field ipv4.address for the container so that it receives a static DHCP lease from LXD.

      If the container should really get a static IP without relying on any DHCP server, then you can use cloud-init in a LXD profile to set the networking configuration. In that case, use the /cloud container images from the images: repository, or any container image from the ubuntu: repository.

      Finally, you can hand-edit the container to get a static IP address.
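The managed-network case above can be sketched as follows. This is a sketch for recent LXD versions that provide lxc config device override; the container name and the 10.52.252.x address are examples matching this post's lxdbr0 subnet:

```shell
# Reserve a static DHCP lease on lxdbr0 for the container's eth0 device.
lxc config device override mycontainer eth0 ipv4.address=10.52.252.10
lxc restart mycontainer   # restart so the container picks up the reserved lease
```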

    • Raymund NIckel on April 19, 2020 at 13:53
    • Reply

    Hi, great guide! It works so far for me, but when I create a container, it gets an IPv6 address, not an IPv4 address.
    I ran lxd init to set up lxdbr0 with ipv4 ‘auto’ and ipv6 ‘none’, but I guess setting up lxdbr0 should not be related to this.
    I have the feeling that something was missing when setting up the bridge. I could not use the link you provided because it’s broken (“How to configure a Linux bridge with Network Manager on Ubuntu”).
    Do you have an idea why I get IPv6 but not IPv4 addresses?

    1. Hi Raymund!

      Thanks. Indeed, the settings for lxdbr0 should not affect this. When you use a bridge (one that is attached to an actual network interface, compared to lxdbr0, which is a private bridge not attached to a network interface), the container tries to get its network configuration from the LAN. There should be some DHCP server to provide that configuration; otherwise, you would need to provide the network configuration yourself.

      The IPv6 address is likely some link-local address and it is not routable?

      Check that your LAN has a DHCP server that will serve the container.
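A quick way to check whether DHCP traffic actually reaches the LAN is to watch the bridge from the host while the container boots (a sketch; bridge0 is the bridge from this post):

```shell
# Show DHCP requests/replies crossing the bridge (ports 67/68, no name resolution).
tcpdump -ni bridge0 port 67 or port 68
```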

        • Raymund Nickel on May 4, 2020 at 06:45
        • Reply

        Hi Simos,

        thank you for your reply! I switched from creating the network bridge with NetworkManager to defining it with netplan, and since then it works. I do not know why or what I might have done wrong, but at least everything is working now. Thank you!

        Raymund

    • chris on September 13, 2020 at 12:31
    • Reply

    I liked your article about macvlan; it worked well, but I can’t use it because Pi-hole seems to need communication between the host and the container.
    So I would like to try this method, but I can’t create a bridge0 device because the article you linked is down, and every other article I find does it differently.
    Could you write one simple doc on how to create bridge0?
    I love your articles.
    Thanks in advance,
    chris

    1. The distro that you are running on the host should dictate how to create the network bridge.
      Here is a list of network software that most Linux distros use, https://wiki.archlinux.org/index.php/Network_bridge
      For example, if you are running Ubuntu 20.04 desktop, you would be using NetworkManager.

  3. For anyone searching for a link to set up a bridge with the NetworkManager UI:

    https://www.ubuntupit.com/how-to-configure-and-use-network-bridge-in-ubuntu-linux/

    Don’t miss the last step, activating the bridge with nmcli… It took me two days to make it work…

    1. Thanks Samuel!
      I updated the post with the new working link that explains how to create a network bridge in Ubuntu.

    • Neil on March 26, 2021 at 19:28
    • Reply

    Hi Simos

    tried this with Ubuntu 20.04, lxd version 4.0.5

    this command doesn’t work

    $ lxc exec mycontainer -- systemctl restart networking.service
    

    host ip is 172.17.1.24
    created a br0 in netplan

    This is the network config written by ‘subiquity’

    network:
      version: 2
      bonds:
        bond0:
          interfaces:
            - eno1
            - eno2
          parameters:
            mode: active-backup
            primary: eno1
      ethernets:
        eno1: {}
        eno2: {}
      bridges:
        br0:
          addresses:
           - 172.17.1.24/16
          dhcp4: false
          gateway4: 172.17.1.1
          nameservers:
            addresses:
             - 8.8.8.8
             - 172.17.1.104
             - 172.17.1.106
            search:
             - xxxxx.com
          interfaces:
           - bond0
          parameters:
           stp: true
           forward-delay: 4
          dhcp4: false
          dhcp6: false
    

    I am able to set a static IP if I use lxdbr0, but I can’t access the container from other hosts on the LAN.

    root@test24:/etc/netplan# lxc profile show br0profile
    config: {}
    description: Bridged networking LXD profile
    devices:
      eth0:
        name: eth0
        nictype: bridged
        parent: br0
        type: nic
    name: br0profile
    used_by:
    - /1.0/instances/testserver1
    - /1.0/instances/testserver2
    
    root@test24:/etc/netplan# lxc config show testserver2
    architecture: x86_64
    config:
      image.architecture: amd64
      image.description: ubuntu 20.04 LTS amd64 (release) (20210325)
      image.label: release
      image.os: ubuntu
      image.release: focal
      image.serial: "20210325"
      image.type: squashfs
      image.version: "20.04"
      volatile.base_image: 46701fa2d99c72583f858c50a25f9f965f06a266b997be7a57a8e66c72b5175b
      volatile.eth0.host_name: veth7ae01572
      volatile.eth0.hwaddr: 00:16:3e:11:d5:12
      volatile.idmap.base: "0"
      volatile.idmap.current: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
      volatile.idmap.next: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
      volatile.last_state.idmap: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
      volatile.last_state.power: RUNNING
      volatile.uuid: d0177ed5-66cc-461f-8d2b-ca49df95009b
    devices:
      eth0:
        ipv4.address: 172.17.5.22
        name: eth0
        nictype: bridged
        parent: br0
        type: nic
    ephemeral: false
    profiles:
    - default
    - br0profile
    stateful: false
    description: ""
    

    If I use br0 then the container does not get any IP or route

    root@testserver2:~# ip addr
    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
        inet 127.0.0.1/8 scope host lo
           valid_lft forever preferred_lft forever
        inet6 ::1/128 scope host
           valid_lft forever preferred_lft forever
    45: eth0@if46: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
        link/ether 00:16:3e:11:d5:12 brd ff:ff:ff:ff:ff:ff link-netnsid 0
        inet6 fe80::216:3eff:fe11:d512/64 scope link
           valid_lft forever preferred_lft forever
    root@testserver2:~# ip r
    

    Any suggestions?

    • Neil on March 26, 2021 at 19:38
    • Reply

    if I use lxdbr0 I can’t reach the gateway

    root@test24:/etc/netplan# lxc shell testserver1
    root@testserver1:~# ip addr
    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
        inet 127.0.0.1/8 scope host lo
           valid_lft forever preferred_lft forever
        inet6 ::1/128 scope host
           valid_lft forever preferred_lft forever
    37: eth0@if38: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
        link/ether 00:16:3e:67:5a:76 brd ff:ff:ff:ff:ff:ff link-netnsid 0
        inet 172.17.5.21/16 brd 172.17.255.255 scope global dynamic eth0
           valid_lft 3407sec preferred_lft 3407sec
        inet6 fe80::216:3eff:fe67:5a76/64 scope link
           valid_lft forever preferred_lft forever
    root@testserver1:~# ip r
    default via 172.17.5.10 dev eth0 proto dhcp src 172.17.5.21 metric 100
    172.17.0.0/16 dev eth0 proto kernel scope link src 172.17.5.21
    172.17.5.10 dev eth0 proto dhcp scope link src 172.17.5.21 metric 100
    root@testserver1:~# exit
    logout
    root@test24:/etc/netplan# lxc config show testserver1
    architecture: x86_64
    config:
      image.architecture: amd64
      image.description: ubuntu 20.04 LTS amd64 (release) (20210323)
      image.label: release
      image.os: ubuntu
      image.release: focal
      image.serial: "20210323"
      image.type: squashfs
      image.version: "20.04"
      volatile.base_image: 8053cb95e852440e8e9379614e4b0c5bd0164afe1314a852a9799cf9e6f43f59
      volatile.eth0.host_name: vethc0e7b4b5
      volatile.eth0.hwaddr: 00:16:3e:67:5a:76
      volatile.idmap.base: "0"
      volatile.idmap.current: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
      volatile.idmap.next: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
      volatile.last_state.idmap: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
      volatile.last_state.power: RUNNING
      volatile.uuid: 3727b95e-cad5-417c-8903-b0d1b72232dd
    devices:
      eth0:
        ipv4.address: 172.17.5.21
        name: eth0
        network: lxdbr0
        type: nic
    ephemeral: false
    profiles:
    - default
    - br0profile
    stateful: false
    description: ""
    root@test24:/etc/netplan# lxc shell testserver1
    root@testserver1:~# ping 172.17.1.1
    PING 172.17.1.1 (172.17.1.1) 56(84) bytes of data.
    From 172.17.5.21 icmp_seq=1 Destination Host Unreachable
    From 172.17.5.21 icmp_seq=2 Destination Host Unreachable
    From 172.17.5.21 icmp_seq=3 Destination Host Unreachable
    ^C
    --- 172.17.1.1 ping statistics ---
    6 packets transmitted, 0 received, +3 errors, 100% packet loss, time 5113ms
    pipe 4
    root@testserver1:~# nslookup yahoo.com
    Server:         127.0.0.53
    Address:        127.0.0.53#53
    
    Non-authoritative answer:
    Name:   yahoo.com
    Address: 74.6.231.20
    Name:   yahoo.com
    Address: 98.137.11.164
    Name:   yahoo.com
    Address: 74.6.143.26
    
    • Neil on March 27, 2021 at 18:50
    • Reply

    Got it to work.

    On the container, I had to set the network settings manually.

    • Fabio on July 3, 2023 at 19:19
    • Reply

    I believe the culprit was Docker. I was having the same issue, solved by adding the rule manually as described in the link. But what got me thinking was why the FORWARD chain was set to DROP. It was Docker messing with the iptables rules in the end.

    I ended up installing iptables-persistent, saving the default rules, and adding this to my /etc/iptables/rules.v4 file:

    # Forwarding rule for br0
    *filter
    :FORWARD DROP
    -A FORWARD -i br0 -o br0 -j ACCEPT
    COMMIT

  1. […] on the disk and the networking. The default LXD profile is suitable for this. You may use a bridge profile or macvlan profile […]

  2. […] can also expose the Kali container on the network using either a bridge or macvlan, and it is as if it was a separate and independent […]

  3. […] The following blog post goes into detail on how to get LXC containers assigned IP addresses from the host network’s DHCP using the various LXD networking types. Definitely insightful into understanding the various network types: https://blog.simos.info/how-to-make-your-lxd-containers-get-ip-addresses-from-your-lan-using-a-bridg… […]
