How to set up LXD on NixOS with a NixOS guest using an unmanaged bridge network interface

I recently set up LXD on my NixOS machine in order to run a guest NixOS container.

LXD has a lot of configuration options, and it is sometimes difficult to figure out the right setup for your use case. Networking is especially complex: LXD supports many different types of networking setups. By default, LXD pushes you to use what it refers to as a managed bridge network. In this setup, when you launch a container, LXD automatically creates a bridge interface for networking in your container. It then runs dnsmasq on your host to provide things like DNS and DHCP inside your containers.

While these managed bridge networks are convenient, I found that dnsmasq would frequently crash. I would then lose networking in my container until I restarted LXD. I realized that I didn’t need anything dnsmasq was providing, so I instead set up LXD to use an unmanaged bridge network.

An unmanaged bridge network is one where you set up a bridge interface on your own and just hand it to LXD to use. In this setup, LXD doesn’t run dnsmasq, so you are free to set up DNS and networking between your host and container however you like. I found this quite difficult to figure out, so I wanted to put together a guide for anyone else interested.

This guide walks through installing and setting up LXD on the host, creating the LXD NixOS image, and running the LXD NixOS container.

Setting up the LXD host

The following NixOS module installs LXD and sets up the bridge interface we will use. It also sets up things like firewall rules and NAT for accessing the internet from our container. Comments are inline:

# This module enables LXD.
#
# This sets up networking for an unmanaged bridge to be used with LXD.
#
# Note that by default LXD uses a managed bridge, which also runs dnsmasq to
# provide things like DNS and DHCP to your containers.  I don't need all of
# that, and dnsmasq seems to crash quite often, so this module just sets up an
# unmanaged bridge.

{ config, lib, pkgs, ... }:

{
  # Enable LXD.
  virtualisation.lxd = {
    enable = true;

    # This turns on a few sysctl settings that the LXD documentation recommends
    # for running in production.
    recommendedSysctlSettings = true;
  };

  # This enables lxcfs, a FUSE filesystem that makes things like /proc and
  # cgroups work better in lxd containers.
  # See https://linuxcontainers.org/lxcfs/introduction/ for more info.
  #
  # Also note that the lxcfs NixOS option says that in order to make use of
  # lxcfs in the container, you need to include the following NixOS setting
  # in the NixOS container guest configuration:
  #
  # virtualisation.lxc.defaultConfig = "lxc.include = ${pkgs.lxcfs}/share/lxc/config/common.conf.d/00-lxcfs.conf";
  virtualisation.lxc.lxcfs.enable = true;

  # This sets up a bridge called "mylxdbr0".  This is used to provide NAT'd
  # internet to the guest.  This bridge is manipulated directly by lxd, so we
  # don't need to specify any bridged interfaces here.
  networking.bridges = { mylxdbr0.interfaces = []; };

  # Add an IP address to the bridge interface.
  networking.localCommands = ''
    ip address add 192.168.57.1/24 dev mylxdbr0
  '';

  # Firewall commands allowing traffic to go in and out of the bridge interface
  # (and to the guest LXD instance).  Also sets up the actual NAT masquerade rule.
  networking.firewall.extraCommands = ''
    iptables -A INPUT -i mylxdbr0 -m comment --comment "my rule for LXD network mylxdbr0" -j ACCEPT

    # These three technically aren't needed, since by default the FORWARD and
    # OUTPUT chains accept everything, but let's keep them in just in case.
    iptables -A FORWARD -o mylxdbr0 -m comment --comment "my rule for LXD network mylxdbr0" -j ACCEPT
    iptables -A FORWARD -i mylxdbr0 -m comment --comment "my rule for LXD network mylxdbr0" -j ACCEPT
    iptables -A OUTPUT -o mylxdbr0 -m comment --comment "my rule for LXD network mylxdbr0" -j ACCEPT

    iptables -t nat -A POSTROUTING -s 192.168.57.0/24 ! -d 192.168.57.0/24 -m comment --comment "my rule for LXD network mylxdbr0" -j MASQUERADE
  '';

  # ip forwarding is needed for NAT'ing to work.
  boot.kernel.sysctl = {
    "net.ipv4.conf.all.forwarding" = true;
    "net.ipv4.conf.default.forwarding" = true;
  };

  # kernel module for forwarding to work
  boot.kernelModules = [ "nf_nat_ftp" ];
}

The big takeaway is that LXD is now installed, and we have a bridge interface called mylxdbr0 that we can use.
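After a nixos-rebuild switch with this module, you can do a quick sanity check that the bridge exists and the NAT rule is in place (these commands aren’t part of the original guide, just a useful verification):

$ ip addr show mylxdbr0
$ sudo iptables -t nat -S POSTROUTING | grep mylxdbr0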

I ran this on both nixos-22.05 at commit c06d5fa9c60, and nixos-unstable at commit 2da64a81275b68. Both of these commits are from around 2022-09-09.

Setting up LXD

One of the unfortunate things about LXD is that it requires some manual setup. Unlike most other things in NixOS, LXD is not fully declarative.

Before using lxc (the command for interacting with the LXD daemon) for the first time, you need to initialize LXD and set up the default container settings. You can do this interactively with the command lxd init, or you can do it semi-declaratively by passing lxd init a “preseed” file with all the settings we want to use:

$ cat my-preseed-file.yaml
config:
  images.auto_update_interval: "0"
networks: {}
storage_pools:
- config:
    source: /var/lib/lxd/storage-pools/default
  description: ""
  name: default
  driver: dir
profiles:
- config: {}
  description: Default LXD profile
  devices:
    root:
      path: /
      pool: default
      type: disk
  name: default
projects:
- config:
    features.images: "true"
    features.networks: "true"
    features.profiles: "true"
    features.storage.volumes: "true"
  description: Default LXD project
  name: default

Then tell lxd to use this:

$ lxd init --preseed < my-preseed-file.yaml
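
You can optionally confirm that the preseed was applied (standard lxc commands, not specific to this guide):

$ lxc storage list
$ lxc profile show default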

The things to note about this preseed file:

  • It sets up a root device that just uses a directory on disk. This is simple, but you might want to run lxd init interactively instead if you want to set up something like a ZFS-backed root filesystem (see the sketch after this list).
  • This does not set up a network. We’ll explicitly add our unmanaged bridge interface in a later step.
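
For reference, if you did want a ZFS-backed pool, the storage_pools entry might look roughly like the following. This is just a sketch, not from the original writeup; mypool/lxd is a hypothetical dataset name:

storage_pools:
- config:
    source: mypool/lxd
  description: ""
  name: default
  driver: zfs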

Now we need to create the guest LXD NixOS image.

Create NixOS image for use as LXD guest

The nixos-generators tool makes it easy to create a NixOS image for LXD.

First, you need a configuration.nix for the NixOS guest in your current directory. Here’s the configuration.nix I’m using. Most things are commented in-line:

{ config, pkgs, lib, modulesPath, ... }:

{
  imports =
    [ # Need to load some defaults for running in an lxc container.
      # This is explained in:
      # https://github.com/nix-community/nixos-generators/issues/79
      "${modulesPath}/virtualisation/lxc-container.nix"

      # other modules:
      ...
    ];

  # This doesn't do _everything_ we need, because `boot.isContainer` is
  # specifically talking about light-weight NixOS containers, not LXC.  But it
  # at least gives us something to start with.
  boot.isContainer = true;

  # These are the locales that we want to enable.
  i18n.supportedLocales = [ "C.UTF-8/UTF-8" "en_US.UTF-8/UTF-8" "ja_JP.UTF-8/UTF-8" ];

  # Make sure Xlibs are enabled like normal.  This is disabled by
  # lxc-container.nix in imports.
  environment.noXlibs = false;

  # Make sure command-not-found is enabled.  This is disabled by
  # lxc-container.nix in imports.
  programs.command-not-found.enable = true;

  # Disable nixos documentation because it is annoying to build.
  documentation.nixos.enable = false;

  # Make sure documentation for NixOS programs is installed.
  # This is disabled by lxc-container.nix in imports.
  documentation.enable = true;

  # `boot.isContainer` implies NIX_REMOTE = "daemon"
  # (with the comment "Use the host's nix-daemon")
  # We don't want to use the host's nix-daemon.
  environment.variables.NIX_REMOTE = lib.mkForce "";

  # Suppress daemons which will vomit to the log about their unhappiness
  systemd.services."console-getty".enable = false;
  systemd.services."getty@".enable = false;

  # Use flakes
  nix = {
    package = pkgs.nixUnstable;
    extraOptions = ''
      experimental-features = nix-command flakes
    '';
  };

  # We assume that LXD will create this eth1 interface for us.  But we don't
  # use DHCP, so we configure it statically.
  networking.interfaces.eth1.ipv4.addresses = [{
    address = "192.168.57.50";
    prefixLength = 24;
  }];

  # We can access the internet through this interface.
  networking.defaultGateway = {
    address = "192.168.57.1";
    interface = "eth1";
  };

  # The eth1 interface in this container can only be accessed from my laptop
  # (the host).  Unless the host is compromised, I should be able to trust all
  # traffic coming over this interface.
  networking.firewall.trustedInterfaces = [
    "eth1"
  ];

  # Since we don't use DHCP, we need to set our own nameservers.
  networking.nameservers = [ "8.8.4.4" "8.8.8.8" ];

  networking.hostName = "lxc-nixos";

  # This value determines the NixOS release with which your system is to be
  # compatible, in order to avoid breaking some software such as database
  # servers. You should change this only after NixOS release notes say you
  # should.
  system.stateVersion = "22.05"; # Did you read the comment?
}

There shouldn’t be anything too surprising in here. You may want to add some other modules to imports if you want to install extra programs or services.

Now that you have this configuration.nix, you can use nixos-generators to create the LXD image:

$ nix-shell -p nixos-generators
$ METAIMG="$(nixos-generate -f lxc-metadata)"
$ IMG="$(nixos-generate -c ./configuration.nix -f lxc)"

You can now import this image into LXD. The image is named nixos:

$ lxc image import --alias nixos "${METAIMG}" "${IMG}"

Show the image:

$ lxc image show

Next we need to create a container based on this image.

Create the NixOS container

Now we create a container based on this image. The container is named lxc-nixos:

$ lxc init nixos lxc-nixos -c security.nesting=true

-c security.nesting=true is necessary for using Nix’s sandbox in the container. You probably want to enable this if you intend to build with Nix in the container.
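
If you already created the container without this flag, you can also set it afterwards (standard lxc usage):

$ lxc config set lxc-nixos security.nesting true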

You must now add the unmanaged bridge interface on the host to the container:

$ lxc config device add lxc-nixos eth1 nic nictype=bridged parent=mylxdbr0

This command adds a device called eth1 to the instance lxc-nixos, bridged to the host interface mylxdbr0. The interface inside the container will also be called eth1.
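
You can verify the device was added (optional):

$ lxc config device show lxc-nixos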

Run the NixOS container

You can now finally run the container:

$ lxc start lxc-nixos

You can use the following command to confirm the container is running, and confirm the IP address was set correctly in the guest NixOS configuration:

$ lxc list

You can start a shell in the container to play around:

$ lxc exec lxc-nixos -- /run/current-system/sw/bin/bash

From here, I generally set up the container so I can SSH into it. I then access the container with ssh from the host:

$ ssh me@192.168.57.50
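
For reference, a minimal sketch of what enabling SSH access might look like in the guest configuration.nix. The user name me and the key are placeholder assumptions, not part of the original guide:

  # Hypothetical additions to the guest configuration.nix for SSH access.
  services.openssh.enable = true;

  users.users.me = {
    isNormalUser = true;
    extraGroups = [ "wheel" ];
    # Replace with your actual public key.
    openssh.authorizedKeys.keys = [ "ssh-ed25519 AAAA..." ];
  };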

You can stop the container either by running sudo poweroff from within the container, or from the host:

$ lxc stop lxc-nixos

Conclusion

Following these steps should set you up with an LXD NixOS container. Setting up a container with an unmanaged bridge is a little more work than just using a managed bridge, but it is also a little more robust, since you don’t need dnsmasq running on the host.



Writing howtos is an incredible effort and no easy task, so many thanks for this writeup. It’s very useful to me: after trying out nixos-containers, I think LXD/LXC comes much closer to what I need.


Hi, I soon plan to test some containers in NixOS; I am taking my first baby steps with it now. Right now I have some ansible playbooks for generating a bunch of lxd containers, each with its cloud-init config, that run on Ubuntu.
I am not sure if you are aware of lxd projects, but they would help you create an environment that reduces the initial setup steps, letting users control a subset of lxd features. In some use cases, like mine, that is enough. Not sure how to activate it on Nix, but it seems doable.

Cheers


Thanks for your suggestion! This seems very useful. So far I’ve found LXD/LXC to be handier for full environments, and nixos-containers better for sandboxing individual processes or sets of processes, but I haven’t looked into lxd projects yet. Can you tell me more about your use case? For me, I use an lxc container as the ssh jump host to systems that only listen on internal VPN (wireguard) addresses. I also have an old container to sandbox and segregate incoming and outgoing email, but I still need to migrate that one to NixOS.

I have lxc containers set up for a lot of “server side” software on my homelab, plex and syncthing for instance. I’ve configured lxd to mount a ZFS pool on all the containers, where each piece of software stores its configuration and data, and they all live on the same subnet. I do not access those containers from outside my home network, but it could be set up. Ansible handles the creation and deletion of the containers. This is what is running on Ubuntu.
I’ve now started with NixOS, trying to replicate the setup running directly on the server without lxd. It seems possible to create such a setup with lxd in the future, and without Ansible.
I see this home server “automation” as a hobby, and lately I am not in a position to dedicate time to it, as you can notice from the late reply. :)


lxc image show might have been renamed? lxc image list works here.


Both LXD and its fork Incus can now be preseeded with a NixOS option; see virtualisation.lxd.preseed and virtualisation.incus.preseed.
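
For example, a rough sketch of what the preseed from this guide might look like as a NixOS option (attribute names mirror the YAML keys; check the option documentation for your release before relying on this):

virtualisation.lxd = {
  enable = true;
  preseed = {
    networks = [ ];
    storage_pools = [{
      name = "default";
      driver = "dir";
      config.source = "/var/lib/lxd/storage-pools/default";
    }];
    profiles = [{
      name = "default";
      devices.root = {
        path = "/";
        pool = "default";
        type = "disk";
      };
    }];
  };
};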


I followed this guide and I am facing an issue that I don’t know whether to expect. I successfully execute nixos-generate and then launch my container, and it correctly takes the configuration I provide it. However, when I go to check the instance’s configuration.nix, it is completely empty.

{ config, pkgs, ... }:

{
  imports = [ ];
}

Is that expected?

In order to add some package (say nodejs), I first have to copy the configuration from the host and then add the new package; otherwise it fails to rebuild.
My host is Debian; could that cause problems?

It is no longer necessary to run nixos-generate to create a container image. In fact, for Incus and LXD LTS users, there are now images published on the LXC image server, https://images.linuxcontainers.org/. Note that LXD LTS users will lose this access on 2024-05-01, and considering you’re on Debian, you may wish to consider switching to the fork Incus.

─❯ incus image list images:nixos
+-------------------------------+--------------+--------+---------------------------------------+--------------+-----------------+-----------+-------------------------------+
|             ALIAS             | FINGERPRINT  | PUBLIC |              DESCRIPTION              | ARCHITECTURE |      TYPE       |   SIZE    |          UPLOAD DATE          |
+-------------------------------+--------------+--------+---------------------------------------+--------------+-----------------+-----------+-------------------------------+
| nixos/23.11 (3 more)          | 4a84a70fb432 | yes    | Nixos 23.11 amd64 (20240216_01:02)    | x86_64       | CONTAINER       | 128.94MiB | Feb 16, 2024 at 12:00am (UTC) |
+-------------------------------+--------------+--------+---------------------------------------+--------------+-----------------+-----------+-------------------------------+
| nixos/23.11 (3 more)          | cee1d901cca2 | yes    | Nixos 23.11 amd64 (20240216_01:02)    | x86_64       | VIRTUAL-MACHINE | 467.54MiB | Feb 16, 2024 at 12:00am (UTC) |
+-------------------------------+--------------+--------+---------------------------------------+--------------+-----------------+-----------+-------------------------------+
| nixos/23.11/arm64 (1 more)    | 0881cc69ae39 | yes    | Nixos 23.11 arm64 (20240216_01:03)    | aarch64      | VIRTUAL-MACHINE | 474.89MiB | Feb 16, 2024 at 12:00am (UTC) |
+-------------------------------+--------------+--------+---------------------------------------+--------------+-----------------+-----------+-------------------------------+
| nixos/23.11/arm64 (1 more)    | c436a2e0a0e3 | yes    | Nixos 23.11 arm64 (20240216_01:03)    | aarch64      | CONTAINER       | 123.78MiB | Feb 16, 2024 at 12:00am (UTC) |
+-------------------------------+--------------+--------+---------------------------------------+--------------+-----------------+-----------+-------------------------------+
| nixos/unstable (3 more)       | 711ad413f1d4 | yes    | Nixos unstable amd64 (20240216_01:03) | x86_64       | VIRTUAL-MACHINE | 467.31MiB | Feb 16, 2024 at 12:00am (UTC) |
+-------------------------------+--------------+--------+---------------------------------------+--------------+-----------------+-----------+-------------------------------+
| nixos/unstable (3 more)       | b15681a4f2cf | yes    | Nixos unstable amd64 (20240216_01:03) | x86_64       | CONTAINER       | 174.39MiB | Feb 16, 2024 at 12:00am (UTC) |
+-------------------------------+--------------+--------+---------------------------------------+--------------+-----------------+-----------+-------------------------------+
| nixos/unstable/arm64 (1 more) | 95242674aa96 | yes    | Nixos unstable arm64 (20240216_01:03) | aarch64      | VIRTUAL-MACHINE | 475.38MiB | Feb 16, 2024 at 12:00am (UTC) |
+-------------------------------+--------------+--------+---------------------------------------+--------------+-----------------+-----------+-------------------------------+
| nixos/unstable/arm64 (1 more) | c0a02e8fd464 | yes    | Nixos unstable arm64 (20240216_01:03) | aarch64      | CONTAINER       | 168.44MiB | Feb 16, 2024 at 12:00am (UTC) |
+-------------------------------+--------------+--------+---------------------------------------+--------------+-----------------+-----------+-------------------------------+

─❯ incus launch images:nixos/unstable --config security.nesting=true --ephemeral
Launching the instance
Instance name is: learning-vulture

─❯ incus exec learning-vulture bash

[root@nixos:~]#

I encountered the same and assumed it was expected - not sure why exactly though.

The system.copySystemConfiguration = true; setting would normally handle that, but it doesn’t work here: the config files aren’t copied.

Some related discussion on this post:

Can you try the following:

  1. build your image
  2. in your container / VM, run sudo -i nix-channel --update; by default the channels are not updated, which would cause future updates from inside the container to fail
  3. create a configuration.nix with ONLY the additional packages you want to add (don’t copy the OG configuration.nix for the container itself; see the sketch below)
    => the OG build will then be the base; the new configuration.nix inside the container adds config on top of that OG build, so you won’t lose what’s already set up and installed from the initial container build
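
For instance, a minimal additions-only configuration.nix might look like this (a sketch; nodejs stands in for whatever package you want to add):

{ config, pkgs, ... }:

{
  imports = [ ];

  # Only the additions go here; the image's original build remains the base.
  environment.systemPackages = [ pkgs.nodejs ];
}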