LXD Distrobuilder support for NixOS

I’m trying to add NixOS support to LXD’s distrobuilder. I’m hoping to provide an experience similar to other distributions, which have images built automatically by the LXD team using their CI. Doing so should make NixOS available through the Canonical image server and more approachable for new users.

At the same time, I’m hoping to provide a more NixOS-like experience using the existing LXD tooling. For example, users who are building custom images could follow the LXD pattern but provide Nix code. Eventually I also want to add support for a user to provide inputs (e.g. a flake) in the metadata when an instance is created from the image.

Unfortunately, I’m a bit stuck on how to proceed, so I’m posting this to see if anybody has good ideas or better insight. I’m still relatively new to Nix and NixOS, so I don’t have a lot of depth yet.

How distrobuilder seems to work is that it extracts a system tarball, “generates” some files into the structure, and chroots to run some commands. It’s specifically this last step that I am a bit hung up on, as I am unsure how to initiate a nixos-rebuild from within an un-activated system tarball. See this example of an image config.
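For readers who haven’t seen one, an image config roughly takes this shape (a sketch following distrobuilder’s documented YAML schema; the URL and values are placeholders, and the post-unpack action is exactly the chrooted step in question):

image:
  distribution: nixos
  release: unstable

source:
  downloader: rootfs-http
  url: https://example.org/nixos-system.tar.xz

actions:
  - trigger: post-unpack
    action: |-
      #!/bin/sh
      # distrobuilder chroots into the unpacked rootfs to run this,
      # and that chroot is the step I don't know how to bootstrap
      nixos-rebuild switch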

I’m unsure how I could bootstrap this enough to allow an arbitrary shell script to run, since the un-activated tarball doesn’t seem to offer an easy way to discover where bash, nixos-rebuild, etc. are. So what would be the best way to allow an arbitrary nixos-rebuild of a system tarball?
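For what it’s worth, the only discovery mechanism I can think of is globbing for the system closure inside the unpacked tree, something like this (purely a sketch, and it assumes the tarball ships a built system closure):

rootfs=./rootfs
# the glob pattern here is a guess at how the closure is named
system=$(echo "$rootfs"/nix/store/*-nixos-system-*)
ls "$system"/sw/bin   # bash, nixos-rebuild, etc. should live here if the closure is complete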

Other potentially relevant information:


So from my experience running several NixOS instances in LXD, I found out you need to tweak the system config in order for it to start properly:

my configuration.nix:

{ ... }:

{
  # Tell NixOS it runs inside a container (no kernel or bootloader setup).
  boot.isContainer = true;
  # Generate a /sbin/init script so the container runtime has an entry point.
  boot.loader.initScript.enable = true;

  # Units that cannot work (or make no sense) inside an unprivileged container.
  systemd.suppressedSystemUnits = [
    "console-getty.service"
    "getty@.service"
    "autovt@.service"
    "systemd-udev-trigger.service"
    "systemd-udevd.service"
    "sys-fs-fuse-connections.mount"
    "sys-kernel-debug.mount"
    "dev-mqueue.mount"
  ];

  # The audit socket cannot be bound inside the container.
  systemd.sockets.systemd-journald-audit.enable = false;
}

Besides that, I’m using a profile to get it to run:

asbachb@nixos:~$ lxc profile show nixos
config:
  raw.lxc: |-
    lxc.init.cmd = /sbin/init
    lxc.mount.entry = proc mnt/proc proc create=dir 0 0
    lxc.apparmor.profile = unconfined
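I then pass the profile whenever I create an instance, e.g. (the image alias and instance name here are just examples):

lxc launch my-nixos-image c1 -p default -p nixos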

I guess the main problem is that you cannot (at least at the time I did my research, you could not) bake the profile settings into the image itself. So from my understanding, even if we had the image in distrobuilder, we’d still need a profile, or to set these settings manually for each container.


I’m not sure about those config settings, or why (or whether) they’re actually required. I don’t set lxc.init.cmd, for example. I’d also like to explore modifying the images so they aren’t required at all.

For now, I need a way to actually apply a Nix configuration to a raw system tarball so that I can configure container/VM images in distrobuilder.

Actually, my image is based on nixos-generate (https://github.com/nix-community/nixos-generators), which generates the image from a configuration file you pass.
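Generating the image looks roughly like this (flags as documented by nixos-generators; the exact format names may have changed since I set this up):

# build a container rootfs tarball from a configuration
nixos-generate -f lxc -c ./configuration.nix
# build the matching LXD metadata tarball
nixos-generate -f lxc-metadata

The resulting tarballs can then be imported with lxc image import.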

The existing NixOS LXD image seems to be designed around the idea that it should not need to run any chroot commands to build, so a lot of the necessary setup is actually only done on boot. This includes things like:

  • Populating /etc
  • Initializing the nix store

Moreover, on NixOS it’s generally expected that the Nix daemon handles the building process, so it might be best if building can happen in a properly booted system.

However, all of this seems to be at odds with my understanding of how deploying NixOS containers should work. It seems that the lxdImage release is just a way for users to try out NixOS in LXD and isn’t really something to use for deployment. You can see in a previous comment that there’s already nixos-generators, but I guess I can try to address your points directly:

This process seems to be completely different from how NixOS generally works. A ‘base image’ makes no sense for NixOS, because all you need for a NixOS system to boot is the system closure in /nix/store. For containers, empty /proc, /sys and /dev directories are needed so the appropriate special filesystems can be mounted into them, but then all you need to do is run /nix/store/...-nixos-system-.../init as PID 1 and it will start. You don’t even need Nix to run NixOS.
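Spelled out as a sketch (in practice the container runtime performs these mounts itself, and the store path is elided just like above):

# mount the special filesystems into the empty directories,
# then hand over to the closure's init as PID 1
mount -t proc  proc  /proc
mount -t sysfs sysfs /sys
mount -t tmpfs tmpfs /dev   # LXD actually populates /dev for you
exec /nix/store/...-nixos-system-.../init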

The build process itself happens completely independently from the system that ends up running NixOS, whether that is a bare-metal machine, a VM, or a container. As an example, for cross-compiled system closures, the build machine might not be able to run the programs in the resulting system closure at all.

From your description it seems that distrobuilder expects the build process to happen inside the container itself, which is generally a huge inconvenience. The only advantage I can think of is that it is isolated from the host system, though it’s still unnecessary as anything that has Nix can do this job.

It seems that the problem is that the two build systems, distrobuilder and Nix, are fighting each other. As build systems, both really should be running on the host, but what you’re currently trying to do is ‘demote’ Nix to run inside distrobuilder and inside the container. I’m completely unfamiliar with distrobuilder, but I wonder if it’s possible to add a new generator called nixos that just calls out to Nix (on the host or in a different container) to build the system configuration and copies the result inside. Much like what you’d do in a Dockerfile: use FROM nixos/nix as a builder, build the config inside it along with some metadata, then COPY the useful files over.
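In plain shell, the ‘builder’ half of that idea might look like this (a sketch; all paths and names are placeholders):

# build the system closure on the host (or in a builder container)
nix-build '<nixpkgs/nixos>' -A system -I nixos-config=./configuration.nix
system=$(readlink -f result)

# copy the full runtime closure into the rootfs being assembled
mkdir -p rootfs/nix/store
nix-store --query --requisites "$system" | while read -r p; do
  cp -a "$p" rootfs/nix/store/
done
# "$system"/init is then the container's entry point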


Thanks for the reply! I think your assumption about why it’s done inside the container is correct: it isolates the process. As you point out, this is unnecessary for Nix.

I think you’re probably on the right track with your suggestion. It’s a solution I’d considered, but will require some more significant changes to distrobuilder. I’ll look into this some more and probably chat with the LXD team once I have a potential implementation. One vaguely similar prior art is the debootstrap builder.

One of the other challenges here is that distrobuilder is designed to build both container and VM images. This should be relatively trivial to solve by just feeding a different config to Nix, but it’s a use case I want to make sure I design for correctly.
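As a sketch of what I mean, the two targets could share one module tree and differ only in a thin top-level file (the file names here are hypothetical):

# container.nix (hypothetical)
{ ... }: {
  imports = [ ./common.nix ];
  boot.isContainer = true;                 # container image: no kernel or bootloader
}

# vm.nix (hypothetical)
{ modulesPath, ... }: {
  imports = [ ./common.nix "${modulesPath}/profiles/qemu-guest.nix" ];
  boot.loader.grub.device = "/dev/sda";    # VM image boots its own kernel
}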

The only other crazy idea I’ve come up with is to have a NixOS-specific image server and build all the tooling inside the Nix ecosystem. This would solve one of the requirements I’m working with, which is that a user could easily pull and run an LXD container/VM.
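Consuming such a server would presumably be a one-liner for users, e.g. (the URL and aliases here are entirely hypothetical):

lxc remote add nixos-images https://images.example.org --protocol simplestreams
lxc launch nixos-images:nixos/unstable my-nixos -p default -p nixos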
