Independent update of nixos-container configurations

I’ve started using nixos-container instead of keeping a separate NixOS VM, so I can experiment with things independently of the main system and of each other. It’s fantastic: my existing NixOS experience with configuration.nix applies to containers almost without any difference. I usually write a container declaration like

try = {
  autoStart = true;
  config = { config, pkgs, ... }: {
    imports = [ ./try.nix ];
  };
};

and then the try.nix file is nearly the same thing as the main config, but for the container. Awesome reuse of concepts.
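For readers following along, a minimal try.nix might look like this. The contents below are illustrative assumptions, not from this thread; the point is simply that it is an ordinary NixOS module, just like the main configuration.nix:

```nix
# Sketch of a hypothetical ./try.nix; the service and packages
# are examples, not anything the original poster described.
{ config, pkgs, ... }:

{
  services.nginx.enable = true;
  environment.systemPackages = [ pkgs.htop ];

  # Pin to the NixOS release the container was created with.
  system.stateVersion = "23.11";
}
```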

The biggest difference is in updating the configuration and switching to it. The drawbacks of containers, for me, are:

  • Looks like it’s “all or nothing”. The main system update transaction aborts if any of the container configuration updates has errors. I would rather have them isolated: update the main system, update as many containers as possible (to their new config, or to new versions if --upgrade), and report errors for those that failed. This way, having experimental containers wouldn’t prevent the production containers and the main system from receiving updates or being operational.
  • The containers are brought down, updated, and restarted in a batch rather than one by one. To minimize downtime for each containerized payload and to isolate failures, I’d prefer that they be updated independently, one after another.
  • The containers are fully restarted on any minor configuration change, while the main system is able to restart only the affected services (this doesn’t always work, but it does most of the time). This might be OK for production runs, but it is very inconvenient when debugging the configuration while logged into the container with a terminal, perhaps with some helper programs running.

Are there any configuration tricks to achieve this? I’ve seen mentions of “nixops with container backend” for at least some of these goals, but I’m not sure how much it applies.


Sounds like what you want is imperative container management. It can be a little buggy, though.

Oh no, not at all. I still want the system state to be fully described in a declarative manner.
I think that the things I want do not really contradict declarative configuration.

I just would like the system state update to be a little bit more granular and fault-tolerant around the containers.

With imperative containers, the containers are still configured declaratively; they’re just independent of the host system and can be rebuilt and upgraded independently, because they have “their own nixos-rebuild”, except it’s called nixos-container :slight_smile:
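For concreteness, an imperative workflow might look like the sketch below. The container name and file path are made up for illustration, and the exact flags may vary by release, so check `nixos-container --help` on your system:

```shell
# Create a container from a declarative config file
# (name "try" and path ./try.nix are illustrative).
sudo nixos-container create try --config-file ./try.nix
sudo nixos-container start try

# Later, after editing try.nix, rebuild just this container,
# leaving the host and any other containers untouched.
sudo nixos-container update try --config-file ./try.nix
```

This gives exactly the granularity asked for above: each container has its own build-and-switch transaction, so a broken experimental container cannot block updates to the host or to other containers.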

Is this not what you need?

Yea, unfortunately what you want is simply not supported currently. I imagine a patch to nixpkgs could fix this, but for now imperative containers are the solution.

Recently I started using nixos-container to test/review NixOS module PRs, and it’s great! Example:

{
  description = "NixOS container";

  inputs = {
    nixpkgs.url = "git+file:///home/tom/nixpkgs";
  };

  outputs = { self, nixpkgs }: {
    nixosConfigurations.container = nixpkgs.lib.nixosSystem {
      system = "x86_64-linux";
      modules = [({ pkgs, config, ... }: {
        boot.isContainer = true;
        networking.useDHCP = false;
      })];
    };
  };
}


Using a flake like this speeds things up: it’s much faster and less hassle than rebuilding the host machine.

nixos-container update nixreview --flake . --impure
nixos-container root-login nixreview
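If the container doesn’t exist yet, it can be created from the same flake first. The command below is a sketch: `nixos-container create` does accept a `--flake` option, but the flake attribute it resolves must line up with your setup (here the flake defines `nixosConfigurations.container`, so you may need an explicit attribute suffix depending on your nixos-container version):

```shell
# One-time creation of the container from the local flake
# (container name "nixreview" follows the commands above).
sudo nixos-container create nixreview --flake .
```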