Extra-container - Run declarative containers without full system rebuilds

Example:

sudo extra-container create --start <<'EOF'
{
  containers.demo = {
    privateNetwork = true;
    hostAddress = "10.250.0.1";
    localAddress = "10.250.0.2";

    config = { pkgs, ... }: {
      networking.firewall.allowedTCPPorts = [ 50 ];

      systemd.services.hello = {
        wantedBy = [ "multi-user.target" ];
        script = "while true; do echo hello | ${pkgs.netcat}/bin/nc -lN 50; done";
      };
    };
  };
}
EOF

curl 10.250.0.2:50 # Returns 'hello' from the container
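
Afterwards, the container can be removed again with the destroy command:

sudo extra-container destroy demo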

This is awesome!
Other than that, I’m already playing around with it and writing some containers that I’ll use instead of docker-compose or nixos-shell. Given that it builds on a super solid foundation, I don’t expect any issues; this is just what we needed.
Thank you very much!

(removed ignorant comment about list missing)

Also, an update action seems to be missing. Do I have to destroy/add/start each time the configuration changes?

@danbst, no, just run extra-container create --restart with an updated config. Or use --start to start the container regardless of whether it was already running.
But note that this doesn’t clear the contents of the container directory in /var/lib/containers/. Would that be required for your use case?
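
For example, if the config from the announcement is saved to a file (demo.nix is just an illustrative name here), an update round trip looks like this:

# Edit demo.nix, then rebuild and restart the container
sudo extra-container create --restart demo.nix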

How does this differ from NixOps with a container backend?

@manveru, your comment wasn’t ignorant at all, I forgot to document the list command. It’s fixed in the latest update.

@tom, the NixOps container backend uses imperative containers, which lack some of the options available to declarative containers.


Perhaps add this as another NixOps target/option? We’re in a period of rapid experimentation, and I think the community should try to consolidate and mature some tools before the ecosystem becomes too fragmented.


I think this should be integrated into nixos-container so that there’s feature parity between imperative and declarative containers. There are a few kinks left to work out; after that, I’ll open an issue.


I had success running declarative containers with this software on non-NixOS (Ubuntu). I’ve documented the problems in “extra-container on non-NixOS” (erikarvstedt/extra-container#1 on GitHub).

That’s really awesome!


I’ve been using this on and off and it still seems to work fine! I used it for “Towards an integrated search engine for the Nix community” (post #11 by deliciouslytyped).

PS: Oh! Apparently it’s still maintained too. I didn’t notice that.


extra-container 0.5 has just been released. (Changelog)

Highlights

  • Generic support for systemd-based Linux distros, like Arch or Debian. (Thanks @danbst)

  • Command shell.
    Starts a shell session for interacting with the container.

    read -d '' src <<'EOF'
    {
      containers.demo = {
        # New feature: extra config options for setting up a private network
        extra.addressPrefix = "10.250.0"; 
        extra.enableWAN = true;
      };
    }
    EOF
    extra-container shell -E "$src"
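    # (the shell session defines the helper `c` for running commands
    # inside the container, and `$ip`, the container's IP address)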
    
    # Run command inside the container
    c curl ifconfig.me
    # Ping container
    ping -c 1 $ip
    
    # The container is destroyed when exiting the shell
    
    # Run a single command and exit
    extra-container shell -E "$src" --run c hostname
    # => demo
    

That seems really cool if container builds can be made fast. I’m probably mixing things up, and I can’t test right now, but don’t container builds have a long squashfs step?

There’s no squashfs step for NixOS containers; they share the host’s Nix store.
A simple extra-container builds and starts within ~2 seconds on a desktop system with warm caches.
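
You can verify this yourself by timing a minimal container (a sketch; the container name is arbitrary):

time sudo extra-container create --start <<'EOF'
{ containers.timing.config = {}; }
EOF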


Ok, thanks, I have no idea what I was doing then. Ah, it must have been when I was testing my containers with the VM rebuild! (Which is still not a container problem.)

This is really cool. I have a use case where it would be particularly nice if I could run multiple instances of the same container. (Or, perhaps more properly: multiple containers from the same specification.) Is this currently possible, or something that might be possible to add?

Thanks!

You can simply define multiple containers with the same config:

extra-container create -s <<'EOF'
{ config, pkgs, lib, ... }:
let
  sharedConfig = {
    config = {
      systemd.services.hello = {
        wantedBy = [ "multi-user.target" ];
        script = "while true; do echo hello | ${pkgs.netcat}/bin/nc -lN 50; done";
      };
    };
  };
in {
  containers = {
    instance1 = sharedConfig;
    instance2 = sharedConfig;
  };
}
EOF
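
Both instances then run side by side and show up in the list command (the exact output format may differ):

extra-container list
# instance1
# instance2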

Thanks, I see.

My use case is essentially creating isolated environments for users in a dynamic way, so I can’t really define all instances ahead of time. In that case, I wonder if this would work, and whether there would be any downsides to it? Basically, I would have a script run this command, with instanceName dynamically generated and substituted by the shell:

# The heredoc delimiter is unquoted so the shell expands $instanceName;
# Nix's own interpolations are escaped as \${...}
extra-container create -s <<EOF
{ config, pkgs, lib, ... }:
let
  sharedConfig = {
    config = {
      systemd.services.hello = {
        wantedBy = [ "multi-user.target" ];
        script = "while true; do echo hello | \${pkgs.netcat}/bin/nc -lN 50; done";
      };
    };
  };
in {
  containers = {
    "$instanceName" = sharedConfig;
  };
}
EOF
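
A wrapper script might then look like this (the naming scheme and cleanup step are hypothetical; keep the name short, since container names have a tight length limit):

instanceName="env$RANDOM"  # hypothetical short, unique name
# ...run the extra-container create command above, then later:
sudo extra-container destroy "$instanceName"  # clean up when the session ends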

sharedConfig could probably be factored out, though I don’t know if it matters; identical Nix expressions defined in multiple places should evaluate to the same store paths anyway.

What would be the typical startup time of a container created this way that doesn’t have any services or other extra initialization? I’m hoping it’s fairly quick; otherwise I may stick with a long-running container, possibly going with your original suggestion of firing up multiple instances and rotating between them as a compromise.

What exactly should these isolated environments contain?

Typically, these would just be custom software environments (containing a number of packages from nixpkgs), so nothing very special. I might rely on the fact that these are systemd-nspawn containers to enforce some restrictions, but those would be applied at lower levels (IIRC through machinectl, cgroups, etc.). Basically, I was just using the term “isolated environment” in the usual sense of what a container provides: certainly isolated filesystems and processes, but possibly other things as well, such as isolated users. I assume the underlying packages, at least, are shared through the host’s /nix/store as usual.