Sharing Nix store between containers

Is it possible to, for example, bind-mount /nix or /nix/store into a container, and share the store between containers?

I guess the issue would be simultaneous access to the store, which is normally handled by the daemon, but in this case the daemon wouldn’t exactly work for this purpose.

Is there any file-based locking mechanism for the store, so that multiple Nix instances can access it at once? I’m aware of “stateVersion” (or something like it) that pins the state schema to a given revision.

Thanks.

Related:

- Any reason not to share a read-only `/nix/store` mount between servers?

And a Docker image is available here:

The current “nix” way to do this, I guess, is “nix-serve” and multiple copies of the store.

One way for the networked mount, sketched below:

  • read-only share to all nix instances
  • send the ID of the next system derivation to the target machines
  • wait until the derivation and all deps are available in the read-only store
  • “switch” to the derivation (by the usual way) without writing anything to the store
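
A minimal sketch of that flow, assuming the store is exported read-only to the targets and a standard configuration.nix on the build host (the polling loop is illustrative):

# on the build host: build the next system closure; its outputs land in the
# shared, read-only-exported store
drv="$(nix-build '<nixpkgs/nixos>' -A system --no-out-link)"

# on the target: crude wait until the shared mount exposes the closure root
while [ ! -e "$drv" ]; do sleep 5; done

# "switch" the usual way; nothing is written to the store
"$drv/bin/switch-to-configuration" switch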

I haven’t tried this, but I think the closest concept would be --device

No, that’s not the same thing. Devices just maps a /dev/ path into the container.

You can bind mount with “-v /nix:/nix”

I didn’t realize -v did a bind mount.

As far as I understand (though I use nsjail), bind-mounting /nix will make the filesystem-addressed Unix domain socket used by the Nix daemon present inside the container, which should resolve the problem.
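
Concretely, a small sketch (the socket path is the standard one for a daemon installation):

# the daemon's unix socket is filesystem-addressed under /nix, so a container
# that bind-mounts /nix sees it too
ls -l /nix/var/nix/daemon-socket/socket

# make the nix tools inside the container talk to that daemon
export NIX_REMOTE=daemon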

This totally works. Awesome. You can share the /nix directory between machines as long as the socket is shared as well.

Could it be better to mount only /nix/store? I tried it, and it worked, except both containers downloaded the same things twice (or at least the logs indicated this).

# image used for all containers
IMAGE=skiffos/skiff-core-nixos:x86_64
mkdir -p ./nix

# copy the initial nix store out of the image into ./nix
docker run --privileged --rm -d --name=nix-1 -v $(pwd)/nix:/nix-next/ $IMAGE
docker exec -it nix-1 /run/current-system/sw/bin/bash -ic "nix run nixpkgs.rsync -c rsync --info=progress2 -ra /nix/ /nix-next/"
docker rm -f nix-1

# run nix daemon in nix-1
docker run --privileged --rm -d --name=nix-1 -v $(pwd)/nix:/nix/ $IMAGE
docker exec -ti nix-1 /run/current-system/sw/bin/bash -ic "systemctl start nix-daemon"
# confirm the daemon is running
docker exec -ti nix-1 /run/current-system/sw/bin/bash -ic "systemctl status nix-daemon | grep Active"

# run nix environments 2 and 3
docker run --privileged --rm -d --name=nix-2 -v $(pwd)/nix:/nix/ $IMAGE
docker run --privileged --rm -d --name=nix-3 -v $(pwd)/nix:/nix/ $IMAGE

docker ps

# attempt to simultaneously nix-env -i on two containers
docker exec -it nix-2 /run/current-system/sw/bin/bash -ic "nix-env -i emacs"
docker exec -it nix-3 /run/current-system/sw/bin/bash -ic "nix-env -i emacs"

As far as I know, Docker uses some heuristics and decides automagically whether it should do a bind or a regular mount. As I never really understood the difference between the two, and Docker’s heuristics usually worked for me, I didn’t really bother.

Yes, because the database of which paths are actually available lives in /nix (under /nix/var/nix/db) and not in /nix/store. If this database is missing or corrupt, Nix will just assume the store is empty, as the database also serves as proof that paths have been built correctly. At least this is how I understand it.
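
For reference, a quick way to see this (the path is Nix’s standard layout):

# the validity database lives under /nix/var, outside /nix/store
ls /nix/var/nix/db/db.sqlite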

@NobbZ According to the docs (and my experience) it’s always a bind mount, at least as long as the volume is backed by the filesystem.

But the docs say: “--volume, -v: bind mount”

One thing to be aware of is that some of the downloads happen during Nix evaluation. This is all the builtins.fetchurl, fetchTarball, … calls. Those are stored in the user’s home and will get re-downloaded between containers.
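
For example (a sketch; the per-user cache location is an assumption about Nix 2.x defaults):

# an evaluation-time fetch: the download is cached in the user's home,
# not in the shared store bind-mount
nix-instantiate --eval -E 'builtins.fetchTarball "https://github.com/NixOS/nixpkgs/archive/master.tar.gz"'
ls ~/.cache/nix/tarballs/   # re-downloaded in every container unless shared too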

Another tricky thing with sharing the /nix folder is that the container itself shouldn’t ship with Nix installed. Otherwise, it might have references to store paths that are not available in the bind-mounted store. This is something that you generally discover later, after a garbage collection, once things are in production 🙂

This is working (shipping the container with Nix installed and referencing the store) - I think all that’s required is to ensure that there’s an anti-GC “pin” in place to prevent the system target from being collected… right?

Technically it can work if both the host and the container have the same store paths available. This can be ensured by adding a GC root on the host so that the container paths never get collected.
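
A minimal sketch of such a pin (the symlink name and store path are illustrative):

# any symlink under /nix/var/nix/gcroots/ acts as a GC root on the host,
# keeping the container's closure from being collected
ln -s /nix/store/<hash>-container-system /nix/var/nix/gcroots/my-container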

It means that the host needs to keep the GC roots and the deployed containers in sync. Let’s say an updated container image is being shipped; now the host also has to be updated. Should the old image references be kept around? What mechanism is used to handle these pins?

The second thing is that the Docker image will ship with a bunch of /nix/store entries, which will never be used because /nix is hidden by the host bind-mount. This is not super critical but it makes the image larger than necessary.

One strategy to work around that is to embed a static version of the Nix binary in the container. Put it in /bootstrap/nix or something. And then bootstrap all of the rest using it. Even the version of Nix that will be used after the bootstrap can be determined by it.

  1. Set up a binary cache
  2. Build and publish the development environment to the cache. Keep note of that store path.
  3. In the container:
    a. Run (exec -a nix-store /bootstrap/nix --realize /nix/store/<hash>-my-env) and add this to the PATH.
    b. Run the rest of the CI commands.
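
A rough shell sketch of step 3 (the /bootstrap/nix path and the store path come from the description above; untested):

# the static Nix is a multi-call binary: setting argv[0] with `exec -a`
# selects the nix-store behaviour
env_path=/nix/store/<hash>-my-env
(exec -a nix-store /bootstrap/nix --realize "$env_path")

# put the realised environment on PATH, then run the CI commands
export PATH="$env_path/bin:$PATH"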

That’s one way to do it that relies heavily on the store. Another way is to ship the container with the bootstrap Nix and the Nix code used to set up the environment, then replace the nix-store realisation with a nix-build.

Backing /nix with ext4 or some other bind-mounted storage (other than the container overlayfs) is probably better than including the entire nix store inside the container image.

You might want to have a look at Arion, which does just that, but on top of docker-compose.

From the Arion documentation:

One of the more interesting built-in modules is the host-store.nix module which performs the bind mounts to make the host Nix store available in the container.

This sounds promising @zimbatm.

Any ideas for a minimal derivation that would create such an image (with Nix tooling available at /bootstrap/nix and the store realised to a bind-mounted /nix/store)?

We’re facing this issue now, attempting to use the nixos/nix image, which already ships with a /nix/store, while binding a persistent (and so far blank) /nix/store.

Alternatively (as @paralin mentioned), I wonder how to make use of the overlayfs driver to layer the stores, treating the container’s /nix/store as read-only but the bound /nix/store as read/write?

You could borrow inspiration from the netboot config: https://github.com/NixOS/nixpkgs/blob/8346dc04b31bc4ade35bd15a72e6cc40c8ac2f73/nixos/modules/installer/netboot/netboot.nix#L39-L70

And this part in the same file is also important: https://github.com/NixOS/nixpkgs/blob/8346dc04b31bc4ade35bd15a72e6cc40c8ac2f73/nixos/modules/installer/netboot/netboot.nix#L106-L108

It uses overlayfs to layer a RW tmpfs store on top of a RO squashfs store.
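
Translated into plain mount commands, the idea looks roughly like this (mount points borrowed from that module; a sketch, not verified outside it):

# the RO squashfs store is mounted at /nix/.ro-store, a RW tmpfs at /nix/.rw-store
mkdir -p /nix/.rw-store/store /nix/.rw-store/work
mount -t overlay overlay \
  -o lowerdir=/nix/.ro-store,upperdir=/nix/.rw-store/store,workdir=/nix/.rw-store/work \
  /nix/store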
