Any reason not to share a read-only `/nix/store` mount between servers?

Let’s say I have a production system with multiple network application servers for redundancy,
and the application they run is deployed in the /nix/store. Is there any reason to duplicate the
store between all the servers?

What would happen if all the application servers had the Nix store mounted read-only, while the rest of /nix was, of course, per-server, i.e. not shared?

Would this be easy to set up? Does it have any potential pitfalls? Theoretically, Nix store paths are read-only, so it seems as if there’s no problem, and yet I haven’t come across such a deployment, and I wonder why that is.
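For concreteness, the setup being asked about might look roughly like this. The hostname, network range, and export options below are assumptions for illustration, not a tested configuration:

```shell
# On the machine holding the store (hypothetical host "store-host"),
# export it read-only, e.g. in /etc/exports:
#   /nix/store  10.0.0.0/24(ro,no_subtree_check)

# On each application server, mount it read-only while the rest of
# /nix (profiles, the SQLite database, gcroots) stays local:
mount -t nfs -o ro store-host:/nix/store /nix/store

# Equivalent /etc/fstab line:
#   store-host:/nix/store  /nix/store  nfs  ro  0  0
```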

Thanks for the help!


I am in a similar situation (10+ servers which need identical Nix setups) and decided not to put the Nix store on a shared drive. My reasons are:

  1. Speed. Local disks are obviously a lot faster. You WILL notice the difference.
  2. Accidents. If something happens to the nix store on a shared drive all the servers are dead.
  3. The SQLite database Nix uses is notoriously unfriendly to shared drives (NFS in particular). YMMV.

I ended up using one of my servers as the master and wrote a simple script that rsyncs /nix (so that the default profile with all the applications is also shared) to the other servers. I run it whenever I make a change to the Nix setup (and like the result).
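A minimal sketch of such a sync script, assuming rsync over SSH and hypothetical hostnames:

```shell
#!/usr/bin/env sh
# Push the whole of /nix (store, database, and default profile) from the
# master to each application server. --delete drops paths that were
# garbage-collected on the master; -H preserves Nix's hard links.
for host in app1 app2 app3; do
    rsync -aH --delete /nix/ "$host":/nix/
done
```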


@alexv Thanks for your response!

You raised valid points.

The idea with rsync is also nice.

Currently I have a script that does a nix-store --import on each of the servers.
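That approach can be sketched like this, with a hypothetical attribute name and hostnames; `nix-store --export` combined with `nix-store -qR` copies a path’s whole closure:

```shell
# Build on the master, then ship the closure to every server.
out=$(nix-build -A myapp)   # "myapp" is a hypothetical attribute
for host in app1 app2 app3; do
    nix-store --export $(nix-store -qR "$out") | ssh "$host" nix-store --import
done
```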

@psychic are you running an HPC environment? During the last NixCon I heard that this was the approach a few of them took: have a single “master” machine that holds the store and share /nix/store as a read-only NFS volume. I believe the machines would run another distro such as CentOS, so the store isn’t needed during boot.

Potentially it should also be possible to mount a secondary store and then use the --store /mnt/nix option for the userland packages.
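For illustration, using a store mounted at a hypothetical /mnt/nix might look like this (a sketch under those assumptions, not a tested setup):

```shell
# Build against the secondary store instead of the local /nix:
nix-build --store /mnt/nix '<nixpkgs>' -A hello
```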

See also this old thread: Redirecting to Google Groups

I am running a small HPC system on NixOS and faced a similar question when setting it up. Despite several reports that exporting the Nix store via NFS can work, I could not get comfortable with the idea of tying a central part of the OS to NFS (mainly for the reasons mentioned by @alexv).
Instead I chose to decentralize the Nix store: my master/login node acts as a cache for the compute nodes (via nix-serve). When I run “nix-build -A”, “nix-shell -p”, or “nix-store --realise” for a specific path or package, it is fetched automatically on the compute node from which I am running the command. That way there is no need to replicate the whole store on all nodes, and I do not need to keep track of a sync status.
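In configuration terms, that setup can be sketched as follows; the hostname, port, and master key placeholder are assumptions:

```shell
# On the master/login node: serve the local store as a binary cache.
nix-serve --port 5000 &

# On each compute node, add the master as a substituter in nix.conf:
#   substituters = http://master:5000 https://cache.nixos.org
#   trusted-public-keys = <master-signing-key> cache.nixos.org-1:6NCHdD59X431o0gWypbMrAURkbJ16ZPMQFGspcDShjY=
```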


I have run Nix on multiple HPC systems with the store located in my home directory, mounted either over NFS or over GPFS. I imagine it should also work fine over AFS. This works well for read-only use; I’m doing quite a bit of distributed computing with it. Unfortunately, it is not possible to write to Nix’s SQLite database from multiple nodes at the same time – that will quickly corrupt the database. Because of that, I always do write operations from one specific node.

I am not running an HPC environment, but it seems like those who do have replied :slight_smile:

Thanks everyone for your input on this case!

For now I’m going with the no-share internal-cache option, but I will be bookmarking this link for when I need to make the choice again.

About NFS: I have successfully run a dozen remote builders on the same Nix store on NFS with the following patch to Nix: Nix Installation Guide - NixOS Wiki


Are they running on NixOS?

No, they were on another distro (probably some variant of RHEL, but I don’t remember for sure).