I’ve configured it on my desktop NixOS machine, because it has a powerful CPU for building local nix packages, and it works great - but obviously since it isn’t on all the time, that’s not ideal.
I’ve been thinking of improving this by moving the binary cache to a permanently running VM on one of my servers, but then it won’t be ideal for building local nix packages, as my server CPUs are anemic compared to the desktop.
I wanted to ask for some strategies for using the nix binary cache and building packages using a host which is not itself the binary cache.
One potential solution I thought about is to have the binary cache on some sort of fileshare (NFS/SMB), but I hear this can be potentially problematic, so I came here to ask.
Nix supports this natively; you don’t need to do anything funny on the server side with fileshares or anything.
If you want to just build on your desktop, but upload the build results to a remote cache, you can set up a build hook to automatically push build results to the remote.
You can specify a remote builder with the `--builders` flag if you have a host that you can push builds to.
If you wrap nix, you could even make this conditional on whether your build server is on the network and online, with a bit of a delay due to ping or whatever else you use to detect your host.
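For reference, the remote-builder side of that can be sketched in NixOS config roughly like this (hostname, user, and numbers are placeholders, not a drop-in setup):

```nix
{
  # Placeholder hostname/user; adjust for your environment.
  nix.distributedBuilds = true;
  nix.buildMachines = [{
    hostName = "buildhost";
    sshUser = "builder";
    system = "x86_64-linux";
    protocol = "ssh-ng";
    maxJobs = 8;
    speedFactor = 2;
  }];
}
```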
Incidentally…
nix-serve is the reference implementation and the one suggested by the manual, in case you weren’t aware - many people seem to think that serving a nix store is complex and requires attic or suchlike.
Of course, harmonia works and does the job well, but it isn’t the “default” setup. This is mostly a PSA.
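On NixOS it’s only a couple of lines (key path is a placeholder; generate a signing key with `nix key generate-secret`):

```nix
{
  services.nix-serve = {
    enable = true;
    secretKeyFile = "/var/lib/secrets/cache.secret";  # placeholder path
  };
}
```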
a build hook (post-build-hook) - lets the client upload build results to a target host’s nix store
the `--builders` flag - allows other hosts to perform builds for the client
I can see for a simple use case I could just use the build hook to upload builds to the remote nix cache.
For a more robust solution, I suppose both should probably be combined so that:
all hosts have a local nix cache example_cache configured with a higher priority
all hosts have a remote builder example_builder configured for building, say, x86_64-linux
the example_builder has a build hook configured, so that builds get automatically uploaded to example_cache
I suppose the last one is kind of optional, as the benefit would be pre-populating the nix store in the example_cache without having to wait for hosts to trigger builds on the example_builder.
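In config terms, I imagine the cache-priority part would look something like this (placeholder URLs and key; lower priority wins, and cache.nixos.org defaults to 40):

```nix
{
  nix.settings.substituters = [
    "http://example_cache:5000?priority=10"
    "https://cache.nixos.org"
  ];
  nix.settings.trusted-public-keys = [
    "example_cache-1:<public key>"  # hypothetical key name
  ];
}
```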
Is my understanding correct?
This sounds fantastic. Would you mind expanding on this?
During my initial research into the topic of self-hosting a nix binary cache, I found nix-serve, nix-serve-ng, harmonia and attic as options.
I saw some complaints about the reference implementation being slow, and harmonia being a nix-community project by Mic92 made it a quick choice for me based on those factors. I didn’t think too long or too deeply about it.
But based on what I’ve seen so far, I think nix-serve would be perfectly fine for me as well.
Pretty much, yes. Only, the last step isn’t optional: the builder’s cache may be on a shorter garbage collection cycle, or the builder might be offline.
It’s best to always use them separately, unless the builder and cache are the same machine.
Somewhat pseudocodey example using nushell:
def --wrapped main [...args] {
  let ping = ping -c 1 buildhost | complete
  if $ping.exit_code == 0 {
    # Build host reachable: run nix with the remote builder configured
    nix --builders 'ssh://buildhost x86_64-linux' ...$args
  } else {
    # Build host unreachable: fall back to a purely local build
    nix ...$args
  }
}
If you use that instead of the nix command, it should automatically fall back to local builds.
I don’t actually know if this is necessary, to be honest - quite possibly nix just decides the builder is offline and builds locally - but the ping is probably quicker than nix timing out for every derivation.
Hold on, I think I missed something. Does configuring the build hook on example_builder allow for sending (caching) remote builds to example_cache when they’re initiated via `--builders` from other hosts?
I was under the - perhaps wrong - impression that configuring the build hook only pushed builds initiated from the host where the build hook itself was configured.
Bit off topic - what are your thoughts on nushell? I’ve been interested in it for a while now, but as a Windows Powershell user trying to adapt to the Unix world, I sidelined it to get more acquainted with unixy shells first.
AIUI configured build hooks also run when the nix daemon is used as a remote builder.
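If so, on the builder that would just mean pointing the hook at the upload script (path is a placeholder):

```nix
{
  # Hypothetical path; the hook script receives the built paths in $OUT_PATHS.
  nix.settings.post-build-hook = "/etc/nix/upload-to-cache.sh";
}
```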
I avoid using POSIX-ey shells wherever I can these days. They’re riddled with footguns and insane syntax designed for what people thought was good UX - and saved screen space and keystrokes - in the 1980s.
I use Nushell mainly as the most usable replacement I’ve found. I like its pipeline concept, and it has a lot of very useful built-ins, while still being fairly terse; as much as I don’t think limiting keystrokes should be the main design goal, it’s still important for a shell to maximize expressibility.
I find however that its match expression is a bit underdeveloped, and there are still a few areas where things feel imperfect (e.g. lack of file descriptors, and other aspects of more complex output handling).
It’s also not had a full release yet, so it still changes in backwards-incompatible ways. While that means details like the above may eventually be cleaned up, this will probably bite me eventually.
Still, it’s usable enough, and simply not having to write bash is worth the tradeoffs to me.
The only real alternative would be Oils, which is Python-based, and I avoid using Python-based software; it’s a bit of a quality smell, usually results in noticeably slow runtimes, and the dependency/build mess is simply not worth it.
Powershell was also on my radar for a bit, but it’s too alien in the unix world, and I find it both less expressive and not significantly less riddled with poor choices than bash & co.
Manual nix builds work, but when other hosts try to use the remote builder during a nixos-rebuild it generally causes the nixos-rebuild to fail - probably because of the large number of paths being copied to the remote store.
This is the post-build-hook I’ve been using:
#!/bin/sh
set -eu
set -f # disable globbing, since store paths are split on spaces below
export IFS=' '
CACHE_HOST="host.nix-cache.cyn.internal"
echo "Signing paths" $OUT_PATHS
nix store sign --key-file /var/lib/secrets/harmonia.secret $OUT_PATHS
if $TOOL_PING -c 1 "$CACHE_HOST" > /dev/null 2>&1; then
  echo "Uploading paths" $OUT_PATHS
  nix copy --to "ssh-ng://tk@$CACHE_HOST?ssh-key=/home/tk/.ssh/id_ed25519" $OUT_PATHS
else
  echo "Ping to $CACHE_HOST failed, skipping upload." >&2
fi
It’s making me wonder if this is a viable idea - copying all the paths makes the nixos-rebuilds take a long time.
Is it possible to ensure that nix store paths built on the remote builder get successfully copied to the cache server, without affecting the duration of a nixos-rebuild, and more reliably than the script above?
I can always build my custom packages in a way where they are always uploaded to the nix cache server, and that would satisfy a large part of the desired utility. Though it would also be nice to cache everything built on the remote builder, and have hosts hit the cache server before the official NixOS cache.
Never mind… I’ve solved the problem mentioned in my post above, and have arrived at a solution that I like, though it was not what I had originally wanted.
The solution is to use task spooler within the post-build-hook script, which prevents the nix copy commands from blocking a nixos-rebuild:
#!/bin/sh
set -eu
set -f
export IFS=' '
CACHE_HOST="host.nix-cache.cyn.internal"
if [ -n "${OUT_PATHS:-}" ]; then
  export TS_MAXFINISHED=1000  # keep up to 1000 finished jobs in the ts queue
  export TS_SLOTS=10          # allow up to 10 parallel uploads
  echo "Uploading $OUT_PATHS"
  printf "%s" "$OUT_PATHS" \
    | xargs ts nix copy --to "ssh://tk@$CACHE_HOST"
fi
I’ve decided against using the remote builder for all of my homelab hosts, however - I don’t think that’s necessary, and while my build host can handle it, the VM I’m using for the nix cache can not.
I didn’t realize until some testing, but receiving a `nix copy` of a fresh nixos-rebuild from a host makes the nix-daemon on the cache go absolutely wild - it’s extremely expensive and taxing on the CPU.
I was not even able to populate the cache initially because my VM had a pitiful 2 CPUs and seemed frozen. I tried harmonia’s new rust-based nix daemon, which sped up the process dramatically and made it possible, but I still ended up assigning two additional CPUs, for a total of 4, to make it faster.
I think this is fine as it is for now; here are the relevant parts of my Nix config:
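Roughly along these lines (a sketch with simplified paths and names; option names may vary by nixpkgs version):

```nix
{
  # Sketch only, not the exact config.
  services.harmonia = {
    enable = true;
    signKeyPaths = [ "/var/lib/secrets/harmonia.secret" ];
  };
  # Let clients reach the cache over HTTP on harmonia's default port.
  networking.firewall.allowedTCPPorts = [ 5000 ];
}
```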
I’m still curious whether there are better caching strategies for a homelab - the idea was for the nix-cache server to be this anemic, weak VM with a bunch of storage. It’s possible that some of the issues I’m having are harmonia-related, but I suspect I’d have similar problems with the other binary cache solutions.
Well, thanks for sharing that experience. I’ve not run into these issues, but I’ve always run caches way overspecced, because I combine them with build hosts.
It’d be interesting to benchmark this to figure out what’s actually causing the stress.