Nix deployments can be very bandwidth-intensive, and in certain deployments such as spacecraft or other very remote systems this can become a major hurdle.
By computing the delta between the desired deployment state and what already exists in the Nix store on the target host, we can drastically reduce the bandwidth required to push an updated closure.
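At the coarsest level, this delta is just a set difference between the store paths the new closure needs and the paths already present on the host. A minimal sketch (the store-path names here are shortened placeholders, not real hashes; real deployments would obtain these sets from the Nix store on each side):

```python
# Hypothetical store-path sets for illustration only.
desired = {
    "/nix/store/aaa-glibc-2.38",
    "/nix/store/bbb-openssl-3.2",
    "/nix/store/ccc-myapp-1.1",
}
on_host = {
    "/nix/store/aaa-glibc-2.38",
    "/nix/store/bbb-openssl-3.2",
    "/nix/store/ddd-myapp-1.0",
}

# Only paths missing on the host need to be pushed.
to_push = desired - on_host
print(sorted(to_push))
```

Finer-grained deduplication (chunk, file, directory level) then shrinks the transfer further, since even a "new" store path usually shares most of its bytes with paths the host already has.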
## Features

- Computes deployment deltas using a content-defined chunker, with additional deduplication at the file and directory level
- Deploys either to a remote Nix store using `nix-store --import` or to a binary cache
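To make the chunking idea concrete, here is a minimal sketch of content-defined chunking with a Gear-style rolling hash. This illustrates the general technique only; the tool's actual chunker, hash, and parameters may well differ:

```python
import hashlib

# Gear table derived deterministically from SHA-256 (illustrative choice).
GEAR = [int.from_bytes(hashlib.sha256(bytes([i])).digest()[:8], "big")
        for i in range(256)]
MASK = (1 << 8) - 1  # ~256-byte average chunks, far smaller than real use

def chunks(data: bytes):
    """Yield chunks, cutting wherever the rolling hash's low bits are zero."""
    h, start = 0, 0
    for i, b in enumerate(data):
        h = ((h << 1) + GEAR[b]) & 0xFFFFFFFFFFFFFFFF
        if (h & MASK) == 0:
            yield data[start:i + 1]
            h, start = 0, i + 1
    if start < len(data):
        yield data[start:]

def digests(data: bytes):
    return [hashlib.sha256(c).digest() for c in chunks(data)]

# Content shared between two versions yields identical chunks, because cut
# points depend only on local content, so only new chunks need to be sent.
common = b"".join(hashlib.sha256(str(i).encode()).digest() for i in range(400))
old = b"A" * 4096 + common
new = b"X" * 3000 + common
old_set = set(digests(old))
to_send = [d for d in digests(new) if d not in old_set]
```

Because the cut points are chosen by content rather than by fixed offsets, an insertion near the start of a file shifts only a few chunks before the chunker resynchronizes, which is what makes the delta small in practice.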
## Docs

If you want to try this out or learn more about the internals, check out the docs:
Yay, back to the roots! In the old days, when any Hydra failure blocked all binary substitutions, binary deltas were also offered between consecutive iterations of build-the-world.
Hi @adisbladis, this looks very interesting! May I ask, do you have any benchmarks for how much bandwidth is saved on average for normal operations, e.g. bumping a flake lock?
It is of course impossible to say exactly how much deduplication saves, as it depends heavily on your particular closures. YMMV.
I’ll use my personal laptop’s closure for demonstration. This wasn’t just a nixpkgs bump but included a bunch of other changes as well.
All sizes are specified in bytes (approximate GiB in parentheses):

| Measurement | Bytes |
| --- | --- |
| Old closure size | 28476465120 (~26.5 GiB) |
| New closure size | 28745549040 (~26.8 GiB) |
| Nix path-info delta | 9002791952 (~8.4 GiB) |
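A quick back-of-the-envelope on those numbers, assuming the path-info delta is the total size of the store paths that are new in the closure:

```python
# Figures from the measurement above, in bytes.
old_closure = 28_476_465_120
new_closure = 28_745_549_040
path_delta = 9_002_791_952

# Fraction of the full closure that a path-level delta would transfer.
fraction = path_delta / new_closure
print(f"Path-level delta is {fraction:.1%} of the full closure, "
      f"so even a naive path-level push saves about {1 - fraction:.1%}.")
```

So a path-level delta already transfers roughly a third of the full closure; chunk-level deduplication can only shrink that further.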