Best practice to deploy an 18G NixOS closure on a 32G internal drive?

Just like the title says. Internal storage means the storage media cannot be easily removed from the target machine and mounted to another computer. When the closure size is greater than half of the storage space, two problems arise:

  1. You may not be able to build the closure on the target machine, as the build closure (plus build cache) would normally exceed the total storage space.
  2. You may not be able to push a new closure (a full update, e.g. after a staging merge) to the target machine while it is running the previous closure, since the old and new closures together would exceed the total space, too.

In this case, what is the best practice to deploy and maintain it without doing complicated tasks repeatedly?


Is netboot/PXE an option in this case?

I recently saw a blog post by Carlos Vaz that went into PXE boot on NixOS.


I was running NixOS on a Raspberry Pi CM4 with an 8GB eMMC for a while. The closure size was ~5GB. What kinda worked for me:

  1. Machine was deployed very often to minimize the size differences between generations
  2. System was built on a different machine
  3. Very aggressive garbage collector
  4. Journald in RAM
  5. Impermanence
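
Items 3 and 4 can be expressed directly in NixOS configuration. A rough sketch along these lines (the option values here are illustrative, not my exact config):

```nix
# Hypothetical fragment of configuration.nix for a space-constrained machine.
{
  # 3. Aggressive garbage collection: run daily, keep almost no history.
  nix.gc = {
    automatic = true;
    dates = "daily";
    options = "--delete-older-than 1d";
  };
  # Hard-link identical files in the store to reclaim extra space.
  nix.settings.auto-optimise-store = true;

  # 4. Keep the journal in RAM only, capped so it can't grow unbounded.
  services.journald.extraConfig = ''
    Storage=volatile
    RuntimeMaxUse=64M
  '';
}
```

For item 2, building on a different machine, `nixos-rebuild switch --target-host` lets you build the closure elsewhere and push only the result.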

Occasionally I had to disable some packages that had big closures, deploy, collect garbage, enable the packages back and deploy again.

In the end it was too much hassle, and if I still needed that machine, I would probably go the PXE route. Maybe with iSCSI to store data.


If you can use a file system with transparent compression, like btrfs or zfs, I find the nix store consistently gets compression ratios between 1.5x and 2x. ZFS can also do transparent deduplication, though the performance impact is very significant.
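
On btrfs, for example, enabling compression is just a mount option. A hypothetical `fileSystems` entry (the device label is a placeholder):

```nix
# Hypothetical root filesystem entry with transparent zstd compression.
{
  fileSystems."/" = {
    device = "/dev/disk/by-label/nixos";  # placeholder label
    fsType = "btrfs";
    options = [ "compress=zstd" "noatime" ];
  };
}
```

Note that `compress=zstd` only applies to data written after the option is in effect; existing store paths would need to be rewritten (e.g. with `btrfs filesystem defragment -czstd`) to benefit.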


This may be a stretch for your use case, but you could take a look at a project I’ve been working on: GitHub - dnr/styx: alternate nix substitution

If you use it with a filesystem with extent sharing like btrfs or xfs, it effectively gives you storage + network deduplication on the nix store at a small chunk level (64 or 256 KB), which saves a ton of space for a set of similar system closures.

I don’t have a ton of stats yet, but in one experiment, a system closure upgrade that used 47% additional space with file-level dedup only used 13% with chunk-level dedup.

(Caveats: it only works for substituted paths, not locally built; you need to set up some special infra; there’s support for on-demand fetching of contents but you can ignore those parts)
