Just like the title says. By internal storage I mean media that cannot easily be removed from the target machine and mounted on another computer. When the closure size is greater than half of the storage space, two problems come up:

- You may not be able to build the closure on the target machine, since the build closure (plus build cache) would normally exceed the total storage space.
- You may not be able to push a new closure (a full update, e.g. after a staging merge) to the target machine while it is running the previous closure, since old and new together would also exceed the total space.
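To make the second constraint concrete, here is a rough sketch of the space check. The `fits_in_place` helper and all the numbers are made up for illustration; in practice you would get real sizes from `nix path-info -S` and `df`:

```shell
# Peak usage during an in-place switch is roughly:
#   old closure + new closure - what the two closures share.
# Sizes in MiB; fits_in_place is a hypothetical helper, not a real tool.
fits_in_place() {
  old=$1; new=$2; overlap=$3; disk=$4
  [ $((old + new - overlap)) -le "$disk" ]
}

if fits_in_place 6000 6500 4000 10000; then
  echo "in-place switch should fit"
else
  echo "need to GC or copy out-of-band first"
fi
```

With those example numbers the peak is 8500 MiB, which fits on a 10000 MiB disk; with no shared paths it would not.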
In this situation, what is the best practice for deploying and maintaining such a machine without repeating complicated manual steps?
If you can use a file system with transparent compression, like btrfs or ZFS, I find the Nix store consistently gets compression ratios between 1.5x and 2x. ZFS can also do transparent deduplication, though its performance impact is very significant.
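For btrfs, a minimal sketch of what that looks like (mount point and flags are illustrative; on NixOS you would normally put the `compress=zstd` option in your `fileSystems` configuration instead of remounting by hand):

```shell
# Enable zstd compression on the filesystem holding /nix.
# Note: a remount only affects newly written files; existing store
# paths would need a recompressing defragment pass to benefit.
sudo mount -o remount,compress=zstd /nix

# Measure the achieved compression ratio with the compsize tool.
sudo compsize /nix/store
```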
If you use it with a filesystem with extent sharing like btrfs or xfs, it effectively gives you storage + network deduplication on the nix store at a small chunk level (64 or 256 KB), which saves a ton of space for a set of similar system closures.
I don’t have a ton of stats yet, but in one experiment, a system closure upgrade that used 47% additional space with file-level dedup only used 13% with chunk-level dedup.
(Caveats: it only works for substituted paths, not locally built ones; you need to set up some special infrastructure; and there is support for on-demand fetching of contents, but you can ignore those parts.)
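A toy illustration of why chunk-level dedup wins: below, 4-byte "chunks" stand in for the real 64/256 KB ones. File-level dedup has to store both versions whole because their contents differ, while a chunk-level store only pays for the one changed chunk:

```shell
# Two versions of a "file" that differ in one small region.
v1='AAAABBBBCCCCDDDD'
v2='AAAABBBBXXXXDDDD'   # only the third chunk changed

# Split both into fixed-size chunks and count distinct ones.
# File-level dedup stores 8 chunks; chunk-level stores only 5.
printf '%s%s' "$v1" "$v2" | fold -w4 | sort -u | wc -l
```

The pipeline reports 5 distinct chunks out of 8 total, i.e. the second version costs one extra chunk rather than a full copy.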