Good filesystem for the nix store

I am wondering what a good choice of filesystem for the nix store would be.
The nix store seems to me to have a somewhat special write pattern (data is never overwritten).
I guess nilfs2 (NILFS - Wikipedia) might be a good choice.


Possibly (I don’t have experience/knowledge). I’ve been using btrfs typically, which also uses non-overwriting approach (by default), and it seems relatively common among nixers (and generally more popular than nilfs, I think). But even plain ext4 is quite fine, I believe… at least unless you want some of the extra features that ext4 doesn’t have.


I garbage collect the store somewhat irregularly, so transparent compression support in ZFS and btrfs is really nice (I don’t know if NILFS supports transparent compression). The compression ratio of the root volume on my current machine:

% zfs get compressratio /
NAME              PROPERTY       VALUE  SOURCE
zpool/root/nixos  compressratio  1.88x  -

But even plain ext4 is quite fine, I believe… at least unless you want some of the extra features that ext4 doesn’t have.

Key missing feature: dynamic allocation of extra inodes…

(Nix store can get symlink-heavy with some use patterns, so it is possible to run out of inodes without being even close to running out of space)
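Inode exhaustion of this kind is easy to spot before it bites. A quick check (replace / with whatever mount point holds /nix/store on your system):

```shell
# Show inode usage rather than block usage. On ext4 the inode table is
# sized at mkfs time, so the IUse% column can reach 100% (store writes
# start failing) while df -h still shows plenty of free space.
df -i /
```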


I wonder how to enable transparent btrfs compression for /nix/store. The compression flag (chattr +c) is silently ignored by Nix for any new file or directory inside /nix/store.
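As an aside: besides the per-file chattr +c flag, btrfs compression is commonly enabled filesystem-wide through the compress mount option, which applies to all newly written data. A sketch of the NixOS configuration, assuming /nix sits on its own btrfs mount (adjust to your actual layout):

```nix
# Hypothetical layout: /nix on a dedicated btrfs partition or subvolume.
# compress=zstd only affects new writes; existing files stay uncompressed
# until rewritten.
fileSystems."/nix".options = [ "compress=zstd" ];
```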


Can I ask why you have chosen to add decompression latency to your accesses, when disks are relatively cheap these days, at least compared to processors? (Curious, not arguing.)

lz4 decompression speed is ~5GB/s on modern machines.

Whatever the machine is doing with the data is probably going to be more expensive than decompression.


I format my disks with XFS and pass “-m reflink=1” to mkfs.xfs to enable CoW for data. I tried many filesystems before, including ext4, btrfs and f2fs, but they were all insufficient in one way or another.

  • ext4: doesn’t support reflinks
  • btrfs: sometimes destroys itself with no way to recover the partition
  • f2fs: needs a long time to perform fsck after kernel updates; eats files that are open when the computer loses power

I automatically run the gc on all my systems so space does not become an issue so easily.
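The automatic GC mentioned here can be expressed declaratively in NixOS; a sketch (the schedule and retention values are illustrative, not the poster's actual settings):

```nix
nix.gc = {
  automatic = true;
  dates = "weekly";                     # illustrative schedule
  options = "--delete-older-than 30d";  # illustrative retention window
};
```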


I’ve never noticed and thus never investigated. I suspect it might be a side-effect of Nix juggling with read-only remounts, unsharing, etc.

I observe that the /nix/store btrfs compression property gets inherited for locally built store paths, as opposed to ones substituted from the binary cache.
As evidence, GRUB 2.02 wasn’t able to boot NixOS after I added an out-of-tree (locally built) kernel module to boot.extraModulePackages, because of the unsupported zstd compression of the init and initrd files. GRUB 2.04 reads zstd-compressed files successfully.


I can confirm the same issues with f2fs; I think I have a thread about it on Discourse, actually.

Thanks; let me open a ticket: btrfs compression ignored · Issue #3550 · NixOS/nix · GitHub


I saw there is a workaround for btrfs not compressing /nix. The way to activate this workaround is with the option: nix.extraOptions = "preallocate-contents = false";.

But if I activate it I see this warning: warning: unknown setting 'preallocate-contents'

Is this option not available in NixOS unstable?

If you do a lot of PR reviews, you will likely have a lot of similar files, so ZFS with compression + dedup actually works pretty well:

$ zpool list nixstore
NAME      SIZE   ALLOC  FREE   CKPOINT  EXPANDSZ  FRAG  CAP  DEDUP  HEALTH  ALTROOT
nixstore  1.81T  807G   1.02T        -         -   43%  43%  3.06x  ONLINE  -
$ zfs get compressratio nixstore/store
NAME            PROPERTY       VALUE  SOURCE
nixstore/store  compressratio  1.91x  -

close to a 6x combined compress × dedup ratio (1.91 × 3.06 ≈ 5.8)

EDIT: I should also mention that dedup takes up a massive amount of RAM, so I wouldn’t do it without good justification


In Btrfs from time to time I force compression with this command:

sudo btrfs filesystem defragment -r -v -czstd /nix

And dedup:

sudo duperemove -drv /nix
1 Like

Wouldn’t Nix’s auto-optimise-store option make dedupe unnecessary?
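For reference, the option under discussion is a one-liner in NixOS configuration; it hardlinks identical files in the store (whole-file deduplication, as opposed to block-level dedup):

```nix
# On current NixOS this is spelled nix.settings.auto-optimise-store;
# older releases used nix.autoOptimiseStore.
nix.settings.auto-optimise-store = true;
```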

Let me check; it will take a while to run. But I think dedup runs on a per-block basis, whereas optimise just hardlinks identical files… And I’ve had cases where optimise reported negative savings before as well.


True, but I wouldn’t expect block-level dedup to occur very often unless the files the blocks come from are identical.

Yea, but when reviewing PRs (including those targeting staging), it seems to be pretty common.


I catalogued my zfs usage here:

In short, despite autoOptimise being on, I had a dedup ratio of 1.35x.

(I’ve turned off autoOptimise now)

I should also note that on my system with 8 GB of RAM, nix-env --install behaves normally regardless of the size of the derivation, but nixos-rebuild takes my RAM usage from 30% to 98% and the system comes to a standstill. I am not sure how to explain this…