Filesystem recommendations

Note that this issue does not exist in this form on btrfs, as its extent size (the rough analogue of recordsize) is dynamic. Writes produce an extent the size of the write (rounded up to the page size); if you only write 4K, you only write a 4K extent.

btrfs does have significant metadata write overhead though, roughly an order of magnitude more than other filesystems. This doesn't matter much for high-throughput applications, since the overhead shouldn't scale with throughput, but it does matter for low-throughput ones.


Yea, that’s important. Though, before anyone assumes ZFS can only write recordsize-sized records: it can write smaller records (rounded up to the sector size), but only for the last record of a file. This is hugely important for small files, as you can imagine writing a 1M record for every tiny file would be rather wasteful in both space and IO.
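To make the difference concrete, here's a back-of-the-envelope sketch (my own simplification, not real filesystem code) of how many bytes actually hit the disk for a small write under each allocation rule described above. The page size, sector size, and recordsize values are typical defaults, not universal:

```python
PAGE = 4096           # typical btrfs page size on x86-64 (assumption)
SECTOR = 4096         # ZFS sector size with ashift=12 (assumption)
RECORDSIZE = 1 << 20  # ZFS recordsize=1M, as in the example above

def btrfs_extent(write_len):
    """btrfs: the extent matches the write, rounded up to the page size."""
    return -(-write_len // PAGE) * PAGE  # ceiling division

def zfs_record_written(file_len):
    """ZFS: bytes written to rewrite one record of a file of file_len bytes."""
    if file_len <= RECORDSIZE:
        # the tail (here, only) record is sized to the file,
        # rounded up to the sector size
        return -(-file_len // SECTOR) * SECTOR
    # inside a large file, a small write still rewrites a full record
    return RECORDSIZE

print(btrfs_extent(4096))            # 4096
print(zfs_record_written(4096))      # tiny file: 4096
print(zfs_record_written(10 << 20))  # 4K write in a 10M file: 1048576
```

So for tiny files the two behave alike, but rewriting a few bytes in the middle of a large file costs a full 1M record on ZFS versus a page-sized extent on btrfs.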

At a higher level, these filesystems are definitely designed not to eat your SSDs - it’s not 2010 anymore; there has been plenty of time both for SSD hardware and firmware to mature and for filesystems to recognize that SSDs are now the main storage medium.

The manuals for both btrfs and zfs cover the quirks quite well if you’re concerned:


The storage pool code now attempts to disable COW by default on btrfs, but management applications may wish to override this behaviour. This is now possible via the new cow element.
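For reference, the override mentioned in that release note would look roughly like this in the storage pool XML. The exact placement of the element is my assumption from the note's wording; check the libvirt storage pool XML documentation for your version before relying on it:

```xml
<pool type='dir'>
  <name>images</name>
  <target>
    <path>/var/lib/libvirt/images</path>
  </target>
  <!-- assumption: keep COW enabled on btrfs, overriding libvirt's default -->
  <features>
    <cow state='yes'/>
  </features>
</pool>
```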
