Today I’m releasing disko-zfs into the open. In short, it lets you manage ZFS datasets declaratively, either directly via a JSON schema or, if you’re on NixOS, via Nix.
You simply give disko-zfs a schema and it ensures that your ZFS pool matches it. disko-zfs synthesizes a list of commands which, if executed, bring everything in line. However, it will never execute any destructive dataset commands; the only things it does are create new datasets and set/unset properties.
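To make the idea concrete, here is a minimal sketch of that reconciliation step: diff a desired schema against the current pool state and emit the non-destructive commands that would make them match. This is purely illustrative (not disko-zfs’s actual code, and the dataset names and properties are made up).

```python
# Illustrative sketch of declarative dataset reconciliation (NOT disko-zfs's
# actual code). Given a desired schema and the current pool state, emit the
# zfs commands that would make them match: creating datasets and setting
# properties, never destroying anything.

def plan(desired: dict, current: dict) -> list[str]:
    """desired/current map dataset names to {property: value} dicts."""
    commands = []
    for name, props in desired.items():
        if name not in current:
            commands.append(f"zfs create {name}")
            existing = {}
        else:
            existing = current[name]
        for prop, value in props.items():
            if existing.get(prop) != value:
                commands.append(f"zfs set {prop}={value} {name}")
    # Datasets present in `current` but absent from `desired` are deliberately
    # left alone: no destructive commands are ever emitted.
    return commands

print(plan(
    {"tank/data": {"compression": "zstd"}, "tank/home": {"quota": "100G"}},
    {"tank/data": {"compression": "off"}},
))
```

The key design point is that the diff is one-directional: anything in the pool but not in the schema is simply ignored, which is what makes the tool safe to run against a live pool.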
Numtide has been using this tool internally for a month now, and so far it works great. Still, always make sure it will do what you think it will do before applying a new configuration. Oh, and backups: don’t forget backups. You have been warned!
The development of disko-zfs was done under contract with Numtide, so a big shout-out to Numtide! For more details, see the accompanying blog post and the repository itself.
Just yesterday I was rambling on Discord about how I’d want to see a ZFS-specific Disko that can actually create new datasets (and delete them, though in hindsight I’m glad that isn’t implemented).
I thought that going further declaratively than the initial partitioning at installation might be out of scope for Disko as a whole, but it could work if the focus were just ZFS.
And apparently my wishes were heard so thank you for creating this and thanks Numtide for making it possible.
I’m very excited to try this out!
Just yesterday I was rambling with him! Quite the coincidence that we were discussing this and then it appears. Thank you Numtide for another good-looking project.
I wonder if it would be a good idea to do a zpool checkpoint beforehand, to make sure any mistakes made are reversible.
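For reference, ZFS supports exactly this: a pool-wide checkpoint you can rewind to if an applied plan turns out wrong. A sketch with the standard `zpool` commands (the pool name `tank` is just a placeholder):

```shell
# Take a pool-wide checkpoint before applying the plan.
zpool checkpoint tank

# ... apply the disko-zfs plan here ...

# If something went wrong: rewind the whole pool to the checkpoint.
# (Requires exporting the pool first; everything since the checkpoint is lost.)
zpool export tank
zpool import --rewind-to-checkpoint tank

# If everything is fine: discard the checkpoint to release the held-back space.
zpool checkpoint --discard tank
```

Note that only one checkpoint can exist per pool at a time, and while it exists the pool cannot be shrunk or have devices removed, so it’s best discarded once the new configuration is verified.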
> Wonder if it would be a good idea to do a zpool checkpoint beforehand, to make sure any mistakes made are reversible.
Hm, maybe also run the output as a channel program, so that the whole thing is one transaction.