Hi,
I wouldn’t use any rolling release for production servers. For
personal computing, I think it is fine. I am honestly saving time
compared to having to tweak and configure my OS. I have other more
important things (for me) that I want to work on. I want more
configuration than macOS or Windows, but not as much as needed for
bare Arch Linux or NixOS.
Being an admin I can’t agree: you save time as long as nothing breaks,
which does happen, and not so rarely; more importantly, it can happen
at any update, while on a classic release model you know that
something might, and probably will, break, but at a specific time, and
you do the upgrade when you have spare time for it.
In an ideal world things might be different, but in the real world a
system that can be kept up to date indefinitely without breakage does
not exist. No OS in the world, proprietary or FOSS, manages that.
The main advantage of NixOS for me is simple: when something breaks
I can recover quickly with an almost fully automated deploy. I can’t
forget a package or a parameter in some config in /etc; it’s all in a
few text files in a single place, easily versioned. Essentially the
very same thing we have been doing for years, from personal scripts to
modern orchestration tools like SaltStack or Ansible.
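To give an idea, a minimal sketch of what I mean (hostname, user and
package list are placeholders, not my actual setup): the whole machine
described in one file that can live in git.

```nix
# /etc/nixos/configuration.nix -- hypothetical minimal example:
# packages, services and settings for the whole machine in one versioned file.
{ config, pkgs, ... }:

{
  imports = [ ./hardware-configuration.nix ];

  networking.hostName = "workstation";   # placeholder name

  # instead of remembering "I once edited something under /etc/ssh/"
  services.openssh.enable = true;

  # instead of remembering which packages were installed by hand
  environment.systemPackages = with pkgs; [
    git
    vim
    htop
  ];

  users.users.ingmar = {                 # placeholder user
    isNormalUser = true;
    extraGroups = [ "wheel" ];
  };

  system.stateVersion = "23.11";
}
```

One “nixos-rebuild switch” later the machine matches that file, and
when a disk dies the same file plus a data backup is all that’s needed
to redeploy.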
When you deploy a system by hand you can’t always remember everything,
and it’s really unlikely that a distro’s defaults perfectly match your
needs and desires. If you reinstall once a year, I mean a fresh install
per year, at least your “install skills” stay current and you know your
desktop well enough; doing it less frequently means having your desktop
in a semi-unknown (forgotten) state. And when a disk breaks you have to
start over again, losing far more time than the little upkeep NixOS
requires, and losing it on the spot, not when you are ready and free.
Windows is a big pain in that sense because it normally breaks at any
point in time, and its automation, even if PowerShell can now be viewed
as a kind of NixOS config, is extremely hard, with absurd design
choices and a lot of work needed for any trivial thing. I do not know
the state of OSX automation, but since I do not see it in an enterprise
context I suspect it is even worse.
Counting on a system that “can last longer” and pushing it until it
crashes is a recipe for disaster, not time saved.
It fits my needs well, because I just wanted to wipe a computer clean
and boot into a Linux distro as a daily driver without fuss. Maybe
that isn’t your case. I personally am not going to dual boot anything,
I’m not going to make partitions, I just need one, etc. I just want to
get to work on my other projects.
I do not have a dual boot of any kind, but I do need partitions for
backup and sharing purposes: for that I use zfs, because it offers a
damn good “rampant layer violation” that makes anyone’s life easy, with
a common storage pool, effective snapshots, compression, checksumming,
live resilvering etc. With it I can keep my data, not only my system,
in good shape. I can back up and share portions of my home easily, view
the data in different ways, etc. Classic POSIX filesystems are a thing
of the past, too limited for today’s needs. And in the past we even
went a bit further, for instance with Plan 9’s filesystem-in-userspace
concept, which is now the ubiquitous FUSE.
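And zfs fits the same declarative picture: roughly something like this
in the NixOS config (pool and dataset names, the hostId and the
retention numbers are made-up values, and option details may vary
between releases).

```nix
# Hypothetical zfs-related fragment of configuration.nix:
# kernel support, periodic scrubbing and automatic snapshots,
# declared next to the rest of the system.
{
  boot.supportedFilesystems = [ "zfs" ];
  networking.hostId = "deadbeef";        # zfs wants a stable 8-hex-digit host id

  services.zfs.autoScrub.enable = true;  # periodic scrub, so checksumming pays off

  services.zfs.autoSnapshot = {
    enable   = true;
    frequent = 4;                        # keep 4 of the 15-minute snapshots
    daily    = 7;
    weekly   = 4;
  };

  fileSystems."/home" = {
    device = "rpool/home";               # placeholder dataset
    fsType = "zfs";
  };
}
```

Snapshots taken that way are the kind of thing that makes backing up or
sharing a portion of /home easy, without touching the rest of the pool.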
Partitioning is not so much a system thing, it’s a “data” thing.
Relying on “the cloud” (someone else’s computer) for backup and data
availability is like putting money in someone else’s pocket to avoid
being robbed. It might work for a certain period of time, while the
service works well; afterwards one pays for it so dearly that there is
no recovery. Also, relying on very few disks and counting on them not
to break is another recipe for disaster.
Most GUI installers’ default partitioning relies on a single home
volume, generally ext4 these days or, even worse, btrfs. Both have a
long trail of failures, especially btrfs. Big volumes with frequent
writes, like a browser profile dir, break easily, and having all your
data there means a full restore is needed. Proper backup is also hard:
for ext* filesystems one needs LVM to get snapshots, and they are
painfully slow; for btrfs we have snapshots, but they are really
uncomfortable to use; and if you use Stratis you rely on experimental
spaghetti-code crap built to satisfy programmers’ desires while trying
to have something that scales.
It does not work properly, like many other things, simply because devs,
especially these days, tend to ignore the rest of the world’s scale,
focusing only on their personal desktop. And it is not just the unknown
developer who read “how to be a genius in 24h, a God in a week”, but
top-profile devs like Linus Torvalds, who in the past said he is a
terrible admin barely capable of maintaining his own desktop and
recently criticized zfs “because it does not perform so well in some
benchmarks”, or Andrew Morton years ago when he said zfs is a rampant
layer violation. Sorry to all the devs here, but systems need to scale,
and sometimes interaction with admins and real operations is needed;
DevOps, CI/CD pipelines etc. are not an answer, only towers of Babel
trying to push a failed idea/dream.
Declarative config is not a bad idea. However, with NixOS there are in
fact multiple ways to achieve the same thing in the configuration, and
that can be confusing; some ways lead to worse results than others.
That’s a good point. IMVHO the ability to install packages imperatively
via the CLI should not be there; it might be useful sometimes, when one
does not want to do a full rebuild just to add a package, but perhaps a
temporary nix-shell is a better way to do that.
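As a rough sketch (package names are just examples), a throwaway
shell.nix does the job without anything sticking to the system:

```nix
# shell.nix -- hypothetical throwaway environment: enter it with "nix-shell",
# use the tools, then exit; the system configuration stays untouched.
with import <nixpkgs> {};

mkShell {
  packages = [
    imagemagick   # example: a tool needed for one task only
    jq
  ];
}
```

For a true one-off, “nix-shell -p imagemagick” straight from the
command line gives the same temporary environment without even writing
the file.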
I think that still requires potential ELF patching or FHS mocking.
It’s not ideal.
That’s due to the limits of present-day storage, a relic from the past
that no one seems able to update properly; the last ones who tried were
Sun with zfs and, (far) before, SGI with XFS. Bcachefs is still more an
idea than something usable, HAMMER is only a DragonFly BSD thing,
NILFS2 is essentially a dead project, and all of them are only a small
step ahead in storage tech. Storage, package management and installers
are intimately coupled, and trying to separate them has a price.
Nix{,OS} uses the /nix/store concept to circumvent storage limitations,
and ELF patching and symlinking exercises are the price to pay. Back in
OpenSolaris times, IPS timidly started to describe a package manager
integrated with zfs, but it was at a very early stage, mostly only the
BE (boot environment) concept: instead of creating a network of
symlinks you simply have a zfs clone of the root volume with the
relevant changes, storage is deduplicated, there is no need for a store
and symlinks, but there was also no declarative config at that point.
It could have been added, and some devs planned to add it with the new
OpenSolaris installer that never really happened (they gave birth to
Caiman instead; Sun was already finished and so was the OSol project,
and IllumOS devs do not have the manpower to go further). But that is
the sole clean and proper way: having “classic POSIX file access”
decoupled from the real storage, only as a view, or different views, of
the actual data.
Unfortunately I think Nix devs can’t really go in that direction; it’s
too much work for the size of the Nix{,OS} project, and a far bigger
community and far bigger resources would be needed to do it in a time
short enough to avoid being pushed into oblivion like GNU Hurd and
other projects.
Also, certain “hw” and “Gnome” issues that I see myself and do not see
by default on Ubuntu and others are IMO due to the same reason: the
size of the community. It is not “big enough” to get enough user bug
reports to tweak away all those small annoyances… It’s legitimate to be
annoyed, but it’s not a Nix* fault, merely community size; manpower can
be compensated a bit with technology, but only a bit…
BTW sorry for the long post and poor English.
– Ingmar