That actually does exist - see the nixos-unstable-small channel. You can see on https://status.nixos.org/ that nixos-unstable-small updated 11 hours ago to (at this time) 35fde99980eb. And you can see more details if you click through to the hydra job; it builds pretty often.
However, the commit still has to reach master before any channel even starts trying to pull it in, and the revert #300028 was queued only into staging (because it triggers a mass rebuild and they didn’t want to block CI for everything else behind it). And it unfortunately didn’t build in staging on the first try (because GitHub took down the repo), so now it has to come (with a fixup) through staging-next (see xz-5.6.x is trojaned (CVE-2024-3094) · Issue #300055 · NixOS/nixpkgs · GitHub).
Going through staging seems defensible given the analysis that nixpkgs’ build didn’t actually trigger the vulnerability, but the revert hasn’t taken the fastest path into nixos-unstable-small (which would have been to merge directly into master, blocking the nixpkgs-unstable queue for quite a while). I don’t know of a way it could have been sent only to nixos-unstable-small and staging without triggering mass rebuilds for nixpkgs-unstable.
We shouldn’t wait for the rebuild; xz, libarchive, and all related packages should be removed/marked insecure and pushed to master.
Let people rebuild what they need themselves.
There is reason to believe that older versions are possibly compromised too.
It is not as simple as that. As already mentioned, xz is part of the bootstrap binaries and, through that, of stdenv, which means you cannot even write a simple string to $out without first compiling multiple gccs if you have no cache hits. On my system I would need to rebuild close to 14k packages, and some of them won’t even compile on my laptop unless I reduce max-jobs to 1.
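A minimal illustration of this, assuming a nixpkgs channel or checkout is available (the store paths below are placeholders and will differ on any real system):

  $ nix-instantiate -E 'with import <nixpkgs> {}; runCommand "trivial" {} "echo hi > $out"'
  /nix/store/...-trivial.drv
  # the build-time closure of even this trivial derivation already contains xz (via stdenv)
  $ nix-store --query --requisites /nix/store/...-trivial.drv | grep -c xz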
If the revert were just pushed to master, no PR created or updated in the next couple of days would pass its CI checks; and if it were pushed to nixos-unstable, we would prevent people from updating their systems.
(Point of order: could the “what kinds of source distribution should nixpkgs accept” discussion please go to a new thread? It’s unrelated to the current remediation efforts.)
xz is part of stdenv (along with a few other tools); it’s needed to unpack .tar.xz tarballs. Virtually everything depends on stdenv. nix why-depends might help you figure out the details.
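A minimal sketch of such a query, assuming flakes are enabled (hello is just a placeholder package, and the exact chain printed will differ per system):

  # ask why the build-time closure of hello contains xz
  $ nix why-depends --derivation nixpkgs#hello nixpkgs#xz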
I would estimate it to take at least a couple of days, if not a week, and I would need to babysit it and restart it a couple of times, and that is with multiple remote builders available that have 96 cores and 256 GB of RAM.
Since the vulnerability is not exploitable at this point, that is not worth it for me.
This is just a rough estimate, not a best case. It is hard to estimate; it could just as well take 4 days, or 6, or 8. It is really hard to tell.
It seems that the main problem here is not that xz (the program) is included in stdenv, but that the xz binary in stdenv is provided by the same derivation that provides liblzma.
I wonder if it would be possible to make stdenv only expose the binary build tools, without exposing the associated libraries. That way, in the future we could first quickly update liblzma (rebuilding only the packages that actually link to it via buildInputs etc.) and then update the xz binary used by stdenv at a later date.
Yes. My point is that we don’t need to update the xz binary that is used in stdenv (or at least not right away), if we ensure that it’s only used during isolated builds. Even if the xz binary is technically using the “vulnerable” liblzma library, it doesn’t really matter since it can extract .tar.xz files just fine and the build process is isolated anyway.
We should have one version of stdenv.xz (and other build tools) that only exposes the binary and is used only during stdenv builds (we might even want to enforce that its path doesn’t appear anywhere in build outputs), and another “nixpkgs-wide” version of xz that would be explicitly used by derivations that actually need xz (either to link against or to call the xz binary at runtime).
The maximal version of what I am proposing is a systemic fix. Basically, split all the dependencies of stdenv into two groups “build tools” and “libraries provided by default”. Enforce that the first group doesn’t appear in the build outputs. Reduce the second group as much as possible (ideally, just core stuff like glibc).
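For the enforcement part, a rough sketch using Nix’s existing disallowedRequisites check; xzForStdenv is an invented name for the hypothetical build-tools-only xz, not an existing attribute:

  { stdenv, xzForStdenv }:   # xzForStdenv is hypothetical

  stdenv.mkDerivation {
    pname = "example";
    version = "0.1";
    src = ./.;
    # fail the build if the stdenv-only xz ends up anywhere in this output's closure
    disallowedRequisites = [ xzForStdenv ];
  }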
If we do that, then the next time a vulnerability is found in one of the packages used by stdenv, we can quickly rebuild only the packages that actually link against or reference the vulnerable package and then bump the stdenv build tool version at a later date.
Aside from the complexity of the implementation, how much do you expect to gain from such a change? What rebuild decrease would you consider a net benefit: a 2x, 10x, or 100x rebuild speedup?
Let’s imagine we can isolate liblzma. What would it take to rebuild packages against a new liblzma outside of stdenv? My silly grep against currently running systems reveals the following direct users of the library:
They all better be updated if we update a vulnerable library, right?
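Such a grep could look roughly like this (the exact invocation may have differed):

  # list files in the running system's closure that reference liblzma
  $ nix-store --query --requisites /run/current-system | xargs grep -rls 'liblzma\.so' 2>/dev/null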
To simulate a liblzma update I tweaked each package individually locally to change its output hash and ran $ ./maintainers/scripts/rebuild-amount.sh HEAD^ to get the rebuild counts in nixpkgs (one way of doing this is sketched after the list below):
Most popular are:
elfutils: 36058 x86_64-linux rebuilds
libunwind: 7790 x86_64-linux rebuilds
python: 66773 x86_64-linux rebuilds
libxml2: 50197 x86_64-linux rebuilds
python rebuild is probably a full nixpkgs rebuild.
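For reference, one way such a simulation can be done for a single package (the exact local tweak may have been different):

  $ git checkout -b simulate-liblzma-rebuilds
  # edit the package's .nix file and add a harmless attribute, for example
  #   dummyRebuildTrigger = "1";
  # so that its derivation (and therefore output) hash changes
  $ git commit -am "elfutils: force a different output hash"
  # compare against the parent commit to count how many packages would be rebuilt
  $ ./maintainers/scripts/rebuild-amount.sh HEAD^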
Looking at the numbers above, I would say it’s not worth the complexity of fiddling with xz outputs. We would rebuild most things anyway, even if xz were not in stdenv; it’s used enough outside of it.
Note that there are more libraries (like pcre2) that cause a stdenv rebuild. I don’t think holding those back just for stdenv would be of any benefit rebuild-wise either.