CVE-2024-3094: Malicious code in xz 5.6.0 and 5.6.1 tarballs

The nixpkgs derivation does not rely on cmake so this change does not affect it.

In addition to the original oss-security post, blog posts detailing how the known backdoor payload is added have been published today; see Home · Midar/xz-backdoor-documentation Wiki · GitHub and xz/liblzma: Bash-stage Obfuscation Explained - gynvael.coldwind//vx.log (it’s an interesting read anyway). That should help you reproduce and understand the process yourself if you want to gain confidence.

For packages using pre-built binaries, I have checked (using GitHub - delroth/grep-nixos-cache: Finds strings in a large list of cached NixOS store paths) the Linux x86_64 packages of the current nixpkgs unstable channel available in the cache. I have not been able to find a match for the known backdoor payload.



General PSA that if you are scanning your system for potentially malicious binaries, running ldd on them is not something you should do.
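The danger is that ldd can hand the binary to its dynamic loader, i.e. actually run attacker-controlled code. A static scan over the file’s raw bytes avoids that entirely. A toy Python sketch (the marker searched for below is just the symbol name discussed in this thread, not a real detection signature):

```python
import os
import tempfile

ELF_MAGIC = b"\x7fELF"

def scan_file(path: str, signature: bytes) -> bool:
    """Search a file's raw bytes for a signature without ever executing it
    (ldd, by contrast, may invoke the binary's dynamic loader)."""
    with open(path, "rb") as f:
        data = f.read()
    return data.startswith(ELF_MAGIC) and signature in data

# Demonstration on a fake "binary" created just for this example.
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(ELF_MAGIC + b"\x00payload _get_cpuid\x00")
    sample = tmp.name

print(scan_file(sample, b"_get_cpuid"))  # True
os.remove(sample)
```

Real scanners match more robust byte patterns, but the point stands: read the bytes, never execute the suspect file.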


Guix has grafts precisely for this purpose.

Can’t we have something similar? What roadblocks would there be to implementing and then utilizing such a change?


That actually does exist - see the nixos-unstable-small channel. You can see that nixos-unstable-small updated 11 hours ago to (at this time) 35fde99980eb. And you can see more details if you click through to the hydra job; it builds pretty often.

However, the commit still has to reach master before any channel even starts trying to pull it in, and the revert #300028 was queued only into staging (because it triggers a mass rebuild and they didn’t want to block CI for everything else behind it). Unfortunately, it didn’t build in staging on the first try (because GitHub took down the repo), so now it has to come (with a fixup) through staging-next (see xz-5.6.x is trojaned · Issue #300055 · NixOS/nixpkgs · GitHub).

Going through staging seems defensible given the analysis that nixpkgs’ build didn’t actually trigger the vulnerability, but the revert hasn’t taken the fastest path into nixos-unstable-small (which would have been to merge directly into master, blocking the nixpkgs-unstable queue for quite a while). I don’t know of a way it could have been sent only to nixos-unstable-small and staging without triggering mass rebuilds for nixpkgs-unstable.


We shouldn’t wait for the rebuild: xz, libarchive, and all related packages should be removed or marked insecure, and that change pushed to master.
Let people rebuild what they need themselves.

There is reason to believe that older versions are possibly compromised too.

Yeah, you can also check by enabling the backdoor’s build condition and comparing against a problematic binary:

$ nix build -f '<nixpkgs>' xz.out --out-link before

$ nix build --impure --expr 'with import <nixpkgs> {}; (xz.overrideAttrs (oa: { env.RPM_ARCH = "x86_64";})).out' --out-link after

$ ls -lh before/lib/ after/lib/
-r-xr-xr-x 2 root root 258K Jan  1  1970 after/lib/
-r-xr-xr-x 2 root root 210K Jan  1  1970 before/lib/

A 40 KB difference. And one of them contains the malware’s _get_cpuid calls:

$ nix-shell -p binutils-unwrapped
$$ diff -u0 <(nm --format=just-symbols before/lib/) <(nm --format=just-symbols after/lib/)
--- /dev/fd/63  2024-03-31 10:02:48.977464772 +0100
+++ /dev/fd/62  2024-03-31 10:02:48.977464772 +0100
@@ -27,0 +28,2 @@
@@ -28,0 +31 @@

Thus, by default, at least those 40 KB of malicious payload were not included in nixpkgs.
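The nm comparison above boils down to a set difference over exported symbols: anything that appears only in the suspicious build deserves a look. A minimal Python sketch of the same check (the symbol lists here are made-up stand-ins; in practice you would feed it the `nm --format=just-symbols` output from the two builds):

```python
def new_symbols(before: list[str], after: list[str]) -> list[str]:
    """Symbols present in the 'after' build but absent from 'before' --
    unexpected additions are worth investigating."""
    return sorted(set(after) - set(before))

# Hypothetical symbol lists standing in for real nm output.
before = ["lzma_code", "lzma_easy_encoder", "lzma_end"]
after = ["lzma_code", "lzma_easy_encoder", "lzma_end", "_get_cpuid"]

print(new_symbols(before, after))  # ['_get_cpuid']
```

This is exactly what the `diff -u0` of the two nm dumps shows, just as data you can assert on.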


Why do you think that we should remove or mark as insecure all related packages without waiting for the rebuild?


It is not as simple as that. As already mentioned, xz is part of the bootstrap binaries through stdenv, which means you cannot even write a simple string to $out without first compiling multiple GCCs if you have no cache hits. On my system I would need to rebuild close to 14k packages, and some of them won’t even compile on my laptop unless I reduce max-jobs to 1.
If the revert were just pushed to master, no PR created or updated in the next couple of days would pass its CI checks, and if it were pushed to nixos-unstable, we would prevent people from updating their systems.


(Point of order: could the “what kinds of source distribution should nixpkgs accept” discussion please go to a new thread? It’s unrelated to the current remediation efforts.)


14k store paths? Why does everything depend on xz anyway? How long would it take you to build? Isn’t it worth it to be rid of the vulnerability?

10 days is a very long time for such a critical security update, and we need to be better for the next time this happens.

xz is part of stdenv (along with a few other tools). It’s needed to unpack .tar.xz tarballs. Virtually everything depends on stdenv. nix why-depends might help you figure out the details:

$ nix why-depends --derivation nixpkgs#mc nixpkgs#xz

I would estimate it to be at least a couple of days, if not a week, and I would need to babysit it and restart it a couple of times - and that is with multiple remote builders available that have 96 cores and 256 GB RAM.

Since the vulnerability is not exploitable at this point, it’s not worth it for me.

This is just a rough estimate, not a best case. It is hard to say; it could well take 4 days, or 6, or 8. It is really hard to tell.


It seems that the main problem here is not that xz (the program) is included in stdenv, but that the xz binary in stdenv is provided by the same derivation that provides liblzma.

I wonder, if it would be possible to make stdenv only expose the binary build tools, without exposing the associated libraries. That way, in the future we could first quickly update liblzma (rebuilding only the packages that actually link to it via buildInputs etc) and then update the xz binary used by stdenv at a later date.


I’m not sure I understand. xz does link against (its own) liblzma:

$ lddtree `which xz`
/run/current-system/sw/bin/xz (interpreter => /nix/store/1rm6sr6ixxzipv5358x0cmaw8rs84g2j-glibc-2.38-44/lib/ => /nix/store/yyqzw7xvsrn3h2zrvincbs1b291yzx8c-xz-5.6.1/lib/ => /nix/store/1rm6sr6ixxzipv5358x0cmaw8rs84g2j-glibc-2.38-44/lib/ => /nix/store/1rm6sr6ixxzipv5358x0cmaw8rs84g2j-glibc-2.38-44/lib/

If we are to fix liblzma, we should relink xz as well.


Yes. My point is that we don’t need to update the xz binary that is used in stdenv (or at least not right away), if we ensure that it’s only used during isolated builds. Even if the xz binary is technically using the “vulnerable” liblzma library, it doesn’t really matter since it can extract .tar.xz files just fine and the build process is isolated anyway.

We should have one version of stdenv.xz (and other build tools) that only exposes the binary and is used only during stdenv builds (we might even want to enforce that its path doesn’t appear anywhere in build outputs), and another “nixpkgs-wide” version of xz that would be explicitly used by derivations that actually need xz (either to link to, or to call the xz binary at runtime).


Unless you have a more systemic fix in mind, this is really just overfitting to a fringe, one-off event.


The maximal version of what I am proposing is a systemic fix. Basically, split all the dependencies of stdenv into two groups: “build tools” and “libraries provided by default”. Enforce that the first group doesn’t appear in build outputs. Reduce the second group as much as possible (ideally, to just core stuff like glibc).

If we do that, then the next time a vulnerability is found in one of the packages used by stdenv, we can quickly rebuild only the packages that actually link against or reference the vulnerable package and then bump the stdenv build tool version at a later date.
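The “rebuild only what actually depends on it” step amounts to walking the reverse-dependency graph from the vulnerable package (for a realized store path, `nix-store --query --referrers-closure` computes exactly this). A toy sketch with a made-up miniature dependency graph:

```python
from collections import defaultdict, deque

def rebuild_set(deps: dict[str, list[str]], vulnerable: str) -> set[str]:
    """All packages that transitively depend on `vulnerable`,
    i.e. everything that must be rebuilt after patching it."""
    # Invert the package -> dependencies map into dependency -> dependents.
    rdeps = defaultdict(list)
    for pkg, ds in deps.items():
        for d in ds:
            rdeps[d].append(pkg)
    # Breadth-first walk over the reversed edges.
    seen, queue = set(), deque([vulnerable])
    while queue:
        for dependent in rdeps[queue.popleft()]:
            if dependent not in seen:
                seen.add(dependent)
                queue.append(dependent)
    return seen

# Hypothetical graph for illustration: package -> direct dependencies.
deps = {
    "liblzma": [],
    "xz": ["liblzma"],
    "libxml2": ["liblzma"],
    "python": ["libxml2"],
    "coreutils": [],
}
print(sorted(rebuild_set(deps, "liblzma")))  # ['libxml2', 'python', 'xz']
```

The proposal above is essentially about keeping stdenv’s build tools out of that reverse closure, so the walk stays small.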


Aside from the complexity of the implementation, how much do you expect to gain from such a change? What rebuild decrease would you call a net benefit: a 2×, 10×, or 100× rebuild speedup?

Let’s imagine we can isolate liblzma. What would it take to rebuild packages against a new liblzma outside stdenv? My silly grep against a currently running system reveals the following direct users of the library:

$ fgrep -Rl $(nix path-info -r /run/current-system) 2>/dev/null | tr '/' ' ' | awk '{print $3}' | uniq


They had all better be updated if we update a vulnerable library, right?

To simulate a liblzma update, I tweaked each package individually locally to change their output hashes and ran $ ./maintainers/scripts/ HEAD^ to get the rebuild counts in nixpkgs:

Most popular are:

  • elfutils: 36058 x86_64-linux rebuilds
  • libunwind: 7790 x86_64-linux
  • python: 66773 x86_64-linux
  • libxml2: 50197 x86_64-linux

A python rebuild is probably a full nixpkgs rebuild.

Looking at the numbers above, I would say it’s not worth the complexity of fiddling with xz outputs. We will rebuild most things anyway, even if xz were not in stdenv; it’s used enough outside of it.

Note that there are more libraries (like pcre2) that cause a stdenv rebuild. I don’t think holding those back just for stdenv would be of any benefit rebuild-wise either.