Nixpkgs core team update 2026-05-11 – automation/AI policy feedback and new members wanted

It’s been a while! Nixpkgs core team events since our last update:

  • After discussion with the community team bootstrap group and within the Nixpkgs core team, we’ve proposed an initial policy covering the use of automation for Nixpkgs, including LLM‐based AI tools, for community review.

    This is an area where people understandably have strong views, and which goes beyond technical matters to touch on legal and ethical concerns. It’s unlikely we can achieve complete consensus among contributors on the topic, and we’re willing to make judgement calls as necessary for the benefit of Nixpkgs, but we want to start out with a baseline policy that we think can gain strong consensus.

    Therefore, we’ve focused on formalizing existing norms around automated contributions and applying them generally to include LLM‐based AI tools, ruling out what we think contributors will widely agree are clearly unacceptable cases: undisclosed use of complex automation, and automated contributions submitted without any manual review or understanding. The hope is that this will also give us more visibility with which to iterate further as necessary.

    The core of the proposed policy is:

    Every contribution to Nixpkgs and related development venues, including code, documentation, and communication on GitHub and Matrix, must have a responsible person in the loop who is accountable for that contribution and reviews it before submission, and must transparently disclose any non‐trivial use of automation to produce it, including but not limited to LLM‐based AI tools.

    This policy takes inspiration from similar policies in LLVM, Mesa, Fedora, and the Linux kernel, along with a proposal by the author of Anubis. We’re also following the Rust project’s work on a more elaborate, stricter policy.

    The pull request has (much) more detail and rationale. It’ll remain open for public feedback until the end of the week. We’re happy to hear concerns or suggestions, either on the pull request or in private.

  • Following on from our previous announcement, we have delegated a formal software provenance team with authority to make decisions regarding package provenance, SBOMs, software supply chain tracking, and related topics, and an initial mandate to shepherd design and implementation of a coherent vision for this in Nixpkgs. We look forward to seeing their work progress and helping facilitate it as needed.

  • We participated in the security response and audit process for GHSA-67f2-674w-6g63, an incident where a committer’s GitHub personal access token with push access to Nixpkgs leaked in a public repository. Thankfully, a comprehensive audit established that there was no compromise of the Nixpkgs source code as a result; further details are in the advisory. However, this incident has exposed systemic risks in the security of our GitHub infrastructure; efforts to address these are being tracked in NixOS/org#246.

  • We’re sorry about the marketing emails from GitHub many people received after the Enterprise Cloud upgrade. GitHub manually unsubscribed all members of the NixOS organization from these on our request in early February, so these should have stopped.

  • After our last update, our founding member @wolfgangwalther sadly left the team.

    As we’re now down to two members, recruitment is a high priority for us to keep the team sustainable and help us flesh out our decision‐making, membership, and delegation procedures for the long term. We will start to reach out to potential candidates soon, and want to hear from you if you’d like to nominate yourself or someone else.

    Candidates must be Nixpkgs committers, and a history of good communication and facilitation is of course a big plus. We generally work asynchronously over Zulip and don’t have constraints about time zones or meeting schedules, but given our consensus decision‐making we do need members to have enough availability to participate in discussions.

    We plan to interview a shortlist of candidates and hope to announce new members by our next update. In the meantime, we apologize for the slower responses and updates; we have been busy with other matters and focused on keeping the lights on until we had the time to recruit in earnest.

38 Likes

I will use this to say some stuff about GHSA-67f2-674w-6g63 :smiley:

For anyone wondering which committer caused all of this, it was me!

I take full responsibility for this token leak, but I thought I could give some
insight into how this happened and, more importantly, what I learned from this.

How was the token leaked?

This happened back when I was running K3s on my home server. At my workplace we
use Kubernetes and deploy our workloads using FluxCD for
a GitOps-based approach. I loved
this so much back then that I wanted this at home as well. It didn’t age well
due to the amount of work needed to maintain the deployment manifests (imagine
having to maintain every NixOS module for your homelab services), but that’s
a story for another day.

Due to the declarative nature of GitOps configurations, you quickly run into
an issue similar to the one you hit when configuring secrets for NixOS
modules. In Kubernetes-Land there are of course many options to choose from.
I chose
Sealed Secrets back then, as
it seemed fairly easy to encrypt secrets without having to maintain the
certificate.

This is where things went downhill. See, the easiest way to use Sealed Secrets
is to create a regular Kubernetes Secret manifest and then “seal” it using the
kubeseal utility. The tool will take your Secret and generate a SealedSecret
manifest with encrypted contents, that will then be decrypted on the cluster.
This seems like a solid approach, as it’s actually quite similar to how we do
it in NixOS-Land with agenix and
sops-nix. But there is one catch:
kubeseal never modifies or removes the original Secret manifest. That means
that, unless you have a .gitignore entry that makes Git ignore these files,
there is a risk of accidentally committing this file to your publicly viewable
Git repo.
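For instance, a .gitignore entry like the following keeps the plaintext source
manifests out of the repo (the `*.yaml.orig` naming convention is an
assumption; adjust the pattern to however you name your unsealed files):

```
# Plaintext Secret manifests kept next to their sealed versions; never commit these.
*.yaml.orig
```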

Well, as you can imagine from this long exposition, I accidentally committed
the source Secret manifest to the repo due to a simple mistake. Instead of
calling it secret.yaml.orig, I called it secret.orig.yaml, which wasn’t
covered by my .gitignore file. So it just took one distracted code change to
accidentally commit this file, which seems to have happened on 2023/07/04.
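The mismatch is easy to reproduce with `git check-ignore` in a throwaway
repository (the ignore pattern here is an assumption, since I haven’t shown
the actual .gitignore contents):

```shell
# Reproduce the near-miss in a throwaway repo.
tmp=$(mktemp -d) && cd "$tmp" && git init -q demo && cd demo
printf '*.yaml.orig\n' > .gitignore

# The intended file name is ignored...
git check-ignore -q secret.yaml.orig && echo "secret.yaml.orig: ignored"

# ...but the transposed name slips through and can be committed.
git check-ignore -q secret.orig.yaml || echo "secret.orig.yaml: NOT ignored"
```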

Kubernetes Secret manifests store their values merely base64-encoded, not
encrypted, so these secrets and tokens were effectively plain text.
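A quick sketch of why base64 offers no protection at all; `hunter2` stands in
for a real token:

```shell
# Base64 is an encoding, not encryption: decoding needs no key.
encoded=$(printf 'hunter2' | base64)
echo "$encoded"                     # aHVudGVyMg==
printf '%s' "$encoded" | base64 -d  # hunter2
```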

Note that all the secrets mentioned here had been rotated back when this
vulnerability was reported.

What can I do to prevent this?

Nothing really. Mistakes happen, and we are all merely human. Though humans
aren’t the only ones leaking tokens nowadays!

I learned from this that the best I can do is reduce the impact. First, it’s
always a good idea to set an expiration date for a token. Rotating tokens
should be a very simple operation in any system, and if it isn’t, then that
should be worked on. In my case, I have limited all my GitHub secrets to
a maximum lifetime of 365 days, and much less for ones that are very easy to
rotate and/or rarely used. This not only reduces the impact of an old and
unnoticed leaked secret but also gets rid of unused secrets over time.

Additionally, the primary issue here was the permissive scope. The leaked token
was a classic one issued with the public_repo scope. Before this token was
reported, I was not aware that public_repo included write access to all repos
my account can write to. As the token was only used for increased API rate
limits, I have replaced it with a fine-grained token that only has read access.

TL;DR

I leaked a token years ago due to human error. Now all my tokens have an expiry
date of at most a year and much more limited scopes. Don’t use GitHub classic
tokens unless you really have to.

19 Likes

Extremely reasonable :+1:

1 Like