Trust model for nixpkgs

@yakman2020 I think in your case, it would be beneficial to establish some steps to determine what “trusted” is.

I don’t think it will be feasible to “trust” the organic nature of nixpkgs from a security perspective. But I think it is feasible to audit all the build inputs and make a determination that the “correct upstream sources” were used to build this software.

There have been several times in my past where I’ve proposed something and the decision makers had already determined the outcome; they just didn’t want to say “no” outright.


Maybe the CI could perform (or maybe it already performs) some basic security checks on committed Nix code?

Has the hash of a fetch changed, but not the version?

Has the https:// URL changed, along with the hash, while the version remained the same?

Is there excessive non-Nix code (bash/Python/etc.) in the commit?
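Checks like these could be sketched as a naive diff-based linter. Everything below is hypothetical (the function names, the regexes, and the whole idea of pattern-matching Nix source rather than evaluating it), but it illustrates the “hash changed, version didn’t” heuristic:

```python
import re

# Hypothetical linter: flag suspicious changes between two revisions of a
# Nix package expression. A real check would evaluate the Nix code; this
# sketch just pattern-matches common fields.
def extract_fields(nix_src):
    fields = {}
    for key in ("version", "sha256", "hash", "url"):
        m = re.search(rf'\b{key}\s*=\s*"([^"]*)"', nix_src)
        if m:
            fields[key] = m.group(1)
    return fields

def suspicious_change(before, after):
    """Return human-readable warnings for changes that deserve review."""
    old, new = extract_fields(before), extract_fields(after)
    warnings = []
    old_hash = old.get("hash") or old.get("sha256")
    new_hash = new.get("hash") or new.get("sha256")
    if old_hash != new_hash and old.get("version") == new.get("version"):
        warnings.append("source hash changed but version did not")
    if old.get("url") != new.get("url"):
        warnings.append("source URL changed")
    return warnings

before = '''
  version = "1.2.3";
  src = fetchurl {
    url = "https://example.org/foo-1.2.3.tar.gz";
    sha256 = "0aaaa";
  };
'''
after = before.replace('"0aaaa"', '"0bbbb"')
print(suspicious_change(before, after))
# → ['source hash changed but version did not']
```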

As Nix is a DSL, there is a chance of it performing mischief, but not as much as you might think, and more importantly the mischief can’t stay well hidden there (unlike deep in some shell script or a dreaded binary).

Again, not everything is as critical in nixpkgs… some things are very critical.

I think if anything, Nix makes you think about things like trust; it gives you a good idea of how software is built and assembled, and how much trust has to be placed in the chain.
It’s only when you see how much trust goes into building a binary package that you wonder why it works as well as it does.

whereas on many distros it’s a case of

apt-get update
apt-get upgrade

and you forget about all this and make a coffee.

I vote for a NixOS ‘chaos branch’, where anyone can commit directly, just to see what happens ;-)


One way to reduce the risk would be to define a number of subsystems and start using the CODEOWNERS feature of GitHub to manage access to specific subsystems instead of the whole repo.

This approach would allow defining different rules for each subsystem and would help grow the community of contributors more easily and safely.
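As a sketch of what subsystem-level ownership could look like (the paths and team names below are made up for illustration, not real nixpkgs teams):

```
# Hypothetical CODEOWNERS entries for nixpkgs-style subsystems.
# A review from the listed team is requested for any PR touching these paths.
/pkgs/os-specific/linux/kernel/  @NixOS/kernel-team
/nixos/modules/security/         @NixOS/security-team
/pkgs/build-support/             @NixOS/stdenv-maintainers
```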


Thank you and sorry for my ignorance, that already looks pretty good!

Maybe one question to ask is: what subsystems are the most security-sensitive? And then set up a safer process for them?

That would hopefully improve trust in the nixpkgs ecosystem without slowing down development too much.


We use the GitHub CODEOWNERS feature to notify people who care about subsystem changes. Also, crucial systems have tests that run as part of the release/channel-update CI mechanisms.


I love the answers here that suggest that you come back to the team and company leadership asking them to describe why specifically they trust Ubuntu.

With nixpkgs, you can literally rebuild the entire tree in your own ci. You could re-create any package, and build and cache that. You can maintain your own package cache. The arguments for nixpkgs are nearly always better in the end.
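A minimal sketch of that rebuild-and-cache workflow might look like this (the package attribute and cache URL are placeholders, not a recommendation):

```shell
# Rebuild a package from source in your own CI, ignoring the public
# binary cache, then push the result to a cache you control.
nix build nixpkgs#hello --rebuild
nix copy ./result --to 's3://my-company-nix-cache'

# Consumers then point at your cache via a substituter in nix.conf:
#   substituters = https://my-company-nix-cache.example.com
```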

Usually the resistance comes from people who want to avoid learning and dealing with Nix. I’d just outright ask people whether preference is the real root issue, because objectively the case for Nix and NixOS over Ubuntu or Red Hat can be argued in a way that convincingly illustrates that Nix/NixOS is superior.

The major issue is entrenched habit, which is reflected in market preference.

Even execs at major companies can’t just decide that .debs, for example, are lame and that we should offer Nix expressions instead. Most consumers and even server folks are only just coming to terms with Ubuntu.

One can argue that nix expressions should be supported in addition as a parallel path.

The Nix* ecosystem should build up universal policies and (easily auditable) workflows in line with its values, and market them aggressively.

Then it will be easy to expose the (real) threat scenarios of non-reproducible package ecosystems and call them out. (Ken Thompson’s original “Reflections on Trusting Trust” Turing Award lecture is indeed a good thing for managers to digest as well; it exquisitely instills fear about how fundamentally flawed software security is nowadays.)

Once those policies are in place, it will be hard for opponents to credibly dismantle Nix’s high security aspirations by citing poor workflows and auditability.

The perceived benefit must outweigh the perceived cost. So the key is to know how to play those perceptions (not the truths)!


@yakman2020 I’d suggest that, if you wish to project some kind of lasting vision onto the Nix* ecosystem to address this well-known issue, you might choose to constitute and organize a Special Interest Group (SIG), e.g. a SIG Trust Model. Here is a template to get organized.

The vacancy of a SIG Trust Model is already felt, and it will be challenging to keep SIG Workflow Automation focused in this circumstance.

The “hypocrite commits” paper seems to have disproven that :anguished:

Even though this is rare, individual distro committers doing bad things is a valid concern IMHO:

  • One of the FreeBSD developers was hacked a few years ago, which gave the attackers access to the project’s servers, and now we have no idea whether files downloaded by FreeBSD users over the span of a few months were altered.
  • Debian is currently feuding with one of its (now former) developers, who (allegedly) sent emails impersonating other developers, and also set up quasi-Debian web sites like debian.community that look pretty insane.

Given that the IEEE S&P PC effectively admits that the existing review process is inadequate to reliably judge the trustworthiness of such papers, I think it is better not to say that this paper has proven or disproven anything.

Investigation of these patches revealed that the description provided by the authors in the paper is ambiguous and in some cases misleading. The experiments do not provide convincing evidence to substantiate the main claims in the paper and the technical contributions of the work need to be revisited.

On the other hand, I do not object to the object-level claim that staged review in the Linux kernel is still not reliable enough. My evidence, however, is more along the lines of Linus Torvalds sometimes receiving subsystem pull requests with code that he has reason to suspect

has never seen a compiler

LKML: Linus Torvalds: Re: [git pull] drm for v4.11 - main pull request (this is from before )

Or if you want things that got released, there is always

https://lkml.org/lkml/2018/12/22/153

reporting a thing that Linus Torvalds has later called

is a serious breach of trust

https://lkml.org/lkml/2018/12/22/221

And the lack of impersonation, false pretenses, or even malicious intent by anyone involved keeps my expectations low that «just» having a robust enough review flow is enough for anything except slowing things down.

My familiarity with this drama is superficial, but the IEEE statement says:

Three other patches are buggy patches, but they were not accepted by the Linux developers for various reasons, not necessarily related to the bugs themselves.

On the other hand, the original paper says they weren’t trying to get the patches merged:

Fortunately, there is a time window between the confirmation of a patch and the merging of the patch. Once a maintainer confirmed our patches, e.g., an email reply indicating “looks good”, we immediately notify the maintainers of the introduced UAF and request them to not go ahead to apply the patch.

The IEEE S&P PC (IEEE as a whole shows no signs of having an actual position) calls some of the claims in the paper «misleading». I think that now, after this paper accidentally got a closer technical review than anyone ever bothers to do (and the review found claims misleading), it is better not to take the paper’s claims as proof of anything.

Fortunately the paper doesn’t reveal anything we had not seen just from looking at LKML. And unlike intentional experiments, looking at regressions lets us see the process how things slip all the way into mainline releases.

It’s just that if you dig a little bit deeper, like why they are calling them that, it looks like they themselves are misrepresenting the authors’ goals (see the quotes I posted above).

And then I go to LKML, and it looks like the authors might even be underselling their success when they say their precautions prevented the bugs from actually being released:

Re: [PATCH] SUNRPC: Add a check for gss_release_msg - J. Bruce Fields (see quotations of upthread messages)

So the claims might be misleading in both directions. And then why reference the badly done paper rather than just the primary evidence, which is all public?

I haven’t seen any evidence that the researchers’ claims were misleading. Just the claims by IEEE (or affiliates) that seemed misleading.

And then why reference the badly done paper

Everyone calls their work “the hypocrite commits paper”. You are welcome to use a different term, if you think you’ll be better understood, but I doubt that you will be.

Well, I believe the claims on LKML that the group let some net-negative commits slip into stable more than I believe the claims by either the researchers or any IEEE-affiliated body, and these do cast doubt on any descriptions of the methodology by the researchers.

Or just reference the, erm, abundant observational evidence, and let that work be consigned to oblivion, which by now seems to be in the interest even of the authors.