How many people are paid to work on Nix/Nixpkgs?

FWIW, there is (was?) an RFC for implementing SELinux, and in my opinion this (or alternatively AppArmor) could already add some "security".

I suppose RFC 41 is what you mean. I would say “was”, unfortunately.

A recent Discourse discussion of sandboxing or constraining software for security was thread 7792, but it doesn’t seem to have gone much of anywhere.

Re: PR Workflow Automation…

This bot experience is really nice! Implement csi templating on select attributes by blaggacao · Pull Request #25 · cert-manager/csi-driver · GitHub

  1. It talks to me
  2. Is actionable, explains itself
  3. Has an ok-to-test stage :smile: - our human intuition - before taxing the testing infrastructure too much.
  4. EDIT: it removes comments which are acted upon
  5. EDIT: uses traffic light label color codes for immediate signalling

Clicking a little through this bot’s background, I found this for automatic merging in k8s: Tide Status (it’s based on GitHub search queries over labels - elegant!)

Also interesting, as a really comprehensive, well-organized hub: GitHub - kubernetes/community: Kubernetes community content (@doronbehar Would you accept being a co-author on an RFC I’m drafting, proposing to adopt a similar community coordination hub for Nix?)

Automerging: https://github.com/kubernetes/test-infra/blob/master/prow/cmd/tide/pr-authors.md

1 Like

I had to make something out of this discussion.

Therefore, I decided to prepare an RFC.

However, as the RFC process suggests, I’d like to invite interested parties to share their feedback and thoughts on my private fork’s PR before I submit a PR to NixOS/rfcs. Thanks in advance!!

https://github.com/blaggacao/rfcs/pull/1

It went official.

https://github.com/NixOS/rfcs/pull/74

Thanks for all the preliminary feedback.

We’re looking forward to the feedback to come! If you wish, feel free to be creative in the Brainstorming section. The section’s purpose is to stimulate imagination and get you on board!

2 Likes

Coming back to my original question in this thread: I must say it feels great to contribute in this kind of period, where we are close to a scheduled release and we have release managers (:heart: @jonringer & @worldofpeace). You are benevolent dictators, but not for life, so perhaps this setup is perfect for NixOS :slight_smile: .

4 Likes

We have discussed package tests very often, and the current best practice is to use passthru.tests.

https://github.com/NixOS/nixpkgs/issues/73076#issuecomment-552046457

Here is an example where i used it: phoronix-test-suite: add tests by davidak · Pull Request #75117 · NixOS/nixpkgs · GitHub

I like to test --version, because then you not only know it runs, but also that it was actually updated.
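To make this concrete, here is a minimal sketch of the passthru.tests pattern with such a --version check. The package name, version, and URL are made up for illustration; the pattern itself is the one used in nixpkgs (see the linked PR for a real example).

```nix
# Sketch: a package exposing a runtime version check under passthru.tests.
# Everything named "mypkg" is hypothetical.
{ stdenv, fetchurl, runCommand }:

let
  mypkg = stdenv.mkDerivation rec {
    pname = "mypkg";
    version = "1.2.3";

    src = fetchurl {
      url = "https://example.org/mypkg-${version}.tar.gz";
      sha256 = "..."; # elided
    };

    # `nix-build -A mypkg.tests.version` builds just this check;
    # passthru does not affect the package's own output hash, so
    # adding tests causes no rebuilds of the package itself.
    passthru.tests.version = runCommand "${pname}-version-test" { } ''
      # grep -q makes the test derivation fail unless the binary
      # reports the packaged version.
      ${mypkg}/bin/${pname} --version | grep -q "${version}"
      touch $out
    '';
  };
in
mypkg
```

The self-reference (`${mypkg}` inside the `let`) works because Nix is lazy and the test derivation only depends on the package’s output path.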

My vision is to automate package updates, builds, and runtime tests as much as possible, so that review is just checking whether the PR diff looks sane. Fully automate nix-update · Issue #13 · ryantm/nixpkgs-update · GitHub

The next step is for me to document this, and then we can encourage contributors to add tests. But I do many things and have limited time… feel free to help!

I’m also working on tests for font packages: Add buildFontPackage with test by davidak · Pull Request #93533 · NixOS/nixpkgs · GitHub

2 Likes

I have daydreamed a little about whether it’d be helpful to have a system for nudging package users about things they use with a low-friction way to collect feedback, and a parallel project to find UI/X wins that leverage that feedback.

I think I’ve written about this at more length before, but some meaningful things in that category might be:

  • collect meaningful test cases from people at (or near) the point/time of use
  • notify people when something from nixpkgs in their system/user config or a nix-shell doesn’t have a maintainer and help them adopt it
  • nudge people to mark themselves as known-users (would need some GC mechanism?) that a maintainer can ping for feedback/information
  • report build failures with automated, transparent collection of relevant platform information, and maybe some light troubleshooting (“It looks like you have sandboxing configured; does it build if you add --option sandbox false?”)
  • likewise, report this-is-working-to-the-best-of-my-knowledge signals that may help with triage, or even enable self-service help for some classes of problem (i.e., providing a way for a maintainer or even an end user to quickly see whether anyone has reported this version as working on the same platform; “I noticed this failed to build; you can run nix hobgoblin <store-path> to see if anyone has had better luck with it.” or even, “Fair warning: no one on your platform has reported successfully building this yet. Is it okay for me to automatically report whether it worked or not?”)
3 Likes

Due to Cachix funding done in open source spirit (also known as bootstrapping funding) and due to my personal savings I’m able to spend 100% of my work time on Nix and Cachix.

My mission is to bring the Nix ecosystem to the masses.

Each customer using Cachix contributes to making that sustainable, mostly thanks to the ease of setup and to using Cachix as a real, non-permanent cache instead of relying on never GC-ing it. Lots of small teams appreciate that!

I believe the major obstacles lie in documentation and infrastructure, something I’ve been focusing on for the last two years and will continue doing for years to come.

17 Likes

Would it make sense to make a generic test where we could specify how to get the version (like with --version and maybe a regex) to avoid having to put a test.nix file with every package?

Sure

We can create a test-version.nix that takes those parameters as input and can be used in any package, alongside more complex tests.
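One possible shape for such a helper; everything below (the file name, argument names, and defaults) is a hypothetical sketch, not an existing nixpkgs function:

```nix
# test-version.nix - hypothetical generic version test.
# First argument set comes from callPackage; the second configures the test.
{ runCommand }:

{ package
  # Command run from the package's bin/ directory; the default assumes
  # the binary is named after pname and understands --version.
, command ? "${package.pname} --version"
  # String that must appear in the output.
, version ? package.version
}:

runCommand "${package.pname}-version-test" { } ''
  output="$(${package}/bin/${command} 2>&1)"
  echo "$output"
  # Fail the test derivation unless the version shows up.
  echo "$output" | grep -qF "${version}"
  touch $out
''
```

A package would then do something like `passthru.tests.version = callPackage ./test-version.nix { } { package = mypkg; };`. Supporting a regex, as suggested above, would be a natural extension via an extra argument and `grep -qE`.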

2 Likes

I know myself and many others appreciate your work.

One of the big blockers to using Nix at my work (besides the bus factor) is the “risk” concern around how well supported it is. Products like Cachix and integrations into other ecosystems like GitHub Actions help demonstrate that it’s a valuable platform.

Being able to easily “show” value goes a long way toward adoption.

7 Likes

As an outsider and out of curiosity: is there a way to benefit from the security effort that other distros’ maintainers put in when they review the same source code that, in the end, is packaged for Nix too?
It could be as simple as waiting a week or two (to see whether someone else finds security issues) while actively communicating about what others find.
Maybe this would end up in a lot of overhead, but from a security point of view, knowing how many eyes (and maybe whose eyes) have reviewed code is a good way to estimate the risk of malicious code going undiscovered.
It would be like: OK, the Debian, Arch, and Fedora maintainers didn’t find anything, so it should be safe to package. This way, the effort would be shared between the distros.

1 Like

I don’t think there’s any official platform for that type of discussion between package repositories.

But generally, if there is an issue with a package, most nixpkgs contributors will let the upstream source know about it. However, this is usually related to packaging.

From a code review perspective, I can’t speak for others; but I do very little code review unless there’s regression or behavior change in a program. I will usually review the changelog if it’s a popular package (or causes many rebuilds).

Nixpkgs does have a form of automatic CVE labeling through a partnership. Issues · NixOS/nixpkgs · GitHub

I have limited knowledge here, but I don’t know of any distro where package maintainers review the application code itself as a matter of general process. Most of the code they review is build-related instructions, or additional patches for old software that no longer works.

I like the idea of having some ‘trust indicator’ for automerging new upstream versions.

Example

Nixpkgs contains foo-3.4.1
Upstream project ‘foo’ releases foo-3.4.2.
To minimize the human intervention needed to merge this into nixpkgs, let the bot check whether (e.g.) Debian sid also has this version. If Debian does, then this is one of the (multiple) factors that could lead to an auto-merge.

1 Like

This is probably the main reason that I’m still a ‘user’ and not a ‘contributor’.
Other than the hurdle of having to create a GitHub account to contribute, just seeing the number of 2.3k open pull requests feels like a ‘red flag’.

2 Likes

I think people who consider these numbers big are mistaken! We are about to cross 100k issues + PRs. So one way to look at it: 4.4k + 2.3k outstanding out of 100k is only 6.7% outstanding. Another way to look at it: we merged 1,711 PRs in the last month, so the backlog is only about one month deep.

Also, don’t forget that NixOS is a complete Linux distribution.
For comparison, here’s a screenshot from Ubuntu’s bug tracker.

This is exactly the kind of sentiment we need to figure out how to dispel so we can encourage more helpers! Any ideas what would convince you, @kvtb?

15 Likes

I think prominent and massively useful label automation (not necessarily merging) would help, combined with a dashboard. And I think you are right: the answer is to be sought in the eye of the beholder.

I had the exact same impression about the number of PRs as @kvtb, but was simply not aware of comparable numbers or the global view. The latter certainly helped dispel the impression, but I suspect its main origin is how GitHub presents those numbers in the first place. I’m not sure, though, whether issue or PR count (or how many per month are closed/merged) is very informative about the momentum of nixpkgs. My gut feeling is that much of it is automated bumps and thus noise; it just reflects how up to date the package collection is. So I concur with @blaggacao that some kind of evaluation of the labeling would better reflect what is actually going on: new packages, bugs fixed, refactorings, architectural changes, …

1 Like

I think another nice way (though an abstraction) of how we are doing is the Repology freshness graph:

https://repology.org/graph/map_repo_size_fresh.svg

The current graph is linked above. We have almost the largest package set (AUR has a bit more packages), and by a wide margin the most fresh (up-to-date) packages. It would also be nice to have a ranking of freshness per package. PRs do linger sometimes, but as the graph shows, we are doing really well at adopting packages and maintaining them.

Anyway, we can debate things endlessly, but the primary way to do even better is to go out and actually review PRs. We could probably clear most of the backlog if we had a dozen more people reviewing 5-10 PRs per day. Automation can solve a lot of our problems, but not every problem.

Also, I spent some time a while ago going through the long tail of old PRs, and many of them are not simple version bumps (though those slip through the cracks as well), but have substantial changes that require someone familiar with the software/domain to review.

8 Likes