Quality control for packages?

A simple suggestion: extend the meta.broken field into something that represents the level of maintenance a package receives, so we could differentiate between “I make sure this package builds” and “I use this extensively in my day-to-day work and thus test new versions thoroughly”. This would also help the PR review process: if the package’s level is just “it compiles”, I can feel safe merging a PR if ofborg says it’s fine, while if it says the maintainer does extensive testing, I would leave it to them (this level of maintenance should also come with some level of availability, of course, or even require multiple people listed as maintainers).
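As a rough sketch, such a field could sit next to the existing meta attributes. Note that `maintenanceLevel` and its values are invented here for illustration; nothing like it exists in nixpkgs today:

```nix
{
  meta = with lib; {
    description = "An example package";
    maintainers = with maintainers; [ somebody ];
    # Hypothetical field: how thoroughly the listed maintainers
    # exercise this package.
    #   "builds"    = maintainer only verifies it compiles
    #   "smoke"     = maintainer briefly runs it after updates
    #   "daily-use" = maintainer depends on it and tests new versions thoroughly
    maintenanceLevel = "daily-use";
  };
}
```

A reviewer (or ofborg itself) could then surface this value on PRs to decide whether a green build is enough to merge.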


Thanks for pointing to this discussion:


I’ve wondered whether the Nix ecosystem could better leverage the knowledge and activity of its userbase with fairly simple tools composed thoughtfully.

I wrote out several specific examples initially, but I think it risked obscuring the forest with the trees. It’s not one big idea, just a lot of small ones. Most are about identifying useful knowledge/information in the ecosystem, finding low-friction, privacy-respecting ways to collect/aggregate it, and ensuring it makes its way to where it’s useful (for maintainers, contributors, and users). Some themes are: managing user expectations/frustration with transparency; using focused, well-placed nudges to collect information or solicit help; and making a package’s context more readily available to maintainers/contributors.

Some broad categories:

  • Let people using an [at-risk] package know that it’s [at-risk], that the community could use their help making sure it’s in good shape, and give them low-friction ways to give feedback that are context-sensitive to the reason(s) it’s at risk.

    I can imagine subtypes/reasons like: low-traffic, no maintainer, inactive/inattentive maintainer (new upstream releases going unaddressed for a while), marked as broken, not marked as supporting their platform, recently updated (with a history of problems after updates), has reported problems, dependency recently updated with a history of breaking after it does, has an outstanding PR/commit that marks it broken or gives it a more restrictive platform…

  • When a package is missing something (tests, tests that exercise a specific dependency, a useful description, a known changelog file, its homepage domain no longer works, etc.), solicit actionable help from people when they use it.

  • Use heuristics to identify common local override patterns (upgrades, downgrades, adding/removing inputs or feature flags, etc.) that imply something about what’s in nixpkgs, ask the user to confirm the hunch, and either pass the information on or nudge them to contribute it.
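As an illustration of the last point, one of the simplest patterns such a heuristic could recognize is a local version bump in an overlay. The package name and version here are made up for the example:

```nix
# An overlay that upgrades a package past the version in nixpkgs.
# A heuristic scanning user configs could spot the version override
# and ask: "You upgraded foo locally. Is the nixpkgs version outdated?
# Would you like to open an update PR?"
final: prev: {
  foo = prev.foo.overrideAttrs (old: rec {
    version = "2.1.0";
    src = prev.fetchurl {
      url = "https://example.org/foo-${version}.tar.gz";
      hash = prev.lib.fakeHash; # placeholder; a real override pins the hash
    };
  });
}
```

Downgrades, added/removed inputs, and flipped feature flags would each map to a similarly recognizable override shape.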


This approach is exactly to the point: lowering the threshold for constructive interaction.

I also like your examples in general. The devil is in the details, though. Having just one of the simplest mechanisms done well would be worth a lot.

I find the possibilities interesting, which makes it easy to go overboard on speculative detail in an effort to convey some sense of the possibility space and all the ways these ideas could be enhanced or built on.

Some simple examples that are actionable with the information available now, without adding any new way to collect/aggregate data:

  • Let people know when they build/use a package that has no maintainer. A heavy-handed take could encourage them to become a maintainer outright. A subtler take could gate the call-to-maintainership behind questions like how familiar they are with the package, how much they use/like/depend on it, whether they know what Git is…
    • The call-to-action may start as a simple message, but a later enhancement might ask if they’re willing, with options like: [yes–I know how; give me the path and I will take care of it], [sure–but walk me through it], [ask me again later], [no]
  • Let people know when they build/use a package with no tests.