Contributor retention

Part of this is, I think, documentation. The only mention in the nixpkgs manual, in the meta attributes section, is thin on how you’d write a test from scratch or what would make one meaningful. Most of the examples in nixpkgs simply select tests from nixosTests, so it takes a bit of hunting to get a sense of what patterns are “normal”, let alone best practice.
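
For concreteness, the two patterns I mean look roughly like this; the package name is a placeholder and the build attributes are elided, so treat it as a sketch rather than a copy-paste recipe:

```nix
{ stdenv, testers, nixosTests }:

stdenv.mkDerivation (finalAttrs: {
  pname = "somepkg"; # placeholder name
  version = "1.2.3";
  # ...src, build inputs, etc. elided...

  passthru.tests = {
    # the pattern most existing examples use: reuse a NixOS VM test
    inherit (nixosTests) somepkg;
    # a small "from scratch" test: check the built binary reports its version
    version = testers.testVersion { package = finalAttrs.finalPackage; };
  };
})
```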

I recall this coming up in the course of “Improve a pkgs' tests for nixpkgs-update”, but it doesn’t look like that has produced anything yet (I think mostly because it was focused on the bot, and it broke down around where the docs should go?).


No objection to bounties, but I also wonder if test-writing would feel more meaningful if we had:

  • a meta attribute for marking packages that are well-tested enough for updates to be auto-merged if they build and their passthru tests pass (something like the sketch after this list)
  • a standard for deciding when packages are well-tested enough to grant this attribute
  • a process for deciding whether a package is sufficiently tested to meet the standard?
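
To make that first bullet concrete, I mean something along these lines; the attribute name is made up and nothing like it exists in nixpkgs today:

```nix
# purely hypothetical: neither the attribute name nor the policy exists today
meta = {
  description = "...";
  # claim: if an update builds and every passthru test passes, the update
  # bot may merge it without human review
  autoMergeableUpdates = true; # made-up name
};
```

The bot side would then be simple: evaluate the attribute, and only auto-merge when it is true and the build plus passthru tests succeed.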

In the shorter term, I imagine test-writing would feel more intellectually rewarding than update/review work if the result were a significant improvement in the maintainability of a package (plus the knowledge that, at scale, you’re helping rebalance what human maintainers spend their time on). Maybe @jonringer has a sense of whether that feels less Sisyphean?

Additional levers could be applied…

  • nudging people who use under-tested packages to submit real test cases
  • requiring tests on new packages