We need more defined guidelines for package inclusion

Currently we have many packages that are unmaintained, and part of this problem is that
we aren’t setting any metric for what makes something suitable for inclusion in nixpkgs.
Other distros like Fedora have guidelines like What can be packaged in Fedora, but I think we’re all aware that NixOS is nowhere near as strict as other distros when it comes to package inclusion.
But I don’t think we’ve documented even the legal reasons for when something should be excluded.

Personally, I ask the following questions when reviewing a PR:

Does this have users?
Is it maintained upstream?
Will it be maintained in nixpkgs?

Typically that will exclude very low-impact packages
and abandonware. The best part is you’ll avoid expanding
the Picking up garbage project :smile:

My hope is to get this going through our Mergers’ thought process, to discuss
what our guidelines are, and to work out how we can proceed with getting those documented.

12 Likes

This topic was catalyzed for me earlier today in

Being clear on this is going to help people.

1 Like

This is a very difficult issue and it has come up before. I feel like this is the biggest obstacle we have currently. The more packages we add, the harder it gets to actually maintain Nixpkgs. Without any guidelines or rules it is also difficult to justify what can go in and what cannot, resulting at times in a frustrating experience for both contributors and maintainers.

Aside from Platform Support Tiers we should have Package Support Tiers (Add common packages to Nixpkgs · Issue #60005 · NixOS/nixpkgs · GitHub, User:Profpatsch - NixOS Wiki). Still, having these tiers leaves a lot of room for deciding whether to include a package or not (and if so, in what tier!).

I recall discussions regarding Python libraries where there were also opinions that we should only include libraries that are actually used elsewhere in Nixpkgs. While I could understand that view, I disagreed because there are various difficult-to-package Python libraries, and tools like pypi2nix were (and still are) not sufficient in those cases. At the same time, I am hesitant to accept certain major PRs with tight dependency requirements and without good ways of testing them (e.g. Django packages).

The questions you stated there are good ones to consider.

3 Likes

Are there any statistics on package use? Perhaps binary cache hits as a rough proxy for the popularity of a package?

This could give a rough idea of what sort of packages are actually used.

(I discovered yesterday that Commander Keen 4 is actually in nixpkgs; it seems that everything and the kitchen sink is packaged :wink:.)

2 Likes

We might be able to come up with rules on what constitutes a package that absolutely fits and what constitutes a package that absolutely doesn’t. This would leave a gray area open to discussion when someone tries to add a package to nixpkgs. At least that way we could point out that a package does/does not belong because of X.

1 Like

Determining the number of users is difficult: reviewers outside of a domain are not going to be familiar enough with popular usage to evaluate those claims.

I also think it’s unnecessary. We already accept packages with a known low number of users because it’s operationally cheap to do so, e.g. package database importers.

I’m also going to consider the next two points as both relating to operations cost. With that in mind, I propose we stop attempting to determine the number of users and instead focus on operations cost. That will likely correlate with the number of users but avoids the inherent difficulties in measuring them.

Consider the PR template as a definition of what can be included: that which 1. compiles under the sandbox, 2. compiles on other relevant platforms, 3. at least launches, etc.

For all packages included in nixpkgs, an update requires a PR. That implies an operations cost to include any update to those packages. New packages also have to pay that cost and add to the maintenance cost.

Suppose we decided that a nixpkgs release has the property: everything works to the degree required in the PR. Also suppose we have a goal to produce a release every X weeks. Given the number of people available to perform the operations necessary to ensure a valid release, that puts a limit on how many packages can possibly be included: only what can be afforded.

What that limit is? No idea. Cost varies per package, etc. But I’d bet investigating operations costs would provide better guidance than user counts.

1 Like

I agree fully. In the end this will be a sustainability issue for NixOS.
It’d be nice if there were a committee of sorts responsible for giving
direction on this issue. It certainly needs a dedicated effort.

And I also disagree with requiring Python libraries to be used elsewhere in nixpkgs for inclusion.
I think the Python framework in nixpkgs stands on its own.

In my time maintaining Python libraries, I’ve taken to disallowing global access in nixpkgs for abandoned libraries that larger packages unfortunately depend on.

This PR comes to mind:
pythonPackages.infix: init at 1.2 by dmvianna · Pull Request #63075 · NixOS/nixpkgs · GitHub
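
For concreteness, here is a rough sketch of that pattern (all names, versions, and paths are hypothetical): the abandoned library is built with `buildPythonPackage` inside a `let`-binding, so it never appears as a `pythonPackages` attribute that other packages could start depending on.

```nix
# Hypothetical sketch: "oldlib" is abandoned upstream, so instead of adding
# it to python-packages.nix (where it would become pythonPackages.oldlib),
# it lives in a let-binding visible only to the one application that needs it.
{ lib, python3Packages }:

let
  oldlib = python3Packages.buildPythonPackage rec {
    pname = "oldlib";
    version = "1.2";
    src = python3Packages.fetchPypi {
      inherit pname version;
      sha256 = lib.fakeSha256; # placeholder, not a real hash
    };
  };
in
python3Packages.buildPythonApplication {
  pname = "bigtool";
  version = "0.1.0";
  src = ./.;
  propagatedBuildInputs = [ oldlib ];
}
```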

Wonder what your opinion is on that.

But it is not only about cost. Consider two packages A and B that both have weekly updates, where most updates are just a version + hash bump. Every now and then, you have to add or remove some dependencies. Sometimes upstream screws up and you need a patch. These packages have roughly the same operational cost. However, now take into the equation that package A has two users and package B has two thousand users. Given a limited amount of manpower for triaging PRs, isn’t package B far more important than A?

I fully agree that user counts are hard to estimate.

I am only a maintainer for a small number of packages, but for some of them I have had serious doubts whether to submit them or not. For instance, fasttext is used widely in the natural language processing community. But within the Nix community, there is probably a handful of users at most. Each PR to fasttext will take some time away from the people who triage PRs and from committers.

Such choices would be easier if there were some strong domain-specific package sets one could contribute to outside nixpkgs. I think this is one weakness of the current NUR – the package sets are ‘partitioned’ by maintainer. It would probably be nice to have some domain-specific package sets (e.g. natural language processing, bioinformatics, DOS shareware games) besides personal package sets. Such domain-specific sets could build their own trust and wouldn’t have to rely on time from nixpkgs committers. I have toyed with the idea of making a natural language processing package set containing stuff that practitioners use but that would probably be out of scope for nixpkgs, and trying to build a community around it. But I am not sure whether that would be liked or whether one should aim to upstream everything into nixpkgs.

6 Likes

Nix along with its build and run-time dependencies.

A package that is unmaintained upstream, and especially when the author says it should no longer be used. So e.g

In my opinion that should not go in, at all. If it does happen to go in, it should be marked broken. By now it seems it is maintained though.
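
For reference, marking a package broken is a single attribute in its `meta` set (a minimal sketch, not tied to any particular package); evaluation then refuses the package unless the user opts in:

```nix
meta = {
  # unmaintained upstream and known not to work: Nix will refuse to
  # build this package unless the user sets allowBroken = true
  broken = true;
};
```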

It’s cheap to press the Merge button, but the initial reviewing and the maintaining afterwards are not. Unless it’s a trivial package with few dependencies.

I think it is fine if someone (or preferably more than one person) opens a PR to introduce guidelines in the Nixpkgs manual. It can then be discussed there, and you’ll see how many people are interested in working on it. It may of course happen that a NixOS RFC will be requested. Anyway, what I am saying is: the best way to get this to happen is to come up with an initial proposal that can be discussed and put in place. As soon as a group of interested people forms and does things, I doubt anyone will block you.

Absolutely. The suggested sets are, I think, too specific given the number of users we have. I would widen it a bit, to say “scientific computing” and “gaming”. If we have an overlay for each that follows the same branching/releasing as Nixpkgs, then I think that could work very well. Discoverability and ease of use are key, and that’s where Flakes may play an important role. I am a bit skeptical towards Flakes providing libraries, though, but I envision these package sets providing both a Flake and an overlay.

I think it would not be a bad idea to split the scientific computing bits out of Nixpkgs into a separate repo. Core packages would still remain in Nixpkgs (e.g. blas, lapack, numpy) but things like Tensorflow would move out.
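
As a rough sketch of what such a split could look like (repo layout and attribute names are hypothetical), the separate repo would essentially export an overlay that grafts its packages onto whichever nixpkgs branch the user already follows:

```nix
# overlay.nix at the root of a hypothetical "scientific computing" repo;
# core packages like blas/lapack/numpy stay in nixpkgs and are reused,
# while heavyweight leaf packages are built from this repo's expressions
self: super: {
  tensorflow = self.callPackage ./pkgs/tensorflow { };
  fasttext = self.callPackage ./pkgs/fasttext { };
}
```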

Note, though, that when setting up these kinds of package sets, we also need a good way to set up CI for them, like an ofborg instance.

2 Likes

It’s cheap to press the Merge button, but the initial reviewing and the maintaining afterwards are not. Unless it’s a trivial package with few dependencies.

When I’m discussing operations cost, I’m referring to the full lifecycle of operations costs, including sustaining work.

(oh that’s how you use the quotes…)

Agreed on both counts. Seems like those 2 requirements are nice, compact and clear.

Maybe we should investigate some process by which “low priority” packages can get rubber-stamp approval from the maintainer (especially if the maintainer submitted the PR), in particular when it’s just bumping the version and hash. This does run the risk of having multi-platform packages end up breaking on whatever platform the maintainer doesn’t use (e.g. maybe it needs a new dependency on darwin, but not on linux), but if it’s truly a package with very few users, this might be fine.
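
To make “just bumping the version and hash” concrete, here is a minimal hypothetical package where a routine update only touches two values:

```nix
# hypothetical low-priority package: a routine update changes only
# `version` and the source hash, i.e. the two-line diff a maintainer
# could rubber-stamp
{ lib, stdenv, fetchurl }:

stdenv.mkDerivation rec {
  pname = "smalltool";
  version = "1.2.4"; # was "1.2.3"
  src = fetchurl {
    url = "https://example.org/${pname}-${version}.tar.gz";
    sha256 = lib.fakeSha256; # placeholder; updated together with version
  };
}
```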

A problem here is identifying which packages truly have few users. Maybe we could use binary cache download statistics (including cache misses of course), but that means we have to accept the package first.

Predicting the future is based on our view of the world and past experience. Any metric that is based on that is going to be hard to enforce uniformly unless it’s done by a single person.

For example, if we want to count users, we probably also want to take into account potential future users. If we don’t do that we might remove a package too soon, or marginalize a category of users that hasn’t discovered nixpkgs yet. Maybe nixpkgs isn’t used by statistical mathematicians yet and we are missing some packages. Maybe one package appears, then a second, until one day most of the packages needed for that activity are available, and now the barrier of entry is low enough that it brings in a whole set of users. Every user of nixpkgs has their own, different subset of packages that they care about.

What’s important is that the person who contributes a package is willing to maintain it. This also means that we need the tools to allow them to maintain the package, like automatically mapping issues to maintainers and allowing maintainers to merge PRs for their own packages. If we can do that, then the responsibility of triage will be much better distributed.
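
The maintainer mapping is already machine-readable through `meta.maintainers`, so such tooling seems feasible. A minimal sketch that extracts the GitHub handles of one package’s maintainers (evaluable with `nix-instantiate --eval --strict`):

```nix
# sketch: meta.maintainers already carries enough structure to route
# issues and PRs; here we pull the GitHub handles of hello's maintainers
let
  pkgs = import <nixpkgs> { };
in
# `or null` guards against older maintainer entries without a github field
map (m: m.github or null) pkgs.hello.meta.maintainers
```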

One caveat, though, is library packages. That’s a difficult issue because the maintainer’s responsibility expands over time. Maybe they were willing to maintain the library, but now there are 10 packages with different requirements depending on it. Their responsibility is no longer linear with what they contributed, which sucks. I don’t have a good answer for that, unfortunately.

6 Likes

An objective perspective can be achieved by integrating every subjective perspective.
My metric is in fact a piece of that; I know it isn’t perfect or accurate in every situation.
It’s meant to make you question.

Thank you for your proactive advice. I’ll see if we can get an issue in nixpkgs to track
this and perhaps formulate a proposal. The issue here for me is that I don’t know where to
start with a guideline, because nixpkgs is so diverse. For example, I could much more easily write a guideline for something domain-specific in nixpkgs.

Am I to interpret this as saying that a defined guideline for general package inclusion wouldn’t be helpful?

1 Like

Another point to consider is keeping track of which maintainers are active and which are not. I know there are a number of people in the maintainers list who have moved on from nixpkgs but never let us know. If we knew when people moved on we could attempt to find new maintainers for software which needs it.

5 Likes

In fact, I’d bet that a start on guidelines is possible. We could use the following as the complete initial guidelines without issue:

“Packages that will not be included: A package that is unmaintained upstream. Or stated by the author to no longer be used.”

Accurate but imprecise, which is totally fine for a start. :slight_smile: I know people will want fully precise guidelines, but I don’t think such a thing can be created wholly at once.

I’m definitely hopeful that this effort will have the effect of pushing forward the tooling for nicely supporting package sets outside of nixpkgs. It sounds like everybody is interested in that as well.

2 Likes

I’ve wondered about this, from outside the bubble. I’m probably not rooted enough to have a long coherent thought on the topic. So this is mostly oblique, addressed more to the maintainability side of this.

But I have wondered how/if the various nix commands could be enhanced to help build big levers out of small nudges. That’s really abstract, so some more-pragmatic examples… (I’m imagining manual commands with configuration options to make the behavior automatic in almost every case here; never default-on; IIRC there’s overlap between some of these and existing helper-bot behavior).

  1. If heuristics can minimize false reports, a command for reporting package build failures in an aggregable way, paired with inline and end-of-output prompts to use them. If the heuristics are tractable, I think there’s more than one lever lurking here.
    • Terse issue reports that reflect the heuristic confidence and collect specific details based on context are probably easier to triage than human-language reports by someone who isn’t quite sure what broke or didn’t copy the right part of the error stack.
    • A system fielding code-reported issues might be able to suggest fixes (i.e., generate a PR and start the testing process before a human has had a chance to triage), or even automatically report some classes of upstream issue when the information is available. Broken packages can be identified sooner.
    • Users running into already-automatically-reported build breaks can be given a URI for the issue thread to reduce duplicate effort on github, here, IRC, reddit, etc.
  2. Same/similar mechanisms might also help triage user errors (again, github, here, IRC, reddit, etc) without burning time/effort/patience of key community members.
  3. When someone/thing triggers a build of a nixpkgs package sourced from a well-known platform with easily-queried versioning (json/xml/rss/etc.) and the package version is out-of-date and there isn’t already a commit/report/PR, this could be automatically reported (into nixpkgs, and out to the user; see the version-check sketch after this list). If there’s already an automated report on a package update and a new version is released, the issue can be updated with the latest version to save time triaging/shepherding updates for the fastest-moving packages. If it’s out of date but the new version is already in the pipeline, it could just let the user know (whether it’s in a newer commit on their channel, or whether it’s in the next channel update, etc.)
    • This could also generate a PR and start tests. Naively, the same dependencies. With a few adapters (I’ve been meaning to start a separate thread…) this could be at minimum sensitive to the fact that the files that might play a role in defining dependencies for a given stack have changed.
    • If these are derived from activity rather than imperatively generated by bots, the velocity and workload will (hopefully?) be better balanced around what people are using rather than purely on whatever publishes most often.
  4. A manual command for reporting issues with a package that built fine but isn’t running correctly. Maybe this is a command for wrapping the whole execution and trapping output. These probably don’t auto-open issues without meeting a threshold number of reports unless heuristics say it smells like a common nix-ecosystem error pattern. Optimistically, something like this (if it produces real/hard errors), could even record the syntax and turn it into a test.
  5. If reported back, versions explicitly-built from sources with a knowable version-format probably represent (maybe in context with general cache hits/misses?) some signal about how people use a package. It’s not worth rat-racing a package if the sanctioned nixpkgs version isn’t significantly more common than others once overlays/shells/etc. are accounted for. It could help triage towards packages where the nixpkgs version is clearly canonical.
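
On point 3, the comparison half is already a Nix builtin; only the feed query would need new tooling. A minimal sketch, with the upstream lookup left out and replaced by a hypothetical value:

```nix
# sketch: flag an out-of-date package with builtins.compareVersions;
# `latest` would come from querying the upstream release feed (json/rss),
# which is the part that would actually need to be built
let
  pkgs = import <nixpkgs> { };
  packaged = pkgs.hello.version;
  latest = "2.99"; # hypothetical value from a release-feed query
in
{
  updateAvailable = builtins.compareVersions packaged latest < 0;
}
```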

I have some other thoughts, but I’ll hold off for tomorrow.

“Packages that will not be included: A package that is unmaintained upstream. Or stated by the author to no longer be used.”

There are a ton of packages that are either definitely unmaintained upstream or theoretically maintained with no commits, that just work and keep working. I would actually trust some of these more because of that.

Software maintenance is a symptom of problems, not something desirable per se.

But «likely to have dangerous problems discovered with no expectation of upstream fix» is probably indeed something to avoid (not all packages can realistically have meaningful hidden dangerous problems in them by now)

4 Likes

Software maintenance is a symptom of problems, not something desirable per se.

I don’t want to get too deep into the philosophy of software maintenance here, but I think this helps us only in the best of all possible worlds, and we’re probably not living in that one. Here and now, the overlap between “is actively maintained” (or at least: if a problem somehow pops up, an issue can be opened and will likely be dealt with within a foreseeable time-frame) and “it’s easy to see that this is a package that can be included without too much hassle” is almost complete.

Now, there are certainly some packages out there that are only updated very irregularly upstream (maybe only 1-2 commits a year), and those should definitely be allowed in. Still, as a first gauge at the time of the init commit for a new package, upstream activity is a pretty good heuristic. If somebody wants to include a package which hasn’t received any upstream development for a long time, I think the burden of proof is then on the would-be package maintainer to convince some reviewers that it’s include-worthy.

We won’t be able to come up with rules that will work for every single instance. There has to be some wiggle room. We also won’t get it right for every single package. But having some rules of thumb might help a lot. (A list of checks for new packages, akin to the checklist for every PR right now, might be enough already. A lot of PRs get merged without fulfilling every single check, and package inclusion would work similarly. A rough set of guidelines should do.)

1 Like

Sorry. What I wanted to get across is how the forces are being directed. Instead of enforcing policies, ask questions of the package submitter and let him make a decision for himself. It might sound like a detail, but humans have a tendency to create policies to solve problems, and this is where the community starts to deteriorate. Hopefully I am not going too meta or anarchist :slight_smile:

Making a guideline to inform the contributors is a good idea.

I think people submitting a new package should ask themselves:

  • Are they willing to maintain that package or is it just something thrown over the fence? If they aren’t willing to maintain the package, it’s going to become dead code quite quickly.
  • Is upstream healthy? If there are no stable releases, the project might be too young. If it’s unmaintained it’s going to be more work for you.
  • Is the package going to be useful to others? If not, it’s fairly easy to maintain a private overlay (and binary cache); see the sketch after this list.
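
On the private-overlay point, that really is a small amount of code. A minimal sketch (the package name is hypothetical; the file location is the conventional one that Nix tools pick up automatically):

```nix
# ~/.config/nixpkgs/overlays/private.nix: keeps a personal package
# entirely out of nixpkgs while still integrating it with your system
self: super: {
  mytool = super.callPackage ./mytool.nix { };
}
```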

Assuming that the submitter is not an asshole or a psychopath, I think it’s fine to leave the answers to those questions to them.

8 Likes