Should we consider a deprecation/removal policy for Nixpkgs?

Quoting @Profpatsch on the GitHub issue:

No. You can’t just delete software people depend on as their daily
driver based on a skewed idea of deprecation.

I do not know any deprecation policy in Nixpkgs. Should we consider it?


No. You can’t just delete software people depend on as their daily
driver based on a skewed idea of deprecation.

Well, taken to the extreme, we should just keep all software. Unfortunately,
software needs to be maintained, especially when ecosystems like Python are involved, as they are very fluid and prone to upstream breakages. (Yes, lilyterm is written in C, but it still depends on packages which depend on python 2.)

I do not know any deprecation policy in Nixpkgs. Should we consider it?

I’m not aware of one. And I think it would be really difficult to achieve.

For one, it’s impossible to gauge how important a particular package is to someone.
And deprecation policies are a massive burden on maintainers; unless
there’s a system in place, a lot of these efforts get dropped. Just take a look
at how many TODOs and FIXMEs in nixpkgs were written 3+ years ago.

For a deprecation policy, we could use lib.warn. However, that will cause evaluation failures if it’s part of a normal evaluation.
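As a sketch of how that could look (the attribute name, path, and message are hypothetical), `lib.warn` prints a trace message when the wrapped value is evaluated and then returns it unchanged:

```nix
# Hypothetical deprecation warning; the attribute name and path are made up.
# lib.warn prints "trace: warning: ..." when mypackage is evaluated,
# then returns the wrapped derivation unchanged.
mypackage = lib.warn
  "mypackage is deprecated and will be removed in a future release"
  (callPackage ../applications/misc/mypackage { });
```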

We could also “deprecate” packages by putting them into aliases and then using a lib.warn for users. But this isn’t a great solution in practice, as it will likely only be meaningful for leaf packages. It would be weird to get a warning about openssl 1.0.2 when you don’t reference it specifically. Also, putting packages into aliases makes them unusable for non-aliased downstream dependencies, which is also awkward, as you would then need to put all downstream packages into aliases as well.

Once a package gets placed in aliases, it also ceases to be evaluated by ofborg or collected by nixpkgs-review, effectively meaning that it can break at any time and core maintainers will be unaware of the breakage. Personally, I think it would just be better to put a throw "this package is abandoned/unmaintained/broken"; and call it a day.
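A minimal sketch of that throw-based approach in `pkgs/top-level/aliases.nix` (the message wording is illustrative):

```nix
# Evaluating the attribute now fails with a clear message
# instead of silently shipping a broken package.
lilyterm = throw "lilyterm has been removed from nixpkgs: it is unmaintained upstream and depends on Python 2";
```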

On the issue of lilyterm / libvte:

Speaking as a python package codeowner, the python ecosystem is a major pain in the ass just to keep up-to-date, let alone apply deprecation policies to. I could easily sink 40 hrs/week into python maintenance alone. These forceful prunings of the python ecosystem are a direct consequence of upstream cpython and the surrounding ecosystem abandoning python2. It should be of no surprise to anyone that these changes would occur someday. For python2, people have known since 2008, when Python 3 was first released, that python2 would eventually reach end-of-life; there was plenty of time for projects to adapt.

Also, in the example given, you can adopt a non-abandoned terminal emulator, we have
~30 of them available in nixpkgs. Find another, or fork the abandoned upstream.

The antagonistic rebuttal to “No. You can’t just delete software people depend on as their daily driver based on a skewed idea of deprecation.” is “You shouldn’t incorporate abandonware into your daily workflow and expect it to be there forever.”


The way I read the discussion about the above PR, the main cause of frustration was the fact that someone removed a package that someone else was still actively maintaining. (it looks like this was unintentional, as someone did attempt to ping all affected maintainers, so I’m not blaming anyone here - mistakes happen)

I think that should be the crux of a deprecation/removal policy: as long as someone has the energy to maintain a package in nixpkgs, I don’t think we should remove it (barring ones with a disproportionate impact on the general infrastructure, but those should be special cases anyway).

This of course gets interesting when, like here, the actively-maintained nixpkgs package depends on a now-unmaintained nixpkgs package. I think in that case the depending maintainer(s) should be considered the de-facto maintainer(s) of the dependency as well.

If no changes are needed to the dependency package, no problem. When issues start to appear with the dependency package, as I’m sure was the case with python2, the maintainer should be given some time to resolve the situation. There are different options: they could get the dependency back in shape. If upstream has abandoned the package, perhaps they could even consider taking up upstream maintainership of it. Of course that’s not very realistic in the python2 case: there, a more feasible solution would likely be to create a patch or work with upstream to make the depending package work with python3, or to remove the python dependency entirely. That could still be considerable work, of course, but then it’s up to the maintainer to either take that on or decide to drop the package after all.

I definitely sympathize with those who lament the state of affairs where constant changes are needed for software to stay working (I fondly remember some of the small X11 tools that ‘just work’ even after years and years without updates). OTOH we can’t expect upstream to keep backwards-compatibility indefinitely, and I think we should use our powers of keeping multiple versions around sparingly - especially when there’s no-one around feeling responsible for those old versions anymore.


I think you’re trying to solve more than one problem here. 1) When should a package be considered for removal/deprecation? 2) How do we deprecate it? The former is a pre-existing problem, and does not need to be solved to solve the latter. If the original PR had introduced a non-failing deprecation warning instead of outright removing the package, things would have been better. Solving the issue of determining when to remove/deprecate is a whole other matter.


I think the whole problem boils down to the fact that a lot of us (including me) are on an unstable channel, which by definition is unstable, and yet have higher stability expectations than it provides. Yes, it sucks when your package suddenly disappears or some options change or whatever, and now you have to spend some time fixing that instead of doing what you were actually doing.

However, I don’t see a good way to improve the situation that does not require a disproportionate amount of maintenance. Especially for dropped packages, simply swapping the attribute out for one from the latest release or an older pinned unstable should be an acceptable burden.
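As a sketch, pulling a dropped package from an older pinned nixpkgs could look like this (the revision and package name are placeholders):

```nix
# Import an older nixpkgs revision that still contains the package.
# <pinned-rev> and somePackage are placeholders.
let
  oldPkgs = import (builtins.fetchTarball
    "https://github.com/NixOS/nixpkgs/archive/<pinned-rev>.tar.gz") { };
in {
  environment.systemPackages = [ oldPkgs.somePackage ];
}
```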

For new releases, on the other hand, I think breaking changes are fine. But the communication can be improved. For example, we could automatically generate a release notes section with all attributes that got removed.
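A rough sketch of how such a section could be generated, by comparing top-level attribute names between two nixpkgs revisions (both revisions are placeholders):

```nix
# Names present in the old package set but missing from the new one,
# i.e. the raw material for a "removed attributes" release-notes section.
let
  old = import (fetchTarball "https://github.com/NixOS/nixpkgs/archive/<old-rev>.tar.gz") { };
  new = import (fetchTarball "https://github.com/NixOS/nixpkgs/archive/<new-rev>.tar.gz") { };
in
  new.lib.subtractLists (builtins.attrNames new) (builtins.attrNames old)
```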


There is a difference between things breaking because of an otherwise useful change (like GCC not breaking some code maliciously, but also issuing a warning about it) and things breaking just because some people consider it bad when upstream stops changing its software’s behaviour in new ways.


The whole purpose of Nixpkgs / Nix means one can have a more tolerant policy for deprecation, since the whole system is pinned to a commit.

No one forces you to upgrade, and changes upstream are isolated from your current system :slight_smile:
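For example, a flake can pin nixpkgs to an exact commit (the hash is a placeholder), so nothing changes until you choose to update the input:

```nix
{
  # <commit-hash> is a placeholder for the pinned revision.
  inputs.nixpkgs.url = "github:NixOS/nixpkgs/<commit-hash>";
  outputs = { self, nixpkgs }: {
    # ...everything here builds against exactly that revision...
  };
}
```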


I think a reasonable policy around this would be great, especially if it comes with a notification reminding users/maintainers of the workarounds (maintaining packages downstream, pulling them from old nixpkgs versions, removing deprecated dependencies, assuming upstream maintenance, …).

I think it could be as simple as having a defined policy for notifying maintainers as was done in this case; just having a playbook and a link to a well-reasoned explanation behind the action ready to go would help even in this case.

Something prepared with a cool head and widespread community acceptance is less likely to cause disgruntling confrontations than individual comments on github.

In lieu of a good solution with lib.warn, maybe even hook it up to a discourse topic, so that people who live on unstable can be made aware of deprecated packages before they’re pulled from under their feet. Only notifying maintainers does seem a little problematic, given how many people use unstable as a rolling release distro. This way people have a way to fix their packages before the inevitable hits - a user may not even realize their package is unmaintained.

Some kind of support in nixpkgs for that would of course be preferable :slight_smile:


I really like the idea of a Discourse topic. Something like the “PRs in distress” series, but instead we link to PRs that remove packages. People on unstable that want to get a heads-up can then simply subscribe to it.

There are still details to discuss like whether those PRs should be open for at least X days to give people time to react, and whether all package removals should be posted or only “mass removals”. I think either way that thread would be rather low-traffic, but that’s actually a good thing.

Edit: let’s try this out! Breaking changes announcement for unstable


This is a great idea. We can add a label right now, some automation needs to be written first. But if we post things manually in the meantime this should hopefully work too.

neovim pins an issue to let users track breaking changes. I doubt it would scale to nixpkgs, but if we restrict it enough, it could be one solution.

  • If you have (or see) a PR that removes package or has otherwise backwards incompatible changes, link it in here (together with a short sentence about what exactly is affected).

This sounds like something a bot should handle in response to a “breaking-changes” label on a PR.


Package removal is part of the has: clean up label.

Today i’ve upgraded my unstable and I noticed the following:

trace: warning: The option `nix.useSandbox' defined in `/nix/store/xxx-source/configuration.nix' has been renamed to `nix.settings.sandbox'.
trace: warning: The option `nix.trustedUsers' defined in `/nix/store/xxx-source/configuration.nix' has been renamed to `nix.settings.trusted-users'.
trace: warning: The option `nix.maxJobs' defined in `/nix/store/xxx-source/hardware-configuration.nix' has been renamed to `nix.settings.max-jobs'.
trace: warning: The option `nix.binaryCaches' defined in `/nix/store/xxx-source/flake.nix' has been renamed to `nix.settings.substituters'.
trace: warning: The option `nix.binaryCachePublicKeys' defined in `/nix/store/xxx-source/flake.nix' has been renamed to `nix.settings.trusted-public-keys'.
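Migrating to the new names means writing everything under the nix.settings set, roughly like this (the values here are illustrative, not my actual configuration):

```nix
# Keys inside nix.settings use the nix.conf spelling.
nix.settings = {
  sandbox = true;
  trusted-users = [ "root" "@wheel" ];
  max-jobs = "auto";
  substituters = [ "https://cache.nixos.org" ];
  trusted-public-keys = [ "cache.nixos.org-1:<key>" ];
};
```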

are we transitioning from camelCased attr names to dash-based ones? Is there a rationale somewhere?


I guess specifically for nix settings, it matches the options in nix.conf.


@azazel75 wherever you see a settings option it means the key and values within are being passed directly to a configuration file unchanged. See for full details.
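As a sketch of that pass-through behaviour (the value is just an example):

```nix
# NixOS configuration:
nix.settings.max-jobs = 8;
# ...ends up in the generated /etc/nix/nix.conf roughly as:
#   max-jobs = 8
```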


Well, my two cents.

We have two cases at least:

  • An isolated leaf package
  • A library, or more generally a thing with many “reverse dependencies” (many packages depend on it)

In the case of a leaf package, it can be kept indefinitely in the tree, provided it does not break with new releases of (say) GCC. Maybe it can be removed if no one cares about it.

In the case of a library with many packages depending on it, its removal implies a huge pruning of the Nixpkgs dependency graph. Maybe a grace period of 6 months or 1 year should suffice for those.

I would suggest moving some of these outsiders to NUR; however, I am afraid of the AUR-style “second-class citizenship” problem.

The ability to make a flake of a package, and move it out of nixpkgs into a flake registry… would this lessen the impact of these decisions?

Maybe not for libraries, but for top-level applications?


Yes, it would certainly help. Indeed, we can already use NUR for those backward-compatible things.


What does NUR mean? From the context with AUR I guess it must be an abbreviation for “Nix User’s Repository” or so, but what and where is that?