That’s super good to know, thank you very much!
I did address a common sentiment in this thread while answering your question, but I didn’t mean to make it sound like you said it …
But would you agree that there are levels of improvement that fall below the threshold where breaking consumers is worth it, and that should therefore only be made in a compatible way?
How would you approach defining that threshold, to keep the slope from becoming too slippery, i.e. without resorting to the ol’ “I’ll know it when I see it”?
How do you like my definition of “Only break if there is just no way around it without compromising other functionality”?
I guess I did read your bringing up your involvement as a maintainer as an invitation to discuss it. But I can also see now how my wording may have been too close for comfort, so my apologies for that.
I would also ask you to refrain from wordings like “You worry so much about …”, because that makes it easier for me, too, not to “go there”.
I think what I’m trying to say is: If you must hypothetically quit because of a hypothetical community commitment to stability, then you’d hypothetically have to do what you have to do; it’s all volunteer work, after all. I’d hypothetically much rather keep you involved, though, or hypothetically even increase your involvement, so …
… I’d like to find out what you’d need for that, when faced with an increased (or even absolute) requirement to keep your packages and modules stable as a maintainer.
I didn’t answer the argument you’re referring to, because I still think it’s been asked and answered. But let me reiterate the points where you put question marks:
Depends on what the fix looks like. If it means replacing module or package names with incompatible successors, then yes: make a new thing.
The new thing has all the fixes and works better, and users will love it, especially when you warn them about the possibility/necessity of upgrading while they can rely on the old thing remaining in place.
With NixOS, this is possible. That’s the huge innovation; that’s the reason we’re patching RPATHs. The worst case will always remain that you need to fully instantiate an ancient version in a container, from git history, with the closure bloat that involves, which could just be another warning. That could be the final resting place where old modules and packages go to die: an overlay that knows about the final commit where something still works. But that’s just one possibility for handling this.
Whatever we do, I think we should fully commit to a deprecation/warning cycle of at least one release, hopefully two for important functionality.
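The module system already ships helpers that implement exactly this kind of cycle; a minimal sketch, where `services.foo` and its option paths are hypothetical examples:

```nix
# Sketch of a deprecation/warning cycle using existing NixOS module helpers.
# `services.foo` and the option paths are made-up examples.
{ lib, ... }:
{
  imports = [
    # Release N: the old name keeps working, but evaluating it warns.
    (lib.mkRenamedOptionModule
      [ "services" "foo" "listenPort" ]
      [ "services" "foo" "port" ])

    # Release N+1 (or N+2 for important functionality): evaluation fails
    # with a clear message instead of silently misbehaving.
    (lib.mkRemovedOptionModule
      [ "services" "foo" "legacyMode" ]
      "Legacy mode was removed; see the release notes for migration steps.")
  ];
}
```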
Yes, that means that introducing `attrsOf submodule` in place of what used to be named options will not be acceptable.
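For concreteness, this is the kind of option-shape change being ruled out; a sketch with a hypothetical `services.foo` module (the incompatible successor is commented out, since both shapes cannot be declared at once):

```nix
{ lib, ... }:
{
  # Before: a fixed, named option that consumers reference directly.
  options.services.foo.port = lib.mkOption {
    type = lib.types.port;
    default = 8080;
  };

  # After (the incompatible successor): an attrsOf submodule. Every
  # existing `services.foo.port = …` in user configs now fails to
  # evaluate, because the option path has changed shape.
  #
  # options.services.foo = lib.mkOption {
  #   type = lib.types.attrsOf (lib.types.submodule {
  #     options.port = lib.mkOption {
  #       type = lib.types.port;
  #       default = 8080;
  #     };
  #   });
  #   default = { };
  # };
}
```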
No, a maintainer would maintain old stuff indefinitely, because it’s policy.
Also, as long as nobody breaks the old stuff, it’s basically free.
Yes, we would need support structures for maintainers to say, “I don’t want to deal with another gcc update”, and just kick something out of their maintenance responsibility.
But maybe, instead of deleting the thing, there is a standard method to pin its inputs to various degrees (up to and including a historic nixpkgs revision), and anybody needing the old thing would be expected to keep up with the required emulation degree / closure sizes.
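The fully-pinned end of that spectrum could look like an overlay that imports a historic nixpkgs revision; a sketch where the attribute name and commit are placeholders, not real values:

```nix
# Hypothetical overlay resurrecting a package from the last nixpkgs
# revision where it still worked. `ancientPackage` and the commit
# reference are placeholders.
final: prev:
let
  pinnedNixpkgs = import (builtins.fetchTarball {
    url = "https://github.com/NixOS/nixpkgs/archive/<last-good-commit>.tar.gz";
    # sha256 = "…";  # pin the tarball hash for reproducibility
  }) {
    inherit (prev) system;
  };
in
{
  # The old thing keeps evaluating, at the cost of a second nixpkgs
  # closure — the closure-bloat trade-off mentioned above.
  ancientPackage = pinnedNixpkgs.ancientPackage;
}
```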
BTW, please also note that we can only even start having this conversation without bringing up VMs because we can 99.9% rely on the Linux kernel not breaking us.
Can I ask you the same question I asked @waffle8946: How do you like my definition of “Only break if there is just no way around it without compromising other functionality”?
I mean, point taken. I’m offering an extreme and contrarian viewpoint here. But we have a zero-tolerance policy on various forms of communication within the community, so why not explore a zero-tolerance policy for breakage as well?
How do you feel about my concession to “break when absolutely necessary”?