I’m not convinced that write access alone provides the degree of autonomy and empowerment that is needed to have a positive impact on contributor retention: you’re still sitting in a very large boat. It would be infeasible (or at least not cheap) to experiment with workflows, social dynamics and rules of engagement all by yourself, or with local consensus among immediate peers.
When I was most active, Nixpkgs was new and interesting. I was intrinsically motivated to learn and discover how to use Nix to package and use software. But as I “mastered” it more, it became less intriguing and more ordinary. Since Nixpkgs is a voluntary open-source community, you “give up” your free time to be able to contribute to it. Right now, I mostly want to do “what’s needed” for the release, but not much more.
I fully agree with that! Both from my own experience over the past ~4 years and from what I know from other contributors. Occasionally I lose my motivation and just do less, or only work on packages I care about, and then there are times when I’m motivated to do more.
And of course, constraints like a job or being a student (both apply to me for instance) are also relevant.
macOS support often lags behind because we do not have many maintainers on macOS, so it’s great that @domenkozar and others raised money so that someone can now be paid to improve macOS support.
I think this is a nice idea. It is something to be answered by somebody who has more insight into the NixOS Foundation, but I’m wondering whether we currently have sufficient budget to actually pay somebody for maintenance work (a follow-up question is how to decide whom to pick).
However, I fail to understand how these issues would be solved by ripping apart our package set.
First of all, I’d like to know whether we’re talking about a subtree-like approach, as is the case in e.g. the Linux kernel, where I think it could work (though I don’t know for sure; I’m not a kernel hacker), or about many independent repositories (e.g. for subsystems such as Python packages, Haskell packages, etc.), managed through e.g. flakes. The latter is something I’m pretty much against, for the following reasons:
First of all, we need python3Minimal to build glibc since version 2.30 (IIRC), so it’s actually something needed to bootstrap stdenv; we’d therefore end up with either circular dependencies or two Python expressions. While this is certainly doable, I don’t think it will make the responsible maintainers happier. This is just the first example that came to my mind; I just want to make it clear that the package set isn’t really a tree.
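To make the cycle concrete, here is a heavily abbreviated sketch of the glibc expression in nixpkgs (the python3Minimal build dependency is real; the version and everything else is elided or illustrative):

```nix
# Heavily abbreviated sketch of pkgs/development/libraries/glibc:
# a bootstrap-critical C library that needs Python at build time.
{ stdenv, bison, python3Minimal, ... }:

stdenv.mkDerivation {
  pname = "glibc";
  version = "2.33"; # illustrative
  # Python is required for parts of the build since glibc 2.30,
  # while full CPython is itself linked against glibc.
  nativeBuildInputs = [ bison python3Minimal ];
  # ... src, patches, configure flags, etc. elided ...
}
```

So a “core” repo and a “python” repo could not simply be layered on top of each other; one of them would have to carry its own copy of the other’s expression.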
Due to Nix’s purely functional model (and the fact that content-addressed storage is still only an experimental implementation), changes in core packages such as openssl, glibc and friends will basically cause a full rebuild. This is currently manageable (from my PoV at least) since we have a few people doing a decent job at maintaining the staging workflow. However, I don’t see how this would work with packages distributed over tons of, let’s say, flakes:
As soon as a core package changes in a breaking way, we usually create a branch from staging and fix the majority of failing stuff there (minor things can be fixed in e.g. ZHF iterations). Since there’s only one jobset to watch and one branch to work on, it’s a lot of work, but doable (I’ve done that myself, or helped out, during such changes).
As soon as we have to make a lot of changes like this, one would have to contribute to tons of different repositories, while keeping probably even more dependencies (in the worst case, weird circular ones) in mind. To be honest, this is something I’d refuse to do.
Also, some of you may know that people tend to run into issues when mixing different glibc versions, due to ABI incompatibilities. Right now we can keep the issue somewhat under control, but I doubt that this will be possible if we have to do it across multiple repositories. In the worst case, end users will have to solve these issues on their own, and to be honest, I wouldn’t take any distribution seriously if I had to do that.
Also, what about security updates? I’m aware that the current situation isn’t optimal, but what will happen if we have to make sure that every subsystem takes care of these? Usually rebuilds are needed to fix, say, openssl CVEs. Right now it’s possible to push a patch to staging-next (or even master), wait for a rebuild & channel bump, and we’re done. If I have, say, a matrix-synapse subsystem, I’ll have to wait for at least the stdenv subsystem and the python subsystem (this scenario is relevant for e.g. a “bad” glibc CVE).
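For contrast, in the single tree such a fix boils down to one small change plus one coordinated rebuild. Written here as an overlay for illustration (the CVE number, patch URL and hash are placeholders; in nixpkgs itself this would be a commit to the openssl expression on staging-next):

```nix
# Sketch of a security patch applied to openssl; everything marked
# "placeholder" is made up. One change here rebuilds all dependents.
final: prev: {
  openssl = prev.openssl.overrideAttrs (old: {
    patches = (old.patches or [ ]) ++ [
      (prev.fetchpatch {
        url = "https://example.org/openssl-CVE-XXXX-NNNNN.patch"; # placeholder
        sha256 = prev.lib.fakeSha256;                             # placeholder
      })
    ];
  });
}
```

With split repositories, the equivalent of this one change has to propagate through every subsystem’s lock file before it reaches end users.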
Disclaimer: I don’t have such a big problem with removing niche packages that are probably used by two people; my problem is that if we stop being a software distribution, I’m afraid we’ll run into the problems I described above. Don’t get me wrong, I’m happy to be proven wrong.
I’m glad we’re talking about these kinds of things, as they shouldn’t be taboo and there are no right/wrong answers.
The reason why it’s hard to talk about it though is that each person and situation is unique, consisting of many bits that contribute to the decision.
There are bits that as a community we can (partially?) control and a lot that we can’t.
Since Nix has grown out of the early enthusiasts’ stage, we’re getting more people who see it as a getting-things-done tool. That will also make some people feel threatened in some way, although I’d argue that’s an illusion caused by having less control (more people, more policy).
I feel like one of the main things we’re missing is shaping our community so that its structure is more explicit and clear, so that decisions can actually be made instead of stalling for lack of authority.
I have some ideas on this topic, but nothing that I’d like to discuss yet, since I haven’t even fully shaped my own opinion.
It has always been a getting things done tool. 12 years ago I just wanted something that is not fragile in corner-case situations like apt/yum are, and is a bit less crazy than that unionfs mess I had. Nix surely fit the bill, and it did not and still does not add that much accidental complexity compared to just building the package I know how to build.
The question is just that the «things» are diverging and sometimes come into (almost) direct conflict. «I want total human work for a version bump in Nixpkgs to be <10 minutes when stuff just works» and «I want every change in Nixpkgs to be seen by at least three people» are probably both kind of achievable, and probably there are people that would like either of the goals, and also achieving either of the goals might make Nixpkgs completely useless to those who want the opposite one.
Given how people defined conflict of interest in the RFC SC rotation — which is partially unavoidable, but doesn’t make that much more sense — I would not even be surprised if we fall into the «everyone cares about 5% of features» trap if we suddenly gain the ability to make decisions. This makes me unsure whether constant indecisiveness is actually safer for the project…
That is true, but it also means that you have to be privileged to be able to contribute.
Some (core) contributors are hired by a company that uses Nix internally. That means that they are biased and work in the interest of their employer (at least during work time), and that might not be the interest of the broader community. It would be great if they could be employed by the NixOS Foundation instead.
My vision for free software is that it is not something people do in their free time while having to develop proprietary software for a living, but a community effort where core contributors get paid through monthly donations from the users. That way we can out-compete proprietary software.
That is mostly a coordination problem of collective action, also known as the “snowdrift dilemma”.
Snowdrift.coop is working to solve that. I see NixOS as a good candidate for when they launch and accept external projects.
But realistically, the aim I describe means dropping approximately all quality-control requirements beyond tests, and no actual review of auto-updates at all if enough tests pass and, I dunno, it is not in the news as hugely broken 24 hours later.
(I do actually support this approach for the branch I would find most useful to myself, and it looks like that wouldn’t be a branch people use for large deployments)
I can’t speak with absolute authority here, not even close. And it is intimidating to challenge you on things that I hardly understand, just as it might be intimidating for you if I dragged the argument into the field where I can speak with authority, and in which I am arguing here. Nevertheless, I’m going to try to challenge you on an abstraction level that seems more or less doable for me.
Technical aspects are important. But in order to shape the fundamental (social) dynamics of the community (contributor retention, rate of evolution, ecosystem growth, all those kinds of things), we cannot argue technically and argue from first principles at the same time.
Of course, breaking things up into a Federated Republic of Nixpkgs requires preparing a couple of things to be technically viable, and of course some of those things might look like overhead at first.
Off the top of my head, I can think of the need for better interfaces, which also includes versioning schemes and strict adherence to inter-state contracts in our Federated Republic.
Stricter adherence to clear, documented and versioned contracts can in turn pay for itself over time, through the stabilization and optimization of currently hidden or buried internal interfaces.
Let this be just a superficial example of how such a move changes people’s requirements (towards each other); naturally, we’ll all figure out how to draw the rest of that owl in step two. I’d say it’ll be an owl with better overall proportions.
On the other hand, it is probably pointless to expect that we would know beforehand what that owl would look like in the end, in all details.
From all I know about Tesla/SpaceX and similar crazy undertakings, this is a problem of change management and leadership, not one of technical constraints.
So I’m bound to what I know and master, and have to ask you to teach me: what is technically necessary in order to break up NixOS/nixpkgs into a Federated Republic of Nixpkgs along, roughly, language boundaries?
For now, let’s assume that shared infrastructure, as it exists today with Hydra etc., continues to hold, but as a bazaar model: take home what you find useful for your little state/kingdom and have the freedom to experiment individually, if you wish.
Once we have a clearer view of those requirements, I might be in a position to better explain a pathway to a Federated Republic of Nixpkgs, even to a person whose priorities are on the technical side of the equation, acquired from the current mental model of a single source tree.
That would be fabulous, because I would be able to reconcile the mental dissonances that my (I insist:) very valid suggestions cause here. It would make me an overall more effective and helpful community member in the quest for a better ecosystem.
Well, the resource model of SpaceX is vastly different from ours, and no, that is not irrelevant to the leadership structure.
I’m sure similar thoughts have been discussed before, but couldn’t the “splitting nixpkgs to lower committer overhead” problems be mitigated by breaking it into official team-based sub-repositories with their own commit-rights policies etc., tied together via flakes? It would also solve the “prestige” problem discussed regarding a NUR-like repo, since packages could live in NixOS/gnome, NixOS/blockchain, etc. NixOS/nixpkgs would then just be a meta repo that points to versioned commits of the team/project repos. The whole operation could work on trust without handing everyone the keys to the kingdom (commit rights to nixpkgs), and since a nixpkgs committer would just have to sign off on the PR that bumps the flake lock of the sub-repo, the review burden would be lessened.
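To make the shape of that concrete, here is a minimal sketch of what the meta repo’s flake.nix could look like. The team repos (NixOS/core, NixOS/gnome, NixOS/science) are hypothetical, and the naive attribute-set merge is purely illustrative:

```nix
{
  description = "Hypothetical meta repo composing team sub-repositories";

  inputs = {
    core.url    = "github:NixOS/core";    # hypothetical team repo
    gnome.url   = "github:NixOS/gnome";   # hypothetical team repo
    science.url = "github:NixOS/science"; # hypothetical team repo
  };

  outputs = { self, core, gnome, science }: {
    # Naive merge: later sets shadow earlier ones on attribute clashes.
    legacyPackages.x86_64-linux =
      core.legacyPackages.x86_64-linux
      // gnome.legacyPackages.x86_64-linux
      // science.legacyPackages.x86_64-linux;
  };
}
```

Bumping a team repo would then be a one-line flake.lock change for a nixpkgs committer to sign off on.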
I guess the main downside would be that it would be difficult to determine which silo certain packages should live in, plus the dependency-bloat problem of “NixOS/gnome points to gtk version x.2, but the entire NixOS/science repo is still locked on x.1”. It might get a bit meta-circular with these repos cross-referencing versions of each other, but wouldn’t bloat on unstable installs be preferable to the current “we either have to fix packages all over the place or mark 350 gtk packages as broken”?
We have always had the option to keep both versions and push the migration step by step. There has just long been a bit too much opposition to doing it more often. I do not like the implication that the best way to get this accepted is to make it a hidden mess… It might be true, of course.
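For reference, keeping parallel major versions is already an established pattern at the top level of nixpkgs; roughly (a fragment, with the surrounding set elided):

```nix
# Sketch of coexisting major versions in all-packages.nix;
# attribute names and paths mirror nixpkgs, context elided.
{
  gtk3 = callPackage ../development/libraries/gtk/3.x.nix { };
  gtk4 = callPackage ../development/libraries/gtk/4.x.nix { };
}
```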
The difference I can see between the “nixpkgs carries 5 versions of gtk” and the “NixOS/science repo is pointing to an old revision of NixOS/gnome” situations is that option B lets the gnome team maintain just the newest available version without breaking things for everyone else.
Put another way, it pushes the burden of legacy out towards the leaf packages instead of converging on core packages.
Well, if the previous versions are there for transition smoothing, are explicitly deprecated, and there are no upstream updates, then the old versions just sit there without being maintained anyway. Which is fine for this purpose.
People have posted some very interesting thoughts and arguments here, I want to highlight a few to help form my own position.
Absolutely! Everyone, should you decide to leave, please pick someone to do your exit interview with and post it here.
There is some evidence this is generally true of public projects. There is this classic analysis of Wikipedia author activity by Aaron Swartz (2006), but I remember having seen multiple studies supporting the notion.
What would you suggest to keep things going smoothly?
You totally nailed what I was subconsciously noticing for some time. Next to conceptual superiority, this might be part of the secret sauce, and why I boldly predict that Nix and its ecosystem will continue to grow to the level of relevance of today’s Linux, despite all the historic cruft and the train wreck of a user interface. I still have hope for Nix not to become consultantware, and I highly appreciate the work that goes into making it more accessible (such as options/package search, documentation, refactoring, wrappers, alternative user interfaces).
Strongly agree. I wouldn’t say that changes will get merged quickly, but everything considered it goes reasonably well.
I’ve read the idea multiple times now. The question is how this would empower maintainers, lower barriers to entry or motivate people to engage more.
I also think that more existing tests would relieve maintainers, since contributors could more immediately see if what they did actually works without performing any extra steps. Tangential: Can anyone explain to me why nixpkgs-review is not part of the default checks on GitHub?
Thank you for the hint. Would be great to see progress on this issue.
Funding: By now I’m convinced that bounties and especially recurring funding would help with core tasks that are too laborious or unrewarding for the majority of volunteers. See above for community structure, see below for prior funding efforts.
Testing: We could clearly communicate that adding tests will improve turnaround on your pull requests. Finding the right place for that message would be the first step. Contributor guidelines? I doubt it would help to introduce a standard or formal process if there is not even an ad-hoc process.
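For reference, nixpkgs already has a lightweight convention for this: a package can expose cheap checks under passthru.tests, which reviewers and CI can build alongside the package itself. A minimal sketch using the finalAttrs pattern; the package “mypkg” and its smoke test are made up:

```nix
# Sketch of the passthru.tests convention; "mypkg" is illustrative.
{ stdenv, runCommand }:

stdenv.mkDerivation (finalAttrs: {
  pname = "mypkg";
  version = "1.0";
  # ... src, build inputs, phases elided ...

  # Cheap smoke test that can be built with the package on review:
  passthru.tests.smoke = runCommand "mypkg-smoke-test" { } ''
    ${finalAttrs.finalPackage}/bin/mypkg --version > "$out"
  '';
})
```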
I’m all for nudging, but I have concerns about requirements (although I’m not against them), because they raise entry barriers. Given the above assumption on the volume of one-time or drive-by contributions and their significant value to the community (both in terms of content as well as the potential for newcomers to become regulars), we might want to keep friction as low as possible.
Where was the attitude expressed that we don’t need tests? In the context of this topic, the question stands: how would adding more tests, or adding nudges, requirements or incentives to add tests, impact contributor retention? From the comments it sounds like more automation is expected to reduce the workload on maintainers and tighten the loop for contributors, which are good things. There is a possible downside to putting up additional up-front requirements: either the review process becomes longer, or the psychological threshold for even starting out becomes so high that it precludes potential contributions. Reviewing new tests also costs time.
How can we even measure how much agreement we have about testing?
What do you think we can do about this? I’m convinced this aspect of change management is highly important. How could we strike a balance between getting people to write better commit messages and scaring them off with too much (what they may perceive as) overhead?
What do you think we can do to reduce the amount of (accumulated) breakage?
Yes!
Same here.
I would absolutely not want to see prompts for donations on every occasion. If there is something people want to happen, they can do it themselves or pay money. It just should be made trivial to find out that the possibility exists and how to do either.
My provisional conclusion is that more paid grunt work is desirable, and it should somehow be directed by the community.
I get it, it’s a fine line between effective advertisement and spam.
It can be done in a way that recognizes people’s freedoms and respects their choices.
I would suggest that everyone interested read up on nudge theory and libertarian paternalism (Wikipedia has articles on both).
People often need to see advertisement of options that are in their best interests; otherwise we tend to overlook those options.
Nix/OS seems to have a pretty good testing framework for packages. May I suggest the Nix Summer of Tests!
Or even a physical, in-person one… ever heard of a “test sprint”? Things seem to be opening up a little now.
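For context, the framework mentioned above can run a package inside a throwaway VM and script assertions against it. A minimal sketch; the package under test (hello) and the assertion are illustrative:

```nix
# Minimal NixOS VM test; builds a VM, boots it, runs the checks.
{ pkgs ? import <nixpkgs> { } }:

pkgs.nixosTest {
  name = "hello-smoke";
  nodes.machine = { pkgs, ... }: {
    environment.systemPackages = [ pkgs.hello ];
  };
  # Python script driving the VM:
  testScript = ''
    machine.wait_for_unit("multi-user.target")
    machine.succeed("hello | grep -i 'hello, world'")
  '';
}
```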
A lot of ‘work’ around here seems to be drudge work; Nix needs janitors, and not everyone is prepared to do that. The amazing developers I’ve worked alongside always want to do the most interesting work, and the best ones don’t really like, and/or have real trouble, writing excellent documentation that mere mortals can understand. Writing something and then ‘explaining something’ are very, very different disciplines. Seldom do they know how to write stellar unit tests. Full integration testing is usually left to someone else. It’s a reason why most large software projects have a dedicated testing team.
Having spent quite a bit of time testing software, I’ve got a pretty good handle on this stuff, and not only the technical side, but how the soft bits work too (humans!).
Because Nix/OS changes the fundamental way Unix works (in fact I’d call it ‘Unix-compatible’, with a high degree of patching, a non-FHS layout, etc.), I think testing is probably more crucial here than in other distros, where upstream usually has its own tests. But remember, if you don’t test anything, nothing ever breaks, so you don’t have to fix anything! :-)… LOL
People have reasons for hanging around this project… contributions can bolster one’s CV, so that people can get a better-paid job not doing drudge work: avoiding being a code janitor, so they don’t have to take a low-paid ‘real’ janitor job, and instead get paid the big bucks to do something they actually love and like to do.
Personally, I like janitor-type work… because it’s what keeps everything running; a mix between Hong Kong Phooey and Inspector Gadget’s dog. It’s not glamorous, but someone’s gotta do it.
There seem to be quite a lot of people donating to NixOS through OpenCollective. Although there is a public ledger, and I am sure that the NixOS Foundation spends the money well, it is not clear at the moment of donating how the money will be used.
For example, I can understand that macOS users want to see improved macOS support. The NixOS Foundation spent a bunch of money to buy Mac Minis for Hydra for M1 support. But personally, I do not see why I should spend time and money on improving macOS support. Apple is one of the richest companies in the world, why should I help them? I don’t even use a Mac anymore. [1]
I think asking for donations would be much more effective if there were ‘rallies’ around specific goals and funding targets. It makes it much more concrete what your money is used for and makes the results more tangible. Examples of goals could be: purchase a 64-core AMD Threadripper machine with 128 GB RAM for running nixpkgs-review on PRs; purchase 4 Apple M2 Mac Minis for Hydra; employ someone for N hours per week to triage CVEs. Obviously, there shouldn’t be too many goals, or donations get spread too thinly. But I would find such goals much more exciting, and I would probably donate more if I knew my money would go to a Threadripper for building PRs or to someone who triages CVEs, rather than donating generically to the NixOS Foundation and waiting to see what happens next.
[1] Note, my point is not ‘Apple sucks, we shouldn’t improve Nix on macOS’, it’s that I don’t want to spend my time or money on it. I fully understand if others do.