Nixpkgs's current development workflow is not sustainable

Hello! I’m a bit late to the party…

I, too, have been trying to participate in maintaining CUDA packages lately. In fact, they are almost exactly the part of nixpkgs that I’d like to discuss here, and the part that (this time) brought to the surface many of the pain points that @samuela mentions. At the risk of going off-topic, I will try to fill in some details.

The context is that nixpkgs packages a lot of complex “scientific computing” software that is, for practical purposes, most commonly deployed with unfree dependencies like CUDA (think jax, pytorch or… blender). In fact, with a bit of work Nix appears to be a pretty good fit for deploying all of that software. All of the same packages are also available through other means of distribution, like Python’s PyPI, conda, or mainstream distributions’ repositories. Most of the time they will “just work”: you get tested, pre-built, up-to-date packages. Except they break. For all sorts of reasons. One python package overwrites files from another python package and all of a sudden faiss cannot see the GPU anymore. Fundamentally, with Nix and nixpkgs one could implement everything that these other building-packaging-distribution systems do, but with more control and predictability. With fewer breaks.

That is the theory. The practice is that nixpkgs, for known reasons, has no continuous integration running for CUDA-enabled software. The implication is that even configuration and build failures go unnoticed by maintainers, let alone integration failures, or failures in tests involving hardware acceleration. This also means eventual rot in any chunk of nix code that touches CUDA. It probably wouldn’t be too far from the truth to say that the occasionally partially-working state of these packages (their CUDA-enabled versions, that is) has largely been maintained through unsystematic pull requests from interested individuals, looked after by maintainers of adjacent parts of nixpkgs.

One attempt to address this situation, or rather an ongoing exploration of possible ways to address it, was the introduction of @NixOS/cuda-maintainers, called for by @samuela. It is my impression so far that this has been an improvement:

  • this introduced (somewhat) explicit responsibility for previously un-owned parts of nixpkgs;
  • we started running builds (cf. this) and caching results (cf. cuda-maintainers.cachix.org and many thanks to @domenkozar!) for the unfree sci-comp packages on a regular basis, which means that related regressions are not invisible anymore;
  • in parallel, @samuela is running a collection of crafted integration tests, in a CI that tries, on schedule, to notify authors about merged commits that introduce regressions, cf. nixpkgs-upkeep;
  • the three previous items made it safe enough to start slowly pruning the outdated hacks and patches in these expressions, and even to perform substantial changes to how CUDA code is organized overall (notably, the introduction of the cudaPackages set with support for overrideScope'; many thanks to @FRidh!);
  • working in-tree also has the additional benefit that the many small fixes, extensions, and adjustments that people make in overlays can start migrating upstream, and might even be reflected in how downstream packagers handle cuda dependencies when they first introduce them.

Essentially, this is precisely the kind of initiative that @7c6f434c suggests: we target a well-scoped part of nixpkgs, and try to shift the status quo from “this is consistently broken” to “mostly works and has users”.

Obviously, there are many limitations. First of all, we don’t have any dedicated hardware for running those builds and tests, which means our current workflow simply isn’t sustainable: it exists only as long as we are personally involved. The lack of dedicated hardware also means it’s simply infeasible for us to build staging (we’ve tried). In turn, this implies that we can only address regressions after the fact, when they have already reached master - one of @samuela’s concerns. It gets worse, however: there’s no feedback between our fragile, constantly-changing, hand-crafted CI and the CI of nixpkgs. Thus when regressions have reached master, there’s nothing stopping them from flowing further into nixos-unstable and nixpkgs-unstable! Unless the regressions also affect some of the selected free packages, they’ll be automatically merged into the unstable branches. This problem is not hypothetical: for example, many of the regressions caused by the gcc bump took longer to address for cuda-enabled packages than for the free packages.

One conclusion is that our original implicit goal of keeping the unstable branches “mostly green” was simply naive and wrong, and we cannot but choose a different policy, both for maintaining and for consuming these unfree packages. I tried for a while (months) to stay on the release branch. I had to update to nixpkgs-unstable because the release had too many things broken that we had already fixed in master; but even regardless of that, the release branch has (for our purposes) too low a frequency, missing packages, missing updates. My understanding is that many people (@samuela included) treat nixpkgs-unstable as a rolling release branch. From the discussion in this thread, this interpretation appears to be not entirely correct, but maybe it’s what we need. One alternative workflow we’ve considered (but haven’t discussed in depth) for cuda packages specifically is maintaining and advertising our own “rolling release” branch, into which we would merge things only after checking against our own (unfree-aware and focused) CI. Perhaps this is also where merge-trains could be used to save some compute.

The complexity of building staging and even master (it turns out that when you import nixpkgs with config.cudaSupport = true, or override blas and lapack, you trigger a whole lot of rebuilds) raises a further question: do we have to build that much?
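
For concreteness, a minimal sketch of the kind of import I mean (assuming a stock nixpkgs checkout); flipping these flags changes the inputs of derivations deep in the dependency graph, so the hashes of nearly the whole scientific stack change and everything above them gets rebuilt:

    # Global CUDA support: a single config flag that invalidates the
    # store paths of every derivation that consults it, directly or not.
    import <nixpkgs> {
      config = {
        allowUnfree = true;  # CUDA is unfree
        cudaSupport = true;  # in practice, a rebuild-the-world switch
      };
    }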

There might be a lot of fat to cut. One thing we’ve discovered in the inherited code base, for example, is that the old runfile-based cudatoolkit expression, whose NAR weighs slightly above 4 GiB, has a dependency on fontconfig and alsa-lib, among other surprising things. The new split cudaPackages don’t have this artifact, but the migration is still in progress and we do depend on the old cudatoolkit. That’s very significant. It means that every time fontconfig, or alsa-lib, or unixODBC, or gtk2, or the-list-goes-on is updated, we have to pump an additional 4 GiB into the nix store, we have to rebuild cudnn and magma, we have to rebuild pytorch and tensorflow, we have to rebuild jaxlib, all of which fight in the super-heavyweight class!

The issue is far more systematic, however, than just cuda updating too often. These same behemoth packages, pytorch and tensorflow, have, through a few levels of indirection, such comparatively small and high-frequency dependencies as pillow (an image-loading package for python that’s only used by some utils modules at runtime), pyyaml, and many more. Many of these are in propagatedBuildInputs, some are checkInputs, but most of them are never ever used in buildPhase and cannot possibly affect the output. What they do is signal a false positive and cause a very expensive rebuild. On schedule. Is this behaviour inherent and unavoidable for any package set written in Nix? Obviously not.
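
To illustrate the mechanism with a hedged sketch (hypothetical names, not an actual nixpkgs expression): a dependency consulted only in checkPhase still contributes to the derivation’s input hash, so bumping it forces the expensive build to re-run even though the output could not differ:

    behemoth = stdenv.mkDerivation {
      pname = "behemoth";  # stand-in for pytorch/tensorflow
      version = "1.0";
      src = ./.;
      # Used only by the test suite, yet still part of the input hash:
      checkInputs = [ python3Packages.pillow ];
      doCheck = true;
      # A pillow bump changes this derivation's hash and triggers a
      # full rebuild, even though buildPhase never touches pillow.
    };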

I suspect one could, if desired, write “a” nixpkgs that would literally be archlinux, and rebuild just as often (which is probably rather infrequently, compared to “the” nixpkgs), just by introducing enough boundaries and indirections. Split phases in most packages. Not that this would be useful per se; it’s interesting as the opposite extreme, the other end of the spectrum from what current nixpkgs is. And maybe we should head toward somewhere in between: rebuild when build results could have actually changed, test when test results could.

This brings me to the “nixpkgs is too large” part. I guess it’s pretty large. The actual problem could be, however, that too many things depend on too many things. I’m new to Nix, and just a year ago I had many more doubts and questions about decisions made in nixpkgs, not least of them the choice of a monorepo. Now I have actually begun to respect the current structure: it might really be boosting synchronization between so many independent teams and individual contributors, helping the eventual consistency. When a change happens, the signal propagates to all affected parties much sooner than it probably ever could with subsystems in separate flakes. I don’t think we need to change this. Keeping the “signal” metaphor, I think we need to prune and hone the centralized nixpkgs that we have so as to reduce the “noise”, like these false positives about maybe-changing outputs. We need better compartmentalization in the sense of how many links there are between packages (that’s common sense), but that does not necessitate splitting nixpkgs into multiple repositories.

I like the idea of automated broken markers (they save compute, and they spread the signal that a change is needed). I also like the idea of integrating our CI into the status checks. Obviously, for that to happen the unfree CI must be stabilized first, and we must find a sustainable (and scalable) source of storage and compute for it. I don’t think it impossible at all that build failures even for unfree packages might be integrated as merge blockers in the future. In fact, I think it inevitably must happen as Nix and nixpkgs grow, and new users come with the expectations they have from other distributions: these parts of nixpkgs will see demand, and maintaining them “in the blind” is impossible. It’s not happening overnight, of course. As pointed out by others, there are bureaucracy and trust issues, and there are even purely technical difficulties: even if we came up with an agreeable solution, it’s just a lot of work to build a separate CI that wouldn’t contaminate what’s expected to be “free”, and to integrate it with the existing automated workflows.

14 Likes

And this, I think, is the real issue: the expectation that master and nix*-unstable are rolling releases that should always be green. Having every package always passing is utterly impossible. There are incompatibilities that either:

  • can be resolved easily by whoever made the change impacting the package now breaking;
  • can be resolved by the maintainer, but that generally takes time, if it happens at all (we can’t hold back all updates of non-leaf packages);
  • cannot be resolved without significant effort, either with or without upstream.

In that sense, I think users should know what packages matter to them, and have their “own channels”. Following nix*-unstable works fine for core packages, but there is indeed a good chance that a package or two you like does not work. Then it is better, I think, to have your own channel, and preferably not just one, but several, splitting your dependencies into groups. E.g., your NixOS core system might follow nixos-unstable just fine, but your Python environment with scientific computing packages will follow another channel, independently. Users can set this up themselves with CI, and it is even easier with flakes. What could be nice to have is perhaps a better service for this than, say, having to set up GH Actions and optionally a binary cache.
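
A minimal sketch of that split with flakes (the branch name is real, the pinned revision is a placeholder, and the outputs are elided):

    {
      inputs = {
        # core system follows the rolling branch
        nixpkgs.url = "github:NixOS/nixpkgs/nixos-unstable";
        # scientific Python stack follows a separately curated revision
        nixpkgs-sci.url = "github:NixOS/nixpkgs/<known-good-rev>";
      };
      outputs = { self, nixpkgs, nixpkgs-sci }: {
        # build the NixOS system from `nixpkgs` and the Python
        # environment from `nixpkgs-sci`, advancing each independently
      };
    }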

In the case of Python, we should do the wheel building in a separate derivation. That could save a lot of work for larger builds, and especially when we move to content-addressed derivations it gets even more interesting, as it could result in not having to do any further rebuilds at all.
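
A rough sketch of what that split could look like (hypothetical names; not the mechanism nixpkgs actually uses today): the expensive compilation produces a wheel in one derivation, and a second, cheap derivation installs it, so changes that cannot affect the wheel no longer re-trigger the compilation:

    # Derivation 1: the expensive part, producing only a wheel.
    exampleWheel = stdenv.mkDerivation {
      pname = "example-wheel";
      version = "1.0";
      src = exampleSrc;  # hypothetical source
      nativeBuildInputs = [ python3 python3.pkgs.pip ];
      buildPhase = "pip wheel --no-deps --no-build-isolation --wheel-dir $out .";
      dontInstall = true;
    };

    # Derivation 2: cheap installation of the pre-built wheel.
    example = python3.pkgs.buildPythonPackage {
      pname = "example";
      version = "1.0";
      format = "wheel";
      # the wheel filename is illustrative
      src = "${exampleWheel}/example-1.0-py3-none-any.whl";
    };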

12 Likes

Not really?
For me, running two channels has always led to big, uncontrollable issues, and eventually to state problems → replacing the whole system, as it is utterly impossible to debug and fix…
In my experience that only works for small and limited environments (and even there, not over time).

(My desktop is not small and standard-sized, nor are my data science environments: they depend, e.g. via the GPU and the spyder IDE, jupyter…, on graphics/UI.)


I would be happy to see any kind of change and/or progress in the python-nixos-eco-system (after years of resignation)

2 Likes

If each and every user is expected to maintain their own channel/overlay/flake, what is the point of having a centralized package repository? If we’re going to pass the maintenance off to users, then it doesn’t seem like we’re fit to do the maintenance in the first place.

6 Likes

There can be more community-maintained channels, which is a relatively cheap method. E.g., a scientific Python channel that advances when the subset of scientific computing packages passes.

8 Likes

fix python to allow multiple versions of dependencies
Allowing Multiple Versions of Python Package in PYTHONPATH/nixpkgs

then expensive packages (tensorflow …) can pin all their dependencies (“bottom-up pinning”, inversion of control), and we avoid rebuilds

then nix can get closer to its promise

the current python situation (collisions in site-packages/) is like the FHS situation (collisions in usr/) that nix wants to solve

… and we avoid rebuilds

the other strategy is to make builds cheaper, for example with Incremental builds
but it requires more complexity (normalize sources, store objects, patch objects)

2 Likes

It’s very nice to read about the CUDA specifics!

It would be great to:

  • Find a way to build these packages with hydra.nixos.org! It would be nice if someone in this community could find someone at NVIDIA to get the licensing (is that the only issue?) sorted out. Or do we need donated hardware? For all we know, NVIDIA could itself benefit from some Nix CI :O.

  • See if some OpenCL / Vulkan champion wants to shore up our free software alternatives? I feel like Nixpkgs could also be a good venue for putting all the myriad pieces together to allow non-CUDA GPGPU to work well. Ideally we get an upstream OpenCL / Vulkan project interested in Nixpkgs precisely because it is the best way to put all the pieces together.

4 Likes

I think this is a great idea! I’m not sure what the hold-up would be, other than getting hydra to build unfree packages. We don’t even need to cache them for now! Simply building them would be immensely useful for CI. What’s necessary in order to get hydra to build unfree packages?

There’s no need for special hardware to build CUDA-enabled software, but a GPU is necessary for running GPU-enabled tests. Presently, there are no GPU-enabled tests anywhere in nixpkgs, AFAIK. (The Nix build environment blocks access to the GPU by default.) I created a preliminary set of tests here: GitHub - samuela/cuda-nix-testsuite, but it’s still early days and I think we’re still figuring out what the best solution is for tests requiring a GPU.
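
One pattern that might work here (a sketch under assumptions; nothing in nixpkgs does this today, AFAIK): tag GPU-requiring test derivations with a custom system feature, so only builders explicitly configured to expose a GPU ever schedule them:

    # Hypothetical GPU smoke test. "cuda" is a custom system feature a
    # builder would have to advertise (system-features = ... cuda in
    # its nix.conf) and back with a sandbox that exposes the GPU.
    gpuSmokeTest = stdenv.mkDerivation {
      name = "cuda-smoke-test";
      requiredSystemFeatures = [ "cuda" ];
      buildCommand = ''
        ${python3.withPackages (ps: [ ps.torch ])}/bin/python -c \
          "import torch; assert torch.cuda.is_available()"
        touch $out
      '';
    };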

5 Likes

What’s necessary in order to get hydra to build unfree packages?
@samuela

The idealistic answer is: a separate hydra. To build the old cudatoolkit, one essentially has to curl | sh a runfile, which is a threat in itself. To build pytorchWithCuda, one has to use nvcc in the buildPhase, and later run the checkPhase that will eventually import torch: all of which invokes code originating in a black-box binary.

E.g., your NixOS core system might follow nixos-unstable just fine, but your Python environment with scientific computing packages will follow another channel, independently
@FRidh

I think the idea of maintaining channels that target specific user groups is worth a shot in some of its possible forms. I doubt, however, that we should encourage users to simultaneously follow multiple channels for different subsystems, if that’s what you’re suggesting: we would lose that benefit of having pieces in sync, for which we pay with centralization. The question “which nixpkgs and overlays are you using?” would turn into “which core, which python, which haskell, and which texlive have you pinned?” :exploding_head:. Having a single source of truth is nice both for maintainers and for users, albeit expensive.

7 Likes

I am really looking forward to that and I am hyped. If you need help testing or brainstorming please hit us up on matrix.

11 Likes

What if we could have our cake and eat it too? Or in this case: modularize nixpkgs without actually splitting up nixpkgs.

I do reinforcement learning research, and a key lesson is that when two agents/actors are evolving simultaneously, neither agent can learn well. As agent#1 learns something, agent#2 “evolves” and breaks/invalidates what agent#1 just learned. But if they take turns, they can actually learn quickly and converge.

We’ve got that same situation here:

  • nixpkgs unstable is changing, breaking torch (or whatever package)
  • and torch is changing, breaking stuff in the latest nixpkgs unstable

Even if we cleverly reduce the number of inter-dependencies, it’s not going to categorically change this problem. Upstream/downstream dependencies branch: O(b^n). Even if we change b or n, it’s still an exponential equation, and the inputs get bigger every year.

Goals

I can see your point now (@many-people) about how both a monorepo and fewer breakages are important.

We want:

  1. intercompatible (all green) packages
  2. maintainable, localized, latest, updates

Fundamental limitations

Nixpkgs unstable (or staging) cannot be both all-green AND the source of the latest versions of everything. There’s not enough compute power in the world to recursively run all the tests on every version bump. So let nixpkgs handle our #1 (intercompatibility) and not worry about having the latest of literally everything.

On the flip side (the title of this thread), package maintainers cannot update their package every time a dependency changes. There’s not enough developer free time in the world to check every upstream change and every downstream consequence. So let individual packages handle our #2 desire, staying up to date and maintainable, and not worry about upstream changes or downstream consequences.

Eating & Having Cake

So let’s consider this: a mono-repo and a multi-repo, treating them like the two-actor coordination problem at the beginning.

The multi-repo:

  • Assumption: Let’s say the torch (or whatever) maintainer treats nixpkgs as frozen by pinning to a specific nixpkgs commit (like the 21.11 release).
  • Updates: Now that nixpkgs is not changing, it becomes realistic for a maintainer to attempt getting new versions of torch working. The ground is no longer collapsing from under them.
  • Flexibility: However, whenever there is a problem, like if torch needs a new version of GCC, the maintainer has the flexibility to unpin and move up and down the nixpkgs timeline (like pinning to the 22.05-pre release) to make this version of torch work.
  • Testing: The torch maintainer effectively runs unit tests; only testing torch, without worrying about downstream breakages or the daily nixpkgs-unstable changes.

The mono-repo:

  • Assumption: On the flip side, nixpkgs can assume that torch is stable by pinning to a version of torch (like the 1.11.0 release), and using overlays to make torch use the latest nixpkgs (see the sketch after this list). This could, but doesn’t have to, be done with submodules (submodules could protect against breakages from a multi-repo suddenly changing its URL).
  • Updates: Just like the multi-repo, the “torch is stable” assumption lets nixpkgs start upgrading itself without the ground turning into liquid. Instead of packages updating randomly, imagine a nixpkgs upgrade as a wave. It begins with bottom/foundational packages like glibc or openssl. We update glibc; if nothing breaks, we create a git tag “wave-1.1”. Then we update cmake; if that doesn’t break anything downstream, we finalize a “wave-1.2” tag. Once a tag is finalized, it indicates an all-green set of packages. Eventually it’s torch’s turn to be updated and checked as part of wave-1, but only after torch’s dependencies are all-green.
  • Flexibility: Just like the multi-repo, once there is a problem, like glibc breaking downstream stuff, nixpkgs has the flexibility to pick any commit from a package’s timeline. So if glibc breaks torch, maybe the torch repo already has a fix waiting (torch has been updating itself independently). Ask torch for a “wave-1.1” version, or try the latest stable torch release. If torch is still broken, keep running tests, and file an issue/PR on the torch repo requesting “wave-1.1” support. Only once torch and the other downstream stuff are fixed can wave-1.1 (glibc) be finalized.
  • Testing: Even if a minor wave is held up by a broken package like torch, that doesn’t mean the next major wave can’t start. Wave-2.1 starts as soon as a new glibc or other foundation package is available. All waves progressively become “more green” until they’re finished. A mature wave, even if unfinished, would be pretty stable in theory, i.e. we can each customize how much stability to sacrifice in exchange for cutting-edge-ness. The monorepo acts like one giant integration test that takes months to complete, because an unfinished wave necessarily means something is either broken or untested (unstable). Waves (minor number) could be as small as a single gcc update, assuming that the update breaks literally 0 downstream packages.
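
The sketch mentioned above, hedged and with placeholder values: the mono-repo side pins torch to an agreed-upon wave tag via an ordinary fetcher, so only bumping this pin (not every torch commit) can trigger a torch rebuild:

    # nixpkgs pins torch to a wave tag on the torch repo and overlays
    # it onto the current tree; rev and hash here are placeholders.
    torch = callPackage ./torch.nix {
      src = fetchFromGitHub {
        owner = "pytorch";
        repo = "pytorch";
        rev = "nix-wave-6";   # hypothetical wave tag
        hash = lib.fakeHash;  # placeholder; pin the real hash in practice
      };
    };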

This can also be done hierarchically, with each python package having a mini-repo, pythonPackages being a monorepo, and nixpkgs using pythonPackages instead of individual packages.

Result

We could get bleeding-edge versions of any individual package, since the package would be using an all-green or mostly-green foundation (pretty stable for bleeding edge). The caveat is if you need multiple bleeding-edge packages: they might not play nice together. But if you need everything to play nice together (and don’t want to fix bleeding-edge packages yourself), then the only solution is to use the most recent all-green set of packages (which inherently takes a long time to curate and won’t be bleeding-edge).

We can’t magically keep it all green and all bleeding-edge.

15 Likes

I just read your pretty lengthy post 3 times and I still don’t understand it.

My conclusion is that you are describing hydra with notifications and an easier way to move along the timeline for each package.

That’s not what we want to do. We could pick between different releases, but tracking an unstable commit of a repository that has 45000 commits, where the first page of commits is at most one day old, sounds scary and can probably only go south.

Please don’t. Building torch is expensive, and blindly trying 25 commits a day hoping that some problem is fixed is not efficient.


We don’t need submodules. Flakes, overlays and fetchers can do everything git can do with submodules, but better integrated into Nix.


They still couldn’t be imported in the same process, but if I could run two python scripts with different dependencies in the same profile without them interfering with each other, even if one is exec’d from the other, it would be great.


I am not sure if hydra has the same problems as me, but every time I download CUDA the download times out and fails, even on a Gigabit connection. Are we maybe missing an EU mirror? Why is the downloaded blob even over 1 GB?

4 Likes

Okay, I edited the previous post to try to address your points better. Maybe I’ll be able to make a diagram that’s both short and more clear.

Oh I 100% agree. I didn’t mean it quite that literally/strictly.

In practice I imagine a standard tag would be used, like nixpkgs looking for a nix-wave-6 tag on the torch repo.
exists? => test it
broken or not-exists? => file an issue.
Done: O(1) per package.
It would be the torch maintainer (as a member of nixpkgs) who could unpin torch and try different commits/releases. But having a nix-wave-6 tag would probably be a better way to achieve the same outcome. The torch maintainer could even just say “yeah, there’s no version of pytorch that works with this set of upstream dependencies”, and pytorch would just be permanently marked as broken on that wave.

The conceptual point was: nixpkgs can freeze to any specific commit of torch, without freezing the progress of torch versions for everyone.

I hate submodules, so I’m happy to avoid them haha. I only included them in case people had concerns about availability (fetchers don’t work when the host is down).

That’s not a totally inaccurate summary, but it does miss the main point:

  • Yes, hydra would be the integration test that could auto-file issues.
  • And yes, easier to move along the timeline
  • But the real point was @samuela’s problem “The current system is simply not sustainable for downstream package maintainers, […] no prior notice, no migration plan, and no alerting of failures. How is a package maintainer expected to reliably function in this environment?”.

The multi-repo + nixpkgs waves setup is hopefully that^ environment we could reliably function in.

  • Once a nixpkgs wave “hits” torch, the upstream dependencies should be frozen & green
  • Torch is fixed/repaired one wave at a time, instead of trying to stay green on unstable
  • Most importantly: breakages for a wave are no longer as urgent/stressful for a maintainer, because it’s still practical for people to get stable bleeding-edge versions from the repo directly.
  • Testing bleeding-edge torch releases (not part of any wave) lets maintainers keep their sanity, while partly pre-testing for the next wave.

Yes, the same thing is true for torch pinning against nixpkgs: ideally pin against releases, or waves, but one can pin against any commit.

As a side note, I’ve been pinning against random nixpkgs commits for years, even for big cuda+python+node+ruby+zsh+rust+cpp+docker nix-shell projects. It’s worked great. I often need very specific versions of tools, and pinning is the only way to get them (I use lazamar’s tool). But even outside of that, all the time I’ll just go to github, get the latest nightly commit and use it for grabbing the bleeding edge of a single package. I’ve probably got 150+ different simultaneous nixpkgs tarballs, and my nix store is still smaller than 1 modern video game.

5 Likes

edit: sorry, misunderstanding.
to jeff-hykin, waves flow from provider to consumer. (“top-down pinning with controlled propagation”)
to me, expensive consumers should pin their dependencies. (“bottom-up pinning”)
(i assume that only a few packages are expensive consumers = long build times)

“bottom-up pinning”
“multiversion” pattern

somepkg-2 and otherpkg-2 are NOT compatible with each other
but we want to have the newest version of both

solution: make more use of the “multiversion” pattern
= maintain multiple versions of one package in one version of nixpkgs
= give more packages the “privilege”
to pin their dependencies
to fix breakages (or to prevent expensive rebuilds)
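
a possible shape of this in nix (a hedged sketch, names illustrative; nixpkgs already does this for a few packages like llvm):

    # two versions coexist in one nixpkgs; the expensive consumer pins
    # the old one, so a somepkg_2 bump does not rebuild it
    {
      somepkg_1 = callPackage ./somepkg/1.nix { };
      somepkg_2 = callPackage ./somepkg/2.nix { };  # newest, leads the wave
      tensorflow = callPackage ./tensorflow {
        somepkg = somepkg_1;  # bottom-up pin by the expensive consumer
      };
    }

(the python challenge below is that two such versions still cannot coexist inside one interpreter environment)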

these newest versions “lead” the waves (aka dependency trees)

challenge:
python envs allow only one version per package
→ only one scope/env

compare:
node allows nested dep-trees by default
→ different nodes can use different versions of the same package
→ every node has its own scope/env

again: Allowing Multiple Versions of Python Package in PYTHONPATH/nixpkgs

the goal is to pin transitive dependencies to old versions
while allowing new versions to be imported in the root scope

edit: on the python side, this “one package, one version” approach is considered a feature (not a bug), as it allows passing data between libraries. when using different versions of one package, we must convert data formats at runtime, which is slow

I’d say it’s much more common than not that when two packages depend on one or more of the same packages deeper in the stack (Numpy, SciPy, Pandas, Matplotlib, Cython, Sympy, xarray, Dask, etc.), there’s some form of direct or indirect data interchange, or other cross-dependency, between at least one pair of them, if not most. In particular, when they are used, numpy arrays, pandas dataframes, and xarray objects are routinely exchanged, and code compiled with different Cython versions (if they actually merited a hard dependency non-overlap) may well be ABI-incompatible and cause a C-level hard crash.

As such, it seems likely that this would break as many packages as it would fix, and in ways that can be far harder to debug and recover from than a simple dependency conflict at installation time. This does not really seem to be a viable solution, relative to other strategies.

1 Like

node allows nested dep-trees by default

Just a brief comment, although I do not claim to have fully understood you, and I definitely haven’t yet comprehended @jeff-hykin’s post: npm’s uncontrollable “explosion” of the dependency graph seems to be exactly the kind of situation that nixpkgs wants to avoid.

Surely there are more reasons, and other maintainers can provide many more examples, but I can at least affirm that mismatches in cuda versions brought into scope by different dependencies have been a source of issues with e.g. pytorch in nixpkgs. Now cudaPackages provides multiple versions of the same packages, but they are used in a very constrained manner: for example, downstream derivations consume the whole cuda package set, rather than individual packages, which is intended to limit the overrides to “meaningful” combinations. And they are small. And they are one of the exceptions.
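
Roughly, the constrained pattern looks like this (a sketch of the idea; exact attribute names in nixpkgs may differ):

    # The consumer takes the whole versioned set, so cudatoolkit, cudnn,
    # nccl, etc. inside it stay mutually consistent; overriding means
    # swapping the entire set, never individual libraries.
    pytorchWithCuda11_8 = python3Packages.pytorch.override {
      cudaPackages = cudaPackages_11_8;
    };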

1 Like

Here’s a diagram that will hopefully explain the original idea better
(not a response to recent messages, just me catching up)

  • Pink and green are the main points/purpose
  • Every circle is a commit (unimportant commits are not visible though)
  • Curved dotted lines are dependencies.
    Purple/Blue = nixpkgs-depending-on-torch
    Gray = torch-depending-on-nixpkgs

There’s lots of possible variation I wanted to show, but the visualization gets complicated quickly.
I’ll make more visualizations for edge cases we want to discuss. (And I talk about some of them below)

  • Observations and Possibilities
    • The only edge case visualized is the blue dot (wave-2-3), because I thought it was important.

      In theory, wave-2-3 was a python update that broke torch 1.10.0.
      In this hypothetical, the easiest way to get “all-green” again was to just patch the old wave-1-5 tag (purple) and create the patched commit (the blue dot itself).

      This is important because it’s different from wave-2-5. Wave-2-5 is specifically trying to update torch to the latest version (1.10.3).
      (Wave-2-5 is the wave that “hits” pytorch)

    • In the part that says “glibc, … (and downstream fixes)”, the blue dot/wave-2-3 is an example of such a downstream fix

    • Notice nixpkgs never connects (purple line) to torch 1.10.2. That’s intentional.
      (In this example, torch 1.10.2 can only be obtained from the torch repo directly)

    • Torch 1.10.1, 1.10.2, and 1.10.3 show all their dependencies (gray dotted lines) going to wave-1-5.
      This was visually convenient and is plausible, but is not required.
      In practice, I’d expect them to point at the latest finished wave. For example
      torch 1.10.3 => wave-2-1
      torch 1.10.2 => wave-1-10
      torch 1.10.1 => wave-1-6
      “which wave?” would be a choice of the torch maintainer
      (and it would be done based on what dependencies torch needs).

    • Multiple versions, like we have for LLVM, are fine (or at least are not worse than the current situation).
      For example, torch1_10_3 and torch1_10_0 could both be in nixpkgs.
      Instead of one purple line to wave-2-5,
      there would be two purple lines,
      one pointing to the wave-2-5 tag (renamed to “wave-2-5__1.10.3”)
      and the 2nd line pointing at an older commit (with a wave-2-5__1.10.0 tag)

12 Likes

Keep in mind that this misses important security patches, and, for example, mixing multiple glibcs is usually bound for trouble.

Sounds cool for a side project for people interested in data science and AI, but I personally don’t see the fit for nixpkgs.

I know there’s a lot of stuff you handle that keeps you busy, but if you get the chance to mention why it’s a bad fit, I do want to fix/change whatever is bad about the idea. I’m still afraid unmaintainability is going to cause the eventual death of nixpkgs, and I personally want to address that instead of just being cynical or doing one-off improvements. There have been some good ideas in this thread, but I don’t think I’ve seen an actual plan to address the original problem.

I don’t see how it would help data science/AI :confused:
Maybe I should’ve used gimp as an example instead of torch.

5 Likes

I still have lots of questions:

  • When do waves start?
  • When do they stop evolving?
  • What goes onto a wave when it started?
  • How many combinations of waves and package versions are going to be built?
  • What do we do with packages that cause mass rebuilds, release very often, and often
    break stuff, like hypothesis, which releases at least once a week and rebuilds almost everything in python packages?

2 Likes

Thanks, those are really important problems to address, the last one especially.
@-everyone, while Sandro started these, I hope others will follow up on them

I’ll add to the question list too: how should we handle security patches to root packages?

Some visual references of what’s in my head when writing the answers:
If we look at nixpkgs as trees (roots are at the bottom; image from nix-visualize):

Waves start at some root node, and iteratively expand towards all leaf-nodes.

  • Major wave: wave-1, O(n) * O(minor-wave)
    • i=0, updates the bottom yellow node, then starts a minor wave at that node (and waits on it)
    • i=1, updates the slightly-less yellow node, then starts a minor wave at that node
  • Minor waves: O(n), n = number of downstream packages
    • given a root node like zlib (orange, left side)
    • i=1, zlib is built and tested
    • i=2, libssh2 (nearest downstream) is built and tested
    • i=?, nix is tested
    • (no more leaf nodes: end)

This ends up a little bit like this animation: big block = major wave, small block = minor wave.
The idea: as the big wave progresses, the small waves get smaller and smaller.


Q&A

  • When do major waves start?
    It’s a hand-picked decision; Wave-X could be started whenever it’s realistic/desirable to have a new version of a root package, like a major release of llvm.

  • When do minor waves start? As soon as the previous minor wave is finished.
    (having a wave-X-2pre is fine too if the previous minor wave hasn’t finished)

  • What goes onto a wave when it started?
    This is definitely something I should’ve been clearer about; the “glibc, clang, (etc)” was very vague.

    There’s nothing stopping wave-X-1 from updating multiple packages.
    Ideally a minor wave would update a group of packages that are either
    1: packages that are hyper-related
    like deno’s rust crates; deno-dev, deno-core, deno-ops, deno-ast (which import eachother).
    or 2: packages with already-upgraded upstreams
    So hypothesis would need to be upgraded on a wave before numpy, and numpy on a wave before torch. But torch and pyyaml could probably be upgraded on the same minor wave

    Hyper-related groups need to be hand-crafted, which I imagine is what would be done for heavyweights like llvm, clang, glibc.
    The already-upgraded-upstreams grouping could be automated, and better yet, the tests for these could safely be run in parallel.

  • What do we do with packages that cause mass rebuilds and release often?
    Torch represents this kind of package (or at least I meant for it to)

    We know rebuilding/testing on every torch bump is entirely impractical
    So, looking 3 pictures up (the Torch vs Nixpkgs one)
    Take a look at the middle of that orange line in the diagram: the torch 1.10.2 release
    It’s not used by any nix-wave; it was never built on nixpkgs or integration-tested.

    That was made to represent torch updating too fast; nix had to skip torch versions.
    Replace torch with llvm, gcc, glibc, openssl and it’s the same story. The world can’t be rebuilt every time they’re bumped, so (to keep an all-green status) versions are inevitably skipped.

    The current Nixpkgs is either similar, or must not be all-green.

  • When do major waves stop?
    It’s basically irrelevant, but it would stop by simply running out of downstream packages to update: when every package has been updated exactly once.

  • When do minor waves stop?
    It’s when all tests for downstream packages are resolved as pass/won’t-fix/can’t-fix.
    I’m sure this can be fudged/bent a bit, similar to how replace-dependency gets around the rules

    Once a package is updated+rebuilt, it’s not touched again for the rest of the major wave.
    So, to minimize build time, we would want to lock in the super-heavyweights first.
    This is why the first 10 minor waves could take longer to test than the next 100 or 1,000.

    => wave-X-1 intentionally rebuilds the world
    (if we’re updating llvm, and we guarantee all-green packages, world-rebuilding seems necessary)
    => wave-X-2 is (approximately) the world minus packages upgraded in wave-X-1
    => etc

  • How many combinations of waves and package versions are going to be built?
    I’m not exactly sure what this is asking. If this is related to what @milahu brought up, the tl;dr is probably to use whatever method nixpkgs currently uses (multiversion), but I’m still working on a full response about that.

  • How should we handle security patches to foundational packages?
    Let’s say nixpkgs was working on wave-2-14, meaning openssl was already upgraded and locked in.
    But then a major vulnerability was found in that version of openssl.

    There’s at least a few options:
    1: Use an existing method like replace-dependency and make all later waves just have the fix without fully testing (see the sketch after this list).
    (probably good for small patches)

    2: It could be patched on wave-2-14, and the world could be rebuilt
    (probably good for medium/big patches that could break stuff)

    3: The current major wave is duplicated (including minor waves), but minor waves are marked as unfinished. This way people can use the patch while waiting on tests to catch up.
    (probably good for security fixes that require major version updates)

    4: Individual environments/users use overrides and pull from the bleeding-edge of the openssl repo
    (probably good for 0-day vulns)
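
For option 1, the sketch referenced above: nixpkgs ships a replaceDependency helper that rewrites store-path references inside an existing closure, so a fix can land without rebuilding the heavyweights (the target derivation and patch here are illustrative):

    # Swap a patched openssl into torch's closure without rebuilding
    # torch itself; only store-path references are rewritten.
    torchPatched = pkgs.replaceDependency {
      drv = pkgs.python3Packages.pytorch;  # illustrative target
      oldDependency = pkgs.openssl;
      newDependency = pkgs.openssl.overrideAttrs (old: {
        patches = (old.patches or [ ]) ++ [ ./cve-fix.patch ];  # hypothetical patch
      });
    };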

11 Likes