Improving NixOS data science infrastructure: CI for MKL & CUDA

I use NixOS for reproducible builds, and ultimately I hope that by packaging scientific software we can contribute to improving reproducibility in data science and make it easier to share scientific applications with complex dependencies.

Compared to conda, arguably the most widely used package manager in data science, Nix has many advantages, but one major shortcoming from a lay user's perspective: because Hydra does not build against MKL or CUDA, compilation takes hours for packages like pytorch, tensorflow, numpy, and scipy, and some packages, like scikit-learn, fail to build at all.

MKL is free to redistribute, so in theory Hydra could build packages against it. Continuous integration would go a LONG way towards keeping MKL builds working, and new builds would be greatly accelerated by pulling from the cache.

Nvidia may be a little trickier, as we currently modify the binary. Nonetheless, it seems feasible that we could either find an alternative approach or explicitly get permission from Nvidia to do so.

I think the bigger question is whether hydra.nixos.org can build non-free software whose licenses allow redistribution, even if only on a case-by-case basis. Depending on your hardware, there is no viable alternative to CUDA or MKL for scientific computing.

13 Likes

@JosW Did you read the post from @tbenst?

It sounds like he is well aware that MKL and CUDA are packaged in nixpkgs, but MKL (and CUDA) are marked non-free, which means hydra won’t build them and you don’t get binary packages.

This is especially bad for things like pytorch, tensorflow, etc, because they take such a long time to build.


@tbenst I think it would be great to have an overlay specifically for scientific computing.

@nh2 recently crowd-funded a build server for statically-built haskell packages:

https://github.com/NixOS/nixpkgs/issues/43795#issuecomment-503048915

(if you continue reading through that thread, there are some follow-up posts about it as well)

Maybe you could organize something similar to this, using cachix and hercules-ci to build/distribute packages? Hopefully there are a lot of other users that would be interested in this enough to contribute to it financially.

4 Likes

Note that to run your own binary cache dedicated specifically to builds with MKL, you do not even need heavy machinery like Hydra, Hercules CI, or Cachix. You can just nix-build whatever package set you care about from time to time, then push it to a completely static binary cache server.
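As a rough sketch (the file name and overlay details here are hypothetical, and attribute names vary across nixpkgs revisions), the package set can live in a single Nix file:

# mkl-set.nix (hypothetical): the MKL-flavoured package set to build and cache
let
  pkgs = import <nixpkgs> {
    config.allowUnfree = true;            # MKL is marked non-free
    overlays = [ (self: super: {
      python3 = super.python3.override {
        packageOverrides = pyself: pysuper: {
          # route numpy through MKL instead of openblas; packages that reuse
          # numpy's BLAS (scipy etc.) pick this up from the overlay
          numpy = pysuper.numpy.override { blas = self.mkl; };
        };
      };
    }) ];
  };
in
  pkgs.python3.withPackages (ps: with ps; [ numpy scipy ])

You would then run something like nix-build mkl-set.nix followed by nix copy --to file:///srv/binary-cache $(readlink result), and serve that directory with any static web server.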

3 Likes

I use my own overlay for scientific software, which contains some heavy builds. The head node of my cluster acts as a cache using “nix-serve”. That way I only need to build everything once, and all the compute nodes just pull from the local cache.
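For anyone who wants to replicate this setup, a minimal sketch (hostnames, port, and key paths are placeholders; the key pair comes from nix-store --generate-binary-cache-key):

# head node configuration.nix (sketch)
services.nix-serve = {
  enable = true;
  port = 5000;
  secretKeyFile = "/var/secrets/binary-cache-key.sec";
};

# compute node configuration.nix (sketch)
nix.binaryCaches = [ "http://head:5000" "https://cache.nixos.org" ];
nix.binaryCachePublicKeys = [ "head:<public key from the generated key pair>" ];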

2 Likes

Nothing to add, except that the current status quo also means a lot of building for us. I have been precompiling libtensorflow (since we need several CUDA capabilities across various servers) and doing binary distributions in a similar vein to libtensorflow-bin. But it’s not ideal.

1 Like

This probably isn’t quite ready for prime time yet, but we do have tensorflow and pytorch building with ROCm for AMD GPUs over at nixos-rocm, which is open source. I say it’s not ready because I still use a downloaded wheel for tensorflow (I haven’t been able to get the full Bazel build working), and because ROCm support in pytorch is missing the newer native-extension support, which means some pytorch packages people might expect won’t work.

As the ROCm ports mature, the question of ease-of-installation looms larger. These are quite heavy packages:

$ nix path-info nixpkgs.pytorch-rocm -hsS
/nix/store/pykwhwnksylhm1h0gl1pkifh4dswgskz-python3.7-pytorch-1.0.0        1.1G    3.6G
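(The two sizes are the package’s own NAR size and its full closure size, respectively.)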

I have long intended to get cachix sorted out for nixos-rocm, but haven’t had the opportunity to figure it out. Any help in standing that up would be appreciated, as well as any help in figuring out if we could do CI anywhere (it takes on the order of 8-12 hours for me to do a full rebuild; uploading closures of that size would also stretch the limits of practicality for me).

3 Likes

It looks like someone did the good deed of opening an issue about this on nixpkgs:

https://github.com/NixOS/nixpkgs/issues/75460


Also, I just found out about this today, but @stites has a repo and binary cache for pytorch-related packages, compiled with cuda, mkl, etc:

1 Like

Speaking of patching the binaries, would it be possible to provide an alternative setup where we wrap all the downstream binaries with a correct $LD_LIBRARY_PATH addition? Interpreter patching can definitely be replaced with a wrapper.
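A minimal sketch of what the wrapping could look like (the package name and library list are placeholders, not the actual nixpkgs derivation, and this only covers the LD_LIBRARY_PATH part, not the ELF interpreter):

# overlay sketch: wrap executables instead of patching the vendored binaries
self: super: {
  someCudaTool = super.someCudaTool.overrideAttrs (old: {
    nativeBuildInputs = (old.nativeBuildInputs or []) ++ [ super.makeWrapper ];
    # leave the upstream binaries untouched and point the dynamic linker at
    # the required libraries at run time instead
    postFixup = (old.postFixup or "") + ''
      for prog in $out/bin/*; do
        wrapProgram "$prog" \
          --prefix LD_LIBRARY_PATH : "${super.cudatoolkit}/lib:${super.stdenv.cc.cc.lib}/lib"
      done
    '';
  });
}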

Then a third-party binary cache distributing the CUDA-based builds might be legally cleaner than hydra.nixos.org doing it itself. I guess the CUDA users, unlike the Hydra maintainers, have read the CUDA license and have an easier time saying what is legal…

1 Like

If you need help setting up cachix/hercules-ci let me know :slight_smile:

2 Likes

Happy to see I’m not the only one who needs this :slight_smile:

While waiting for a long-term solution with Hydra, @stites’ solution seems the best (thx btw!).

We need to set up the same kind of thing for tensorflow… I’m not sure I’m capable of it right now, but pytorch-world seems like a good inspiration. Would it be hard to adapt it for tensorflow?

It looks like edolstra made an exception for unfreeRedistributableFirmware to be included in the channel, which I presume means binary builds on hydra.nixos.org.

I’d like to see this done for licenses.issl (currently one package: MKL) as well.
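On the user side (or on a Hydra instance one controls), allowing just that license could look something like this in configuration.nix, without flipping allowUnfree globally (a sketch; lib comes from the module arguments):

# configuration.nix sketch: evaluate and build issl-licensed packages (i.e. MKL)
# without allowUnfree = true
nixpkgs.config.whitelistedLicenses = with lib.licenses; [ issl ];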

For nvidia, we would need to update the derivation as @7c6f434c suggests, and make the license more precise: most Nvidia software is licensed under the more restrictive License Agreement for NVIDIA Software Development Kits, but for the software we’re discussing, redistribution is more permissive under the CUDA Supplement to Software License Agreement and the cuDNN Supplement to Software License Agreement. My understanding is that this is how conda is able to distribute these packages.

I tried setting up Hydra but got stuck; help would be appreciated: "DBIx::Class::Schema" is empty. · Issue #696 · NixOS/hydra · GitHub. Still, the real solution would be for hydra.nixos.org to run this.

3 Likes

Nix has a real opportunity to shine here, because alternative packaging options for scientific computing libraries are a bit of a mess. Things like conda and pre-baked Docker containers can work decently well for beginner “getting started” workflows, but most users eventually want to do things like compile for the specific hardware they have to run at scale, or tweak the library versions in the stack to upgrade/downgrade and mix in other Python/C/C++ libraries, etc., and here Nix is fantastic.

The machine learning / data science community is fairly large, as well. Providing a seamless MKL + CUDA + PyTorch/Tensorflow experience would be great!

6 Likes

Note that firmware has many exceptional properties. If we want to have a widely useful release CD, we need to redistribute some of the firmware via a link from the main homepage. If we want to make CUDA usable, adding an extra binary cache is one line in the configuration that doesn’t need to be there during the initial setup. Also, firmware doesn’t need any fancy handling related to Nix, and the kernel upstream is pretty trustable to keep track of legality. More than Nvidia, and I guess more than ourselves. So I advise you to plan as if the main binary cache will never carry CUDA (of course ROCm should be easy once a few people say they use it regularly).

Splitting CUDA into parts according to the license sounds like a good idea to me.

I cannot help with Hydra setup, but as I said — you could start with nix build and nix copy, and the expression used for nix build will be reused by Hydra once you have it running.

For what it’s worth, tensorflow 2.0 has support for making cuda an optional runtime dependency. So we could get away with building a cuda-capable tensorflow without redistributing cuda. I think cuda still needs to be present at build time, but I’m not entirely sure about that.

1 Like

Hydra distributes all build dependencies, though. But maybe there is a way that doesn’t need actual CUDA and a mock would be enough.

I appreciate the productive & solution-oriented discussion :smiley:. Certainly a cachix-style solution that all data scientists could just add to their configuration.nix would be great. But that doesn’t address the CI/CD problem that Hydra solves so beautifully for an entire OS. Anyone who has tried to maintain an overlay with numpy.blas = mkl knows this pain. In the one year I’ve been using NixOS, I’ve never succeeded in building python3.withPackages (ps: with ps; [ numpy scipy pytorch sklearn ]) on unstable. I think this is largely because the maintainer test burden on reverse dependencies is high due to extremely long builds.

I completely understand why NixOS is free-software only by default. I think Ubuntu provides a good example here: main and universe are free and open source, but users can opt in to restricted (which provides binaries for CUDA) as well as multiverse (which provides binaries for MKL). My understanding is that multiverse also provides builds against these libraries, e.g. caffe-cuda, although I haven’t installed them myself.

I think Canonical can be trusted to keep track of legality. Not to mention that RHEL/CentOS, Fedora, SLES, and OpenSUSE also redistribute CUDA. Conda does as well, and if I’m not mistaken, so does pip.

I’m confident we are in clear, legal territory if we wrap rather than modify, but if the main concern is licensing, I’m happy to reach out to Nvidia for clarification as it pertains to NixOS / hydra and report back.

Splitting CUDA into parts according to the license sounds like a good idea to me.

Just to clarify, CUDA itself is under one license, but separate CUDA libraries like cuDNN have slightly different supplements. Modern deep learning frameworks like PyTorch and TensorFlow depend on both CUDA and cuDNN. The only other Nvidia dependency I’m aware of for PyTorch is nccl, which has permissive licensing.

2 Likes

RHEL/CentOS, Fedora, SLES, and OpenSUSE also redistribute CUDA

We would be doing a slightly different thing, which is probably fine, but someone needs to make the call. Note that for mainline Hydra, CUDA is nothing special, so the maintainers need to be able to make such calls uniformly and reliably… I think for a few years we only built non-branded Firefox on Hydra because of the minor patches for the NixOS layout of things, which were declared definitely fine as soon as we actually started discussing with Mozilla. Extrapolating from that precedent.

Speaking of nontrivial things: changing the layout during redistribution might make it verbatim redistribution of constituent parts, not verbatim redistribution of the original work in its entirety. I have no idea, and I have no desire to bear responsibility for calling it either way.

A separate collaboration of CUDA and MKL users, on the other hand, does single out CUDA, and can delegate the calls to people who have read the CUDA and MKL licenses in detail, etc.

the CI/CD problem that hydra solves so beautifully for an entire OS

I think CUDA is large enough that you should stop thinking in terms of the entire OS. Nix means that you can have a separate branch that stabilises some set of the related heavy things, and install the development environment for your data pipeline from there — without breaking your system. A bit suboptimal, but any other solution means holding your ground against the huge churn.

I think this is largely because the maintainer test burden on reverse dependencies is high due to extremely long builds.

Note that this also means ofBorg builds would time out, and nobody would be surprised, and not everyone would care.

I know RFC46 is not yet accepted, and it is about platforms and not individual packages, but I like the terminology we worked on there. Trying to keep everything heavy around Python data crunching green on unstable would mean asking for Tier-2 impact on merges for things that are not really served at the Tier-2 level by existing PR testing tooling. Actually, @timokau achieves this for Sage by doing a huge amount of high-quality work and always being there to help whenever any question about Sage impact arises. In the case of Sage, though, there is a lot of actual investigation to be done.

If there are enough people interested in doing the reviews, investigations, and fixes for master PRs touching the data-crunching packages, just having dedicated build capacity with unprivileged accounts for everyone in the response team could be enough. Note that you need people to iterate quickly on proposed fixes, which probably means nix build is quite useful regardless of a Hydra instance.

Of course, once you have a reputation for keeping up with debugging the relevant PRs, an explicit OK from Nvidia, and a binary cache (possibly lagging a few weeks) with a significant user base, you might have an easier time convincing people to make an exception. On the other hand, at that point this exception will be a smaller improvement than it would be right now.

Either way, I guess the first steps are the same regardless of the final goal: refactor to increase the chance that redistribution is legal, try to get Nvidia’s comments, and organise a debugging team.

3 Likes

Until nixpkgs does CI properly (all packages need to be green in order for a change to pass), this is just chasing our tail.

I have the same problem with maintaining cachix: Haskell packages are bumped and failing packages are just marked as broken. I don’t get a chance to fix mine, resulting in a few angry people every few weeks.

I think the short-term solution is to use a separate distribution (separate github repository, with a new binary cache, CI, etc) where you can control the whole pipeline.

I’ve seen a few people interested in a data science repository, so getting a budget to host the build agents should be possible.

Long term, nixpkgs has to change its process. I’m very much in favour of a Linux-kernel-like workflow: it would enable everyone to work with the people they want to, and it scales, as the Linux kernel has proven. Enforcing green CI across all PRs into a single branch just won’t scale, unless we get a crazy amount of computing resources we can waste.

3 Likes

I’ve been building the TensorFlow stack recently for various combinations of Python versions, optimization settings, and systems, pushing to cachix and a private bucket. It would help usability to know which attribute set works, but that requires enough people to curate and update it.
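For reference, a sketch of how such a combination matrix can be expressed as one attribute set (the file and attribute names are made up), so that nix-build -A picks a single variant and a cron or CI job can build them all:

# tf-jobs.nix (hypothetical)
{ nixpkgs ? <nixpkgs> }:
let
  pkgs = import nixpkgs { config.allowUnfree = true; };
  tfFor = python: python.withPackages (ps: [ ps.tensorflow ]);
in {
  tf-py36 = tfFor pkgs.python36;
  tf-py37 = tfFor pkgs.python37;
}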

Some of the packages in this space require large builds. I’m considering experimenting with cached builds using ccache or Bazel.
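As a rough sketch of the ccache route (this assumes the derivation accepts stdenv via callPackage so that .override works, and that the ccache directory is reachable from the build sandbox, e.g. via programs.ccache and nix.sandboxPaths on NixOS; Bazel-driven builds like TensorFlow would more likely want Bazel’s own disk cache):

# overlay sketch: reuse compiled objects across rebuilds of a heavy package
self: super: {
  somePackage = super.somePackage.override {
    stdenv = super.ccacheStdenv;
  };
}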

1 Like

Long term, nixpkgs has to change its process. I’m very much in favour of a Linux-kernel-like workflow: it would enable everyone to work with the people they want to, and it scales, as the Linux kernel has proven.

I am not sure the impact distribution is similar enough to blindly trust that scaling evidence: we might have a larger share of commits that substantially change high-impact parts; we cannot fully control the impact of our changes even in principle (let’s call a spade a spade here); and a single full rebuild of Nixpkgs takes days, versus tens of minutes for an everything-enabled kernel.

And for the low-rev-depth changes, which are similar in impact distribution to what happens with Linux wireless or Btrfs or whatever, building a PR doesn’t include a large build anyway, even now.