Petition to build and cache unfree packages on cache.nixos.org

If this were the case then tensorflow, pytorch, jaxlib, and every other package on PyPI that ever links against CUDA would be illegal. I am also not a lawyer, but I’m fairly certain that Google/Facebook’s lawyers must have signed off on their distribution.


Relevant thread:


I understand this thread is around two years old, but the issue still seems to be ongoing, especially for tensorflowWithCuda. I even tried using https://cuda-maintainers.cachix.org, but setting cudaSupport = true still forced a full recompile of everything.
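For anyone else trying the cache route, the wiring usually looks something like this in nix.conf (the cuda-maintainers key below is a placeholder; the real one is listed on the cache's Cachix page — this is a sketch, not the exact config I used):

  # /etc/nix/nix.conf (or nixConfig in a flake)
  substituters = https://cache.nixos.org https://cuda-maintainers.cachix.org
  trusted-public-keys = cache.nixos.org-1:6NCHdD59X431o0gWypbMrAURkbJ16ZPMQFGspcDShjY= cuda-maintainers.cachix.org-1:<key from the cache's Cachix page>

Worth noting: even with the cache configured, Nix only substitutes a binary when your derivation hash exactly matches one the cache has built. A locally overridden cudaSupport = true (or a different nixpkgs pin than the one the cache was populated from) changes the hash, which is likely why the full recompile still happened.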

I worked around it a different way by changing my TensorFlow derivation to use the upstream wheel:

  mtensorflowWithCuda = buildPythonPackage rec {
    pname = "tensorflow";
    version = "2.14.1";
    format = "wheel";

    src = fetchurl {
      name = "${pname}-${version}-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl";
      url = "https://files.pythonhosted.org/packages/99/77/4f31cd29cab69ebc344a529df48b91a14543a83b6fb90efbf82db29a34be/tensorflow-2.14.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl";
      sha256 = "sha256-mpVcQhZO/011FzLBJ0ykvwWdtgyeI2IJjOHu1xd8P+k=";
    };

    nativeBuildInputs = [ pyPkgs.wheel ] ++ lib.optionals stdenv.isLinux [ autoPatchelfHook ];

    # tensorflow/tools/pip_package/setup.py
    propagatedBuildInputs = with pyPkgs; [
      absl-py
      # abseil-cpp
      astunparse
      flatbuffers
      gast
      google-pasta
      grpcio
      h5py
      keras-preprocessing
      numpy
      opt-einsum
      packaging
      # protobuf-python
      six
      tensorflow-estimator-bin
      termcolor
      typing-extensions
      wrapt
      scipy
      dm-tree
      # No longer in 310 packages, had to be copied
      # from upstream's 311 packages
      (pyPkgs.callPackage (import ./mldtypes.nix) { })
      (pyPkgs.callPackage (import ./keras.nix) { })
    ] ++ lib.optionals withTensorboard [
      tensorboard
    ];

    # pip can't resolve the dependencies provided above during
    # installation, but with --no-deps the module still imports
    # fine afterwards
    pipInstallFlags = [ "--no-deps" ];

    postFixup = ''
      find $out -type f \( -name '*.so' -or -name '*.so.*' \) | while read lib; do
        # addOpenGLRunpath "$lib"
        echo "[MANUAL] patching $lib"

        patchelf --set-rpath "${cudatoolkit}/lib64:${cudatoolkit.lib}/lib:${cudnn_8_7}/lib:${nccl}/lib:$(patchelf --print-rpath "$lib")" "$lib"
      done
    '';

    doCheck = false;
  };

I had to override a couple of packages to get specific versions that work with TF 2.14, but it has worked OK so far and did not require a full recompile.
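For reference, the version pinning looked roughly like this — the package (gast) and version here are illustrative, not the exact ones I had to pin:

  # Hypothetical sketch: pin one dependency to a TF-2.14-compatible
  # version via packageOverrides; the real packages/versions will differ.
  pyPkgs = (python310.override {
    packageOverrides = self: super: {
      gast = super.gast.overridePythonAttrs (old: rec {
        version = "0.4.0";
        src = super.fetchPypi {
          pname = "gast";
          inherit version;
          # placeholder; swap in the real hash reported by the
          # first failed build
          sha256 = lib.fakeSha256;
        };
      });
    };
  }).pkgs;

Using packageOverrides (rather than overriding each package at the use site) keeps the whole pyPkgs set consistent, so propagatedBuildInputs above all see the pinned version.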

I’ve shared this for two reasons:

  1. In case anyone else is still looking for a hacky solution that keeps a fully Nix-based build without switching to a virtualenv.
  2. To see whether we can take advantage of wheels more often to avoid building from source. As great as a from-source build would be, for components like this that aren’t cached by Hydra it’s infeasible, and it would be great if we could pull the wheel along with its dependencies. (It took me nearly a full day to get to this point, mostly stuck on getting the CUDA bindings to work.)