Build Rust crate candle with CUDA support

Hello,
I’m trying to build a Rust application that depends on candle with CUDA support. It fails with the following message:

error: failed to run custom build command for `candle-kernels v0.9.1`

Caused by:
  process didn't exit successfully: `/home/adri/er/gni/target/debug/build/candle-kernels-75b380d8c758cf5a/build-script-build` (exit status: 101)
  --- stdout
  cargo:rerun-if-changed=build.rs
  cargo:rerun-if-changed=src/compatibility.cuh
  cargo:rerun-if-changed=src/cuda_utils.cuh
  cargo:rerun-if-changed=src/binary_op_macros.cuh
  cargo:info=["/usr", "/usr/local/cuda", "/opt/cuda", "/usr/lib/cuda", "C:/Program Files/NVIDIA GPU Computing Toolkit", "C:/CUDA"]
  cargo:rerun-if-env-changed=CUDA_COMPUTE_CAP
  cargo:rustc-env=CUDA_COMPUTE_CAP=120
  cargo:info=Builder { cuda_root: None, kernel_paths: ["src/affine.cu", "src/binary.cu", "src/cast.cu", "src/conv.cu", "src/fill.cu", "src/indexing.cu", "src/quantized.cu", "src/reduce.cu", "src/sort.cu", "src/ternary.cu", "src/unary.cu"], watch: [], include_paths: ["src/binary_op_macros.cuh", "src/compatibility.cuh", "src/cuda_utils.cuh"], compute_cap: Some(120), out_dir: "/home/.../target/debug/build/candle-kernels-47e2fa6b8aeb9b17/out", extra_args: [] }

  --- stderr

  thread 'main' panicked at /home/.../.cargo/registry/src/index.crates.io-1949cf8c6b5b557f/bindgen_cuda-0.1.5/src/lib.rs:306:40:
  Could not find CUDA in standard locations, set it manually using Builder().set_cuda_root(...)
  note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace

I guess the issue is that the build script searches a hardcoded list of locations for the CUDA toolkit (the cargo:info line above shows the candidates), and the Nix-provided compiler isn’t in any of them.

MWE:
in flake.nix

{
  description = "gen";
  inputs = {
    nixpkgs.url = "github:nixos/nixpkgs/nixpkgs-unstable";
    utils.url = "github:numtide/flake-utils";
    rust-overlay = {
      url = "github:oxalica/rust-overlay";
      inputs = {
        nixpkgs.follows = "nixpkgs";
        flake-utils.follows = "utils";
      };
    };
  };
  outputs = { nixpkgs, utils, rust-overlay, ... }:
    utils.lib.eachDefaultSystem (system:
      let
        pkgs = import nixpkgs {
          inherit system;
          config = {
            cudaSupport = true;
            allowUnfree = true;
          };
          overlays = [ rust-overlay.overlays.default ];
        };
        rust = pkgs.rust-bin.stable.latest.default.override {
          extensions = [ "rust-src" "clippy" ];
        };
      in {
        devShell = with pkgs; mkShell {
          buildInputs = [
            rust
            cargo
            rustfmt
            cudaPackages.cuda_nvcc
            rustPackages.clippy
            rust-analyzer
            pkg-config
          ];
        };
      });
}

then run cargo add candle-core --features cuda followed by cargo build.

Although it shouldn’t matter here, cudaPackages.cuda_nvcc belongs in nativeBuildInputs rather than buildInputs. Also, does adding cudaPackages.cudatoolkit to buildInputs change anything, by the way?
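For concreteness, a minimal sketch of that change, keeping the rest of the flake above as-is (nvcc is a build tool that runs on the build machine, so it goes in nativeBuildInputs):

```nix
devShell = with pkgs; mkShell {
  # Tools executed at build time belong in nativeBuildInputs.
  nativeBuildInputs = [
    cudaPackages.cuda_nvcc
    pkg-config
  ];
  # Libraries and developer tooling for the shell itself.
  buildInputs = [
    rust
    cargo
    rustfmt
    rustPackages.clippy
    rust-analyzer
  ];
};
```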

If not, then maybe buildFHSUserEnv could help (but I don’t have personal experience with it).


Indeed, buildFHSEnv helped. Now I’m stuck on another error: sh: line 1: /usr/bin/../nvvm/bin/cicc: No such file or directory. The derivation of cudaPackages.cuda_nvcc does contain an nvvm/bin/cicc file, but apparently it doesn’t get linked into the FHS environment by buildFHSEnv. Not sure how to fix this.
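For reference, the FHS environment in question looks roughly like this (a sketch; the name and exact package list are illustrative). The error suggests nvcc resolves cicc via a path relative to its own location (bin/../nvvm/bin/cicc), and the nvvm directory isn’t among the standard FHS directories that get linked under /usr:

```nix
# Hypothetical FHS shell exposing nvcc; nvvm/ from cuda_nvcc's output
# does not end up next to /usr/bin, hence the "No such file" error.
fhs = pkgs.buildFHSEnv {
  name = "cuda-fhs";
  targetPkgs = pkgs: with pkgs; [
    cudaPackages.cuda_nvcc
    cudaPackages.cuda_cudart
  ];
  runScript = "bash";
};
```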

How did this experience turn out? I was trying to (get Claude to) port a Python image classifier to Rust and a Nix flake, but it sounds like things are about to hit a snag unless I upstream a change to candle?

EDIT: I got to basically the same point as the original post (needing FHS) and then gave up. It’s a shame we need to upstream some work if we want progress here.

See

You can set one of the environment variables the build script checks (e.g. CUDA_ROOT) to point it at a CUDA root.

e.g.

env.CUDA_ROOT = toString (
  pkgs.buildEnv {
    name = "cuda-root";
    paths = [
      cudaPackages.cuda_cudart.dev
    ];
    pathsToLink = [ "/include" "/bin" "/lib" ];
  }
);

or env.CUDA_ROOT = "${cudaPackages.cuda_cudart.dev}";
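Putting that together with the dev shell from the original flake, a sketch might look like the following (untested; the inclusion of cuda_nvcc and the /nvvm entry in pathsToLink are assumptions, the latter aimed at the missing cicc from earlier in the thread, since that directory ships in the cuda_nvcc output):

```nix
devShell = with pkgs; mkShell {
  nativeBuildInputs = [ cudaPackages.cuda_nvcc pkg-config ];
  buildInputs = [ rust cargo rustfmt rust-analyzer ];
  # Point candle-kernels' build script at a merged CUDA root
  # containing headers, binaries, libraries, and nvvm.
  env.CUDA_ROOT = toString (
    pkgs.buildEnv {
      name = "cuda-root";
      paths = [
        cudaPackages.cuda_nvcc
        cudaPackages.cuda_cudart.dev
      ];
      pathsToLink = [ "/include" "/bin" "/lib" "/nvvm" ];
    }
  );
};
```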