Installing the Python dependency llama-cpp-python with pip in conda-shell: can't locate cudatoolkit

Hi all

I'm currently trying to get a Python project running using conda-shell. One of its dependencies is the library llama-cpp-python with CUDA support, but unfortunately the CUDA libraries can't be found during installation.

First I started a nix-shell and then opened conda-shell:

nix-shell -p python3 conda libxcrypt-legacy gcc cudaPackages.cudatoolkit cudaPackages.cudnn
conda-shell
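For reference, the CUDA toolchain from the outer nix-shell should still be visible inside conda-shell; a quick sanity check looks like this (generic commands, nothing llama-specific):

# should both point into /nix/store/...-cudatoolkit-.../bin
which nvcc
nvcc --version
# host compiler that nvcc will invoke
which gcc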

Before installing the llama-cpp-python library, I set the environment variables for cuBLAS NVIDIA GPU support:

export CMAKE_ARGS="-DLLAMA_CUBLAS=on"
export FORCE_CMAKE=1
pip install llama-cpp-python==0.1.83 --no-cache-dir --verbose
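For what it's worth, the same thing can also be written as a single invocation; this is just a convenience, equivalent to the exports above:

CMAKE_ARGS="-DLLAMA_CUBLAS=on" FORCE_CMAKE=1 pip install llama-cpp-python==0.1.83 --no-cache-dir --verbose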

The install output shows that it can't find the CUDA toolkit:

-- Could NOT find CUDAToolkit (missing: CUDA_CUDART) (found version "11.8.89")
CMake Warning at vendor/llama.cpp/CMakeLists.txt:292 (message):
  cuBLAS not found

Any idea how to set up my environment correctly?

regards

Just a guess, but try adding cudaPackages.cuda_cudart to the shell? Also, IIRC CMake takes variables like CUDA_TOOLKIT_ROOT and CUDA_LIBRARIES that help it locate these CUDA dependencies.

Otherwise the cmake FindCUDAToolkit docs may help debug this issue.
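Untested, but roughly what I have in mind (the extra package and the CUDAToolkit_ROOT hint are guesses on my part):

nix-shell -p python3 conda libxcrypt-legacy gcc cudaPackages.cudatoolkit cudaPackages.cudnn cudaPackages.cuda_cudart
conda-shell
# FindCUDAToolkit honours CUDAToolkit_ROOT; assuming nvcc from the nix-shell is on PATH
export CMAKE_ARGS="-DLLAMA_CUBLAS=on -DCUDAToolkit_ROOT=$(dirname $(dirname $(which nvcc)))"
export FORCE_CMAKE=1
pip install llama-cpp-python==0.1.83 --no-cache-dir --verbose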

Thanks!
I had to add cudaPackages.cuda_cudart, cudaPackages.cuda_cudart.static and pkgs.cudaPackages.libcublas, plus some additional env vars:

export CUDACXX=/nix/store/074x79p5rvndzhpjkfah41lxhxi8j0rh-cudatoolkit-12.2.2/bin/nvcc
export CMAKE_ARGS="-DLLAMA_CUBLAS=on -DCUDAToolkit_ROOT=/nix/store/074x79p5rvndzhpjkfah41lxhxi8j0rh-cudatoolkit-12.2.2/bin/nvcc -DCUDAToolkit_INCLUDE_DIRS=/nix/store/074x79p5rvndzhpjkfah41lxhxi8j0rh-cudatoolkit-12.2.2/include -DCUDAToolkit_LIBRARY_DIR=/nix/store/074x79p5rvndzhpjkfah41lxhxi8j0rh-cudatoolkit-12.2.2/lib64"
export FORCE_CMAKE=1
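(The store path is hardcoded above; assuming nvcc from the nix-shell is on PATH, it could also be derived instead of spelled out, e.g.:)

# derive the cudatoolkit store path from nvcc instead of hardcoding it
CUDA_ROOT=$(dirname $(dirname $(readlink -f $(which nvcc))))
export CUDACXX=$CUDA_ROOT/bin/nvcc
export CMAKE_ARGS="-DLLAMA_CUBLAS=on -DCUDAToolkit_ROOT=$CUDA_ROOT -DCUDAToolkit_INCLUDE_DIRS=$CUDA_ROOT/include -DCUDAToolkit_LIBRARY_DIR=$CUDA_ROOT/lib64"
export FORCE_CMAKE=1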

It seems the cudatoolkit is now found during pip install, but it now fails in vendor/llama.cpp/CMakeLists.txt with a non-existent include path:

-- Found CUDAToolkit: /run/current-system/sw/include (found version "12.2.140")
-- cuBLAS found
-- The CUDA compiler identification is NVIDIA 12.2.140
-- Detecting CUDA compiler ABI info
-- Detecting CUDA compiler ABI info - failed
-- Check for working CUDA compiler: /nix/store/074x79p5rvndzhpjkfah41lxhxi8j0rh-cudatoolkit-12.2.2/bin/nvcc
-- Check for working CUDA compiler: /nix/store/074x79p5rvndzhpjkfah41lxhxi8j0rh-cudatoolkit-12.2.2/bin/nvcc - works
-- Detecting CUDA compile features
-- Detecting CUDA compile features - done
-- Using CUDA architectures: 52;61;70
-- CMAKE_SYSTEM_PROCESSOR: x86_64
-- x86 detected
-- Configuring done (1.9s)
CMake Error in vendor/llama.cpp/CMakeLists.txt:
  Imported target "CUDA::cudart" includes non-existent path

    "/run/current-system/sw/include"

  in its INTERFACE_INCLUDE_DIRECTORIES.  Possible reasons include:

  * The path was deleted, renamed, or moved to another location.
  * An install or uninstall procedure did not complete successfully.
  * The installation package was faulty and references files it does not provide.
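One way to compare the path from the error with where the CUDA headers actually live in this shell (derived from nvcc, so just a sketch):

# the path CMake complains about - does it exist at all?
ls -d /run/current-system/sw/include || echo "missing"
# where the CUDA runtime headers actually are in this nix-shell
ls $(dirname $(dirname $(which nvcc)))/include/cuda_runtime_api.h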

But I have no clue how to change it. Any idea?

thanks
cyrill