I am working on a PyTorch project and want to use my NVIDIA GPU. For that to work, I have to supply PyTorch with the NVIDIA drivers that are currently running on my machine. I found them in my Nix store under /nix/store/v7w8rn0m2a0s4n1309671sibs149gbvp-graphics-drivers/lib/. If I manually add that directory to LD_LIBRARY_PATH and EXTRA_LDFLAGS, the project runs fine. However, I have no idea how to automatically detect where those drivers are located. For now I have hard-coded the path (see the comment in the flake below), but that of course only works on my machine…
I have already tried export NVIDIA_DRIVERS=${pkgs.linuxPackages.nvidia_x11}/lib, but then NVIDIA complains that those drivers do not match the ones that are currently running.
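For reference, this is roughly how that attempt was wired into the shellHook of the flake further below (reconstructed; only the NVIDIA_DRIVERS line differs from what is in the flake now):

    shellHook = ''
      if [ -f "/etc/NIXOS" ]; then
        # tried the packaged driver instead of the hard-coded store path;
        # fails with a driver version mismatch against the running kernel module
        export NVIDIA_DRIVERS=${pkgs.linuxPackages.nvidia_x11}/lib
        export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$NVIDIA_DRIVERS:${lib-path}"
        export EXTRA_LDFLAGS="-L/lib -L$NVIDIA_DRIVERS"
      fi
      venvShellHook
    '';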
I have also tried buildFHSEnv, and that works, but it brings other problems: for example, I can no longer git push from inside that shell, because ~/.ssh/config suddenly appears to be owned by the nobody user and ssh refuses to use it.
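For completeness, this is roughly what that buildFHSEnv attempt looked like (a sketch from memory, reusing the same pkgs as in the flake below; the exact package list is approximate):

    devShells.fhs = (pkgs.buildFHSEnv {
      name = "pytorch-fhs";
      targetPkgs = pkgs: with pkgs; [
        python3
        cudatoolkit
        linuxPackages.nvidia_x11
        libGL
        glib
      ];
      runScript = "bash";
    }).env;  # if I recall correctly, .env is what lets nix develop use it directly

It is inside that FHS shell that the nobody ownership problem with ~/.ssh/config shows up.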
Any help would be greatly appreciated!
{
  inputs = {
    nixpkgs.url = "github:NixOS/nixpkgs/nixpkgs-unstable";
    flake-utils.url = "github:numtide/flake-utils";
  };

  outputs = {
    self,
    nixpkgs,
    flake-utils,
    ...
  }:
    flake-utils.lib.eachDefaultSystem (system: let
      pkgs = import nixpkgs {
        inherit system;
        config.allowUnfree = true;
        config.cudaSupport = true;
      };

      lib-path = with pkgs;
        lib.makeLibraryPath [
          stdenv.cc.cc
          libGL
          glib
          cudatoolkit
        ];
    in {
      devShells = {
        dev = pkgs.mkShell {
          name = "impureVenv";
          venvDir = "./.venv";

          buildInputs = with pkgs; [
            python3
            python3Packages.venvShellHook
          ];

          postVenvCreation = ''
            pip install -r ${./requirements.txt}
          '';

          shellHook = ''
            if [ -f "/etc/NIXOS" ]; then
              export NVIDIA_DRIVERS=/nix/store/v7w8rn0m2a0s4n1309671sibs149gbvp-graphics-drivers/lib/ # <- want to get rid of this hard-coded line
              export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$NVIDIA_DRIVERS:${lib-path}"
              export EXTRA_LDFLAGS="-L/lib -L$NVIDIA_DRIVERS"
            fi
            venvShellHook
          '';
        };
      };

      devShell = self.devShells."${system}".dev;
    });
}
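(For reference, I enter the shell with nix develop .#dev; the devShell line at the bottom just aliases it so that a plain nix develop works as well.)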