I’m new to NixOS and recently set up a PC to develop ML models using PyTorch with CUDA support. I’ve been experimenting with a few nix-shell configurations, but I keep running into issues with long build times, especially with compiling CUDA and other dependencies.
Here are the configurations I’ve tried:
Configuration 1:
let
  # Pin to a specific nixpkgs commit for reproducibility.
  pkgs = import (fetchTarball "https://github.com/NixOS/nixpkgs/archive/24bb1b20a9a57175965c0a9fb9533e00e370c88b.tar.gz") {
    config.allowUnfree = true;
  };
in
pkgs.mkShell {
  buildInputs = [
    pkgs.python311
    pkgs.python311Packages.torch-bin
    pkgs.python311Packages.unidecode
    pkgs.python311Packages.inflect
    pkgs.python311Packages.librosa
    pkgs.python311Packages.pip
  ];

  shellHook = ''
    echo "You are now using a NIX environment"
    export CUDA_PATH=${pkgs.cudatoolkit}
  '';
}
Configuration 2:
{ pkgs ? import <nixpkgs> {
    config = {
      allowUnfree = true;
      cudaSupport = true;
    };
  }
}:
pkgs.mkShell {
  # nativeBuildInputs is usually what you want -- tools you need at build time.
  nativeBuildInputs = with pkgs.buildPackages; [
    python311
    cudaPackages_12.cudatoolkit
    python311Packages.pytorch-bin
    python311Packages.pip
  ];

  shellHook = ''
    echo "You are now using a NIX environment"
    export CUDA_PATH=${pkgs.cudaPackages_12.cudatoolkit}
  '';
}
Issue: With both configurations I end up with long build times because CUDA, PyTorch, and other dependencies are compiled from source. A build can take over an hour, which is quite frustrating.
Question: Is there a way to set up a development environment for PyTorch with CUDA on NixOS without compiling everything from scratch? Are there pre-built binaries or optimized configurations I can use to speed this up? (I have tried torch-bin, as suggested here, but it still compiled everything in the same way.)
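For reference, here is the stripped-down variant I am currently testing, in case it helps pinpoint what triggers the compilation. My assumption (which may be wrong) is that torch-bin is the wheel-based package that bundles its own CUDA runtime libraries, so I dropped the global cudaSupport = true and the cudatoolkit reference in the shellHook, since both seemed to pull extra builds into the closure:

```nix
# Minimal sketch: only the prebuilt PyTorch wheel, no global cudaSupport,
# and no reference to pkgs.cudatoolkit in the shellHook.
{ pkgs ? import <nixpkgs> { config.allowUnfree = true; } }:

pkgs.mkShell {
  packages = [
    # python311.withPackages builds one Python env containing torch-bin.
    (pkgs.python311.withPackages (ps: [ ps.torch-bin ]))
  ];

  shellHook = ''
    echo "You are now using a NIX environment"
  '';
}
```

With this variant I would expect only downloads rather than compilation, but I may be misunderstanding how torch-bin interacts with the cudaSupport flag, so corrections are welcome.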