I’m new to NixOS and recently set up a PC to develop ML models using PyTorch with CUDA support. I’ve been experimenting with a few nix-shell configurations, but I keep running into issues with long build times, especially with compiling CUDA and other dependencies.
Here are the configurations I’ve tried:
Configuration 1:
let
  # Pin to a specific nixpkgs commit for reproducibility.
  pkgs = import (fetchTarball "https://github.com/NixOS/nixpkgs/archive/24bb1b20a9a57175965c0a9fb9533e00e370c88b.tar.gz") { config.allowUnfree = true; };
in pkgs.mkShell {
  buildInputs = [
    pkgs.python311
    pkgs.python311Packages.torch-bin
    pkgs.python311Packages.unidecode
    pkgs.python311Packages.inflect
    pkgs.python311Packages.librosa
    pkgs.python311Packages.pip
  ];
  shellHook = ''
    echo "You are now using a NIX environment"
    export CUDA_PATH=${pkgs.cudatoolkit}
  '';
}
Configuration 2:
{ pkgs ? import <nixpkgs> {
    config = {
      allowUnfree = true;
      cudaSupport = true;
    };
  } }:
pkgs.mkShell {
  # nativeBuildInputs is usually what you want -- tools you need to run
  nativeBuildInputs = with pkgs.buildPackages; [
    python311
    cudaPackages_12.cudatoolkit
    python311Packages.pytorch-bin
    python311Packages.pip
  ];
  shellHook = ''
    echo "You are now using a NIX environment"
    export CUDA_PATH=${pkgs.cudatoolkit}
  '';
}
Issue: In both cases, I end up with long build times because of the compilation of CUDA, PyTorch, and other dependencies. The builds can take over an hour, which is quite frustrating.
Question: Is there a way to set up a development environment for PyTorch with CUDA on NixOS without having to compile everything from scratch? Are there any pre-built binaries or optimized configurations that I can use to speed up this process? (I have tried torch-bin, as suggested here, but it was still compiling everything in the same way.)
OpenGL is enabled via configuration.nix; the remaining required packages can be installed via pip. I recommend using direnv too. If somebody can point out negatives of this approach, please let me know.
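For reference, the direnv part is tiny: a one-line .envrc in the project root (assuming your shell.nix sits next to it; `use nix` is direnv's standard hook for loading a nix-shell):

```shell
# .envrc -- direnv loads the nix-shell automatically on cd into this directory
use nix
```

After creating the file, run `direnv allow` once to authorize it; from then on the environment is entered and left automatically.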
All the mentioned hacks like LD_LIBRARY_PATH (nix-ld is the safer way to implement that), FHS namespaces, and temporary “patchelfed” shims such as torch-bin are all viable ways to make PyPI software work. Be advised, however, that this is fundamentally different from and less reliable than actually using the “real” nixpkgs, which supports overlays and overrides and attempts to make packages complete and correct by construction.
It’s also maybe worth mentioning that the page was written a long time ago and might (well, it does) contain outdated information.
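Since nix-ld came up: enabling it is just a couple of lines in configuration.nix. A minimal sketch (the library list is illustrative -- add whatever your pip-installed wheels actually link against):

```nix
{ pkgs, ... }:
{
  # Lets unpatched dynamically linked binaries (e.g. pip-installed wheels)
  # find their shared libraries without manual LD_LIBRARY_PATH hacks.
  programs.nix-ld.enable = true;
  programs.nix-ld.libraries = with pkgs; [
    stdenv.cc.cc   # libstdc++
    zlib
  ];
}
```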
Hi, I really like your setup because I prefer managing packages via a venv, but I can’t set the device to the GPU inside PyTorch. Do you configure anything other than OpenGL in configuration.nix? Is there something I could be missing? I’m using this exact flake.
Ah right, here’s the part of my config responsible for it; tbh I don’t remember the source of it.
{ config, lib, pkgs, ... }:
{
  # Enable OpenGL
  hardware.opengl = {
    enable = true;
  };

  # Load nvidia driver for Xorg and Wayland
  # services.xserver.videoDrivers = ["nvidia"];

  hardware.nvidia = {
    # Modesetting is required.
    modesetting.enable = false;

    # Nvidia power management. Experimental, and can cause sleep/suspend to fail.
    # Enable this if you have graphical corruption issues or application crashes after waking
    # up from sleep. This fixes it by saving the entire VRAM memory to /tmp/ instead
    # of just the bare essentials.
    powerManagement.enable = false;

    # Fine-grained power management. Turns off GPU when not in use.
    # Experimental and only works on modern Nvidia GPUs (Turing or newer).
    powerManagement.finegrained = false;

    # Use the NVidia open source kernel module (not to be confused with the
    # independent third-party "nouveau" open source driver).
    # Support is limited to the Turing and later architectures. Full list of
    # supported GPUs is at:
    # https://github.com/NVIDIA/open-gpu-kernel-modules#compatible-gpus
    # Only available from driver 515.43.04+
    # Currently alpha-quality/buggy, so false is currently the recommended setting.
    open = false;

    # Enable the Nvidia settings menu,
    # accessible via `nvidia-settings`.
    nvidiaSettings = true;

    # Optionally, you may need to select the appropriate driver version for your specific GPU.
    package = config.boot.kernelPackages.nvidiaPackages.stable;
  };
}
That should be everything. You may need to tweak a few options depending on your GPU, but that worked for me.
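After rebuilding with that config, a quick sanity check along these lines can be run inside the dev shell (the helper name `cuda_status` is mine, not from any library):

```python
import importlib.util

def cuda_status() -> str:
    """Report whether PyTorch is importable and can see a CUDA device."""
    if importlib.util.find_spec("torch") is None:
        return "torch not installed"
    import torch  # imported lazily so the check also runs outside the shell
    return "cuda available" if torch.cuda.is_available() else "cuda NOT available"

print(cuda_status())
```

Inside the shell it should report "cuda available"; if it reports "cuda NOT available", the driver configuration above is the first place to look.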
Yes, it was that. Thanks a lot, now it’s working perfectly. To be honest I should’ve read the docs for the GPU issue, so thank you very much for your help.