Best practice packaging apps that support both CUDA and ROCm

While submitting https://github.com/NixOS/nixpkgs/pull/482720 I noticed that there are also `pkgsCuda` and `pkgsRocm` package sets that contain variants of all packages with the respective backend enabled.

They are documented in the Nixpkgs manual.

If I understand correctly, I can just use `pkgsCuda.llama-cpp` instead of `llama-cpp-cuda` and get the same result.

There are several examples in nixpkgs that have dedicated `-cuda` and `-rocm` variants, but if I understand correctly they could all be replaced by their `pkgsCuda` or `pkgsRocm` equivalents:

`ollama-cuda` => `pkgsCuda.ollama`

`python3.pkgs.torchWithCuda` => `pkgsCuda.python3.pkgs.torch`
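To make the comparison concrete, here is a rough sketch of how the `pkgsCuda` variants could be consumed from a NixOS configuration. This assumes `pkgsCuda` is exposed at the top level of the nixpkgs instance (as I read the manual); treat the attribute paths as my assumption, not verified API:

```nix
# Hypothetical sketch: use CUDA-enabled variants from the pkgsCuda set
# instead of the dedicated *-cuda packages. Assumes a nixpkgs where
# pkgsCuda is a top-level package set.
{ pkgs, ... }:

{
  environment.systemPackages = [
    pkgs.pkgsCuda.ollama      # instead of pkgs.ollama-cuda
    pkgs.pkgsCuda.llama-cpp   # instead of a hypothetical llama-cpp-cuda
  ];
}
```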

Now my question is: what is the best practice? Does it make sense to create a dedicated `llama-cpp-cuda` package, or should I just use `pkgsCuda.llama-cpp`?

One benefit of having the explicit package is discoverability on https://search.nixos.org, as you won't find a hint about `pkgsCuda` there.


The `pkgs*` sets allow you to pass in the correct hardware variant as a module argument, which I see as a benefit for multi-machine repos.
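A minimal sketch of that idea, assuming `pkgsCuda`/`pkgsRocm` are top-level sets in the nixpkgs instance; the `gpuBackend` setting is made up for illustration, not a real NixOS option:

```nix
# Hypothetical sketch: pick the hardware-matched package set per host
# and hand it to modules via _module.args.
{ pkgs, ... }:

let
  gpuBackend = "cuda"; # set per machine: "cuda", "rocm", or null
  hwPkgs =
    if gpuBackend == "cuda" then pkgs.pkgsCuda
    else if gpuBackend == "rocm" then pkgs.pkgsRocm
    else pkgs;
in
{
  _module.args.hwPkgs = hwPkgs;

  # Modules can then take `hwPkgs` as an argument and refer to
  # hwPkgs.llama-cpp, which resolves to the right backend on each host.
}
```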