By the way, I believe the `cudatoolkit` derivation could also be reworked to make accidental redistribution less likely:
- Currently `cudatoolkit` puts `.so` and `.a` files in the same output. We could move the `.a` files into a separate `"static"` output, to ensure that e.g. pytorch doesn't, for whatever reason, end up linking a piece of CUDA statically.
- There are the `preferLocalBuild = true` and `allowSubstitutes = false` options, which, as I understand it, tell Nix not to look for a built derivation in a remote cache.
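As a rough sketch of both ideas (the attribute placement and the exact `outputs` list here are my assumptions, not the actual cudatoolkit expression):

```nix
stdenv.mkDerivation {
  pname = "cudatoolkit";
  # ... existing attributes elided ...

  # Hypothetical: split the static .a libraries into their own output,
  # so nothing links them by accident
  outputs = [ "out" "static" ];

  # Build locally and never substitute this non-redistributable
  # package from a binary cache
  preferLocalBuild = true;
  allowSubstitutes = false;
}
```

Consumers would then have to reference `cudatoolkit.static` explicitly to get the static libraries.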
(I’d open an issue in nixos/nixpkgs right away, but I haven’t done enough research yet.)