Review of #333575

Sorry, I have to start a new thread for this because Discourse won’t let me post more than two consecutive messages in any other thread.

From that PR,

useGccForLibs really ought to be called “take libgcc_s.so from gccForLibs instead of expecting to find it in stdenv.cc”. This happens when you use clang on a glibc system like NixOS, because glibc expects libgcc_s (even when you aren’t using gcc to build glibc). Unfortunately since these argument names are exposed to code outside of nixpkgs, picking more accurate names for them is a breaking change, which is a headache. The name was chosen before I started working on cc-wrapper and I never had the energy to orchestrate a migration to a better name (with lib.warn for a release cycle, etc).
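To make the naming concrete, here is a minimal sketch of what that argument pair is for. `useGccForLibs` and `gccForLibs` are the real cc-wrapper argument names discussed above; the assumption that `wrapCCWith` forwards them unchanged to cc-wrapper, and the particular packages used, are illustrative only, not the actual NixOS clang-stdenv plumbing.

```nix
# Minimal sketch, assuming wrapCCWith forwards these attributes unchanged to
# cc-wrapper: a clang wrapper that takes libgcc_s.so from a gcc package
# instead of expecting stdenv.cc to provide it.
{ pkgs }:

pkgs.wrapCCWith {
  cc = pkgs.llvmPackages.clang-unwrapped;
  useGccForLibs = true;     # "take libgcc_s.so from gccForLibs ..."
  gccForLibs = pkgs.gcc.cc; # ... instead of expecting it in stdenv.cc
}
```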

Anyways, it looks like, over the past six months, this mechanism has been (ab?)used for several other things, like stdenvAdapters.useLibsFrom, which is a complex beast that seems to be used in only two places, both for CUDA.

I’d like to reiterate that the purpose of stdenv.cc is to compile C/C++ programs that run on a CPU.

We don’t expect stdenv.cc to synthesize Verilog to run on FPGAs (that’s what yosys is for). Can you imagine the explosion of complexity if we tried to make stdenv.cc handle Verilog synthesis for FPGAs? I think trying to use stdenv.cc to compile code that runs on a GPU is like that. Personally, I don’t think we should be allowing a compiler for non-CPU targets (especially one which compiles a single-vendor proprietary language!) to impose these complexity burdens on stdenv and cc-wrapper, which are very high-traffic areas. The CUDA compiler should be its own compiler package, separate from stdenv.cc, just like rustc and ghc are.

But frankly, nowadays I’m more interested in the stdenv2 nixskell reboot. A real type system, build/host chiral separation so native builds break whenever cross builds break, no more splicing, checkPhase in a separate derivation, multiversion packages.

So, realistically, nixpkgs is a do-ocracy; if you’re willing to take responsibility for that complexity burden – and you’re sure you know what you’re getting into – then it’s your call to make.


Thanks for the reply

I see. I’ll add a docstring explaining the origins of the argument.

:+1: That’s how the cuda subsystem uses cc-wrapper: it’s used for compiling the host part of the program, not the gpu kernels. However, nvcc is particular about which versions of the host compiler it can speak to, which is why we sometimes use gcc11/12 even when the nixpkgs default is gcc13, and why we have to jump through hoops with libstdc++.
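For illustration, a hedged sketch of the kind of pinning this forces on us. `cudaPackages.cuda_nvcc`, `gcc12`, nvcc’s `-ccbin` flag, and the `NVCC_PREPEND_FLAGS` environment variable all exist, but the way they are combined into a derivation here is an assumption, not the actual cudaPackages plumbing:

```nix
# Hedged sketch (not the real cudaPackages wiring): pin nvcc's host compiler
# to gcc 12 even though the default stdenv may ship a newer gcc.
{ stdenv, gcc12, cudaPackages }:

stdenv.mkDerivation {
  pname = "my-cuda-program";   # hypothetical package
  version = "0.1";
  src = ./.;

  nativeBuildInputs = [ cudaPackages.cuda_nvcc ];

  # nvcc only accepts a narrow range of host-compiler versions, so point it
  # at g++ 12 explicitly instead of whatever stdenv.cc happens to be.
  NVCC_PREPEND_FLAGS = "-ccbin ${gcc12}/bin/g++";
}
```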

Agreed, but I do think that “don’t take libXXXX from stdenv.cc, take it from elsewhere” is in fact a more general need than just the cuda stuff. E.g. I’d like to one day be able to close “Build against an older libc, keep the newest in the runtime closure” (Issue #303405 · NixOS/nixpkgs · GitHub), unless there’s a better path for wider libc compatibility, and I think that would take significantly finer-grained (and so possibly more complex) logic in cc-wrapper.
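To illustrate why that needs finer-grained logic: the coarse knobs available today only let you swap the libc wholesale. A hedged sketch, assuming the cc-wrapper and bintools-wrapper `libc` arguments can simply be overridden (check before relying on it), with `olderGlibc` as a hypothetical pinned glibc; the linked issue wants something subtler than this, which is exactly the point.

```nix
# Hedged sketch: build everything against an older glibc by swapping the
# libc on both the compiler wrapper and the bintools wrapper.
{ pkgs, olderGlibc }:   # `olderGlibc` is a hypothetical pinned glibc package

let
  ccWithOldLibc = pkgs.stdenv.cc.override {
    libc = olderGlibc;
    bintools = pkgs.stdenv.cc.bintools.override { libc = olderGlibc; };
  };
in
pkgs.overrideCC pkgs.stdenv ccWithOldLibc
```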

First time I’ve heard of “stdenv2”, but the goals sound very relatable.

:+1:

But frankly, nowadays I’m more interested in the stdenv2 nixskell reboot. A real type system, build/host chiral separation so native builds break whenever cross builds break, no more splicing, checkPhase in a separate derivation, multiversion packages.

@amjoseph stdenv2 sounds interesting! Where is that work?

The multiversion packages grabbed my attention! Can you show us something, @amjoseph?

From Implicit attribute defaults/overrides inside package definition · Issue #273534 · NixOS/nixpkgs · GitHub

Hrm, the links to the proof of concept broke when I nuked my GitHub account.

I’ve pushed the proof-of-concept here to keep it available:


Yes, but that “host part” isn’t C/C++. If it were, we could simply compile it using gcc or clang.

I’d also like to point out that rocmPackages works great (I use it to run llama.cpp on my Vegas) and required zero changes to stdenv. Why can’t CUDA do the same thing?
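For comparison, consuming ROCm today is roughly this simple. The `rocmSupport` flag on the nixpkgs `llama-cpp` package is believed to exist, but treat the exact spelling as an assumption and check the package definition:

```nix
# Hedged sketch: ROCm-accelerated llama.cpp, no stdenv changes involved.
{ pkgs }:
pkgs.llama-cpp.override { rocmSupport = true; }
```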