They typically use a module system like Lmod, which lets you switch between different software environments. Many commonly used scientific packages are provided through the module system, as well as newer Python versions, etc.
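For anyone who hasn't used one: a typical Lmod session looks something like this (the module names and versions here are made up for illustration; what's actually available depends on the cluster):

```shell
module avail                      # list the software the cluster provides
module load gcc/12.2 cuda/11.8    # swap the environment to these toolchains
module list                       # show what is currently loaded
module purge                      # unload everything, back to a clean slate
```

Loading a module mostly just prepends the right directories to `PATH`, `LD_LIBRARY_PATH`, and friends, which is why it composes so poorly compared to something like Nix.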
Rust is easy ;): cluster nodes typically have a shared home directory, so you can just install Rust with rustup as is common on non-NixOS systems (and you only need to install it on one system/node anyway, since Rust binaries are usually easy to deploy). At any rate, I don’t think Rust has much uptake in HPC/science yet.
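Concretely, this works without root because everything lands in your home directory (this is the standard rustup install command; the build/copy step at the end is just a sketch of the workflow):

```shell
# Official rustup installer; puts the toolchain in ~/.rustup and ~/.cargo,
# which every node sees through the shared home directory.
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh

# Pick up cargo/rustc in the current shell.
source "$HOME/.cargo/env"

# Build once on a login node, then the binary is visible cluster-wide
# (or copy it elsewhere; Rust binaries have few runtime dependencies).
cargo build --release
```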
The module system is very clunky, but at least it offers MKL, CUDA, the Intel compilers, etc., plus libraries compiled against them, out of the box.
I left academia 6 months ago. But when we ran something on an HPC cluster or grid, we would just build the software on an old CentOS version (whatever version was supported). On larger clusters, like the European E-Science Grid, you can write a job specification in which you pin a CentOS/Scientific Linux version, and query how many CPUs are available running that OS version. Luckily, the last group I worked in had the funding to buy large machines of our own, so I was spared the old CentOS versions and could install Nix.
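From memory, such a job specification looked roughly like the gLite-era JDL below. Treat the attribute names as illustrative: `run_analysis.sh` is a made-up payload, and the exact GLUE schema attribute names for matching the OS varied across middleware versions.

```
# Sketch of a gLite-style JDL job description (illustrative, not exact).
Executable    = "run_analysis.sh";
StdOutput     = "job.out";
StdError      = "job.err";
InputSandbox  = {"run_analysis.sh"};
OutputSandbox = {"job.out", "job.err"};
# Only match worker nodes running the OS we built against:
Requirements  = other.GlueHostOperatingSystemName == "ScientificLinux";
```

The `Requirements` expression is the part I was referring to: the broker matches your job only against sites advertising that OS, which is also what you query to see how much capacity is left for it.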
At any rate, HPC clusters mostly run Linux as it looked 5-10 years ago, and most people just accept that. For some reason, those maintaining clusters are very conservative. I guess at least stuff doesn’t break very often.
I agree though that Nix would be a much better choice, if we improved our scientific software story.