I usually use flake-based remote OS deployment, and it’s rather common that even small changes (such as adding or removing a single package) make the build take a very long time (minutes). Most often the longest build steps are building `man-cache` and building `steam-fhs-run`, with the former being much more prevalent.
Note that I mostly use deploy-rs, but `nixos-rebuild build --flake` behaves the same in this respect.
Interestingly enough, these build steps look rather out of place for some changes which have nothing to do with Steam (or FHS) and don’t add any man pages.
Honestly, I don’t even know where to start debugging, so any tips in that direction are welcome.
Do you have fish? By default it enables `documentation.man.generateCaches` (because some completions assume a working `man -k`). The problem is that it’s unbearably slow and has to be rebuilt on every change to `environment.systemPackages` and the like.
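If that turns out to be the cause, you can turn the cache generation off explicitly. A minimal sketch, assuming the fish module only sets the option as a default (so a plain assignment should win; try `lib.mkForce` if it doesn’t):

```nix
{
  # Disable the slow man-db cache generation that fish enables by default.
  # Trade-off: `man -k` / `apropos` and the fish completions that rely on
  # the cache will stop working.
  documentation.man.generateCaches = false;
}
```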
First you need to figure out what exactly it’s waiting on, because `steam-fhs-run` is a trivial builder that basically just writes a pre-determined file; I doubt it takes any longer than a few milliseconds.
First, figure out which exact derivations are taking so long to build and what they do internally. Nix should print their respective `.drv` paths, and you can introspect those using `nix derivation show`.
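For example (the `.drv` path here is hypothetical; substitute the ones Nix prints during your build):

```
$ nix derivation show /nix/store/<hash>-man-cache.drv
$ nix log /nix/store/<hash>-man-cache.drv   # past build log, if one exists
```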
(Re-)build those derivations in isolation to confirm they actually take a long time to build, rather than finishing instantly while something else in the build graph takes a long time without printing anything. Speaking of which, to figure out whether it’s the latter case: does it print anything after the long wait?
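A sketch of how you could do that, again with a hypothetical `.drv` path. `--rebuild` forces the build even if the output already exists, which gives you an honest timing (the `^*` output selector needs a reasonably recent Nix):

```
$ time nix build --rebuild --print-build-logs '/nix/store/<hash>-man-cache.drv^*'
```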
The next thing I’d do is monitor system resource usage during the abnormally long wait to figure out which resource is the bottleneck: CPU, disk I/O, or network I/O.
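For example, in a second terminal while the deployment is running (common tools, assuming they’re installed; plenty of alternatives exist):

```
$ htop            # per-process CPU and memory
$ sudo iotop -o   # processes currently doing disk I/O
$ nload           # network throughput per interface
```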
That’s kind of my fault as I added this feature. Personally I would have left it off, but IIRC another fish maintainer insisted on having it by default.