The estimated download size and the number of packages you would have to compile yourself are only known after the binary caches have been queried, so your dry-run operation would still have to query them. Otherwise it looks reasonable.
Note that --dry-run does what you want after querying the binary cache:
After modifying nix-du in nixpkgs:
$ nix-build -A nix-du --dry-run
this derivation will be built:
/nix/store/5p7jss6d6awa8c9syfflsz16vwnizy98-nix-du-1.0.0.drv
these 51 paths will be fetched (153.85 MiB download, 1125.68 MiB unpacked):
[...]
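The flakes-style nix build accepts the same flag, so a rough equivalent (assuming the package you care about is reachable from your flake; nixpkgs#nix-du here is only an illustrative reference) would be:

$ nix build nixpkgs#nix-du --dry-run

I have not checked whether nix shell or nix run accept --dry-run as well, so treat that part as something to try rather than a documented guarantee.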
Ah, so the old nix-build and nix-env already have something like it. I still cannot find the equivalent for nix shell, nix run or nixos-rebuild. I use flakes for all my projects these days, and I don’t install things in my profile; everything is either system-wide or in a shell. I don’t remember the last time I used nix-build or nix-env.
Network queries or metadata downloads are completely fine. The only thing I dislike is when I run nix shell or nixos-rebuild switch, come back a couple of minutes later to see how it’s doing, and it turns out it’s compiling something big that I didn’t expect. I have no idea why, or what else it’s going to build. In that case I’d often like to stop and investigate: what is the exact build/fetch plan, which caches are used, and if a lot of things suddenly need to be built instead of fetched, why?
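A sketch of the kind of upfront plan I mean, assuming a flake-based configuration at /etc/nixos whose nixosConfigurations attribute matches the hostname (both of those are just my layout, not a general rule):

# ask nixos-rebuild for the plan without building or switching anything
$ nixos-rebuild dry-build --flake /etc/nixos

# or dry-run the system closure directly with the flakes CLI
$ nix build /etc/nixos#nixosConfigurations.$(hostname).config.system.build.toplevel --dry-run

I have not checked how much of the fetch-versus-build split these report compared to the old nix-build output above, but that is the kind of plan I would want to see before anything starts compiling.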