My system configurations are stored in a git repository, but I use copies of those files inside /etc/nixos to actually manage/build those systems.
I’m currently trying to rebuild a server while bisecting a possible nixpkgs issue, and since these are large rebuilds I do the build on another server.
Before running nixos-rebuild switch on the remote, have you made sure that you are on the same commit of the flake on both hosts, and that the tree is not dirty?
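For reference, a quick way to check both things from the flake directory (sketch only, assumes a git-based flake and a reasonably recent nix):

```shell
# Prints the resolved flake URL and the locked revision; a dirty
# checkout is marked with a "-dirty" suffix on the revision.
nix flake metadata

# Empty output means the working tree is clean.
git status --porcelain
```

If the revision lines match on both hosts and neither tree is dirty, they should be evaluating the same source.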
And why are you not using nixos-rebuild switch --target-host?
You can additionally use --build-host to build on a different host. Evaluation is still done on the host you actually start nixos-rebuild on. I do this to develop on my laptop, build on a server and deploy to e.g. a Raspberry Pi.
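As a rough sketch of that workflow (the host names and flake attribute here are made-up placeholders):

```shell
# Evaluate on the laptop, build on the server, deploy to the Pi.
# builder.example.com and pi.example.com are example hosts.
nixos-rebuild switch \
  --flake .#raspi \
  --build-host builder.example.com \
  --target-host pi.example.com \
  --use-remote-sudo
```

This requires SSH access from the evaluating machine to both the build host and the target host.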
The changes to my git repo were not committed, so the repo was definitely dirty (but all the files already existed).
But I would expect that to affect at most the metadata of the build and cause some small rebuilds of static files. Instead it started to rebuild some 600 packages, starting with LLVM as a huge dependency, and I don’t see why that would be necessary for a configuration that has identical .nix files, just in a different location. The question was born more of theoretical interest than anything else.
--target-host could indeed have worked, but not --build-host because I can ssh into the target host from the build host but not the other way around.
I still don’t understand why “they saw different things” happens if the contents of the files are the same at the file-content level (i.e. diff reports no differences at all). Especially after eval I don’t see what kind of differences could still be inferred, and that’s what I’m curious about.
The dirty git repo is the source that didn’t cause any rebuilds, building from /etc/nixos did.
Share what exactly you built. Full expression, no excuses. And for any non-flake related build, share exact environment with all the possible impurities that might be considered.
Thanks for the response, but I’m not comfortable putting my system configs online.
I was hoping for a more high-level theoretical answer to the question why nix cares about the location of my files, but I’m not curious enough to put more effort into this (or to let others put effort into digging into this particular case). There are no non-flake builds or old school nix commands involved.
And as I just remembered from a discussion I had yesterday in another place: nixos-rebuild seems to move the runtime closure over and then re-evaluate for activation on the receiving end.
It could be that you have something that causes IFD, and therefore some parts required for evaluation have to be built, even though the runtime closure is actually completely available; nix can’t verify that without first building the parts that are missing for evaluation.
If you still get a build with IFD forbidden, that proves I am wrong. If instead you get an evaluation error saying nix can’t proceed due to IFD, it is a strong hint that I am on the right track.
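For anyone unfamiliar with the term, IFD (import-from-derivation) means importing a Nix expression that is itself a build product, so evaluation cannot finish until that derivation has been built. A minimal illustrative example (not taken from the configuration discussed here):

```nix
# The imported file is a build output, so evaluating `answer`
# forces a build of the "generated.nix" derivation first.
# Fails to evaluate when allow-import-from-derivation = false.
let
  pkgs = import <nixpkgs> { };
  generated = pkgs.runCommand "generated.nix" { } ''
    echo '{ answer = 42; }' > $out
  '';
in
(import generated).answer
```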
Thanks - I was pretty sure I wasn’t using IFD but I’ve taken the opportunity to set allow-import-from-derivation = false; in my shared config. All my machines still build fine.
The two locations are independent from each other in terms of links and such; I have a script that copies the files around as needed.
And diff <git repo>/nixos/configurations/<server name>/ /etc/nixos returns 0.
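One caveat worth noting: diff only compares file contents, while Nix hashes the whole NAR serialization of a source tree, which also covers executable bits and symlink targets. A way to rule those out (assuming both trees are complete copies) would be to compare the NAR hashes directly:

```shell
# Identical hashes mean the two trees are byte-for-byte identical
# from Nix's point of view, including permissions and symlinks.
nix hash path <git repo>/nixos/configurations/<server name>/
nix hash path /etc/nixos
```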
My reason for having this setup is that I don’t like having things like configuration related to my local network present on remote servers, and with this setup I can just copy a subset of files and none of the git history to deploy a new configuration. I’m sure there are better ways and tools to do this by now, but it’s my own legacy setup and it works for me. (Again: I’m not looking for a “solution” anymore. I don’t expect remote rebuilding a complete system for a bisection to be a regular occurrence, and I have a workaround; it’s just a theoretical interest.)