Incremental updates

Right now, updating my nixpkgs (nix flake update nixpkgs) is such a big leap that it can end up failing for various reasons, leaving me (by the logic of atomic changes) back where I started.

It would be nice to update incrementally instead, e.g. while I’m off to sleep, have it try to advance my nixpkgs by one day or a thousand commits at a time, until failure (bonus points for retrying).

Has anyone figured out a way to achieve this more easily?

P.S.:

To update to a specific revision, there seems to be e.g.:

nix flake update --override-input nixpkgs github:NixOS/nixpkgs/<rev>

Finding commits by time appears possible as well, using e.g.:

git log --since='2 weeks ago'
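
Putting those two together, something like this rough, untested sketch might work (the paths, the branch, and the step size are all assumptions on my part):

# untested sketch: step nixpkgs forward one week at a time until a build fails
# assumes the system flake lives in /etc/nixos and a nixpkgs checkout in ~/src/nixpkgs
cd /etc/nixos
for days in 21 14 7 0; do
  rev=$(git -C ~/src/nixpkgs rev-list -1 --before="$days days ago" origin/master)
  nix flake update --override-input nixpkgs "github:NixOS/nixpkgs/$rev"
  nixos-rebuild build --flake . || break
done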

You can use a local git checkout of nixpkgs and run git bisect while building your flake with --override-input pointing at the local checkout. (You don’t need to run nix flake anything for this idea; nix build also accepts this flag inline.) This will reduce some of the network burden and some of the conceptual burden, especially if you have a suspicion about which topic/area is generating your new errors.
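
A minimal sketch of that workflow (the paths, hostname, and revisions below are placeholders):

git clone https://github.com/NixOS/nixpkgs ~/src/nixpkgs
cd ~/src/nixpkgs
git bisect start <bad-rev> <good-rev>
# at each step that git bisect checks out, build against the local tree:
nix build /etc/nixos#nixosConfigurations.<host>.config.system.build.toplevel \
  --override-input nixpkgs "git+file://$HOME/src/nixpkgs"
git bisect good  # or: git bisect bad, depending on the result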

This is a good suggestion, and to help make it easier, there’s:

https://git.qyliss.net/hydrasect

Hm, I feel like the more common error here might be network-related. In that case, bisecting might not be the best fix: it still jumps to commits so far away that the downloads might not make it.

I’m a little confused, and I think it would be helpful to better understand your concerns and situation: how far behind is your flake lock? Is it a development project or a system (NixOS) flake? What kind of failures are you anticipating?

Updating the flake lock itself is a small operation: it will fetch some metadata and update a revision. That will then cause a new copy of the input at that revision to be fetched (a ~40 MB tarball).

Then you start to build, and hopefully most of what you need is cached.
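
Concretely, the cheap and the expensive parts are separate steps, something like this (the hostname is a placeholder):

nix flake update nixpkgs  # cheap: rewrites flake.lock and fetches one tarball
nix build .#nixosConfigurations.<host>.config.system.build.toplevel  # the actual work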

I had assumed from the original question that you were concerned about breaking changes: new versions of libraries you use, etc., that you’d have to adapt to along the way. So bisecting is kind of a way of speed-running the various breakages, updates, and adaptations you might otherwise have encountered progressively, without having to deal with them all at once.

However, if you’re instead concerned about download failures because of flaky / slow internet, then bisecting revisions is not what you want: that will likely try to download much more, because intermediate revisions will have rebuilt packages many times over. You want to update the input to the desired target revision, and then just let nix retry downloading in the case of failures.

It will retry a number of times by default, and there are some settings you can use to tweak that (notably, connect-timeout, download-attempts, maybe http-connections and keep-going).
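
For a one-off build, you can also pass these on the command line (the same names work in nix.conf; the values here are arbitrary examples, and the hostname is a placeholder):

nix build .#nixosConfigurations.<host>.config.system.build.toplevel \
  --option connect-timeout 5 \
  --option download-attempts 10 \
  --option http-connections 50 \
  --keep-going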

But most importantly, if it does fail, no harm is done: you can just retry the build, and it will use what was fetched last time and try again to download the rest. Just make sure to disable automated store garbage collection in the meantime, and you can keep fetching for days (if you have to) until everything is downloaded or built.

The atomic changes happen when switching once the build is complete.

@uep
My current case is a system flake, now behind on master by maybe a month or so.
The lock update indeed is fine; the build seems to be the bottleneck.
The issues there seem to be a mix of network-related and structural problems.

Thanks for the tips about retry options and garbage collection!
I now use these settings:

  nix.settings.auto-optimise-store = true;
  nix.gc = {
    automatic = true;
    persistent = true;
    options = "--delete-older-than 21d";
  };

I’ll admit I’m not sure whether that would already clear out the packages built by a recent build attempt.

Yes, it will: if the build failed, there’s no system generation or result link pointing at the top-level closure as a GC root.

If you leave a build running overnight, GC won’t remove parts of a build in progress, but if the build fails before GC runs, then it will collect them. And because you have persistent set, the timer will run on resume in the morning if the machine was suspended overnight.

The “older than 21d” applies to the system generation links (old GC roots), which are removed before the actual GC runs; it does not refer to the age of individual items in the store.
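
If I remember the module right, with those options the automatic service effectively runs something like:

nix-collect-garbage --delete-older-than 21d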

I’m not sure if I’m reading the wrong thing into this comment, but just in case: are you actually following master, rather than nixos-unstable or even nixos-unstable-small, as your flake input? Because if so, that is very likely part of your issue: you’ll be building locally a bunch of things that Hydra hasn’t had a chance to cache yet. And there’s a slightly higher likelihood that you happened to pick a revision that just doesn’t build.

If you really want to be following master, then you probably want to follow it very closely, for development or other reasons. If you fall behind like this, you can catch up quickly via -unstable using cached builds, then take a shorter step forward from there.

-unstable follows master after a short delay (typically a day or three), during which builds are done and added to the cache, and it only moves when sufficient tests have passed. Picking one of those revisions will make upgrades a lot easier.
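
For example, re-pointing the lock at that branch uses the same override mechanism as earlier in the thread:

nix flake update --override-input nixpkgs github:NixOS/nixpkgs/nixos-unstable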

@uep That was pretty insightful, thank you for the extensive feedback! :pray:
