Special thanks to @bhipple, who helped update nixpkgs-review to support this integration. Thanks to the nix-community for their hosting of nixpkgs-update.
I’ve quickly skimmed the implementation and didn’t find anything about time limits. So if nixpkgs-update updates a reverse-dependency of chromium, will it rebuild chromium?
That’s great! If there were an “all checks passed” GitHub label, merging the majority of r-ryantm PRs could be done with a couple of seconds of sanity checking.
A bit more: loading the upstream changelog after clicking the link provided in the description takes some time, too.
Possibly. We’d have to look into investigating the status on hydra, I think.
I do not think that we are pushing the nixpkgs-review builds into the cache, and I’m not sure our Cachix cache could handle that kind of disk space usage either. I agree that it would be nice to access these builds for testing purposes, so maybe we should look into alternatives.
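For context, pushing a finished nixpkgs-review build into a Cachix cache would be mechanically simple. A sketch, assuming a hypothetical cache named `nixpkgs-review-builds` (not an existing cache) and a completed review run:

```shell
# After a review run, the built outputs live under ./result* symlinks
# in the nixpkgs-review working directory. Their full closures can be
# pushed to Cachix by piping store paths on stdin:
nix-store --query --requisites ./result* | cachix push nixpkgs-review-builds
```

The open question is not the mechanics but the disk space the accumulated closures would consume.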
Are you a trusted user on your system? If you are not a trusted user, you cannot add trusted-public-keys.
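To illustrate: `substituters` and `trusted-public-keys` are restricted settings, so a non-trusted user’s per-user config is ignored for them; they have to go into the system-wide config (or the user must be listed in `trusted-users`). A sketch of `/etc/nix/nix.conf`, with a hypothetical cache and placeholder key:

```
# /etc/nix/nix.conf — only root or users listed in trusted-users
# may effectively set these options
trusted-users = root alice
substituters = https://cache.nixos.org https://example.cachix.org
trusted-public-keys = cache.nixos.org-1:6NCHdD59X431o0gWypbMrAURkbJ16ZPMQFGspcDShjY= example.cachix.org-1:<public-key-here>
```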
This seems quite important to me. If the machine just spent 2 hours building Blender, I should not have to spend the same 2 hours again to check whether the GUI works – that’s one of the biggest benefits of CI running nixpkgs-review!
It’s probably fine to limit the life-time of those things on Cachix, as they are used mainly for review purposes. For example, a reasonable base assumption would be that after 3 months, we don’t need that output any more. (Even better but more involved, 1 month after merging we don’t need it any more, because likely Hydra has already built it, or there was a problem and somebody investigates it within a month.)
I see two good and relatively simple approaches:

1. Rely on Cachix’s “once reached, least recently used entries will be garbage collected” feature – maybe @domenkozar could comment on how well this would work in this case?
2. Expose your build machine itself as an HTTPS binary cache, then we can download those builds directly from there. Add a GC root for every build, and a cron job running nix-collect-garbage --delete-older-than 90d --delete-old. Add some 10 TB drives for € 19.50/month (I’m happy to chip in if needed); that will probably last for a while.
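A sketch of the second approach, with hypothetical paths (the root directory and store path are assumptions, not an existing setup):

```shell
# Keep each reviewed build alive with an explicit GC root
# (creates a symlink that the garbage collector treats as a root):
nix-store --realise /nix/store/<hash>-<name> \
  --add-root /var/nix-review-roots/pr-12345

# Cron entry: collect anything unreferenced for 90 days, as above:
# 0 3 * * *  nix-collect-garbage --delete-older-than 90d

# Serve the local store as a binary cache, e.g. with nix-serve
# behind an HTTPS-terminating reverse proxy:
nix-serve --port 8080
```

Pruning the GC roots themselves (e.g. deleting root symlinks older than 90 days before the collection runs) would also be needed, otherwise the rooted paths never become collectable.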
Then the nix-store -r invocation for libcalqulate itself shouldn’t work either, right? But it does.
I think an LRU cache would work well; it’s just a matter of finding out how much storage would be needed. I’m fine with just giving it a go, and I can run some statistics in a month or two.