Nixpkgs-update runs nixpkgs-review on every PR

As of today, nixpkgs-update/r-ryantm now runs nixpkgs-review on every single update and posts the results in the PR message body.

Here’s an example PR:

Special thanks to @bhipple, who helped update nixpkgs-review to support this integration. Thanks to the nix-community for their hosting of nixpkgs-update.

29 Likes

That is amazing, I wish I could click “like” twice. What kind of hardware does this run on? What are the resource / runtime limitations?

4 Likes

For the nix-community build box we’re using a Hetzner AX60-SSD instance which is roughly:

  • AMD Ryzen 7 1700X (8-core)
  • 64G RAM
  • 2x500G SSD (mirrored)

You can look at how it’s set up here: https://github.com/nix-community/infra.

6 Likes

I’ve quickly skimmed the implementation and didn’t find anything about time limits. So if nixpkgs-update updates a reverse-dependency of chromium, will it rebuild chromium?

1 Like

Currently, yes. We might have to add some limits as we hit issues.

That’s great! If there were an “all checks passed” GitHub label, merging the majority of r-ryantm PRs could be done with a couple of seconds of sanity checking.

1 Like

> That’s great! If there were an “all checks passed” GitHub label, merging the majority of r-ryantm PRs could be done with a couple of seconds of sanity checking.

A bit more than that: loading the upstream changelog via the link provided in the description takes some time, too.

I also offered to add my upcoming new server build as a remote build machine, it will have:

  • AMD Threadripper 3990X (64 cores / 128 threads)
  • 256 GB RAM
  • 16 TB ZFS raidz + 1 TB L2ARC SSD
  • custom water cooling loop (for thermals and longevity)

My home network bandwidth is pretty… meh; I’m hoping that caching derivation inputs will minimize the need to hit the NixOS CDN.
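For reference, wiring such a machine in as a remote builder happens in the coordinator’s nix.conf; the hostname, user, and key path below are placeholders, and `builders-use-substitutes` lets the remote fetch inputs from the public caches itself, which helps with the bandwidth concern:

```
# /etc/nix/nix.conf on the machine driving the builds (hypothetical names/paths)
# format: ssh://user@host  system  ssh-key  max-jobs  speed-factor  supported-features
builders = ssh://builder@big-builder.example.org x86_64-linux /root/.ssh/id_builder 64 1 big-parallel,kvm
builders-use-substitutes = true
```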

5 Likes

@ryantm In the section

245 packages failed to build:

could the bot indicate which of those are already failing in the base branch?

(And write, e.g., “(also failing in staging)” after each entry?)

1 Like

@ryantm How do I download one of the built dependent packages?

E.g. for this PR how do I download the blender it built?

I tried

nix-build -A blender https://github.com/r-ryantm/nixpkgs/archive/cb856cb6d1dcc1382c7e749f14d0029295d90a08.tar.gz \
  --option binary-caches 'https://cache.nixos.org/ https://r-ryantm.cachix.org/' \
  --option trusted-public-keys '
  r-ryantm.cachix.org-1:gkUbLkouDAyvBdpBX0JOdIiD2/DP1ldF3Z3Y6Gqcc4c=
  cache.nixos.org-1:6NCHdD59X431o0gWypbMrAURkbJ16ZPMQFGspcDShjY=
  '

but it downloaded only the updated library from the PR, not the blender dependent built on top of it.

1 Like

Similarly, in this PR, if I run

nix-build -A nasc https://github.com/r-ryantm/nixpkgs/archive/2078f2a0ee7d4712ddcc723daf5662cf9f039403.tar.gz \
  --option binary-caches 'https://cache.nixos.org/ https://r-ryantm.cachix.org/' \
  --option trusted-public-keys '
  r-ryantm.cachix.org-1:gkUbLkouDAyvBdpBX0JOdIiD2/DP1ldF3Z3Y6Gqcc4c=
  cache.nixos.org-1:6NCHdD59X431o0gWypbMrAURkbJ16ZPMQFGspcDShjY=
  '

it builds libqalculate instead of fetching, even though it’s the same output path. Does that need some extra flags?
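For context on what Nix checks before deciding to build: it queries the cache for a .narinfo file keyed on the store-path hash, so one way to see whether an output is really in the cache is to construct that URL yourself. A small sketch (the helper name, store path, and hash below are made-up examples):

```shell
# Derive the .narinfo URL a binary cache would be queried at for a store path.
# (Hypothetical helper; the store path in the example is not a real hash.)
narinfo_url() {
  local store_path=$1 cache=$2
  local base=${store_path#/nix/store/}  # drop the /nix/store/ prefix
  local hash=${base%%-*}                # the hash is everything before the first dash
  printf '%s/%s.narinfo\n' "$cache" "$hash"
}

narinfo_url /nix/store/abcdefghijklmnopqrstuvwxyz012345-libqalculate-3.12.0 \
  https://r-ryantm.cachix.org
# prints https://r-ryantm.cachix.org/abcdefghijklmnopqrstuvwxyz012345.narinfo
```

Fetching that URL with curl and getting a 200 means the path is substitutable from that cache; a 404 means Nix will fall back to building locally.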

Possibly. We’d have to investigate the status on Hydra, I think.

I do not think we are pushing the nixpkgs-review builds into the cache. I’m not sure our Cachix cache can handle that kind of disk space usage either. I agree that it would be nice to access these builds for testing purposes, so maybe we should look into alternatives.

Are you a trusted user on your system? If you are not a trusted user, you cannot add trusted-public-keys.
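If that is the problem, the fix is a one-line nix.conf change made by root; a minimal sketch (the username is a placeholder):

```
# /etc/nix/nix.conf — only users listed here may pass extra
# substituters and trusted-public-keys on the command line
trusted-users = root alice
```

After editing, restart the nix-daemon for the change to take effect.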

1 Like

Hydra doesn’t evaluate staging.

But related to this would be my open issues on building the packages on the base branch, and creating a build diff https://github.com/Mic92/nixpkgs-review/issues/85

It’s on my todo list :frowning:

2 Likes

This seems quite important to me. If the machine just spent 2 hours building Blender, I should not have to spend the same 2 hours again to check if the GUI works – that’s one of the biggest benefits of CI running nixpkgs-review!

It’s probably fine to limit the lifetime of those things on Cachix, as they are used mainly for review purposes. For example, a reasonable base assumption would be that after 3 months, we don’t need that output any more. (Even better but more involved: 1 month after merging we don’t need it any more, because likely Hydra has already built it, or there was a problem and somebody investigated it within that month.)

I see 2 good and relatively simple approaches:

  • Rely on Cachix’s

    once reached, least recently used entries will be garbage collected

    feature – maybe @domenkozar could comment how well this would work in this case?

  • Expose your build machine itself as an HTTPS binary cache, then we can download those directly from there. Add a GC root for every build, and a cron job running nix-collect-garbage --delete-older-than 90d. Add some 10 TB drives for € 19.50/month (I’m happy to chip in if needed); that will probably last for a while.
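The cron half of the second option could be as small as the fragment below; the path and schedule are just an example, and it assumes the per-build GC roots are registered as profile generations so `nix-collect-garbage` can age them out:

```
# /etc/cron.d/nix-gc (hypothetical): every Sunday at 03:00, drop
# generations older than 90 days, then garbage-collect the store
0 3 * * 0  root  nix-collect-garbage --delete-older-than 90d
```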

Then the nix-store -r invocation for libqalculate itself shouldn’t work either, right? But it does.

Alternative (but more expensive): Try to build them from the target branch.

Much cheaper alternative: Tell us whether they already fail on master or nixos-unstable.

I’m happy to sponsor storage for this.

I think the LRU cache would work well; it’s just a matter of finding out how much storage would be needed. I’m fine with just giving it a go, and I can run some statistics in a month or two.

12 Likes