Best practice to automatically build all installed packages and put them into a binary cache

Hey there,
I am new to NixOS and am now trying a few things out in a VM before using NixOS on my daily systems.
I came from CachyOS and the performance advantage it gave me due to optimized binaries.
I found out how to build all my packages with gcc -march=skylake optimizations. I did the build on my server and it worked as I expected.
Now I want to set up my NixOS server to build all packages installed on my client automatically with the Skylake optimizations and put them into a binary cache.
I know how to use a remote binary cache via SSH, but I don’t know how to set up the server to automatically rebuild all packages when they are updated.
Is there a standard solution for this problem on NixOS?
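
For reference, this is roughly the shape of what I have in my client’s configuration.nix right now (just a sketch; the builder host name and cache key are placeholders, and I am assuming the nixpkgs.hostPlatform.gcc.arch route for the -march flag):

```nix
{
  # compile everything with -march=skylake
  nixpkgs.hostPlatform = {
    system = "x86_64-linux";
    gcc.arch = "skylake";
  };

  # fetch already-built paths from the build server instead of compiling locally
  nix.settings = {
    substituters = [ "ssh-ng://builder.example.org" ];            # placeholder host
    trusted-public-keys = [ "builder.example.org-1:AAAA...=" ];   # placeholder signing key
  };
}
```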

4 Likes

I came from CachyOS and the performance advantage it gave me due to optimized binaries. I found out how to build all my packages with gcc -march=skylake optimizations. I did the build on my server and it worked as I expected.

I’m very curious - what kind of performance differences did you see and for what use cases?

3 Likes

Either set up an automated task with the scheduler of your choice (systemd timer, cron job, whatever) to pull the latest version of nixpkgs and rebuild everything you need, or you may have to set up Hydra.
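
For the scheduled-task route, a rough sketch as a NixOS module (only a sketch; the flake path and the attribute to build are placeholders you would point at whatever you actually want pre-built):

```nix
{ pkgs, ... }:
{
  systemd.services.rebuild-binary-cache = {
    description = "Update flake inputs and rebuild the client closure for the cache";
    serviceConfig.Type = "oneshot";
    path = [ pkgs.nix pkgs.git ];
    script = ''
      cd /etc/nixos   # placeholder: wherever the flake lives on the build server
      nix flake update
      # build the client's system closure so the optimized packages land in this store
      nix build .#nixosConfigurations.client.config.system.build.toplevel --no-link
    '';
  };
  systemd.timers.rebuild-binary-cache = {
    wantedBy = [ "timers.target" ];
    timerConfig = {
      OnCalendar = "daily";
      Persistent = true;
    };
  };
}
```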

I used it a lot for gaming with the RPCS3 and yuzu emulators. But I felt like it improves desktop usage on KDE too (not sure if that’s really the case).
There are some benchmarks. Sometimes you get a 10-15 percent performance advantage, sometimes it hurts performance a bit…
https://www.phoronix.com/review/cachyos-linux-perf
https://sunnyflunk.github.io/2023/01/15/x86-64-v3-Mixed-Bag-of-Performance.html

Thank you :slight_smile:
I will test out Hydra. That’s exactly what I was looking for.
I will tell you once I have tested it out.

FYI systems/architecture: bump default architecture to x86-64-v2 by SuperSandro2000 · Pull Request #202526 · NixOS/nixpkgs · GitHub

1 Like

In case it’s helpful for you, I have mostly the same setup that you are looking for; here is my flake: GitHub - Shawn8901/nix-configuration

I have my desktop PC (pointalpha), for which I want x86-64-v3 packages built. For the needed overrides see the machine’s hardware.nix.
Then there is another machine (tank), which is also able to build and run x86-64-v3 packages, that acts as the Hydra instance (and does some other stuff unrelated here; I also added the desktop as a build machine to speed the build process up, but that’s not really needed if the building instance is powerful enough).
The Hydra instance builds the packages of the flake and pushes them to an Attic instance (any other binary cache should work mostly the same, or you can also just use the Hydra instance itself as a binary cache).
The important config for the Hydra instance is in its configuration.nix (around the services.hydra settings) and in nix.settings; that it is capable of building x86-64-v3 packages is set via nix.settings.systemFeatures.
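
Condensed, the relevant bits look roughly like this (a sketch from memory, so option names, values and URLs are placeholders and may differ slightly from my actual config):

```nix
{
  # on tank: allow building derivations that require the gccarch-x86-64-v3 feature
  nix.settings.system-features = [ "gccarch-x86-64-v3" "big-parallel" "kvm" "nixos-test" ];

  services.hydra = {
    enable = true;
    hydraURL = "https://hydra.example.org";    # placeholder
    notificationSender = "hydra@example.org";  # placeholder
    useSubstitutes = true;
  };

  # on pointalpha (hardware.nix): have everything built for x86-64-v3
  # nixpkgs.hostPlatform = { system = "x86_64-linux"; gcc.arch = "x86-64-v3"; };
}
```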

Besides that config, a notable change is that the desktop PC uses a forked version of nixpkgs, which is advanced by Hydra too (the initial trigger is a GitHub Action).
Most of the code in the custom nixpkgs consists of patches from @Sandro on top for the micro-architecture levels (so kudos should be sent in his direction). The important ones are the first 5 commits of that branch, if you compare it to master or nixos-unstable.

All my systems are configured in that flake.
If there are questions on details, let me know.

If any of my code is useful for you, feel free to copy whatever helps you.
No guarantees on correctness :slight_smile: and likely some things in my config are overcomplicated.

Edit: in case you are building for Skylake and don’t need the haskellPackages.cryptonite patch, the custom nixpkgs is likely not needed at all, as that code path is already in nixpkgs (I mostly need it because I want to build for the micro-architecture levels).

Do you have a source for cc-wrapper: add microarchitecture levels · NixOS/nixpkgs@c9fe527 · GitHub ?

I have created a branch based off systems/architecture: add nehalem, microarchitecture levels by SuperSandro2000 · Pull Request #208398 · NixOS/nixpkgs · GitHub with that commit, so you could add it to that PR if you want. In my opinion it should be included in that PR, so that it’s possible to use the micro-architecture levels by setting nixpkgs.hostPlatform.gcc.arch = "x86-64-vX"; without any additional work.
(I hope that is what you asked for :slight_smile:, if not please let me know.)
https://github.com/Shawn8901/nixpkgs/tree/cc-wrapper

That’s also useful to know and probably what I’ll add to the PR when I find time for it.

I was more looking for a source for versionAtLeast ccVersion "11.0". Did you test that only GCC 11 and Clang 12 support those settings, or did you look into some doc or man page or so?

Ah, now I understand. That assumption is based on news articles like https://www.phoronix.com/news/GCC-11-x86-64-Feature-Levels
To me the article says that those versions are the first to support the levels, but I haven’t cross-checked the upstream changelogs.

Edit:
GCC 11 Release Series — Changes, New Features, and Fixes - GNU Project (at the IA-32/x86-64 chapter)
Clang 12.0.0 Release Notes — Clang 12 documentation

1 Like

Ah interesting :slight_smile: thank you
There is some code in there for me to use.

Very good to see that someone is doing this already.
Thank you for sharing your configs with me :slight_smile:

Didn’t you have problems building all your packages with x86-64-v3 optimizations?
I had problems building a package called libxcrypt.
I looked in the pull request above too and saw that there were excluded packages as well.

1 Like

If I remember right, libxcrypt and libsecret are packages that I had to either rebuild manually or just restart the Hydra job for from time to time, because their tests fail under load on my machines (the latter fails if all CPUs are busy, and that’s also an upstream issue).

For libsecret I had included a patch from other people to change the timeout for the tests.
But besides those two packages, the only one I can think of that I currently fail at fixing is hermes-json, which fails to build (and is used by nix-output-monitor, which I was using before); it may fail on mode detection.

Besides that, in my system configs the other package that needed custom patching for x86-64-v3 was cryptonite, and that is included as a patch in my custom nixpkgs. All other changes are already on nixos-unstable (so if you are on 22.11 or so, the list could actually be longer).

Haven’t you thought about just excluding these packages from the build optimizations? I mean just as long as there is no better solution.
That’s what I am trying at the moment.
I am trying to do that with an overlay but have had no success…
Is it possible to override nixpkgs.hostPlatform for a specific package with the help of an overlay?

Those two packages fail their tests for me when the system is under high load, not due to the set arch, so since I am building them anyway, not setting the arch will not make them pass; and as I have patched nixpkgs at the stdenv level, I have to build them myself anyway. Additionally, if they fail regularly I am more interested in patching that in nixpkgs, so that everyone has a more stable experience.

Additional context from my side here: I have not been using that setup for very long, as collecting the knowledge needed to set it up like that was a huge effort for me, and due to the low CPU power I have, building also takes a very long time (the first bootstrap patches took me several days of patch, compile and fail; the cc-wrapper change I did after understanding more about how things work in the bootstrap, having done very hacky stuff before that).

Edit: if you want to build some packages unoptimized, I would recommend adding some code to nixpkgs to preserve an unoptimized stdenv (one which does not set any values in the gcc attr, neither arch nor mtune) and then replacing the optimized stdenv with that in an overlay or so (that is kind of similar to the first hacks I tried for the bootstrapping process).
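
As a per-package stop-gap, an overlay like this might also do (an untested sketch: it assumes stdenvAdapters.withCFlags is available in your nixpkgs and relies on gcc taking the last -march flag it sees):

```nix
# build only libxcrypt with a lower -march than the globally configured one
final: prev: {
  libxcrypt = prev.libxcrypt.override {
    stdenv = prev.stdenvAdapters.withCFlags [ "-march=nehalem" ] prev.stdenv;
  };
}
```

You would hook that in via nixpkgs.overlays on the machine in question.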

That was fixed a good while ago on nixos-unstable.

libsecret should be fixed in this or the next staging cycle.

1 Like

@Sandro I managed to exclude the libxcrypt package from the march=skylake build. I was able to build it with the march=nehalem setting. And good to hear libsecret gets fixed too :slight_smile:
I had problems with the build of other packages later (the whole build ran for about 20 hours), but I think the problem there was the limited storage of my VM.

Now I want to set up the systems on real hardware. After everything else is working, I want to configure my systems in a flake and build the packages I need with Hydra like @Shawn8901 does.

@Shawn8901 I am investigating your flake (the Hydra part of it) at the moment and think I mostly understand it.
But I have a few questions:

  • The hydra.json gives Hydra the information about which packages it has to build, right? So it reads your configurations and builds all packages for all the systems? Even the unoptimized ones for the other machines?
  • Now about the configuration.nix of the “tank” machine: the script behind the advance_branch variable is only needed for notifications, not for the build. Is that the case? So you are using GitHub Actions only to notify about the builds? (I don’t want to host my flake on GitHub.)
  • And now the most important one for me: how does Hydra know that it should use the hydra.json file? Is that done via the web interface, or is it something I overlooked in the config? Or is it just a standard file Hydra reads when it is configured by a flake?

The hydra.json gives Hydra the information about which packages it has to build, right? So it reads your configurations and builds all packages for all the systems? Even the unoptimized ones for the other machines?

The hydra.json is a declarative jobset definition. It defines in code which jobsets exist inside my project. A reference for that is the Hydra manual here.
I just did not want to declare them inside the Hydra instance by clicking more buttons (creating the project was already enough :slight_smile: ).
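
Stripped down, such a spec looks roughly like this (a sketch from memory of the flake-jobset fields in the Hydra manual; the flake URL is a placeholder, and I generate the JSON from Nix rather than writing it by hand):

```nix
# evaluates to the JSON that ends up in hydra.json
builtins.toJSON {
  staging = {
    enabled = 1;
    hidden = false;
    description = "nix-configuration, staging branch";
    type = 1;                                            # 1 = flake-type jobset
    flake = "github:example/nix-configuration/staging";  # placeholder flake URL
    checkinterval = 3600;
    schedulingshares = 100;
    enableemail = false;
    emailoverride = "";
    keepnr = 3;
  };
}
```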

Now about the configuration.nix of the “tank” machine: the script behind the advance_branch variable is only needed for notifications, not for the build. Is that the case? So you are using GitHub Actions only to notify about the builds? (I don’t want to host my flake on GitHub.)

My flake inputs are advanced by a GitHub Action that commits the flake.lock file. That commit is pushed to a branch “staging”. That branch is built by Hydra, and if the jobs pass, a release aggregate runs.
This triggers the said runcommand job. The runcommand basically just merges staging into main, which is then what my machines load on nixos-upgrade (and main is where I also make the smaller config changes which do not result in a world-wide rebuild).
With that I have a setup kind of like how the channels for NixOS work (have a source branch, let it be built by Hydra, and if it passes, advance to that commit). It basically gates so that the flake.lock update is not committed to main if my config is not buildable by Hydra.
I hope that kind of explains this overcomplicated workflow.
Any other solution that updates your nix flake lock will also do the job. A GitHub Action was just my old setup before I started to build with Hydra.
(The old setup was: run nix flake update in a GitHub Action, run nix flake check, build all custom packages, build all machine configs and then upload that to Cachix. I stopped doing that because, when rebuilding basically all packages, neither is the GitHub runner happy with me nor is the Cachix free tier large enough to host all of my compiled packages.)

And now the most important one for me: how does Hydra know that it should use the hydra.json file? Is that done via the web interface, or is it something I overlooked in the config? Or is it just a standard file Hydra reads when it is configured by a flake?
Yeah, that’s part of the project configuration in the Hydra UI. There you can say which repos are to be checked out and also define which file is the declarative jobset definition.
In the “best” setup I wanted to do that on each PR opened on GitHub (and merge back as soon as the PR was built), but I have not yet managed to get Hydra to pull the code on PRs, and I just put that aside to solve other things first. :slight_smile:

A very recent change is that I stopped using a runcommand to upload to Attic and started to use attic watch-store, which uploads the built content as soon as it is loaded into the store.

Especially that overcomplicated staging branch stuff you likely just don’t want to do. It is likely enough for you to have some kind of automatic process that runs nix flake update, commits the change, and then lets Hydra do its building job.
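
Something as small as this would probably already do (a sketch; it assumes the checkout at /var/lib/nix-config can be pushed to by the service user and that a git identity is configured for the lock-file commit):

```nix
{ pkgs, ... }:
{
  systemd.services.flake-update = {
    description = "Bump flake.lock and push it so Hydra picks it up";
    serviceConfig.Type = "oneshot";
    path = [ pkgs.nix pkgs.git pkgs.openssh ];
    script = ''
      cd /var/lib/nix-config   # placeholder: a clone the service user may push to
      git pull --ff-only
      nix flake update --commit-lock-file
      git push origin HEAD
    '';
  };
  systemd.timers.flake-update = {
    wantedBy = [ "timers.target" ];
    timerConfig.OnCalendar = "weekly";
  };
}
```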

1 Like