Query build artifacts in /nix/store or in a nix-native CI service


Querying build artifacts across the /nix/store, or in a CI or related service like Hydra, Hercules CI, or Cachix, is not well documented, and it is unclear whether any Nix tool already supports it. However, I think it would be useful for the community, especially for direct interoperability with external services like Grafana.


What is the idiomatic way to take build artifacts, such as JSON files generated during a nix-build, and either:

  1. adding them to an external database (like influxdb) or
  2. querying or exposing them to an external data ingest (like grafana)?

Rather than relying on side effects, I originally chose to have builds handled completely by Hydra. Sander explains that you can explicitly mark files as Hydra build products like so:

mkdir -p $out/nix-support
echo "file binary-dist \"$(echo $out/*.apk)\"" \
    > $out/nix-support/hydra-build-products

here, so that you can download them manually through Hydra’s frontend web app.

Since Hydra has a postgres backend, I planned to query that postgres instance directly, from the same machine, with a Grafana service, for all previous hydra-build-products of a package that have been generated with nix-build. This assumes these hydra-build-products are recorded in Hydra’s postgres instance for use by its frontend web app, such that postgres can be queried directly; but maybe this is an incorrect understanding?
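If the build products do land in Hydra's postgres instance, a query along these lines could back a Grafana panel directly. This is only a sketch: the table and column names (`builds`, `buildproducts`, etc.) are assumptions based on Hydra's `hydra.sql` schema and should be verified against the schema shipped with your Hydra version.

```sql
-- Sketch: list build products for a given job, newest first.
-- Table and column names are assumptions; verify against Hydra's hydra.sql.
SELECT b.id, b.timestamp, bp.path, bp.name
FROM builds b
JOIN buildproducts bp ON bp.build = b.id
WHERE b.job = 'my-package' AND b.finished = 1
ORDER BY b.timestamp DESC;
```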

While setting this strategy up, I ran into a key issue with Hydra, which I patched in the Perl code; however, overlaying the NixOS module for Hydra has been difficult (a separate issue from this post).

So I am wondering if there is a minimal, simpler solution or tool to extract build artifacts across the /nix/store.
Should I add a post-install step to my build phase that adds these JSON files to an external database directly? Or is there an idiomatic way of surgically getting such JSON build artifacts out of /nix/store that comes to mind?

Hi @kdmarrett,

This is interesting! What kind of queries are you interested in? I’m trying to figure out your use case.


Hey @domenkozar thank you for the interest!

This is for an application in large-scale neuroscience. As in parts of the computational physics community (see Chris Burr’s talks about CERN), the scientific software is generally FOSS, highly collaborative, and must be highly reproducible (hence a nix-native CI). Importantly, the science itself is often limited by the performance and scalability of the software; these are therefore components of correctness, and there can’t be regressions on these metrics. The use case is not just CI for correctness, but also continuous benchmarking, as done in this or this paper.

Since performance is not a binary pass/fail, it is best visualized across time, ideally tied to commits, hence the need for Grafana as an additional UI. Grafana must ingest data from queryable sources, typically a SQL database or an HTTP server; Grafana can’t pull directly from a filesystem, and you can’t simply pipe data into it from the command line. As a rudimentary architecture to deal with this, we can take the Nix store path printed by nix-build, find the JSON files in that directory, add those JSON files to InfluxDB via telegraf, then query InfluxDB from Grafana. However, that strategy is fragile, indirect, and also a bit redundant, considering that Hydra already manages state with a postgres instance, and Hercules CI probably also has a queryable database that Grafana could query directly.
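As a concrete sketch of the InfluxDB leg of that pipeline: InfluxDB 1.x accepts measurements over HTTP in its line protocol, so the JSON values could be reshaped and POSTed without telegraf at all. The measurement name, tag, and endpoint below are illustrative assumptions, not part of any existing setup.

```shell
# Format one benchmark value as InfluxDB 1.x line protocol:
#   <measurement>,<tag_key>=<tag_value> <field_key>=<field_value>
to_line_protocol() {
    measurement="$1"; commit="$2"; field="$3"; value="$4"
    printf '%s,commit=%s %s=%s\n' "$measurement" "$commit" "$field" "$value"
}

# Real usage (assumes an InfluxDB 1.x instance on localhost:8086 with a 'ci' database):
# to_line_protocol benchmarks "$(git rev-parse HEAD)" runtime_s 12.3 \
#   | curl -s -XPOST 'http://localhost:8086/write?db=ci' --data-binary @-
```

Tagging each point with the commit hash is what lets Grafana plot performance against history rather than wall-clock time alone.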

If Hercules or Cachix already has the architecture or API to do this directly it would definitely be a futuristic feature!

Have you considered using a post-build-hook to check whether the produced $OUT_PATHS contain your files of interest and upload them if they do? It’s actually quite similar to checking the output, except this blocks after each build finishes.
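A minimal sketch of that idea: Nix runs the configured post-build-hook after each successful build, passing the new store paths in $OUT_PATHS. The function below only copies matching JSON files into a spool directory for later upload (the spool path and the defer-the-upload design are my assumptions), so a slow database can never block a build.

```shell
# Collect *.json files from freshly built store paths into a spool directory.
# In nix.conf this would be wired up as:
#   post-build-hook = /etc/nix/collect-json.sh
# and the hook script would call:
#   collect_json_outputs /var/lib/metrics-spool $OUT_PATHS
collect_json_outputs() {
    spool="$1"; shift
    mkdir -p "$spool"
    for out in "$@"; do
        # Only outputs that actually contain benchmark JSON are of interest.
        # Note: files sharing a basename would collide; a real hook would
        # disambiguate, e.g. by prefixing the store path's hash.
        find "$out" -type f -name '*.json' | while read -r f; do
            cp "$f" "$spool/"
        done
    done
}
```

A separate consumer (telegraf's file/tail input, or a cron job) can then drain the spool into the database at its own pace.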

How many json files are we talking about?

About 5 years ago I worked for Snabb and did similar at https://github.com/snabblab/snabblab-nixos/blob/259ef17a634c5ad17fe130821babd11803c92153/jobsets/snabb-matrix.nix

It ran a bunch of different machines, different software versions, and different Linux kernel versions, and ran benchmarks on hardware with certain networking cards. Then, at the end, a report derivation would depend on all those derivations, gather the generated CSVs, and craft a report.
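That aggregation pattern can be sketched in Nix roughly like this (the names are illustrative, not the actual Snabb code): a report derivation takes the benchmark derivations as inputs, so Nix rebuilds the report whenever any benchmark result changes.

```nix
# Hypothetical sketch: `benchmarks` is a list of derivations that each emit CSV.
{ pkgs, benchmarks }:
pkgs.runCommand "benchmark-report" { } ''
  mkdir -p $out
  # Each derivation in the list coerces to its store path here.
  for b in ${toString benchmarks}; do
    cat "$b"/*.csv >> $out/all-results.csv
  done
''
```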

If you have a reasonable number of json files (what counts as reasonable depends on resources), you could try this approach, but maybe you have additional constraints that require more real-time handling?

@domenkozar 2-3 json files for now, and the CI infrastructure itself does not have to be particularly performant yet, as it currently mainly serves an offline batch job for a few users. The software runs themselves will take the vast majority of the time (up to 24 hrs). The CI infrastructure should, however, be able to be made aware of build artifacts and have those build artifacts be queryable. The queries would apply to different builds of the same package, and ideally the database would also record the commit hash and timestamp for the time of build. As you can see, the needs of Grafana are different from the queries available with, for example, nix-store ...

The snabb-matrix approach would be useful at scale, thank you; it actually goes beyond our current CI performance constraints.

@FRidh This is a useful tool in this situation, thank you for mentioning it. It can be used in a simple solution that does:

github action → nix-build (produces jsons) → post-build-hook (send to telegraf) → telegraf sends to influxdb → grafana queries influxdb
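For this first pipeline, the only Nix-side wiring needed is registering the hook in nix.conf (the script path here is an assumption; on NixOS the same setting goes through the module system):

```
# /etc/nix/nix.conf
post-build-hook = /etc/nix/send-benchmarks.sh
```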

The alternate possible solution would be:

hydra build latest commit (produces jsons) → grafana queries hydra’s postgres