I built a system that turns a GitHub repository into a Nix binary cache using GitHub Container Registry (GHCR) as the storage backend.
link: https://github.com/cmspam/nixcache-oci (self-hosted Nix binary cache via GHCR, an OCI registry)
No external servers, no databases, no paid services, no limits (for public repos).
I would love feedback on whether this is something people in the community would be interested in using.
The problem
I run several NixOS machines with a lot of custom overrides and out-of-tree packages. Rebuilding from source every time was painful, and I wanted a cache I fully controlled without running my own infrastructure. I used Cachix, but the free plan was a bit limiting size-wise. So I came up with this idea.
How it works
1. Build + detect what’s already cached
You put your flake configuration in config/ in the repository.
GitHub Actions:
- evaluates the flake
- builds everything
- determines which store paths are already available from your configured substituters (cache.nixos.org, Cachix, etc.)
It uses:
nix build --dry-run
so it respects all your substituter configuration automatically.
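To make the detection step concrete, here is a minimal sketch of how the dry-run output can be split into "must build" vs. "a substituter already has it". The exact phrasing of the `nix build --dry-run` messages is an assumption based on classic Nix output (newer versions word it slightly differently), and the function name is mine, not the project's:

```python
import re

def parse_dry_run(stderr: str):
    """Split `nix build --dry-run` output into paths that must be built
    locally vs. paths a substituter will provide. Section headers like
    'these N derivations will be built:' are assumed; adjust for your
    Nix version."""
    to_build, to_fetch = [], []
    current = None
    for line in stderr.splitlines():
        if re.search(r"will be built", line):
            current = to_build          # following indented lines are .drv paths
        elif re.search(r"will be fetched", line):
            current = to_fetch          # following indented lines come from a cache
        elif line.startswith("  ") and current is not None:
            current.append(line.strip())
        else:
            current = None              # any other line ends the current section
    return to_build, to_fetch
```

Because this reads the same dry-run output Nix itself produces, it automatically honours whatever substituters are configured.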
2. Upload only what was actually built
Only paths that were built locally (not fetched from any cache) are uploaded to GHCR as content-addressed OCI blobs.
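For context, the OCI side of that upload is pleasantly simple, because registries address blobs by sha256 digest. A rough stdlib-only sketch (URL helpers and names are mine, for illustration; the push flow itself follows the OCI distribution spec):

```python
import hashlib

GHCR = "https://ghcr.io"

def oci_digest(data: bytes) -> str:
    # OCI content addressing: a blob is identified by the sha256 of its bytes,
    # which lines up naturally with Nix's content-addressed NARs.
    return "sha256:" + hashlib.sha256(data).hexdigest()

def blob_url(repo: str, digest: str) -> str:
    # HEAD this URL before uploading; a 200 means the blob already exists,
    # so identical content is deduplicated for free.
    return f"{GHCR}/v2/{repo}/blobs/{digest}"

# Push flow (sketch): POST {GHCR}/v2/{repo}/blobs/uploads/ opens an upload
# session; the registry returns a Location header, and a PUT of the bytes to
# that Location with "digest=" + oci_digest(data) appended completes the
# content-addressed upload.
```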
3. Local proxy translates Nix → GHCR
On the client side, a small Python proxy (stdlib only, no dependencies):
- translates Nix binary cache protocol → GHCR blob fetches
- keeps all narinfo data in a single in-memory index
- serves lookups instantly (no per-path network round-trip)
- streams NARs directly from GHCR to Nix
Anything not in your GHCR cache is transparently fetched from upstream substituters.
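The protocol translation is small because a Nix binary cache only has a handful of request shapes. A simplified routing sketch (the `index` layout and return values are my assumptions, not the project's exact code):

```python
def route(path: str, index: dict):
    """Map a Nix binary-cache request to an action. `index` is assumed to
    map store-path hashes to narinfo metadata, loaded once into memory."""
    if path == "/nix-cache-info":
        # Static metadata Nix requests first from every substituter.
        return ("inline", "StoreDir: /nix/store\nPriority: 30\n")
    if path.endswith(".narinfo"):
        h = path.lstrip("/").removesuffix(".narinfo")
        if h in index:
            return ("narinfo", index[h])   # answered straight from memory
        return ("upstream", path)          # 404 here; Nix tries other substituters
    if path.startswith("/nar/"):
        return ("ghcr-blob", path)         # stream the NAR bytes from GHCR
    return ("not-found", path)
```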
4. Incremental CI runs (no wasted rebuilds)
Subsequent CI runs again use:
nix build --dry-run
to detect what’s already cached (both upstream + GHCR).
Result:
- unchanged packages are never rebuilt or re-uploaded
- a no-change rebuild of a full NixOS config completes in under a minute
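The incremental behaviour ultimately boils down to a set difference between what this run built and what the GHCR index already holds (function name is mine, for illustration):

```python
def to_upload(built_locally, ghcr_index):
    """Paths built in this CI run that the GHCR cache doesn't already
    hold; everything else is skipped, so a no-change run uploads nothing."""
    return sorted(set(built_locally) - set(ghcr_index))
```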
Why GHCR / OCI?
- No blob count limits
- Generous per-blob size limit (~10 GiB), far above typical NAR sizes
- No need for sharding (unlike GitHub Releases)
- Content-addressed by design (OCI digests map cleanly to NAR hashes)
- Automatic deduplication
- Unlimited storage + bandwidth for public packages
Additionally:
- A single index blob holds all narinfo data
- Proxy serves lookups from memory instead of per-path network requests
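The single-blob index is just all narinfo records serialized together, fetched once at startup. A minimal sketch (JSON layout is an assumption for illustration, not necessarily the project's on-disk format):

```python
import json

def build_index(narinfos: dict) -> bytes:
    # Serialize every narinfo record into one blob; CI uploads this as a
    # single OCI blob alongside the NARs.
    return json.dumps(narinfos, sort_keys=True).encode()

def lookup(index_bytes: bytes, store_hash: str):
    # The proxy loads the blob once and answers all .narinfo lookups from
    # memory — no per-path network round-trip.
    return json.loads(index_bytes).get(store_hash)
```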
What’s included
- Publish workflow (build + filter + upload on push to main)
- Scheduled workflow:
  - runs nix flake update
  - rebuilds + caches new packages
  - configurable frequency (I run mine twice daily)
- GC workflow to prune old entries
- NixOS module:
  - sets up the proxy as a systemd service
  - configures extra-substituters automatically
- Signing support:
- private key stored as GitHub secret
- public key auto-committed to repo
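For signing, what actually gets signed per store path is Nix's standard fingerprint string; the ed25519 signature over it, base64-encoded, lands in the narinfo Sig: field. A sketch of the fingerprint construction (format from the Nix source; function name is mine):

```python
def narinfo_fingerprint(store_path: str, nar_hash: str, nar_size: int, references: list) -> str:
    """Build the string Nix signs for a store path:
    '1;<storePath>;<narHash>;<narSize>;<comma-separated reference paths>'."""
    return ";".join(["1", store_path, nar_hash, str(nar_size), ",".join(references)])
```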
Client setup (NixOS)
{
  inputs.nixpkgs.url = "github:NixOS/nixpkgs/nixos-unstable";
  inputs.nixcache.url = "github:cmspam/nixcache-oci";

  outputs = { nixpkgs, nixcache, ... }: {
    nixosConfigurations.myhost = nixpkgs.lib.nixosSystem {
      system = "x86_64-linux";
      modules = [
        nixcache.nixosModules.default
        {
          services.nixcache-proxy = {
            enable = true;
            repo = "your-username/your-cache-repo";
            publicKey = "your-cache-1:BASE64KEY...=";
          };
        }
      ];
    };
  };
}
If you’re running complex NixOS configs and want a fully self-contained cache with zero infrastructure, this approach works surprisingly well. Give it a try: fork the repo and use GitHub runners to build and cache your setup! Feel free to open an issue or a PR if you run into anything.