This is a follow-up to the 2023-10-24 Long-term S3 cache solutions meeting minutes #1.
Full details can be found in NixOS Cache GC Meeting - HedgeDoc.
- We landed the narinfo parser in Rust in nix-compat. (@edef & @flokli)
- We set up an archeology VM environment on Hetzner infrastructure, on a box provided by @flokli, and installed ClickHouse on it. (@edef & @flokli)
- We set up another archeology VM in AWS EC2, in the right region (to avoid incurring transfer costs), under the Foundation's hierarchy of accounts. (@edef & @flokli & @zimbatm)
- The inventory parquet files produced by S3 are not incremental, which makes them annoying to process incrementally. We started writing scripts (using pola-rs and ClickHouse queries) that convert these parquet files into a format better suited for incrementally adding new files. (@edef & @flokli)
- Fastly logs are somewhat hard to parse: they record cache hits/misses at different tiers, and correlating entries across tiers seems difficult. (@edef)
- We asked @zimbatm for dashboard access to answer the question of how much egress (data rate, requests per timeframe) we have from S3 to Fastly (access was granted by the end of the meeting).
- We are still waiting for @edolstra’s SQLite database to be shared with everyone working on this; it would unblock a couple of questions, including:
- Mapping channel bumps to store paths (and their sizes), which will let us narrow down the data rate and requests per timeframe to the active store paths.
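For context on the narinfo work above: a narinfo file is a simple line-oriented key/value text format. The sketch below (stdlib-only, with made-up example values) shows the kind of splitting involved; the actual nix-compat parser additionally validates hashes, sizes, signatures, and so on.

```rust
use std::collections::BTreeMap;

/// Split a .narinfo body into key/value pairs.
/// Minimal sketch: returns None on any line that is not "Key: Value".
fn parse_narinfo(input: &str) -> Option<BTreeMap<&str, &str>> {
    let mut fields = BTreeMap::new();
    for line in input.lines().filter(|l| !l.is_empty()) {
        let (key, value) = line.split_once(": ")?;
        fields.insert(key, value);
    }
    Some(fields)
}

fn main() {
    // Hypothetical example values, not a real cache.nixos.org entry.
    let example = "\
StorePath: /nix/store/aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa-hello-2.12
URL: nar/bbbbbbbb.nar.xz
Compression: xz
NarSize: 50088
";
    let fields = parse_narinfo(example).expect("well-formed narinfo");
    assert_eq!(fields["Compression"], "xz");
    assert_eq!(fields["NarSize"], "50088");
}
```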
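On the inventory work above: each S3 inventory run is a full snapshot of the bucket, so deriving incremental additions means diffing each snapshot against the keys already seen. The real scripts do this over parquet files with pola-rs and ClickHouse; this stdlib-only sketch just illustrates the idea.

```rust
use std::collections::HashSet;

/// Given the set of object keys already seen and a new full snapshot,
/// record the snapshot's keys and return only the ones that are new.
fn new_keys<'a>(
    seen: &mut HashSet<String>,
    snapshot: impl IntoIterator<Item = &'a str>,
) -> Vec<String> {
    snapshot
        .into_iter()
        // HashSet::insert returns true only for keys not seen before.
        .filter(|key| seen.insert(key.to_string()))
        .map(str::to_string)
        .collect()
}

fn main() {
    let mut seen = HashSet::new();
    // First snapshot: everything is new.
    let day1 = new_keys(&mut seen, ["nar/a.nar.xz", "nar/b.nar.xz"]);
    assert_eq!(day1.len(), 2);
    // Next snapshot repeats the old objects and adds one new one.
    let day2 = new_keys(&mut seen, ["nar/a.nar.xz", "nar/b.nar.xz", "nar/c.nar.xz"]);
    assert_eq!(day2, vec!["nar/c.nar.xz".to_string()]);
}
```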