Yes, that’s definitely on the todo list! There is some refactoring that I need to take care of before I’ll get to it, though.
Downloads aren’t proxied and the Attic server just returns a 307 redirect to a presigned URL on S3. Uploads are streamed through the server because compression is handled server-side.
Actually, this may be changing with the addition of chunking, which requires the NAR to be assembled on the server. Additional caching (e.g., a CDN) may be implemented outside `atticd`, and the assembly can even be done on FaaS platforms like Cloudflare Workers. From my experiments, chunking can improve the efficiency of storing huge unfree paths (e.g., `vscode`, `zoom-us`, etc.) by a huge amount.
As an example, I follow `nixos-unstable`, and `zoom-us` (~500 MiB uncompressed, ~160 MiB compressed) frequently gets rebuilt while remaining at the same version. For the 5 store paths of `zoom-5.12.9.367` picked from my store and chunked with FastCDC at an average chunk size of 5 MB (coarse-grained to help reassembly from S3), the common chunks weigh ~470 MiB in total (~160 MiB compressed individually). In this scenario, the deduplication ratio for the 5 paths is 0.25x (~210 MiB deduplicated vs ~800 MiB stored individually).
I’m inclined to make chunking the only supported flow to simplify maintenance. Existing NAR files will be converted into a single chunk. Any thoughts?
When doing chunking, please also make sure that there is a way to recover easily from broken chunks. This will reduce the dedup factor, but will save the cache from becoming unusable because of a single flipped bit.
And now we have chunking! In this new model, NARs are backed by a sequence of content-addressed chunks in the Global Chunk Store. Newly-uploaded NARs will be split into chunks with FastCDC and only new chunks will be uploaded to the storage backend. NARs that have existed prior to chunking will be converted to have a single chunk.
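For intuition, here is a minimal sketch of that flow in Python (illustration only; Attic itself is written in Rust and its actual schema and storage layer differ): each NAR is recorded as an ordered list of content-addressed chunk hashes, and only hashes not already present in the global chunk store get uploaded.

```python
# Illustrative sketch only: in-memory stand-ins for the storage backend and
# database, and a pluggable `split` function standing in for FastCDC.
import hashlib

chunk_store = {}   # chunk hash -> chunk bytes (stand-in for S3 / local storage)
nar_index = {}     # NAR hash   -> ordered list of chunk hashes

def upload_nar(nar: bytes, split) -> None:
    """Split a NAR into chunks and upload only the chunks we haven't seen."""
    hashes = []
    for chunk in split(nar):                      # e.g. FastCDC cut points
        h = hashlib.sha256(chunk).hexdigest()
        if h not in chunk_store:                  # new chunk -> store it
            chunk_store[h] = chunk
        hashes.append(h)                          # known chunk -> just reference it
    nar_index[hashlib.sha256(nar).hexdigest()] = hashes

# A pre-chunking NAR is simply recorded as a single chunk:
# upload_nar(old_nar, split=lambda data: [data])
```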
When upgrading to the new version, you need to add the following to the configuration:
```toml
# Data chunking
#
# Warning: If you change any of the values here, it will be
# difficult to reuse existing chunks for newly-uploaded NARs
# since the cutpoints will be different. As a result, the
# deduplication ratio will suffer for a while after the change.
[chunking]
# The minimum NAR size to trigger chunking
#
# If 0, chunking is disabled entirely for newly-uploaded NARs.
# If 1, all newly-uploaded NARs are chunked.
nar-size-threshold = 131072 # chunk files that are 128 KiB or larger

# The preferred minimum size of a chunk, in bytes
min-size = 65536 # 64 KiB

# The preferred average size of a chunk, in bytes
avg-size = 131072 # 128 KiB

# The preferred maximum size of a chunk, in bytes
max-size = 262144 # 256 KiB
```
During a download, `atticd` reassembles the entire NAR from its constituent chunks by streaming them from the storage backend. This means that traffic is proxied through the machine running `atticd`.
If you don't want chunking and would still like downloads to always stream directly from S3, Attic has you covered as well. You can configure `nar-size-threshold` to `0` to disable chunking entirely. With this configuration, all new NARs will be uploaded as one chunk, and `atticd` will directly return presigned S3 URLs for NARs that only have a single chunk.
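Roughly, the two download paths look like this (a hypothetical sketch; `presign` and `fetch` are made-up stand-ins for the storage backend):

```python
from typing import Callable, Iterator, Union

def serve_nar(chunk_hashes: list,
              presign: Callable[[str], str],
              fetch: Callable[[str], bytes]) -> Union[str, Iterator[bytes]]:
    """Single-chunk NARs get a presigned URL (307 redirect);
    multi-chunk NARs are reassembled by streaming chunk after chunk."""
    if len(chunk_hashes) == 1:
        return presign(chunk_hashes[0])           # client downloads straight from S3
    return (fetch(h) for h in chunk_hashes)       # proxied through atticd
```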
Some new entries to the FAQ:
Why chunk NARs instead of individual files?
In the current design, chunking is applied to the entire uncompressed NAR file instead of individual constituent files in the NAR. Big NARs that benefit the most from chunk-based deduplication (e.g., VSCode, Zoom) often have hundreds or thousands of small files. During NAR reassembly, it's often uneconomical or impractical to fetch thousands of files to reconstruct the NAR in a scalable way. By chunking the entire NAR, it's possible to configure the average chunk size to a larger value, ignoring file boundaries and lumping small files together. This is also the approach `casync` has taken.
You may have heard that the Tvix store protocol chunks individual files instead of the NAR. The design of Attic is driven by the desire to effectively utilize existing platforms with practical limitations [0], while looking forward to the future.
[0] In more concrete terms, I want to use Cloudflare Workers for the sweet, sweet free egress
What happens if a chunk is corrupt/missing?
When a chunk is deleted from the database, all dependent `.narinfo` and `.nar` files will become unavailable (503). However, this can be recovered from automatically when any NAR containing the chunk is uploaded.
At the moment, Attic cannot automatically detect when a chunk is corrupt or missing, since it's difficult to correctly distinguish between transient and persistent failures. The `atticadm` utility will have the functionality to kill/delete bad chunks.
This only improves storage requirements on the server side but does not speed up downloading for the client, right?
Currently it does not, but the `attic` client could serve as a local server that performs NAR reassembly client-side. We can also extend the Binary Cache API to let Nix itself support chunked downloads, but that would be a much longer process.
But that requires the `attic` client to cache the chunks, increasing storage demand. Or at least keep a mapping from hashes to store paths to reassemble NARs on demand. Or just be optimistic and choose a store path with a similar name (after the hash) to get a reference NAR…
I think the Tvix protocol might suit this case better, as it probably can integrate with the `auto-optimise-store` machinery.
I’m sure there will be a use case for each of the variants, depending on client storage, bandwidth, etc.
Maybe even a multistage solution with a Tvix/Attic bridge in the middle.
How is the chunking performed? Just by byte-count on the (uncompressed) NAR stream?
One of the benefits of single-file-based downloads is that basically all unchanged files dedup properly. Without this, a single-byte content or length change will shift all later stream content and prevent successful dedup.
If the chunking protocol were aware of file boundaries in the NAR stream and chose chunk boundaries to align with them, those boundaries would become points where the offset can be corrected, letting the identical content that follows dedup successfully.
It’s Content-Defined Chunking based on a rolling hash (see the FastCDC slides for a quick overview), and it solves exactly the boundary-shift problem that would make Fixed-Size Chunking useless (adding a byte to the beginning or somewhere in the middle).
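For intuition, here is a toy content-defined chunker in Python (not FastCDC itself, which adds normalized chunking and other tricks on top): a rolling gear hash over the stream picks the cut points, so inserting a byte only disturbs the chunks around the edit instead of shifting every later boundary.

```python
import hashlib, random

random.seed(0)
GEAR = [random.getrandbits(32) for _ in range(256)]     # per-byte random table

def chunks(data: bytes, min_size=65536, avg_size=131072, max_size=262144):
    """Yield chunks whose boundaries depend on content, not absolute offsets.
    avg_size must be a power of two for the mask trick to work."""
    mask = avg_size - 1
    start = h = 0
    for i, b in enumerate(data):
        h = ((h << 1) + GEAR[b]) & 0xFFFFFFFF            # rolling gear hash
        size = i - start + 1
        if (size >= min_size and (h & mask) == 0) or size >= max_size:
            yield data[start:i + 1]
            start, h = i + 1, 0
    if start < len(data):
        yield data[start:]

# Inserting one byte in the middle only changes the chunks near the edit:
a = random.randbytes(4 * 1024 * 1024)
b = a[:2_000_000] + b"!" + a[2_000_000:]
ha = {hashlib.sha256(c).digest() for c in chunks(a)}
hb = {hashlib.sha256(c).digest() for c in chunks(b)}
print(f"{len(ha & hb)} of {len(ha)} chunks unchanged")   # most chunks are shared
```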
A couple of new things:
- `attic watch-store` is here. It tries to batch paths together so the number of expensive operations (computing closures, querying missing paths) is minimized (see the sketch below).
- When uploading a path, the `.narinfo` will now be uploaded as part of the PUT payload if it's larger than 4 KiB. This makes Attic more usable behind reverse proxies with header size limits. The server must be updated to support this.
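The batching could look roughly like this (a hypothetical sketch, not the actual implementation): collect store paths that arrive within a short window and hand them off as one batch, so the closure computation and missing-path query run once per batch rather than once per path.

```python
import queue, time

def watch_batches(events: "queue.Queue[str]", window: float = 2.0):
    """Group store paths arriving within `window` seconds into one batch."""
    while True:
        batch = [events.get()]                        # block until the first path
        deadline = time.monotonic() + window
        while (remaining := deadline - time.monotonic()) > 0:
            try:
                batch.append(events.get(timeout=remaining))
            except queue.Empty:
                break
        yield batch      # compute the closure / query missing paths once per batch
```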
There may be some API breakages soon to support client-side compression (the NAR stream will still be decompressed and recompressed server-side), as well as a better way to handle API compatibility. Hopefully we can stabilize everything and cut the first release soon.
Hi all, regarding the potential confusion with the Attic backup tool, I'm thinking about renaming the tool to Binca. Posting this to gather any suggestions on the change, as well as other alternative names.
Personally I like `attic`, as that is the place where I put my stuff and can forget about it.
Binca has no inherent meaning to me, and it took me quite a while until I caught on that it is just a made-up word from "binary" and "cache".
At least the name clash was not with other software, but with a fish production company, judging from a quick web search…
So I would suggest either using a Nix-y prefix/suffix (`attix`) or some other word. `trove`, as that's where stuff gets found?
Given that the other `attic` project hasn't been maintained in eight years, why not just leave it as is?
There is no name clash in `nixpkgs`, and package names are usually taken on a first-come, first-served basis in `nixpkgs`. Imho, `attic` is a good choice and I think you can keep it.
Since `attic` seems to support separate GC configuration per cache, it seems like you could get something similar by pushing releases to a dedicated cache without time-based GC.
I think renaming it would be good, since the existing `attic` backup tool's key feature is also content-based deduplication.
From the heading, I also immediately (and incorrectly) concluded that your tool was directly based on the existing `attic`.
I like the suggestion "`attix`"; that makes it quite clear that Nix is involved.
Given that the chunks are still quite big, and suspecting that many files change only in the Nix store paths they reference, how about pre-processing all stored files as follows:
The idea is to move the Nix store paths (only up to the first part) into a separate list, and remove them from the file. So you would replace a file `F` with a tuple `(Fx, L)`. `Fx` is the binary contents of the file with every sequence matching `/nix/store/[^/]+` removed, and `L` is a list of `(position, path)` tuples.
This can be encoded in a streaming manner, and decoded in a streaming manner provided you have access to the tuples in `L`.
`L` can be compressed better by making `position` relative to the end of the last match, and making `path` an index into a list of found paths. So then we get `Lrel`, a list of `(relPosition, pathIndex)` tuples, and `P`, a list of paths, so `F` becomes `(Fx, Lrel, P)`.
This result should be way better at being chunked. I am hoping that many rebuilt files will have the same `Fx` and `Lrel`, and only `P` will differ.
For future-proofing etc., the `/nix/store/` part should be configurable.
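A rough sketch of that transform (hypothetical Python; the names and encoding are made up, and a real implementation would work on streams rather than whole byte strings):

```python
import re

STORE_RE = re.compile(rb"/nix/store/[^/]+")     # the prefix would be configurable

def split_refs(data: bytes):
    """Return (Fx, Lrel, P): cleaned bytes, relative positions, path table."""
    cleaned, lrel, paths = bytearray(), [], []
    last_end = 0
    for m in STORE_RE.finditer(data):
        cleaned += data[last_end:m.start()]
        path = m.group(0)
        if path not in paths:
            paths.append(path)
        # position relative to the end of the previous match, index into P
        lrel.append((m.start() - last_end, paths.index(path)))
        last_end = m.end()
    cleaned += data[last_end:]
    return bytes(cleaned), lrel, paths

def join_refs(cleaned: bytes, lrel, paths) -> bytes:
    """Inverse transform: splice the recorded store paths back in."""
    out, pos = bytearray(), 0
    for rel, idx in lrel:
        out += cleaned[pos:pos + rel] + paths[idx]
        pos += rel
    out += cleaned[pos:]
    return bytes(out)
```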
What do you think @zhaofengli ?
Content-addressable derivations use a similar pattern for hashing modulo self-references.
However, they specifically only look for the hash parts to avoid missing split references (think something like `[storePath, "dbeu123...-opencv-2.13.0"].join('/')`). I guess you could do this in this case as well; you'd have to scan for the hashes of all dependencies, record their positions, replace them with a string of zeroes (the "cleaned" version of the file), and store a set of tuples of which hashes occurred at which positions, `{(h, [p])}`. If you then store a new file in this scheme and the "cleaned" version is equal and the set is exactly the same except for the hashes, you have a match and can map both files to the same on-disk representation, just with different maps.
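A tiny sketch of that variant (hypothetical names; `dep_hashes` would be the hash parts of the dependencies' store paths):

```python
def normalize(data: bytes, dep_hashes: list) -> tuple:
    """Replace every occurrence of a known dependency hash with zeroes and
    record which hash occurred at which positions: (cleaned, {h: [p]})."""
    cleaned = bytearray(data)
    occurrences = {h: [] for h in dep_hashes}
    for h in dep_hashes:
        start = 0
        while (pos := data.find(h, start)) != -1:
            occurrences[h].append(pos)
            cleaned[pos:pos + len(h)] = b"0" * len(h)
            start = pos + len(h)
    return bytes(cleaned), occurrences
```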
With the self-reference, you know exactly which string you're looking for, but in this case we want to catch as many store paths as possible, and it's not a big problem if we miss one.
But yes, very similar.