Improvements to cache.nixos.org (help test the new config!)

Hello *,

In recent memory there have been some issues with the official binary cache, and I’ve been working on improving things. I’m happy to say I’ve got a working “mirror” of the cache that can serve as a testing ground for several improvements I’m making to our Fastly configuration, and I’d like it if users would give it a try.

TL;DR: set your default substituter to https://aseipp-nix-cache.global.ssl.fastly.net, and that’s it! It uses the upstream S3 bucket as a backend, so you do not have to change your trusted keys, and it has all channel builds!
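
For the impatient, here is roughly what that looks like in practice. Treat this as a sketch: depending on your Nix version the setting is called `binary-caches` (older) or `substituters` (newer), so use whichever your version understands. No key changes are needed, since the signing key is the same upstream one.

```
# /etc/nix/nix.conf
binary-caches = https://aseipp-nix-cache.global.ssl.fastly.net

# Or, on NixOS, the equivalent in configuration.nix:
#   nix.binaryCaches = [ "https://aseipp-nix-cache.global.ssl.fastly.net" ];
```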

Visit that page (click the link) for more info, a changelog, etc.

There’s also a wiki page describing the state of play at the moment, including known bugs, more specific issues, and possible future plans. Please take a moment to read it over: Maintainers:Fastly - NixOS Wiki

The source code (mostly a VCL configuration and some HTML files) is not yet public. Stay tuned for that. (Nothing is private; I would just like to consolidate it under the nixos organization, which will require someone creating the repo.)

Check the changelog for some details about the improvements that were made: https://aseipp-nix-cache.global.ssl.fastly.net/changelog


This is basically an internal overhaul of the cache configuration, with a much more sophisticated implementation written in VCL. In short: the cache configuration is now a program, more or less, so it should be easier to extend and customize in the long run. It enables several features we weren’t taking advantage of, including various performance enhancements. (You can find details on the wiki and the changelog.)

The goals of this first round of improvements are:

  • Improve performance, latency and cache hit ratio for all objects (large objects that are not in the cache, repeated 404s, etc).
  • Implement far more aggressive caching policies across all objects: narinfo files and manifests are cached for a month, while actual packages are cached for one week. (You can check this yourself; see the quick header check after this list.)
  • Some better logging infrastructure so we can investigate and watch cache usage, growth, etc.
  • Isolate some issues users have been having, including IPv6 errors that seem to have been affecting some of them.
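
A quick way to observe the caching behaviour is to look at the response headers. Fastly normally adds headers such as X-Cache, X-Cache-Hits, and Age unless the configuration strips them, so treat this as a sketch rather than a guarantee; `<storehash>` is a placeholder for any real store path hash:

```
# Headers for the cache metadata file (always present on a Nix binary cache):
curl -sI https://aseipp-nix-cache.global.ssl.fastly.net/nix-cache-info

# Same idea for a narinfo; repeat it and watch Age / X-Cache-Hits grow:
curl -sI https://aseipp-nix-cache.global.ssl.fastly.net/<storehash>.narinfo
```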

Most of this has been accomplished, so I’d appreciate some testing and tweaking.

If you are using IPv6+IPv4 and have been having issues, I would especially appreciate you testing this. For now, the above URL is IPv4 ONLY, but there is a separate IPv4+IPv6 dual-stack address you can test as well, with HTTP/2 support. The wiki has details: Maintainers:Fastly - NixOS Wiki
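
If you want to verify connectivity yourself, something like the following should do it. This is just a sketch: it assumes a curl built with IPv6 and HTTP/2 support, and `<dual-stack-hostname>` is a placeholder for the real name listed on the wiki:

```
# Force IPv6 and check that the dual-stack hostname is reachable:
curl -6 -sI https://<dual-stack-hostname>/nix-cache-info | head -n 1

# Adding --http2 shows whether HTTP/2 was negotiated; the first line of
# output will start with either "HTTP/2" or "HTTP/1.1":
curl -6 -sI --http2 https://<dual-stack-hostname>/nix-cache-info | head -n 1
```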

Please let me know if IPv6 is problematic for you, and we can dig into more details if so.

I would also appreciate any feedback from users who hit the cache from Asia/Australia, for example. Previously, these users would normally connect somewhere like Singapore/Hong Kong, which would then connect all the way to us-east-1. This meant a full TCP handshake across the globe, making latency quite bad, especially in the case of many small misses (e.g. for many narinfo files which do not exist, which is effectively a worst case).

This experience should now be significantly improved: the official cache.nixos.org server has already seen several improvements since last week, and the beta cache server should improve things even further (I hope).


Please be gentle with this service. It’s a personal account of mine, so please don’t set it as the substituter for your CI machines if you can avoid it. But feel free to use it directly via --option, on your laptops/workstations, etc. It should be stable, but I might occasionally break something, so please report anything odd.
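
For a one-off test without touching your configuration, something along these lines should work (again, older Nix versions call the setting `binary-caches` and newer ones `substituters`, and depending on your setup the daemon may only honour the override for trusted users):

```
# One-off build that uses the beta cache as the substituter:
nix-build '<nixpkgs>' -A hello \
  --option binary-caches https://aseipp-nix-cache.global.ssl.fastly.net
```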

If you have issues, please feel free to report them here, or contact me on IRC.


Thanks for doing this @thoughtpolice!

I have pinged a couple of people who have had issues with the cache in the past, hopefully they will reply with feedback in this thread.

Is the HTTP/2 enablement only between Fastly and AWS, or had it already been enabled towards clients?

No, I should have clarified further: first off, HTTP/2 is NOT supported for AWS S3 connections in any way (you can use CloudFront to “add” HTTP/2 to an S3 bucket, but that’s a completely different thing, similar to any other CDN with HTTP/2 support). Fastly does have HTTP/2 support, but for clients only, not for your origin servers.

Second, the improved latency that users should see on cache.nixos.org does NOT come from HTTP/2 in any way, shape or form. (In fact, the default beta server DNS name does not enable HTTP/2 at all; you must opt into the HTTP/2 + IPv6 DNS name.) Rather, it is due to a feature we have enabled called “Shielding”. Here’s the basic idea: you choose one POP very close to your origin, and all other Fastly POPs talk to that one, a kind of “two-level” caching. This has a number of cache benefits, but the latency benefit comes from something like this:

Let’s say you’re in Asia and you connect to Singapore (“SIN”). This is a TCP connection, which requires a 3-way handshake. Let’s say a single hop takes 5ms; the total time for a handshake/connection start to SIN is therefore 15ms. Next, SIN has to connect to the AWS bucket, which is located across the world near Washington. Let’s say the hop latency between SIN and Washington is 200ms. That means you’re paying a cost of (3*5ms) + (3*200ms) = 615ms in TCP handshakes before the first byte can even be sent back to the client. In the case of a 404, this is nearly a worst case: the object does not exist, and the content returned is 3 bytes long (the word “404”) plus headers, so the handshake latency completely dominates the cost of actually sending the reply (which, once the link is established, only takes 200ms + 5ms = 205ms to travel across the globe).

Shielding changes this. Now, SIN does not connect to S3 directly; it instead connects to another Fastly POP located in Washington (“IAD”), and this POP is the one that talks to S3. Why does this help? Because the link between POPs is optimized: notably, Fastly uses pre-established TCP connections between POPs. This completely changes the handshake equation.

Given the prior numbers, what happens now is this: 15ms for the TCP handshake to SIN. SIN then talks to IAD, but that connection is pre-established, so there is no handshake; the link between SIN and IAD is effectively a constant 200ms. Finally, IAD must talk to S3, but the two are so close that the 3-way handshake is trivial (let’s say 10ms). The new cost is: (3*5ms) + 200ms + (3*3.3ms) ≈ 225ms to first byte. That’s roughly a two-thirds reduction in latency (615ms down to 225ms): a huge boost.
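
Putting the two cases side by side, with the same illustrative numbers as above:

$$
\begin{aligned}
\text{unshielded:} \quad & 3 \times 5\,\text{ms} + 3 \times 200\,\text{ms} = 615\,\text{ms} \\
\text{shielded:} \quad & 3 \times 5\,\text{ms} + 200\,\text{ms} + 3 \times 3.3\,\text{ms} \approx 225\,\text{ms}
\end{aligned}
$$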


The beta service has a few other tweaks which should help latency, especially latency for cold objects. But Shielding is why users in e.g. Asia or Australia should see vastly improved latency, even with the official cache – it was deployed last week.

I hope this clears some things up.


Thanks, I see. I already knew the concept… I’ve read that some companies have PoPs that do basically nothing but reduce handshake latency, with only a few bigger data centers doing the real work.

Still… what I meant is that I believe Nix can probe for multiple paths “at once” over a single connection, which can’t be done efficiently latency-wise without HTTP/2, IIRC.

I’m afraid I’m a bit confused by what you mean. libcurl does reuse TCP connections by keeping some in a pool, but this is distinct from features like HTTP keepalive (which we do not use at all, to my knowledge). My understanding is that Nix simply asks curl to fetch multiple paths repeatedly, and this pooling is transparent. This configuration clearly improves latency in the general (HTTP/1.1) case, and especially in our case, since we make many rapid requests and it avoids a handshake for each. It is probably not as efficient as HTTP/2, of course.

But on top of that, Nix does not really do HTTP/2 multiplexing at all, to my knowledge; reading the source, it seems to ask for the HTTP/2 protocol version via an upgrade request, but I don’t think that implies actually multiplexing requests over one connection, which requires a separate option. That would probably boost latency further, I imagine, but it would require someone to test it.


In any case, the beta cache and the normal cache are almost functionally equivalent as far as all this (HTTP/2 vs HTTP/1.1) goes. Until recently, the vast majority of the latency from the cache was not from any HTTP-related shenanigans like multiplexing or pipelining; it was purely configuration related, because A) large objects had an incredibly high TTFB if they were not in cache (nothing to do with HOL blocking), which would “hang” nix-build and friends, and B) 3-way handshakes over long links introduced atrocious latency. Both of these have been fixed in the beta, and that is where all the benefit comes from.

There are probably further improvements we could make using multiplexing/pipelining, etc., and I’m betting they could be noticeable. But in the grand scheme of things they are probably still smaller than these two fixes combined, by a long shot.

If I’m being misleading or missing your point entirely, please let me know, but in general HTTP/Nix-level improvements are not the focus of this first round of fixes.

I only had a vague memory of HTTP/2 support being added, confirmed by a quick search: Enable HTTP/2 support · NixOS/nix@90ad02b · GitHub – in particular, the commit message mentions not only connection reuse (which works with HTTP/1.1 as well) but also parallelism.

I’d assume most of its advantage would be for the initial narinfo requests from clients that are further away, but it wouldn’t be wise to spend much time optimizing based on such simple guesses anyway.

Thanks for the improvements and explanation.

I am in China, and I’m wondering if that is the cause of the following problem:

This is a post from 2019 that got necrobumped… I also only realised after reading all the docs :sweat_smile:
