The Nix cache is now much faster for users outside of North America and Europe!


I’m delighted to be able to announce that users all around the world will now have a great experience when fetching from the NixOS cache.

The NixOS cache is now being served from all AWS CloudFront edge locations, significantly reducing latency for users in Asia, Africa, South America, and Oceania.

There are some more details here: but the short version is that a 40-minute download now takes less than 30 seconds!


I must express a big thanks for this.

I’ve just spun up a new test environment in AWS using nixops and the process has gone from:

- watch the downloads start…
- go away and get a coffee or two
- come back to find it ready

to something fast enough to sit and watch, or to demonstrate to others.

The first test I did was certainly faster for common packages (cache downloads zoomed past), but some of our other parts were slower. Unfortunately I didn’t time it.

The second test was noticeably faster than the first and took 3 min 40 s (not very scientific: just using `time`, and including building all machine configurations and copying our local closures up, ~200 MB). I guess this is the hot-cache scenario from your blog post.

We had previously been using deployment.hasFastConnection as it was quicker to upload the closures than to download them from the cache.
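For reference, `deployment.hasFastConnection` is set per machine in the NixOps network expression; a minimal sketch (the machine name `webserver` is made up):

```nix
{
  webserver = { config, pkgs, ... }: {
    # Copy closures directly from the deploying machine instead of
    # having the target substitute them from the binary cache.
    deployment.hasFastConnection = true;
  };
}
```

With the cache this fast, leaving the option off and letting targets substitute from cache.nixos.org can now be the quicker path.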

Huge improvement and thank you very much.


The cache has become an order of magnitude slower for one of my locations in CZ. I used to have a ping of 5–10 ms; now it’s >100 ms and limited to 10 Mbps, due to being routed all over the world :sob: I suppose I’m not their customer myself, so I can’t contact them directly?

Apparently their DNS geo-steering is broken for my IPs. I get much faster responses if I ping the IPs resolved by public DNS services around here. My DNS resolver queries Amazon’s authoritative servers from the very same IPs, so this isn’t about having to set up EDNS Client Subnet on my side or anything like that.

$ kdig AAAA +short
$ kdig +short
traceroute to (2600:9000:2018:400:10:dfa4:b0c0:93a1), 30 hops max, 80 byte packets
 3  2001:718:0:600:0:135:136:10 (2001:718:0:600:0:135:136:10)  7.058 ms  7.050 ms  7.146 ms
 4 (2001:7f8:14::6e:1)  4.650 ms  4.693 ms  4.687 ms
 5 (2001:470:0:213::1)  12.306 ms  12.194 ms  12.235 ms
 6 (2001:470:0:2d4::1)  36.517 ms  36.362 ms  36.061 ms
 7 (2001:470:0:431::2)  26.037 ms  26.187 ms  26.387 ms
 8 (2001:470:0:410::2)  34.516 ms  34.454 ms  34.495 ms
 9 (2001:470:0:440::1)  89.987 ms  89.981 ms  90.047 ms
10 (2001:470:0:20a::1)  91.093 ms  91.070 ms  91.114 ms
11  2001:428:601:e::1 (2001:428:601:e::1)  89.845 ms  89.862 ms  89.875 ms
12  2001:428::205:171:202:160 (2001:428::205:171:202:160)  107.806 ms  107.919 ms  107.914 ms
13  2001:428:1802:10:0:7:0:2 (2001:428:1802:10:0:7:0:2)  122.499 ms  122.580 ms  122.527 ms
14  * * *
15  2620:107:4000:ff::57 (2620:107:4000:ff::57)  110.089 ms  110.083 ms  110.023 ms
16  2620:107:4000:ff::2b (2620:107:4000:ff::2b)  111.451 ms  111.423 ms  111.375 ms
17  2600:9000:eee::63 (2600:9000:eee::63)  109.917 ms  109.927 ms  109.910 ms
18  2600:9000:fff:ff00::300 (2600:9000:fff:ff00::300)  112.169 ms  113.407 ms  113.388 ms
19  2600:9000:fff:ff01:0:50:102:11 (2600:9000:fff:ff01:0:50:102:11)  113.355 ms 2600:9000:fff:ff01:0:50:105:11 (2600:9000:fff:ff01:0:50:105:11)  112.989 ms 2600:9000:fff:ff01:0:50:103:11 (2600:9000:fff:ff01:0:50:103:11)  113.494 ms
20  2600:9000:fff:ff00::401 (2600:9000:fff:ff00::401)  111.912 ms  111.882 ms  111.867 ms
21  * * *
traceroute to (, 30 hops max, 60 byte packets
 3 (  8.319 ms  8.280 ms  8.308 ms
 4 (  4.546 ms  4.622 ms  4.551 ms
 5 (  16.283 ms  16.248 ms  38.857 ms
 6 (  23.630 ms  20.644 ms  16.376 ms
 7 (  24.996 ms  20.610 ms  23.812 ms
 8 (  100.957 ms  100.699 ms  102.916 ms
 9 (  146.411 ms  146.948 ms  152.818 ms
10 (  150.013 ms  150.773 ms (  151.627 ms
11 (  143.799 ms (  152.083 ms (  161.577 ms
12 (  115.938 ms  119.696 ms  119.712 ms
13  * * *
14  * * *
15  * * *
16  * * *
17  * * *
18  * * *
19 (  121.669 ms  115.558 ms  117.303 ms


@vcunat: can you try to use directly? According to it’s the underlying S3 bucket for the binary cache.


Something in general seems very weird about routing to your network. For example, it doesn’t work from here.

Maybe you should contact your ISP?


Nope. Ping ~125 ms. No AAAA records, and the (single) A record has a TTL of 5 s (!) – nauseating for me as an implementer of a DNS caching server.
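To put that 5 s TTL in perspective, here is a rough upper bound on how often a caching resolver must re-query the authoritative servers for one record, assuming constant client demand (a back-of-the-envelope sketch, not a measurement):

```python
SECONDS_PER_DAY = 86_400

def max_upstream_queries_per_day(ttl_seconds: int) -> int:
    """Upper bound on authoritative queries a caching resolver makes
    per day for a single record under constant client demand: the
    cached answer expires every ttl_seconds and must be refetched."""
    return SECONDS_PER_DAY // ttl_seconds

print(max_upstream_queries_per_day(5))     # 17280 refetches/day at TTL 5 s
print(max_upstream_queries_per_day(3600))  # 24 refetches/day at TTL 1 h
```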


That’s a different network and a different ISP. As for the routing issue to/from me: I contacted the ISP on Friday morning, as it still seemed broken in the same way (no response so far). In that network my ping to is < 10ms.

EDIT: fortunately my build slaves are in this network, where is fast (only {t4a,t4b} ATM, hopefully also t2m soon).


Run my CloudFront script referenced from the and send the results to zimbatm, who will pass them to Amine on IRC.


Oh, that plain URL serves content now :heart: and it’s even useful. Sent to aminechikhaoui in #nixos-dev.


So they apparently special-cased my router’s IPs (which I sent the trace from), but my actual machine with NixOS seems unaffected by the change :-/ EDIT:

What about fixing larger ranges? For IPv4, whois says the ISP has the whole block (they should all be in CZ). For IPv6, whois says 2001:718::/32 – but please fix at least the /64! I would have thought Amazon’s technicians would figure out by themselves that a /64 is co-located on the same continent, but apparently not.
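The prefix reasoning above can be checked mechanically; a small sketch with Python’s `ipaddress` module (the hop address is taken from the traceroute earlier in the thread, the /32 from the whois discussion; the specific /64 is my assumption of how the block subdivides):

```python
import ipaddress

# The ISP's announced IPv6 allocation, per whois.
isp_block = ipaddress.ip_network("2001:718::/32")

# A hop from the traceroute above, inside the ISP's network.
hop = ipaddress.ip_address("2001:718:0:600:0:135:136:10")
print(hop in isp_block)  # True: the hop falls inside the /32

# Addresses sharing a /64 are on the same subnet, hence effectively
# the same physical location for geo-steering purposes.
subnet = ipaddress.ip_network("2001:718:0:600::/64")
print(ipaddress.ip_address("2001:718:0:600::1") in subnet)  # True
```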


@grahamc: this is changing now? I see: ->

Is the regression of not having IPv6 temporary? I’d think they do support it. We reportedly have some IPv6-only users.

EDIT: proof

$ kdig AAAA +short

EDIT2: I was reminded that there’s a DNS catch – the authoritative nameservers for the zone have no AAAA records, so you still need to use a resolver that has IPv4 access (or forward to one that does). Fortunately, that can be worked around by using one of the good public resolvers available nowadays.


I believe we’re working on this now, I’m thinking we’ll be able to fix it pretty soon. Sorry!


Not a problem (for myself). I just noticed this while re-checking my routing problem (above), which regressed again during the past several days, and I was surprised that it’s now good again – without even hearing back from Amazon.