I’m delighted to announce that users all around the world will now have a much better experience when fetching from the NixOS cache.
The NixOS cache is now served from all AWS CloudFront edge locations, significantly reducing latency for users in Asia, Africa, South America, and Oceania.
I’ve just spun up a new test environment in AWS using nixops and the process has gone from:
deploy
wait…
watch the downloads start…
go away and get a coffee or two
come back to find it ready
to something quick enough to stay and watch, or even to demonstrate to others.
The first test I did was certainly faster for common packages (cache downloads zoomed past), though some of our other parts were slower. Unfortunately I didn’t time it.
The second test was noticeably faster than the first and took 3 min 40 s (not very scientific: just using time, and that includes building all machine configurations and copying our local closures up, ~200 MB). I guess this is your hot-cache scenario from your blog post.
We had previously been using deployment.hasFastConnection, as it was quicker to upload the closures than to have each machine download them from the cache.
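For anyone curious, a minimal sketch of the workflow; the deployment name "test" is a placeholder, not our actual setup:

# Timing a deploy as described above; "test" is a placeholder
# deployment name. Setting deployment.hasFastConnection = true; on a
# machine in the network expression makes nixops copy closures to it
# over SSH instead of letting it substitute them from cache.nixos.org.
time nixops deploy -d test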
The cache just got an order of magnitude slower for one of my locations in CZ. I used to have a ping of 5–10 ms; now it’s >100 ms and limited to 10 Mbps, presumably due to being routed all over the world. I’m not their customer myself, so I can’t contact them directly?
Apparently their DNS geo-steering is broken for my IPs. I get much faster responses if I ping the IPs resolved by public services around here. My DNS resolver queries Amazon’s authoritative servers from the very same IPs, so this isn’t about having to set up EDNS Client Subnet on my side or anything.
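A rough sketch of that comparison, for anyone who wants to reproduce it; 192.168.1.1 stands in for my local resolver and 1.1.1.1 for "a public service around here":

# Ask each resolver for cache.nixos.org, then ping whatever A records
# come back (the grep drops the CNAME line from dig's output).
for resolver in 192.168.1.1 1.1.1.1; do
  echo "== $resolver =="
  for ip in $(dig +short A cache.nixos.org @"$resolver" | grep -E '^[0-9.]+$'); do
    ping -c 3 "$ip"
  done
done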
traceroute to cache.nixos.org (2600:9000:2018:400:10:dfa4:b0c0:93a1), 30 hops max, 80 byte packets
3 2001:718:0:600:0:135:136:10 (2001:718:0:600:0:135:136:10) 7.058 ms 7.050 ms 7.146 ms
4 20ge1-3.core1.prg1.he.net (2001:7f8:14::6e:1) 4.650 ms 4.693 ms 4.687 ms
5 100ge16-1.core1.fra1.he.net (2001:470:0:213::1) 12.306 ms 12.194 ms 12.235 ms
6 100ge6-1.core1.ams1.he.net (2001:470:0:2d4::1) 36.517 ms 36.362 ms 36.061 ms
7 100ge8-1.core1.man1.he.net (2001:470:0:431::2) 26.037 ms 26.187 ms 26.387 ms
8 100ge8-1.core1.dub1.he.net (2001:470:0:410::2) 34.516 ms 34.454 ms 34.495 ms
9 100ge5-2.core1.nyc5.he.net (2001:470:0:440::1) 89.987 ms 89.981 ms 90.047 ms
10 100ge4-2.core1.nyc4.he.net (2001:470:0:20a::1) 91.093 ms 91.070 ms 91.114 ms
11 2001:428:601:e::1 (2001:428:601:e::1) 89.845 ms 89.862 ms 89.875 ms
12 2001:428::205:171:202:160 (2001:428::205:171:202:160) 107.806 ms 107.919 ms 107.914 ms
13 2001:428:1802:10:0:7:0:2 (2001:428:1802:10:0:7:0:2) 122.499 ms 122.580 ms 122.527 ms
14 * * *
15 2620:107:4000:ff::57 (2620:107:4000:ff::57) 110.089 ms 110.083 ms 110.023 ms
16 2620:107:4000:ff::2b (2620:107:4000:ff::2b) 111.451 ms 111.423 ms 111.375 ms
17 2600:9000:eee::63 (2600:9000:eee::63) 109.917 ms 109.927 ms 109.910 ms
18 2600:9000:fff:ff00::300 (2600:9000:fff:ff00::300) 112.169 ms 113.407 ms 113.388 ms
19 2600:9000:fff:ff01:0:50:102:11 (2600:9000:fff:ff01:0:50:102:11) 113.355 ms 2600:9000:fff:ff01:0:50:105:11 (2600:9000:fff:ff01:0:50:105:11) 112.989 ms 2600:9000:fff:ff01:0:50:103:11 (2600:9000:fff:ff01:0:50:103:11) 113.494 ms
20 2600:9000:fff:ff00::401 (2600:9000:fff:ff00::401) 111.912 ms 111.882 ms 111.867 ms
21 * * *
traceroute to cache.nixos.org (13.33.165.13), 30 hops max, 60 byte packets
3 195.113.235.99 (195.113.235.99) 8.319 ms 8.280 ms 8.308 ms
4 ae0-998.praga2.pra.seabone.net (195.22.215.228) 4.546 ms 4.622 ms 4.551 ms
5 racc.franco33.fra.seabone.net (195.22.211.205) 16.283 ms 16.248 ms 38.857 ms
6 ae-5.r04.frnkge08.de.bb.gin.ntt.net (129.250.66.17) 23.630 ms 20.644 ms 16.376 ms
7 ae-24.r24.frnkge08.de.bb.gin.ntt.net (129.250.3.217) 24.996 ms 20.610 ms 23.812 ms
8 ae-4.r25.nycmny01.us.bb.gin.ntt.net (129.250.2.4) 100.957 ms 100.699 ms 102.916 ms
9 ae-1.r20.chcgil09.us.bb.gin.ntt.net (129.250.2.166) 146.411 ms 146.948 ms 152.818 ms
10 ae-6.r07.chcgil09.us.bb.gin.ntt.net (129.250.6.61) 150.013 ms 150.773 ms ae-12.r08.chcgil09.us.bb.gin.ntt.net (129.250.2.190) 151.627 ms
11 ae-1.a01.chcgil09.us.bb.gin.ntt.net (129.250.5.94) 143.799 ms ae-0.a01.chcgil09.us.bb.gin.ntt.net (129.250.4.218) 152.083 ms ae-31.a00.chcgil09.us.bb.gin.ntt.net (129.250.3.159) 161.577 ms
12 ae-1.amazon.chcgil09.us.bb.gin.ntt.net (129.250.201.94) 115.938 ms 119.696 ms 119.712 ms
13 * * *
14 * * *
15 * * *
16 * * *
17 * * *
18 * * *
19 server-13-33-165-13.ord50.r.cloudfront.net (13.33.165.13) 121.669 ms 115.558 ms 117.303 ms
That’s a different network and a different ISP; in that network my ping to cache.nixos.org is <10 ms. As for the routing issue to/from you: I contacted the ISP on Friday morning, as it still seemed broken the same way (no response so far).
EDIT: fortunately my hydra.nixos.org slaves are in this network where cache.nixos.org is fast (only {t4a,t4b}.cunat.cz ATM, hopefully also t2m soon).
So they apparently special-cased my router’s IPs (the ones I sent the trace from), but my actual NixOS machine seems unaffected by the change :-/ EDIT: http://ix.io/1iKy
What about fixing larger ranges? For IPv4, whois says the ISP has the whole 195.113.0.0/16 block (it should all be in CZ). For IPv6, whois says 2001:718::/32, but please fix at least the /64! I would have thought Amazon’s technicians could figure out on their own that a /64 is co-located on the same continent, but apparently not.
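Those ranges are straight from whois, e.g. using addresses from the traceroutes above (the grepped fields are the usual RIPE ones):

# Look up the allocations the addresses above belong to.
whois 195.113.235.99 | grep -iE 'inetnum|netname'
whois 2001:718:0:600:0:135:136:10 | grep -iE 'inet6num|netname'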
EDIT2: I was reminded that there’s a DNS catch: the authoritative nameservers for the fastly.net zone have no AAAA records, so you still need to use a resolver with IPv4 access (or forward to one). Fortunately, that can be worked around by using one of the good public resolvers available nowadays.
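That’s easy to check with dig:

# List the fastly.net nameservers, then ask for an AAAA record for
# each; if the NS set is IPv4-only, the inner queries print nothing.
for ns in $(dig +short NS fastly.net); do
  echo "$ns:"
  dig +short AAAA "$ns"
done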
Not a problem (for me). I just noticed this when re-checking my routing problem (above), which had regressed again during the past several days, and I was surprised that it’s now good again, even though it’s not even Amazon anymore.