The NixOS Foundation's Call to Action: S3 Costs Require Community Support

Hi Jos,

Well, I saw an article on it; I’ll have to search a bit to find it, though.
In the meantime, this might stir some interest:

https://lafrenchtech.com/en/

1 Like

Why not just use Backblaze B2 for object storage? They have an S3-compatible API and their pricing is insanely good: Cloud Storage Pricing Comparison: Amazon S3 vs Azure vs B2 (backblaze.com).

1 Like

For the amount of storage Nix uses, Wasabi S3 would be better suited, as their pricing works out to $72 per TB per year compared to Backblaze’s $180 per TB per year. Wasabi doesn’t charge for egress or API usage.

So, with the amount of storage Nix currently has (425 TiB):
Wasabi: $30,549/yr
Backblaze: $76,500/yr

That difference adds up significantly. This reply isn’t meant to bash anything, just to show the advantage Wasabi would have as their S3 storage, and for anyone else needing S3 storage.
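For anyone who wants to sanity-check those totals, here is a minimal back-of-the-envelope sketch using the per-TB-per-year figures above (it treats TiB as TB for simplicity, as the totals above do, so it lands within rounding distance rather than matching them exactly; these are not official vendor quotes):

```python
# Back-of-the-envelope annual cost at 425 TiB, using the per-TB-per-year
# figures quoted above (TiB treated as TB for simplicity).
STORED_TB = 425

WASABI_PER_TB_YEAR = 72      # ~$6/TB/month, egress and API calls included
BACKBLAZE_PER_TB_YEAR = 180  # $5/TB/month storage plus assumed monthly egress

print(f"Wasabi:    ${STORED_TB * WASABI_PER_TB_YEAR:,}/yr")     # $30,600/yr
print(f"Backblaze: ${STORED_TB * BACKBLAZE_PER_TB_YEAR:,}/yr")  # $76,500/yr
```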

I thought that Backblaze B2 cost $5 per TB? At least, that’s what I got from their pricing page. So if you had 425 TiB that’d be $2125/yr? Did I misread the pricing page?

You are correct about the $5 per TB per month, but you calculated that price for a single month; the figures I quoted are per year. Storing and downloading 1 TB per month on Wasabi would be about $7, while on Backblaze it would be about $15 per month, or $180 per TB per year. The numbers obviously depend on how much data is uploaded and downloaded each month, which won’t be one-for-one, but it’s a good example regardless: touching the data costs money on Backblaze, whereas on Wasabi it does not. They could literally export all 425 TiB in one month and it wouldn’t cost them a penny. Ingress prices always look great, until you add egress and API call pricing; then it adds up.
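To make the arithmetic behind the $15/month (≈ $180/year) per-TB figure explicit, here is a rough sketch. The rates are assumptions based on public pricing at the time (roughly $5/TB/month B2 storage plus ~$0.01/GB egress, and ~$6/TB/month Wasabi with free egress), and it assumes each stored TB is also downloaded once per month, which is what makes egress dominate:

```python
# Rough per-TB monthly comparison. Assumed rates, not official quotes:
#   Backblaze B2: ~$5/TB/month storage + ~$0.01/GB download (egress)
#   Wasabi:       ~$6/TB/month storage, no egress or API charges
TB_STORED = 1
TB_DOWNLOADED_PER_MONTH = 1  # assumption: each stored TB is read back once a month

b2_monthly = TB_STORED * 5 + TB_DOWNLOADED_PER_MONTH * 1000 * 0.01  # $5 storage + $10 egress
wasabi_monthly = TB_STORED * 6                                      # storage only

print(f"Backblaze: ~${b2_monthly:.0f}/TB/month -> ~${b2_monthly * 12:.0f}/TB/year")        # ~$15 -> ~$180
print(f"Wasabi:    ~${wasabi_monthly:.0f}/TB/month -> ~${wasabi_monthly * 12:.0f}/TB/year")  # ~$6 -> ~$72
```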



There’s an update at NixOS S3 Long Term Resolution - Phase 1

3 Likes

If we’re serious about the reproducibility goal, the archiving part is definitely important for the cases where the upstream source disappears or is volatile. Packages can depend on each other, so not being able to rebuild one package exactly can have a much larger knock-on effect.

However, I think it makes sense to distinguish between the archiving and caching goals, since they might be optimized differently.

For the archiving part, it may also be worth trying to work together with one of the existing archiving projects like https://archive.org.

5 Likes

Or better, Software Heritage, now that they’re expanding their artifact archival. Figuring out the deduplication etc. sounds like it would be a research project of its own…

4 Likes

@domenkozar consider adding an EDIT linking to the Phase 1 post at the top of this thread.

This issue would be mitigated by the recent announcement of AWS waiving egress fees when moving out of AWS.

13 Likes

This is a game changer. This changes the entire equation in my opinion.

4 Likes

Note that the free egress is limited to 100 GB/month; if we need more (…very likely…), we have to make a request through customer service first:

If you need more than 100 gigabytes of data transfer out per month while transitioning, you can contact AWS Support to ask for free DTO rates for the additional data. It’s necessary to go through support because you make hundreds of millions of data transfers each day, and we generally do not know if the data transferred out to the internet is a normal part of your business or a one-time transfer as part of a switch to another cloud provider or on premises.
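To put that 100 GB/month default into perspective, here is a rough sketch; the ~$0.09/GB figure is an assumption about standard S3 internet egress pricing, not something stated in the announcement:

```python
# Rough sketch of what the 100 GB/month free-egress default means at 425 TiB.
# Assumed rate (not an official quote): ~$0.09/GB for standard S3 internet egress.
TOTAL_TIB = 425
total_gb = TOTAL_TIB * 1024          # ~435,200 (GiB treated as GB for a rough estimate)

FREE_GB_PER_MONTH = 100
months_at_free_tier = total_gb / FREE_GB_PER_MONTH  # ~4,352 months
egress_without_waiver = total_gb * 0.09             # ~$39,000

print(f"At 100 GB/month free: ~{months_at_free_tier:,.0f} months to move everything")
print(f"At ~$0.09/GB:         ~${egress_without_waiver:,.0f} in egress fees without the waiver")
```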

2 Likes