The NixOS Foundation's Call to Action: S3 Costs Require Community Support

I don’t see any GC root management there, though? Or is the idea that you have the resulting directory of symlinks per generation and manually remove? (My complaint was specifically about GC; I do have some shell scripts that do almost the right thing for my setup, but Nix is not helping here)

1 Like

I could imagine a version of Marsnix that is implemented in pure nix and creates a gcroot in a single outpath. Just a small tweak of the existing script. All possible. I’ve just tested that it works, although my script currently assumes passing a nixpkgs path, I haven’t created an argument for arbitrary drvs.

1 Like

Is this like Elon Musk’s version of the Twitter blue tick?

1 Like

Hah, I see your point.

On the other hand - you can’t deny that it raised money! :slight_smile:

1 Like

TL;DR: please don’t disregard any potential solution just because it’s not instantly a perfect fit; I’ll use the BitTorrent argument as my example.

Long version:
I dislike the “BitTorrent does not share the guaranteed long-term storage” argument, since it is wrong as long as the seeding node(s) keep(s) the files available. Since the initial seeders are controlled by the Nix Foundation, it’s their decision. This is the same as the decision to GC or not on S3/…

Pros:

  • RE Storage & CPU: Modern machines can store and serve a lot of torrents at the same time for little money. The underlying filesystem can potentially compress/deduplicate raw data transparently
  • RE Traffic: Outgoing traffic also doesn’t have to explode. Applying aggressive bandwidth limits for the individual torrent on the seeding node after a few downloads should reduce traffic by a huge amount.
  • Heritage: files stay available (potentially faster than today) for currently popular packages, and merely slower for unpopular packages. And any independent/private mirror helps out with almost zero need for coordination.
  • Redundancy: Rent a similar machine and storage on the other side of the world.
  • Can keep current distribution model in parallel, switch gradually or only parts of the data.

Cons:

  • Performance follows mainstream usage, so odd packages will suffer → any company/sub-community can set up a custom BitTorrent mirror for important packages independent of upstream.
  • Would potentially give up some of the savings the currently cheap CDN provides
  • Sounds illegal to the uninitiated and some ISPs might be jerks.
10 Likes

There’s definitely valid arguments here, but I think long-term it would be better to have something like IPFS seeding combined with a smarter nix-store that works with deduplicated blocks instead of with nar files.

I’m confident that this would result in a severalfold reduction in stored size and bandwidth, and hosting doesn’t even need to happen in the cloud (just as with BitTorrent).

(Note that, due to the store path hashes, lots of duplicate blocks aren’t detected by sliding-window dedupers. So the store paths need to be removed entirely from the files (so that their variable length doesn’t matter) before deduplicating, and restored afterwards. See here for more.)
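The strip-and-restore idea can be sketched in a few lines (a hypothetical illustration, not any existing tool: the regex, the placeholder scheme, and the function names are my own, and for simplicity it overwrites the hash in place rather than removing the whole path):

```python
import re

# Nix store hashes are 32 characters from Nix's base-32 alphabet
# (digits plus lowercase letters except e, o, t, u).
STORE_HASH = re.compile(rb"/nix/store/([0-9a-df-np-sv-z]{32})-")

def strip_store_hashes(data: bytes):
    """Replace every embedded store hash with a fixed placeholder and
    return the rewritten data plus the original hashes, in order, so
    the transformation can be undone after transfer."""
    hashes = []
    def repl(match):
        hashes.append(match.group(1))
        return b"/nix/store/" + b"0" * 32 + b"-"
    return STORE_HASH.sub(repl, data), hashes

def restore_store_hashes(data: bytes, hashes) -> bytes:
    """Invert strip_store_hashes by substituting the saved hashes back."""
    it = iter(hashes)
    return STORE_HASH.sub(lambda m: b"/nix/store/" + next(it) + b"-", data)
```

After stripping, two builds that differ only in their self-referencing store paths become byte-identical, so a chunker or compressor can finally deduplicate them.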

4 Likes

I really wish I had more to offer than I do. I don’t know what a solution is going to look like. However, I know what is not a solution, and it is using too much energy in these discussions.

BitTorrent is not a solution, as it simply does not address the problem in any meaningful way. BitTorrent addresses distribution, not storage. Look at @balazs.lengyel’s answer just above me. He suggests that Nix maintain a seeder. The only way to do that is to keep a copy of the seed. Thus you are not addressing the storage issue; you are only addressing a distribution issue.

I think where people went sideways suggesting BitTorrent may have gone like this:

  • Come up with a buzzword that sounds good - decentralization
  • What is a good decentralized solution? - BitTorrent
  • Suggest BitTorrent as a solution

The issue with that is of course that if you have to maintain a seed to ensure the item remains available, you now have to store both the stuff to be seeded and the torrent metadata itself. In other words, you actually increase your storage requirement rather than reduce it.

While BitTorrent is a great solution to the problem of distribution, that is not the issue at hand. Storage is the problem that really needs a solution.

5 Likes

How does this address storage? A cache is a temporary copy of something that must already exist. It does not in any way relieve the need for Nix to store the original. You are providing a distribution solution to a storage issue.

1 Like

The way I see it, the main problem is actually three problems.

  1. Storage.
  2. Bandwidth.
  3. Growth.

Currently, cache.nixos.org has to handle all of this by itself and is charged by AWS for it.

Regarding bandwidth:
Using a peer-to-peer system allows for the distribution of bandwidth across multiple systems. This directly reduces the bandwidth load on cache.nixos.org and therefore reduces costs.

Regarding growth:
As the Nix community grows, so does the demand for the cache. This is a direct relationship and, left without a solution, could turn our success into a failure. Ballooning costs are not what anyone wants to see. But with a peer-to-peer system, as the Nix community grows, so does the cache, as more and more people opt in to share their resources.

Finally, to storage:
The cache is not a life-and-death storage system for the Nix community. If a Nix system needs a package which is unavailable at cache.nixos.org, it can build it itself; that is how every cache entry comes into existence in the first place. However, it is generally faster to download a pre-built package from the cache than to compile everything from source.
The cache is also very dynamic, and which packages are in demand by the community is subject to change. The more a cache entry is in demand in a peer-to-peer system, the more copies will be kept across nodes/mirrors, and the more nodes/mirrors holding a package, the more bandwidth is available for that package.
And vice versa: as a package becomes obsolete, fewer and fewer nodes/mirrors will hold copies of it until no one holds any. This is called organic garbage collection. It directly reflects the community’s needs, instead of the Nix Foundation guessing which packages should be garbage-collected and which kept.
Also, a package can return from the dead by nodes/mirrors rebuilding it and sharing again. This is also dynamically and organically done according to the community’s needs.

EDIT: I forgot to mention that all cache.nixos.org needs to maintain is the hashes of builds, to verify that a build out in the peer-to-peer system is valid. A hash does not take up much storage. So it is not a pure peer-to-peer system, but rather a hybrid validator/peer-to-peer system. This will require the Foundation to choose a solution officially. There are already a few peer-to-peer Nix binary caches built, but for this to work, the Foundation must ultimately make the executive decision. We don’t want a fragmented solution. (The Foundation might be able to create an OFFICIAL API standard that different peer-to-peer solutions can implement.)
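A minimal sketch of that validator idea (the functions, index layout, and paths here are all hypothetical, not a real cache.nixos.org API): the trusted side publishes only an expected hash per store path, and clients check whatever the swarm delivers against it.

```python
import hashlib

def publish(store_path: str, nar_bytes: bytes, index: dict) -> None:
    """Validator side: record the expected hash of a build. The index
    costs on the order of 100 bytes per path, regardless of NAR size."""
    index[store_path] = "sha256:" + hashlib.sha256(nar_bytes).hexdigest()

def verify_peer_nar(store_path: str, nar_bytes: bytes, index: dict) -> bool:
    """Client side: accept a NAR fetched from an untrusted peer only if
    its sha256 matches the hash published by the trusted validator."""
    expected = index.get(store_path)
    if expected is None:
        return False  # unknown path: fall back to building locally
    return "sha256:" + hashlib.sha256(nar_bytes).hexdigest() == expected
```

The point of the sketch is the asymmetry: the validator stores a tiny fixed-size record per build, while the (large) NAR bytes live entirely in the peer-to-peer system.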

Finally, I believe a peer-to-peer system is a natural solution for a distributed cache system. If we were talking about a much more sensitive and critical part of the infrastructure, I would tend to agree with you, but in this case, I am fully convinced of my point and have yet to see a genuine counter-point that brings me concern.

Yes, many people think and use decentralisation as a buzzword without knowing the tradeoffs. But here, the tradeoffs are all in the Foundation’s and the Community’s favour.

I prefer that the Nix Foundation spend its donations on talented people to help manage the Nix ecosystem instead of raw compute resources. A far better use of actual money donations if you ask me. I am sure we can all agree leadership/management is what Nix really needs to move from niche to mainstream. (Not a bloated cache system)

I appreciate the criticism of my suggestion, which allowed me to spell out my thoughts more clearly. So thank you, @T313C0mun1s7, for your comments. :heart:

1 Like

I think an OFFICIAL working solution, with direct endorsement from the Foundation, is urgent. We don’t want a fragmented peer-to-peer system, as that will be troublesome for the community.
Optimization can be looked at later. (unless the implementation is easy to do from the start.)
But I am definitely on-board with the optimization of the cache system!!!
Excellent points.
But for all those who are experimenting with distributed cache systems, I thank you.
You are unofficially named the Nix Scientists! :grin:

I just realized there is already a working group on the task.

If you have more questions regarding a peer-to-peer solution, I suggest we continue the discussion there.

2 Likes

This perspective on storage neglects two important facts:

  1. Most builds’ hashes are computed from the hashes of their inputs. Only content-addressed and fixed-output derivations will be safe to trust from other builders.
  2. Upstream tarballs can disappear, making rebuilding impossible. Recent example: Remove roy mirror by endgame · Pull Request #237693 · NixOS/nixpkgs · GitHub
2 Likes

To address the non-content-addressed issue, we need a trust db, which maps input hashes to output hashes. It doesn’t matter whether the output hash is itself a CA hash or not; what matters is only that you trust the mapping.

Right now this trust is implicit in the nix binary cache. By moving it to a separate db, each user can have different trust providers, and any binary serving solution can be used.

As an added bonus, you can insert “fake” mappings, like mapping a native compile build input to the output hash for a cross-compiled build. I imagine that would be nice for embedded platforms.
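A toy sketch of such a trust db with pluggable providers (all names and the data layout are made up for illustration; a real system would use cryptographically signed mappings):

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class TrustProvider:
    """One provider's view of the input-hash -> output-hash mapping."""
    name: str
    mappings: dict = field(default_factory=dict)

def resolve_output(drv_hash: str, providers: list) -> Optional[str]:
    """Return the output hash vouched for by the first provider, in the
    user's order of preference, that knows this derivation."""
    for provider in providers:
        if drv_hash in provider.mappings:
            return provider.mappings[drv_hash]
    return None

official = TrustProvider("nixos", {"drv-aaa": "out-111"})
# A "fake" mapping: point a natively-built drv at a cross-compiled output.
embedded = TrustProvider("my-cross-farm", {"drv-aaa": "out-cross-222"})
```

Because each user orders the provider list themselves, the same derivation can resolve to the official output for one user and to a cross-compiled substitute for another, which is exactly the embedded-platform trick described above.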

9 Likes

According to @matthewcroughan we have a separate URL for tarballs, and the storage is not that big.

It is sad to see a tarball disappear, but this is life. Sometimes stuff is lost.
We just need to ask ourselves one question: Is cache.nixos.org a cache or an archive?
If it is an archive, then my peer-to-peer suggestion doesn’t make sense at all.
If it is a cache, a temporary file storage, then my suggestion fits perfectly.

But to have it be both a cache and an archive is just scope creep and makes any attempt at optimization impossible. Because a cache requires high bandwidth and dynamic temporary storage, but an archive requires high permanent storage and low bandwidth. We must be clear that cache.nixos.org is not for permanent tarball storage. (Even one of the official solutions proposed was to do some garbage collection on cache.nixos.org. That means data was going to be discarded.)

If a tarball disappears but the source code remains, a new tarball can be created, verified by Hydra, and the dead package rebuilt. (With new hashes, but we will always have packages whose hashes need updating…)

2 Likes

Yes, I am a fan of hashing both inputs and outputs. I think a centralized trust db is just what we need. An API validator for all publicly shared builds. So that the trust db can be queried by any Nix system wanting to confirm a build’s validity.
A hash repository shouldn’t take too much storage/bandwidth.

1 Like

Using AWS at the beginning was a mistake. There are much cheaper options out there.

1 Like

AWS at the beginning was literally free (for us). It’s as cheap as mathematically possible.

5 Likes

I read a few people suggesting IPFS; if there is any path for it, Scaleway just opened an IPFS service: /en/ipfs-pinning/

1 Like

Hey Adam,

If you require an official working solution instead of a P2P system to reduce bandwidth cost, then Protocol Labs can help you. I am working on the Decentralized CDN, called Saturn, which has been built by Protocol Labs. We can help you reduce your retrieval cost by around 60% with a very powerful community-run CDN (>2500 nodes with 10 Gbit/s connections globally). As long as your origin files are content-addressed and have a CID, they are automatically available via Saturn if announced on IPNI or DHT using IPFS.
As it seems that your bandwidth costs are much higher than the storage costs, a first step could be to make the origin data content-addressable (you could even keep it on AWS for now) and then retrievable via Saturn.

4 Likes

Absolutely. If you use IPFS and store a build in a CAR file, then all the blocks will be deduplicated and can be automatically served by the Saturn CDN, which on cache misses falls through to any IPFS or Filecoin servers.

2 Likes