IPFS doesn’t require using the DHT: you can always connect to a peer directly, and a node will ask its existing peers for content regardless of whom the main DHT says it should ask.
For the SWH ↔ IPFS bridge, for example, we completely ignored the DHT to start: there is just a dedicated bridge node with a well-known address, and one can connect to that.
(I believe one can also get peers from peers, which means that a bandwidth-saturated bridge node could in principle prioritize letting all its wannabe peers know who each other are, so they can act together as a CDN.)
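That peer-exchange idea can be sketched abstractly. This is a toy model in plain Python (the class and method names are my own invention, not libp2p's API) of a bandwidth-saturated bridge introducing its would-be peers to one another so they can fetch from each other:

```python
# Toy model of a bridge node introducing its pending peers to each other,
# so they can act as a CDN among themselves instead of saturating the bridge.
# Illustrative only; real IPFS peer discovery goes through libp2p.

class BridgeNode:
    def __init__(self):
        self.pending = []  # peers waiting on the bridge's bandwidth

    def register(self, peer_id: str) -> None:
        self.pending.append(peer_id)

    def introductions(self) -> dict:
        """For each pending peer, list the other pending peers it could try."""
        return {p: [q for q in self.pending if q != p] for p in self.pending}

bridge = BridgeNode()
for p in ["peerA", "peerB", "peerC"]:
    bridge.register(p)

intros = bridge.introductions()
print(intros["peerA"])  # peerA is told about peerB and peerC
```

The point is only that the bridge already knows everyone who wants the data, so it is in a good position to wire them together.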
Bottom line is the architecture is very modular, and one can experiment with many different strategies. I think our use-cases (distributing source code from archive, distributing builds) are great ones to test various strategies with too.
I’ve read the RFC, good job! But I have an IPFS-specific question: is it possible to plug the raw content from the store into IPFS (IPFS doesn’t recommend this, but since the store is read-only it’s not so bad), instead of adding the files into IPFS and duplicating disk usage?
We have not tried to do that yet. It is certainly possible in principle yes, but in our implementation we have been more focused on integrating the concepts than tuning performance. The idea is to get a specification / interface we feel good about, and then after that we can finesse the implementation without needing to change the spec / interface.
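For anyone who wants to experiment in the meantime, the relevant existing mechanism is kubo's experimental filestore, which stores references to the original files rather than copying their blocks into the IPFS repo. A configuration sketch (the store path is a placeholder; the usual caveat that referenced files must not change is softened by the Nix store being read-only):

```shell
# Enable the experimental filestore in the kubo config.
ipfs config --json Experimental.FilestoreEnabled true

# --nocopy adds content by reference: the repo stores only metadata
# pointing at the original files, avoiding a second copy on disk.
ipfs add --nocopy -r /nix/store/<some-store-path>
```

Whether this interacts well with Nix's own garbage collection is exactly the kind of thing the experiment would need to answer.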
Just wanted to say, this would be a killer feature IMHO. The number of problems I wouldn’t have at work, if a newly deployed machine could, out of the box, just substitute from any nearby machine that has the data. We’ve been mucking around so much with internal substituters and their configuration, and also with sharing host stores with VMs. I quite literally can hardly wait.
I guess content addressing and coordination/planning is the biggest blocker? Or are you having trouble with funding as well?
Is Obsidian Systems still working on this? I’m sitting here pulling at 200 KB/s from cache.nixos.org, just wishing it were over IPFS, since another computer in my house had to go through the same thing yesterday.
It would also help long-term with the cache storage hosting issue we’re facing now.
I wonder if it’s feasible to have the IPFS daemon+store also be the backing store for the local Nix store? Serve up files via FUSE? That would need store-path eliding for proper dedupe, mind you.
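To illustrate the store-path eliding point: two builds that differ only in self-references to their own store path should still deduplicate, so those hashes need rewriting to a placeholder before content hashing. A toy version (real Nix does this on NAR contents with careful bookkeeping; the regex here is only the idea):

```python
import re

# Replace the 32-character hash part of any store path with a fixed
# placeholder, so self-references no longer affect the content hash.
STORE_PATH = re.compile(rb"/nix/store/[a-z0-9]{32}-")

def elide(data: bytes) -> bytes:
    return STORE_PATH.sub(b"/nix/store/" + b"0" * 32 + b"-", data)

a = elide(b"ref: /nix/store/" + b"a" * 32 + b"-foo/bin/foo")
b = elide(b"ref: /nix/store/" + b"b" * 32 + b"-foo/bin/foo")
print(a == b)  # identical once self-references are elided
```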
We’re not actively working on it at the moment, but I still have the code, and parts of it are in my immense backlog of open Nix PRs I’m trying to wind down.
How can one get the content-address from the input without building everything first or relying on some kind of third party mapping?
Basically, what I’m trying to understand is: content-address = address of content made by hashing its content. When using the cache to pull pre-built packages, the content is the pre-built package, which we won’t have, and thus we won’t be able to build the content-address.
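Right, and concretely a content address is just a hash over the bytes, so it cannot be computed before the bytes exist. A minimal illustration with plain sha256 (a real IPFS CID wraps such a digest in multihash/multibase encoding, which I'm omitting for clarity):

```python
import hashlib

# A content address is derived from the content itself, so without the
# pre-built package bytes there is nothing to hash.
def content_address(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

print(content_address(b"hello"))
# Until you possess the exact bytes, this value is unknowable.
```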
If IPFS were the only external source and the default external binary cache (cache.nixos.org?), how would one get to the content address of the pre-built package without some mapping (input address → content address)?
You can’t; a trusted cache would have to provide the input-addressed → content-addressed mapping. Once you have the content address, you can fetch from any cache trustlessly.
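A toy sketch of that trust model (all names and the HMAC stand-in are hypothetical; a real cache would use public-key signatures as Nix caches do): only the mapping needs to be signed by the trusted cache, because the fetched bytes can be verified locally against the content address, so they may come from any untrusted peer.

```python
import hashlib
import hmac

SECRET = b"trusted-cache-signing-key"  # stand-in for the cache's private key

def sign_mapping(input_addr: str, content_addr: str) -> str:
    # The trusted cache signs "input address -> content address".
    msg = f"{input_addr}->{content_addr}".encode()
    return hmac.new(SECRET, msg, hashlib.sha256).hexdigest()

def verified_fetch(input_addr, content_addr, signature, fetch_untrusted):
    # 1. Check the mapping really came from the trusted cache.
    expected = sign_mapping(input_addr, content_addr)
    if not hmac.compare_digest(signature, expected):
        raise ValueError("mapping not signed by trusted cache")
    # 2. Fetch from anywhere, then verify the bytes against their address.
    data = fetch_untrusted(content_addr)
    if hashlib.sha256(data).hexdigest() != content_addr:
        raise ValueError("content does not match its address")
    return data

# Any peer can serve the bytes; only the mapping query needs trust.
blob = b"pre-built package"
cid = hashlib.sha256(blob).hexdigest()
sig = sign_mapping("/nix/store/abc-hello", cid)
data = verified_fetch("/nix/store/abc-hello", cid, sig, lambda c: blob)
```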
- A substituter that returns IPFS links for the mapping query (is the substituter HTTP API documented somewhere?)
- An IPFS handler (IPFS runs a local HTTP gateway from which content can be pulled, IIRC)
- Probably what was pulled should also be pinned
- What was pinned should be unpinned when the garbage collector comes along
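The pin/unpin lifecycle in the last two points could look roughly like this (class and method names are hypothetical, not any real substituter interface; the actual pin operations would be `ipfs pin add` / `ipfs pin rm`):

```python
# Hypothetical bookkeeping: pin what the substituter pulls, and unpin
# when the Nix garbage collector deletes the corresponding store path.
class IpfsSubstituterState:
    def __init__(self):
        self.pinned = {}  # store path -> CID

    def on_pull(self, store_path: str, cid: str) -> None:
        # After fetching via the local IPFS gateway, pin so the IPFS
        # daemon's own GC doesn't drop the blocks.
        self.pinned[store_path] = cid

    def on_nix_gc(self, deleted_paths) -> None:
        # When Nix collects a path, release the matching pin.
        for path in deleted_paths:
            self.pinned.pop(path, None)

state = IpfsSubstituterState()
state.on_pull("/nix/store/abc-hello", "bafy-example-cid")
state.on_nix_gc(["/nix/store/abc-hello"])
print(state.pinned)  # {}
```

The tricky part in practice is keeping the two garbage collectors (Nix's and IPFS's) in agreement, which is exactly what this mapping tracks.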
How is planning handled in the Nix community? This could do with an epic, stories, and subtasks, with detailed descriptions of the inputs, expected outcomes, expected tests, target repositories, and so on, so that community members can pick them up.
I see that GitHub has GitHub Projects, but I’m not sure whether the community/maintainers use that or some other project-management tool.
We use GitHub issues and projects, but Nix maintainers are essentially curators. So unless we have paid projects or lots of free time, someone else has to do the work, and we’ll answer questions and make sure everything fits in. In principle that also covers the task breakdown, and @Ericson2314 can surely give a thumbs-up to that draft so that you can file it as a series of issues that are specific enough to get the “good first issue” label.
Thanks @fricklerhandwerk. When you refer to a draft, do you mean an RFC in one of the repositories? And are these kinds of things discussed in a regular meeting?
Oh I meant your sketch of a plan in the post I was replying to. RFC is quite an involved process, and in this case a technical roadmap for a widely requested feature is a welcome step.
Yes, we regularly triage such contributions. Nudging maintainers also helps to keep things in focus, as the backlog is quite enormous.
Almost everything is implemented, but it is on branches that are now quite old and that change a lot of code. Unfortunately, only a very diligent person, very comfortable with git and C++ refactors, will be able to resurrect that code while taking less of my time than it would take for me to do it myself.