I absolutely agree with this, once the content address of the output nar is known… the distribution of the content can be on anything.
Thanks for the updates on IPFS, seems like the project is progressing.
At long last, we have written an RFC to hopefully get this work merged upstream!
To the moon
I’ve read the RFC, good job! But I have an IPFS-specific question: is it possible to plug the raw content from the store into IPFS (this is not recommended by IPFS, but since the store is read-only it’s not so bad), instead of adding the files into IPFS and duplicating disk space usage?
We have not tried to do that yet. It is certainly possible in principle yes, but in our implementation we have been more focused on integrating the concepts than tuning performance. The idea is to get a specification / interface we feel good about, and then after that we can finesse the implementation without needing to change the spec / interface.
Or, as they say, https://wiki.c2.com/?MakeItWorkMakeItRightMakeItFast
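For what it’s worth, IPFS does have an experimental filestore mode (`ipfs add --nocopy`) that keeps only block metadata and reads the bytes back from the original file on demand, which is roughly the shape of what you’re describing. Here’s a toy Python sketch of that no-copy idea; the path and chunk size are placeholder assumptions, not anything Nix or IPFS actually uses:

```python
import hashlib
from pathlib import Path

# Hypothetical sketch: index a store file as content-addressed chunks
# without copying it; the bytes stay in the read-only /nix/store.
STORE_FILE = Path("/nix/store/example")  # placeholder path

def chunk_refs(path: Path, chunk_size: int = 256 * 1024):
    """Compute (offset, length, sha256) for each chunk of a store file.
    Only metadata is kept; no data is duplicated on disk."""
    refs = []
    with path.open("rb") as f:
        offset = 0
        while chunk := f.read(chunk_size):
            refs.append((offset, len(chunk), hashlib.sha256(chunk).hexdigest()))
            offset += len(chunk)
    return refs

def read_chunk(path: Path, offset: int, length: int) -> bytes:
    """Serve a requested chunk on demand, straight from the store file."""
    with path.open("rb") as f:
        f.seek(offset)
        return f.read(length)
```

The appeal is that the store stays read-only and the index is cheap to rebuild, but whether this plays well with store permissions and garbage collection is exactly the kind of thing we’d want to finesse after the spec settles.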
Just wanted to say, this would be a killer feature IMHO. The number of problems I wouldn’t have at work if a newly deployed machine could, out of the box, substitute from any nearby machine that has the data! We’ve been mucking around so much with internal substituters and their configuration, and with sharing host stores with VMs. I quite literally can hardly wait.
I guess content addressing and coordination/planning is the biggest blocker? Or are you having trouble with funding as well?
Biggest blocker is code review. Also should finish off [RFC 0133] Git hashing and Git-hashing-based remote stores by Ericson2314 · Pull Request #133 · NixOS/rfcs · GitHub but that’s on me.
Is Obsidian Systems still working on this? I’m sitting here pulling at 200 KB/s from cache.nixos.org, wishing it were over IPFS, as another computer in my house had to go through the same thing yesterday.
GitHub - obsidiansystems/ipfs-nix-guide: IPFS × Nix Guide was updated 3 years ago
This works on a local network, and you need to exchange keys between peers.
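The key exchange matters because Nix checks ed25519 signatures on what a binary cache serves; without the peer’s public key installed, its store contents are rejected. A rough Python sketch of the check involved, using the `cryptography` library (the inputs are placeholders, and this isn’t Nix’s actual narinfo handling):

```python
import base64
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

def verify_signature(pubkey_b64: str, message: bytes, sig_b64: str) -> bool:
    """Check a peer's ed25519 signature. Without that peer's public key
    installed, nothing it serves can be trusted."""
    key = Ed25519PublicKey.from_public_bytes(base64.b64decode(pubkey_b64))
    try:
        key.verify(base64.b64decode(sig_b64), message)
        return True
    except InvalidSignature:
        return False
```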
Just chiming in that IPFS support for avoiding concerns about source bit-rot sounds amazing and would help me pitch Nix to peers.
It would also help long-term with the cache storage hosting issue we’re facing now.
I wonder if it’s feasible to have the IPFS daemon+store also be the backing store for the local Nix store? Serve up files via FUSE? It needs eliding of store paths for proper dedup, mind you.
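Something like this toy Python sketch is what I mean by eliding: self-references get swapped for a placeholder before hashing, so identical outputs that happened to land at different store paths still dedup. The zero placeholder and fixed 32-byte hash part are assumptions for illustration, not Nix’s actual encoding:

```python
import hashlib

# Stand-in for a path's own hash part; assumes hash parts have a fixed
# 32-byte width so replacement preserves file offsets.
PLACEHOLDER = b"\x00" * 32

def elide_self_refs(data: bytes, own_hash: bytes) -> bytes:
    """Replace this path's own hash part with a placeholder."""
    return data.replace(own_hash, PLACEHOLDER)

def content_address(data: bytes, own_hash: bytes) -> str:
    """A content address independent of where the output landed."""
    return hashlib.sha256(elide_self_refs(data, own_hash)).hexdigest()

def rehydrate(data: bytes, new_hash: bytes) -> bytes:
    """On substitution, splice the destination path's hash back in."""
    return data.replace(PLACEHOLDER, new_hash)
```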
We’re not actively working on it at the moment, but I still have the code, and parts of it are in my immense backlog of open Nix PRs I’m trying to wind down.
I should finish [RFC 0133] Git hashing and Git-hashing-based remote stores by Ericson2314 · Pull Request #133 · NixOS/rfcs · GitHub which is the next step for this.
How can one get the content address from the input without building everything first or relying on some kind of third-party mapping?
Basically, what I’m trying to understand is: a content address is the address of some content, made by hashing that content. When using the cache to pull pre-built packages, the content is the pre-built package, which we don’t have yet, and thus we won’t be able to compute the content address ourselves.
If IPFS were the only external source and the default external binary cache (cache.nixos.org?), how would one get the content address of a pre-built package without some mapping (input address → content address)?
You can’t; a trusted cache would have to provide the mapping from input-addressed to content-addressed. Once you have the content address, you can fetch from any cache trustlessly.
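To make the two steps concrete, here is a minimal Python sketch; the mapping format and URL are made up for illustration, only the shape of the protocol matters:

```python
import hashlib
import urllib.request

# Illustrative only: a trusted mapping from input addresses
# (derivations) to content addresses (sha256 of the built NAR).
TRUSTED_MAPPING = {
    "example-input-address": "expected-sha256-hex-digest",
}

def fetch_trustlessly(input_addr: str, untrusted_url: str) -> bytes:
    """Trusted step: look up the content address. Trustless step: fetch
    the bytes from any peer at all and verify them locally."""
    expected = TRUSTED_MAPPING[input_addr]
    with urllib.request.urlopen(untrusted_url) as resp:
        data = resp.read()
    if hashlib.sha256(data).hexdigest() != expected:
        raise ValueError("content hash mismatch: refusing untrusted data")
    return data  # verified without trusting the peer that served it
```

The trust boundary shrinks to the tiny mapping; the bulk data can come from any peer, IPFS included.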
So what’s left to implement this feature?
How is planning handled in the Nix community? This could do with an epic, stories, and subtasks, with detailed descriptions of the inputs, expected outcomes, expected tests, target repositories, and so on, so that community members can pick them up.
I see that GitHub has GitHub Projects, but I’m not sure whether the community / maintainers use that or some other project management tool.
We use GitHub issues and projects, but Nix maintainers are essentially curators. So unless we have paid projects or lots of free time, someone else has to do the work, and we’ll answer questions and make sure everything fits in. In principle that also involves the task breakdown, and @Ericson2314 can surely give a thumbs-up to such a draft so that you can file it as a series of issues that are specific enough to get the “good first issue” label.
Thanks @fricklerhandwerk. When you refer to a draft, do you mean an RFC in one of the repositories? And are these kinds of things discussed in a regular meeting?
Oh I meant your sketch of a plan in the post I was replying to. RFC is quite an involved process, and in this case a technical roadmap for a widely requested feature is a welcome step.
Yes, we regularly triage such contributions. Nudging maintainers also helps to keep things in focus, as the backlog is quite enormous.
Almost everything is implemented, but it is on branches that are now quite old and that change a lot of code. Unfortunately, only a very diligent person who is very comfortable with git and C++ refactors will be able to resurrect that code while taking less of my time than it would take for me to do it myself.
It’s David Ricardo’s comparative advantage all over again!
Maybe there’s something you are doing that someone else can do instead so you have time for the PR?