Tweag + Nix Dev Update #2

one week later

I took a week off of work, and so this one is a bit delayed :slight_smile:

Here is what Tweag’s been up to lately.


Flakes

Eelco has continued working on flakes, in particular a system-wide flake registry and an evaluation cache. Recent testing shows the cache can cut evaluation time to under a second. Since flake evaluation is pure, this should be safe and free of cache-coherency issues. Check out the caching commit, and also several other commits before it. He's also fixed nix search, which was broken in the flakes branch.

Separately from that, Eelco has been working on stabilizing the nix command interface. More to come on that another day.

Content Addressed Store

@regnat has been working on an implementation of the content-addressed store. So far his experiments can build and depend on CAS derivations, but they cannot yet be fetched or substituted. It looks like an RFC meeting is being scheduled to discuss this further.


Website

@garbas has been spearheading this project with his work on the marketing team and his regular work on improving the website and its messaging. One challenge is that the options and package data files have become quite large. Because of this, he's been working on a nicely searchable interface for NixOS options and packages, using ElasticSearch as the index. This is still a WIP.
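To give a feel for what such a search interface does under the hood, here is a hypothetical sketch of the kind of query it might send to ElasticSearch. The field names ("attr_name", "description") and the boost are made up for illustration; they are not the real index schema.

```python
import json

def package_query(term: str, size: int = 20) -> str:
    """Build a full-text query body for a package index.
    Field names and boosts here are illustrative only."""
    body = {
        "size": size,
        "query": {
            "multi_match": {
                "query": term,
                # "^2" boosts matches on the attribute name over the description
                "fields": ["attr_name^2", "description"],
            }
        },
    }
    return json.dumps(body)

print(package_query("firefox"))
```

The multi_match query is standard ElasticSearch DSL; the real index would presumably also handle filtering by channel and pagination.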

Rok has also created a new Governance page to describe teams of people: NixOS - Community

The download page has been revamped as well: NixOS - Getting Nix / NixOS


NixOps

@adisbladis and I have continued working on NixOps, and have started describing a NixOps 2.0. Overall, I've set out some goals:

  1. Simplify NixOps core to a small but extensible nugget. By keeping NixOps a reasonably sized project and repository, I believe the project and its code can improve more easily. It also means plugin authors don't need to be NixOps committers or ask permission to maintain their plugins.
  2. Improve code quality and consistency to make working on it easier and more fun. This has taken the form of requiring all code to be formatted with black, and requiring that no PR reduce the code's type coverage, enforced by mypy. It also means asking for documentation alongside PRs.
  3. Run automatic tests on every change, making it easy to approve and merge changes. This means unit tests, but eventually also full integration tests. Soon, I'm hopeful, NixOps will run actual nixops deploys on every pull request. I'd also like to extend this to plugins: having PRs to NixOps run the test suites of various plugins as well.
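The second goal can be enforced mechanically on every commit. Whether NixOps uses pre-commit specifically is my assumption, but a configuration wiring up both checks might look like this (the hook repositories are the standard upstream mirrors):

```yaml
# Hypothetical .pre-commit-config.yaml enforcing formatting and typing.
repos:
  - repo: https://github.com/psf/black
    rev: 19.10b0
    hooks:
      - id: black
  - repo: https://github.com/pre-commit/mirrors-mypy
    rev: v0.770
    hooks:
      - id: mypy
```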

This first point is important: if a feature can only be implemented in NixOps core, it is less likely to be experimented with and improved upon. A key example is the encryptedLinksTo feature: providing an automatic "mesh" VPN is pretty cool, but including it in NixOps core makes it hard to swap the existing implementation (based on SSH) for another (based on WireGuard, for example). Maybe that could be done without too much trouble, but finding a way for the two to coexist would be much harder than simply leaving the SSH-based plugin out of core.
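The plugin-first design can be sketched in a few lines. The following Python is a hypothetical illustration of the idea, not NixOps' actual plugin API (PluginRegistry and the hook names are made up):

```python
from typing import Callable, Dict, List

class PluginRegistry:
    """A tiny plugin mechanism: core only dispatches hooks;
    features live in separately maintained plugins."""

    def __init__(self) -> None:
        self._hooks: Dict[str, List[Callable[..., None]]] = {}

    def register(self, hook: str, fn: Callable[..., None]) -> None:
        """Called by a plugin at load time to attach behaviour."""
        self._hooks.setdefault(hook, []).append(fn)

    def run(self, hook: str, **kwargs) -> None:
        """Called by core at well-defined points in a deploy."""
        for fn in self._hooks.get(hook, []):
            fn(**kwargs)

# An out-of-tree plugin could provide the mesh-VPN feature,
# swapping SSH-based links for WireGuard without touching core:
registry = PluginRegistry()

def wireguard_links(machines: List[str]) -> None:
    print(f"configuring WireGuard links between {len(machines)} machines")

registry.register("post-deploy", wireguard_links)
registry.run("post-deploy", machines=["a", "b", "c"])
```

With a shape like this, the SSH-based and WireGuard-based link implementations are just two plugins registering the same hook, rather than competing features inside core.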

Overall, this has led to some PRs removing quite a number of features from NixOps: things like auto-raid0, auto-luks, built-in backup support, encryptedLinksTo, and templates. This sounds like a massive breaking change, but each of these can (and, in my opinion, should) exist as an extensible plugin.

I know there are users of these features who don't want to see them go. I don't want to make them go away forever, just move them out of core for 2.0. If you have opinions, please get involved so we know about them! It can be hard to tell whether a feature is vestigial cruft or actually well loved. If you'd like to get involved and talk about your use cases for NixOps, please post a comment on the NixOps 2.0 issue :).

Documentation Formats

Very minor update: @nlewo took a look at porting the documentation to Markdown, in addition to the reStructuredText and AsciiDoc ports. Overall, he wasn't impressed; that said, he told me he isn't much of a fan of Markdown anyway. Since we didn't come out the other end with much nice to say, we decided to say nothing at all, figuring that someone invested in the Markdown tooling ecosystem might have better luck publishing a well-balanced review.

I’m personally excited to see things moving forward on that RFC.

Software Heritage Archiving

Under a grant from NLnet, @lewo has been working to mirror the sources used by Nixpkgs into the Software Heritage archive. He's spent a good bit of time on software to identify sources and get them mirrored in their long-term archive. As I understand it, his tooling can identify about 27,000 sources, of which about 20% can easily be mirrored.

One point of discussion is our somewhat bizarre builders like buildGoModule, which take a single hash of all of a build's dependencies. These fixed-output builds are sort of cheating and not actually reproducible over time. They have been the subject of discussion, and there has been some question of how to distinguish a good fixed-output build from a hacky one. I think Antoine's work on this archiving project has given him a good perspective to help write some guidelines.

I hope this is interesting and useful!


@grahamc can you explain what makes them non-reproducible?

@asymmetric It would actually be nice to be able to rely on Software Heritage to preserve the sources we use. This would improve reproducibility, because Nix could then fall back on Software Heritage when the original URL no longer exists.
This fallback mechanism is, however, harder to achieve with the fixed-output derivations produced by buildGoModule, because Nix doesn't have any knowledge of the URLs actually used.


Sorry to zombie post. What’s the state-of-the-art on falling back on SWH? @lewo

I saw this blog post laying out a potential problem with tarball hashes vs content hashes, but since then RFC 0062 was approved, so some fetchers can have content hashes?
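The tarball-hash vs. content-hash distinction can be made concrete. Software Heritage addresses file contents with git-compatible hashes, which depend only on a file's bytes, not on the archive it arrived in. A minimal sketch of git's blob hash (shown for illustration; the RFC's actual ingestion details may differ):

```python
import hashlib

def git_blob_hash(content: bytes) -> str:
    """A git-style content hash: it covers only the file's bytes, so it
    stays stable across tarball repacking, unlike a hash of the tarball."""
    header = f"blob {len(content)}\0".encode()
    return hashlib.sha1(header + content).hexdigest()

print(git_blob_hash(b"hello\n"))
# → ce013625030ba8dba906f756967f9e9ca394464a (same as `git hash-object`)
```

Repacking the same file into a different tarball changes the tarball's hash but not this content hash, which is what makes content hashes suitable for looking sources up again in an archive.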

I saw something by Obsidian Systems on SWH and IPFS, but their status is unclear.

The next step is

Obsidian’s SWH × IPFS integration (blog post) is done. Unlike our Nix × IPFS work, it’s a plugin (to IPFS), not a fork, so nothing is blocked on upstreaming.

The RFC would allow us to add git hashing support, with a little bit of additional work after. A basic IPFS integration or a direct SWH integration could also be done as a plugin.


It’s been merged! Looking forward to using IPFS to update my NixOS


Can we use this in our configs, or is it more like a proof of concept?

Would it be possible to link to the RFC in question? I’m a bit lost…

It’s probably a reference to
