Why Nix Will Win (and What's Stopping It): A 3-Year Production Story

Thanks, everyone, for the sharp, insightful feedback. This is a great discussion.

I wanted to address a couple of the major themes that came up.

On Purity: Trusting External Package Managers

Completely agree that Nix was able to achieve the impossible only by “refusing to compromise”.

The case of “trust existing package managers” could perhaps be treated as a special case for leaf-node applications – e.g., for web apps like ours that use nix/nixpkgs heavily but aren’t consumed by them. (Kind of like how IFD is supported by Nix but disallowed in nixpkgs.)

Stepping back, though, my two other suggestions – “Intercept at the network layer” and a more native “Flake Support for FOD Dependencies” – are attempts to maintain strict purity. Can we find a way to ergonomically delegate to package managers like npm while upholding Nix’s requirement for purity?

On the Language: TypeScript, Laziness, and the “Skin-Deep” Problem

Great points, jaen and bme – that’s the core challenge I’ve been wrestling with myself. And 100% agree, a naive, direct translation from TypeScript is a non-starter since it won’t be able to leverage the massive nix ecosystem.

To clarify the approach of my PoC, as jkarni asked: it wasn’t a transpiler, but an alternative syntax. The idea was to extend the Nix parser (in my case, using SWC) to accept a functional-friendly subset of TypeScript that has a clear 1:1 mapping to Nix grammar (e.g., `const x = 1` ↔ `let x = 1`). The runtime is still Nix; unsupported TypeScript features would be caught by an ESLint rule.

My hunch is that this solves the “Skin-Deep” problem: since the runtime is still Nix, you can call any Nix function (and vice versa). The main work left would be generating .d.ts files for nixpkgs to provide the types.
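To make the 1:1 mapping concrete, here is a hedged sketch of the kind of correspondence I mean (an invented example – the PoC’s actual surface syntax may differ):

```nix
# TypeScript input (functional-friendly subset):
#   const greet = (name: string) => `Hello, ${name}!`;
#   const result = greet("Nix");
#
# ...maps one-to-one onto plain Nix, which remains the runtime:
let
  greet = name: "Hello, ${name}!";
  result = greet "Nix";
in
result   # evaluates to "Hello, Nix!"
```

Because each construct maps directly, nothing Nix-specific is lost: the TypeScript surface is just an alternate spelling of the same expression tree.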

I’m thrilled to learn about garn/garnix. That sounds like a transformative project that hits some of the exact pain points I described – it could be a massive unlock! Excited to dive in more.

On The Bigger Picture: “Developers, Developers, Developers” :slight_smile:

These proposals are just potential paths to the same goal: making Nix’s superpowers accessible to everyone. My core question to the community is this: what do you believe is the single biggest problem to solve to unlock mainstream adoption?

2 Likes

I will admit this is a fun thread.

To your question though: I agree in substance that what people want is just in time learning directed by tooling as they try to achieve some other goal that has nothing to do with nix. Nix under-delivers on this expectation. I am probably being overly negative about the value of typed bindings.

3 Likes

The link says they do not have enough money to continue working on it. Is that up to date?

1 Like

That’s a question for @Qknight, but yes my understanding is that is up to date.

(I would certainly like to see some community-wide collaboration/financing on this sort of gold-standard “it’s just another backend to the tool you already use” lang2nix solution.)

That would be great!

A possible approach could be to allow marking certain binaries as pure and trusted, i.e. asserting that the output of running the binary is a pure function of its input. Those binaries would then get access to the network (or other sandbox escapes, like accessing the root FS) – a bit like how setuid binaries can get root. Then you could implement all your examples; e.g., for git you’d have a kind of git wrapper which does a git clone, verifies the commit, then does some cleanup.
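As a rough illustration of the idea (everything here is hypothetical – no such primitive exists in Nix today):

```nix
# Hypothetical "trusted pure binary" fetcher. `runTrusted` is an imagined
# primitive that grants network access in exchange for the purity promise.
fetchGitTrusted = { url, rev }:
  runTrusted {
    name = "git-source";
    script = ''
      git clone ${url} $out
      git -C $out checkout ${rev}
      # Verify we got exactly the commit we asked for:
      [ "$(git -C $out rev-parse HEAD)" = "${rev}" ]
      # Cleanup, so the output is just the source tree:
      rm -rf $out/.git
    '';
  };
```

The commit hash plays the role the output hash plays in a fixed-output derivation; the trusted wrapper is what enforces that equivalence.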

Of course, the tricky bit would be figuring out how to manage those extra permissions. Just like with the setuid comparison, bugs in those binaries would have security implications, and getting your own binary marked as trusted would possibly allow privilege escalation.

1 Like

Technically, the trusted binary is just trusted to check the hash, and we already run fetchers with relaxed sandboxing (i.e. with network). But yes, a broken checksum check can break correctness expectations.

1 Like

The difference is that currently you need to manually specify a hash if you use fixed-output derivations, or extract a hash from the input somehow (like package-lock.json). With a trusted binary it would rely on the hashing/reproducibility that the binary already provides – e.g. a git commit hash for git, package-lock.json for npm, Cargo.lock for cargo, etc.

It’s basically an alternative to reimplementing the logic in Nix using fixed-output derivations, like some projects do for npm, cargo, poetry, etc.
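For contrast, this is the status quo: a fixed-output derivation needs its content hash spelled out (or derived from a lockfile) before the fetch ever runs. A minimal example using the real nixpkgs fetcher API:

```nix
# Standard nixpkgs fixed-output fetch: the hash must be known up front,
# typically copied in by hand or extracted from a lockfile by a *2nix tool.
pkgs.fetchurl {
  url = "https://registry.npmjs.org/left-pad/-/left-pad-1.3.0.tgz";
  hash = "sha256-…";  # placeholder; must match the actual tarball contents
}
```

The trusted-binary idea would let the tool’s own pinning mechanism (commit hash, lockfile) stand in for that manually supplied hash.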

1 Like

There are multiple benefits to separating fetching and building, though. And if you do separate them, the fetching wrapper can basically use whatever you pass it to specify the precise state to fetch, in place of a hash, no?

Some details depend on how you pass the lockfile, I guess, but even if you are doing it incrementally, you probably want to figure out the interactions with dynamic derivations (and then still make the final this-matches-lockfile verification runnable in a proper sandbox, with compilation happening later).

builtins.fetchGit is perfectly happy to operate with just a rev and without a NAR hash. I suppose this is an additional restriction imposed by flakes / pure-eval? In any case, it seems debatable whether trusting Git hashes is a good idea at the moment, as forging SHA-1-addressed commit objects is, to my knowledge, feasible in principle.
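Concretely, in impure evaluation this works as-is (flakes/pure-eval additionally demand a narHash):

```nix
# builtins.fetchGit pins by commit alone – no NAR hash required:
builtins.fetchGit {
  url = "https://github.com/NixOS/nixpkgs";
  rev = "…";  # a full 40-character commit SHA-1
}
```

So Nix already trusts Git’s content addressing here; the question is whether SHA-1 is strong enough for that trust.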

Git uses a hardened version of sha1 that dodges the currently known attacks. Still on somewhat shaky ground, though.

1 Like

For sure – you’d trade off granularity for reusing existing tools, to get something more Dockerfile-like. One advantage of separating the inputs, as you mention, is better caching, but you could also solve that by providing persistent storage to the trusted tool and trusting the caching done by the tool, which ties into what was discussed in the “Nix *could* be a great build system” thread.

Doing everything in Nix certainly gives you nice properties (be it with precomputed hashes or IFD, i.e. import-from-derivation), but it means you need to adapt every tool to use Nix as a backend. While that’s a great goal, having escape hatches would allow more intermediate solutions – basically simpler *2nix tools that don’t require (much) Nix-specific logic.

https://nix.dev/manual/nix/2.32/development/experimental-features.html#xp-feature-git-hashing

I would like to remind people that I’ve added git hashing to Nix itself, so it can be used by any FOD. It is currently experimental.

Yes, we should also use it. See Use SHA-1 only with collision detection · Issue #13544 · NixOS/nix · GitHub

I’ve also added SHA-256 support, for newstyle git repos. Hopefully GitHub supports and encourages them someday…

6 Likes

Half of the tools have their own uncontrolled escape hatches, though (see any discussion of «did you know you can add a non-standard Cargo build step», sometimes worded as «help, there is an arbitrary code execution vulnerability in Cargo»). Reproducibility-controlling those does not sound feasible.

1 Like

what people want is just in time learning directed by tooling

Wow, that’s a fantastic way to frame something I never managed to put into words. It’s the feeling I get whenever I use TypeScript with a new library or framework. Documentation is always an auto-complete away – a much more cohesive experience for discovering and incrementally learning just enough to get the job done.

+100 that would be killer for Nix adoption

1 Like

True, but that’s more of an upstream issue. For cargo, for example, I’d hope `cargo fetch` does not have that problem (then you can use `cargo install --offline` in a normal derivation).
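Sketched as a two-phase derivation (hedged – nixpkgs already ships polished versions of this idea as fetchCargoTarball/importCargoLock; `src` and the hash are placeholders):

```nix
# Phase 1: fixed-output derivation with network access, trusted only to
# fetch exactly what Cargo.lock pins.
cargoDeps = runCommand "cargo-deps" {
  nativeBuildInputs = [ cargo ];
  outputHashMode = "recursive";
  outputHash = "sha256-…";  # placeholder; pins the fetched dependency set
} ''
  export CARGO_HOME=$out
  cd ${src}
  cargo fetch --locked
'';
# Phase 2 runs in the normal, network-free sandbox: point CARGO_HOME at
# cargoDeps and build with `cargo build --offline`.
```

The separation means only the small fetch phase needs relaxed sandboxing; the actual compilation stays fully pure.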

It’s also only a problem if you don’t trust your dependencies. So you could decide what is considered trusted or not.

As for reproducibility-controlling it, I don’t think it’s necessary – just like Nix derivations are not necessarily reproducible: it can be checked, but it’s not enforced.

The point of Nix is brute-forcing upstream issues.

But if we separate fetching and building, as we should, can the fetched stuff be checked offline against a lockfile (so that the check is the only part we need to trust)?

3 Likes

New slogan! Tee-shirt!

2 Likes

This is really well said. It reminds me of a talk by Bryan Cantrill (Platform as a Reflection of Values – https://www.youtube.com/watch?v=Xhx970_JKX4) which argues that the long-term development of languages/ecosystems/platforms can be predicted by their values – the things they prioritize when tradeoffs must be made.

I like that any nix configuration I produce is likely to be reproducible. I like that escape hatches such as “non-free” require me to explicitly acknowledge them. That little bit of friction signals the community’s north-star goal, and provides confidence that nix will grow into something worthy of my investment, rather than degrade into another bag of compromises.

At some point, I think there should be room for compromises, but only once the culture is strong enough to retain its north star despite the availability of shortcuts.

Regarding the typescript points, I’m curious. Off the cuff, I wonder if this would work to create a DSL on top of nix:

  • Model a subset of the nix language in typescript
  • Write types for some API of your choosing (some subset of nixosConfigurations)
  • Write your configuration in typescript, which generates a nix module on run
  • Write a nix config to build the typescript, and then import the generated nix module

This seems like a fairly reasonable way to manage simple-ish nix configurations in your familiar language of choice, provided someone is willing to maintain the API your organization uses.
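For example, the module that flow generates could be a perfectly ordinary Nix file that the last step imports (real NixOS options, but a hypothetical generator):

```nix
# Output of the (hypothetical) TypeScript config tool: a plain Nix module
# that the rest of the configuration imports as-is.
{ config, pkgs, ... }:
{
  services.nginx.enable = true;
  services.nginx.virtualHosts."example.org".root = "/var/www";
}
```

Since the generated artifact is just a normal module, anyone debugging it sees idiomatic Nix rather than machine-mangled output.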

Since a large part of the cited problem was new devs, it seems like such a thin wrapper could prove very useful. After all, the nix language is so open-ended that some conventions must be established for nixos to work. Whether those conventions live in developers’ heads or in an organization’s explicit DSL – does it matter?

Disclaimer: I am new-ish to nix.

2 Likes

And there’s your issue. Once devs want to do anything non-trivial, they will have to “drop down” (I would argue for their benefit) to Nix and deal with the intricacies of the Nix language itself. Better to smooth that learning curve (imo) by learning the language up front – that way you’re not debugging generated Nix code with no prior experience, or using a language you have no experience with. Experience with Nix the language is worth a steeper learning curve in the beginning, in exchange for a MUCH better user experience later.

Edit: To be clear, the basic idea of this, reducing Nix’s learning curve, is really good!

5 Likes

I’d also like to note that most of the listed complaints in the blog post aren’t really problems with the language:

You need to grok derivations, master arcane syntax, and navigate a standard library via GitHub search. I was already sold on the vision, so I persevered. Most engineers take one look at { pkgs ? import <nixpkgs> {} } and “nope” out.

We can argue about how “arcane” the syntax really is. IMO that particular example is probably the worst case as far as unfamiliar syntax goes, and is largely just { pkgs } nowadays anyway, which I’d argue is far less arcane. I’ve found actually understanding it to be mostly irrelevant when starting out with nix, too, at least with the people I’ve spun up.

I also don’t think TypeScript is so omnipresent that its particular flavor of arcane syntax for third-party imports is universally understood by new programmers who refuse to learn anything with even slight impedance. IME, TS is actually quite an unusual thing to have experience with; even among web developers, most folks know a bit of JS and have largely gotten away without ever learning ES6 module syntax. I’d argue that Python is much more commonly understood, thanks to the common physicist-turned-software-engineer career path, but this rapidly devolves into a language war and will depend on the particular industry slice you interact with.

That’s a lot of words to say that I think focusing on the language is missing the point – not that people don’t prefer what they’re already used to, but when doing software work you ultimately have to learn little things all the time. Save for the most junior developers, who have to learn everything anyway (unless they happen to have specifically learned TS in their bootcamp, I guess), I doubt changing languages would meaningfully change the learning curve.

The other issues are significantly worse for new user adoption, and have nothing at all to do with the language itself, but more with the lack of tooling and how nix doesn’t really lend itself to writing good “aftermarket” tooling.


We do indeed do a poor job of providing what is needed for exploratory learning, and I think that’s the crux of the issue. It makes it impossible for less engaged developers to comfortably work with what the more experienced ones put together when an organization attempts to adopt nix. There’s barely any modern tooling for nix at all, and unless particularly engaged, people rely 100% on IDE features when exploring something like this. Which is to say, I also fully agree with how @bme puts it:

I’m more hopeful for the current ecosystem: nix could totally be made more “aftermarket”-tooling friendly, or grow first-party tooling. I agree that this would need some fundamental re-design, though, and likely means accepting the loss of backward compatibility along with big reworks to important ecosystem pieces.

3 Likes