After three years of going all-in on Nix in production (dev, CI, and deployments), I’ve written a detailed retrospective on the superpowers it gave us, the real-world pain points, and why I’m still convinced it’s the future.
Trust existing package managers/lockfiles: Modern package managers like npm are already deterministic
For $work we also package a React frontend (and Node backend), and buildNpmPackage sounds like exactly what you want. It can either:
use a FOD for the fetched deps, or
parse the lockfile and use it for deterministic prefetching of all sources (see importNpmLock)
The latter appears to be what you are looking for. There are some rough edges here, as npm can sometimes produce incomplete lockfiles; for that, see npm-lockfile-fix (also packaged in nixpkgs). You can find our packaging around this here as an example.
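For reference, a minimal sketch of the importNpmLock route (the name, version and paths below are illustrative placeholders, not taken from our actual packaging):

```nix
# Hypothetical example: deterministic prefetching driven by package-lock.json.
{ buildNpmPackage, importNpmLock }:

buildNpmPackage {
  pname = "my-frontend";   # placeholder name
  version = "0.1.0";       # placeholder version
  src = ./.;

  # Each dependency is fetched individually from the lockfile entries,
  # instead of one big fixed-output derivation for node_modules.
  npmDeps = importNpmLock { npmRoot = ./.; };
  npmConfigHook = importNpmLock.npmConfigHook;
}
```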
Regarding searching for stdlib functions, you might find noogle more pleasant to use than grepping nixpkgs.
I didn’t end up having a chance to introduce nix at this scale, but this seems mostly congruent with my own opinion, except:
A Language Developers Already Know
TypeScript isn’t lazy, so you are sacrificing some of the core features that make Nix-the-language what it is. For example, I don’t think you could get NixOS modules to work with TypeScript nicely, so you will end up with two divergent dialects with regard to modularity alone - and there will probably be more things that will necessarily have to work differently between both, splitting the communities further.
I’d rather think about how to improve the approachability of Nix-the-language for newcomers with better docs, LSP improvements and whatnot - this should reduce the need for random grepping (it will not solve types, though). For example, nixd is mysteriously missing doc comment support; that alone would help a lot. And if we really wanted to risk fragmenting the community, then I’d rather do it for something like Nickel, and hope that its value proposition of Nix-the-language-but-better is convincing enough that most people move over.
And maybe I’m biased, but I never felt that having to learn a new language to use a tool is a big deal. If someone will not use Nix because it’s not their $FAVOURITE_LANGUAGE, then that’s their loss - I’m all for improving documentation, onboarding, approachability and discoverability for new users, but not for bending over backwards to mainstream language choices just for the sake of greater adoption. I know it worked for Dagger, but there at least an argument could be made that more programmability was actually in line with their goals, and that CUE itself was undermaintained on top of being weird.
Rethinking Integration with External Package Managers
I hope that dynamic derivations will help here. I’m planning to work on a tool that leverages them to build AOSP incrementally and reproducibly. If that works well, then I’m thinking about modularising this to the degree that it could serve as a generic framework lang2nix tools could use to build from existing project definitions without the IFD overhead. No promises or timelines yet at this stage, though.
I’m very curious about your design for TypeScript-in-Nix. I’m one of the developers of garn, which is also exactly that, though it sounds like you’re going in the direction of shelling out to Nix rather than transpiling?
Regarding “Vercel for Nix” - we also did something very similar at garnix! We have hosting - both “real” deployments and preview deployments, with CI and cache integration (garnix is, primarily, a CI and cache service), and secret support (though not via string interpolation - curious how that would work?).
I appreciate that you took the time to write this, but imo you have some fundamental gaps in your reasoning around what the nix project should be doing next.
What’s holding back nix is that it isn’t typescript.
Typescript isn’t lazy. Laziness is an essential part of what makes nixpkgs work today. As @jaen says, it’s also the basis of the module system. A world where nixpkgs is written in nix and NixOS modules are written in nix, and shallow expressions wrapping nix can be written in typescript, is a skin-deep aesthetic improvement that will collapse as soon as the developer needs to do anything interesting. All you’ve done is delay the inevitable. If what you are proposing is a full rewrite in typescript, then whoever does the work needs to reinvent nix/nixpkgs/nixos idioms from the ground up to replace laziness as a tool for structuring (i.e. fixed points / overlays).
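For readers who haven’t seen the idiom: overlays boil down to a lazy fixed point (simplified here from nixpkgs’ lib.fix / lib.extends), and it is laziness that lets an attribute refer to the final, post-overlay set:

```nix
# Simplified sketch of the fix/extends idiom behind overlays.
let
  fix = f: let x = f x; in x;
  extends = overlay: f: final:
    let prev = f final; in prev // overlay final prev;

  base = final: { a = 1; b = final.a + 1; };  # b refers to the *final* a
  overlay = final: prev: { a = 10; };
in
(fix (extends overlay base)).b  # 11, because b sees the overridden a
```

In a strict language, `base` would have to evaluate `final` before it exists; here the reference is only forced after the overlay has been merged.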
You’d also need to rethink how and when nixpkgs gets evaluated if you want it to be typed. See “the hard part of typing nix” for a timeless breakdown of why you can’t just slap types on nix - unless, again, you want a locked, typed pkgs set that is evaluated in advance and that you can’t apply overlays to or anything like that. Again: skin-deep improvement, or rethink a lot more.
What’s holding nix back is not delegating more to external programs
Nix achieves what it does because it refuses to compromise. The idea that you can just delegate replicability to upstream tools is exactly the opposite of the approach that made nix useful. Given half a chance, the median developer doesn’t give a toss about casually reaching out to the internet to download whatever, hashed lockfiles be damned. Dates are build info that gets inserted casually all the time. Linkers and build tools running in parallel generate differently laid-out binaries. We don’t even have, with all our nix bondage gear, a reproducible minimal ISO. See @raboof’s recent heroics trying to get jemalloc to be reproducible. I can’t imagine trying to do this if half of nixpkgs was just “trusting certain tools to not do weird shit”. Build systems are the definition of “weird shit no one understands and doesn’t want to spend time on”.
I don’t want to see a nix with a sloppy outer layer designed to allow people that don’t care to skip the discipline that makes the foundation they stand on work. They can just use flavour of the month js build-system reinvention for that.
It is my honest opinion that we can push on with some of the things we are doing, like doc strings for every lib function, and improving the repl workflow (which already gives something well beyond stone-age tooling for introspecting options, evaluating expressions, discovering implementation, and reading library documentation). This would make the lives of most devs significantly better without needing to boil the ocean. I almost never wish for better tooling because:
Lots of the standard library is documented, both in the nixpkgs manual and using :doc in the repl.
If I wonder if what I am doing makes sense I just send a snippet to the repl. Nix is generally amenable to being structured in such a way that this is pretty easy.
Most build tools have a documented standard wrapper in the nixpkgs manual. When they don’t, it’s typically a quick job to wrap them, because there is a clear input-gathering phase: invoke the gathering as a FOD, followed by the build as a non-FOD. I am sure that dynamic derivations (or the moral equivalent) will make this nicer.
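A sketch of that two-phase pattern, assuming a hypothetical build tool (the tool name, flags and hash are all placeholders):

```nix
{ stdenv }:
let
  # Phase 1: gather inputs as a fixed-output derivation - network access is
  # allowed here, and the result is pinned by the declared output hash.
  deps = stdenv.mkDerivation {
    name = "project-deps";
    src = ./.;
    buildPhase = "sometool fetch --dest $out";  # hypothetical fetch step
    dontInstall = true;
    outputHashMode = "recursive";
    outputHash = "sha256-AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=";  # placeholder
  };
in
# Phase 2: build offline (sandboxed, no network) against the pinned deps.
stdenv.mkDerivation {
  name = "project";
  src = ./.;
  buildPhase = "sometool build --offline --deps ${deps} --out $out";  # hypothetical
  dontInstall = true;
}
```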
If someone wants a step change in what is possible (full static analysis on arbitrary source files in a large “nix” codebase), it will require a complete ground-up rethink, and it will probably be a rebel offshoot from this community with the resources and will to go it alone and return with something 100x better, which will not be backwards compatible. If it isn’t 100x better it won’t catch on.
I’m happy to take a correction here, but I don’t think guix has overlays? I agree with the sentiment that it doesn’t have to be lazy, but if you want to get rid of laziness you have to rethink the nix idioms that rely on it. For instance, I think it’s pretty uncontroversial to say that evaluating the metadata of 100k, or even a million, packages is completely trivial if the metadata is nicely organised, given modern hardware - so you don’t need to be lazy at all to do that, agreed.
That’s fair. I guess I see there being room for one “winner” in terms of popularity and size of repo, but any project will be carried as far as its contributors are willing to carry it. The more nix-likes there are enforcing sandboxing and replicability the better, because they’ll exert cultural pressure to normalise nix-friendly values.
I agree that you do need to rethink a lot of idioms; a direct translation won’t work or be nice. I spent some time thinking about this (in the context of building, and now starting a rewrite of, garn). And my current thought about overlays and overriding is that it all becomes easy if you don’t have derivation be a special built-in (and don’t have derivations be this special, not-quite-just-an-attrset type). I think this is also what Guix does. Instead, just have normal objects that represent derivations, with dependency trees. An upside is that you don’t need .override and .overrideAttrs - it’s just normal object/record updates (I find this - and especially .override - a big wart in Nix, one that makes the entire system much less easily programmable).
Then if you want to override a dependency used anywhere in package a, it’s just a recursive function applied to a. Simple, and I think quite workable. (Getting rid of derivation requires a bit of work, but seems fine too so far.)
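A toy illustration of that design, using plain Nix attrsets in place of a special derivation type (overrideDep is a hypothetical helper, not an existing library function):

```nix
let
  # Recursively rewrite any dependency named `name` anywhere in the tree.
  overrideDep = name: f: pkg:
    let pkg' = if pkg.name == name then f pkg else pkg;
    in pkg' // { deps = map (overrideDep name f) (pkg'.deps or [ ]); };

  zlib = { name = "zlib"; version = "1.3"; deps = [ ]; };
  curl = { name = "curl"; version = "8.0"; deps = [ zlib ]; };

  patched = overrideDep "zlib" (p: p // { version = "1.3.1"; }) curl;
in
(builtins.head patched.deps).version  # "1.3.1"
```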
I think it’s pretty uncontroversial to say that evaluating the metadata of 100k, or even a million packages is completely trivial if the metadata is nicely organised, given modern hardware,
Yeah, we did this in the dumbest possible way, and there was no noticeable slowness at all - unless you opened the generated TypeScript file directly, since then it actually tries to typecheck all of it, rather than just accepting its declared types. But having a single nixpkgs import is, I think, not even the nicest possible interface; it’s nicer to import separate rust, haskell, etc. modules.
Correct me if I’m wrong — because I never looked at Guix seriously, just trolled the repos some — but don’t they recover laziness by having a store monad and macros to paper over it?
The closest you can get to that in JavaScript (that is, laziness via monads + reasonable UX) is generators, and even that is still not quite the same, because JS generators can’t be cloned, so not all patterns are expressible with them (though it’s possible they are all patterns that a JS Nix layer could do without). There’s also the question of what that would do to TS type inference - and if we don’t have the nice LSP to go with it, then what was the point of migrating again?
EDIT: also, Starlark in Bazel is (IMO) another good counterexample. It’s a total subset of Python, and as such it’s strict and imperatively biased. I’ve had the displeasure of having to work with Bazel recently, in a task that required in-depth exploration of how the JS rules worked. And not having the easy overrides you get in a lazy functional model was a royal pain - if you didn’t like what a rule was doing, the only option you were left with was forking.
Thanks for writing this, it points to real usability issues that people have when adopting Nix. It was a good reminder.
Those issues can be fixed, but they require bolder changes than the usual increments we’ve been working on until now.
Yes and no. What Nix does really well is force all the users to define all the inputs of a build. Not a lot of build systems do that.
When it comes to guaranteeing the reproducibility of fixed-output derivations, the language is currently relatively crude. What we do is ask the user to provide us with a NAR hash (which is not always easy to compute ahead of time), and this is how we validate that the content was fetched reproducibly.
Imagine there was a way to model and transfer that guarantee to other tools.
The best example is fetchGit, which requires both the git reference and the NAR hash. If we had some way to tell Nix that we trust the git commit to be reproducible, I believe we wouldn’t be losing any reproducibility. It would also make integration with existing lockfiles much easier.
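For context, this is the contract being described - the commit pins what to fetch, but it is the NAR hash that Nix actually verifies (the URL, commit and hash below are placeholders):

```nix
# Today, a reproducible git fetch in nixpkgs needs both pieces:
src = fetchgit {
  url = "https://example.com/project.git";                       # placeholder
  rev = "0123456789abcdef0123456789abcdef01234567";              # the git reference
  hash = "sha256-AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=";  # the NAR hash
};
# The idea above: let the commit id itself carry the trust, so the NAR hash
# (and the awkward compute-it-ahead-of-time step) could be dropped.
```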
Similar thinking can apply to other hashes, like torrent magnet files, IPFS addresses, etc…
Nix builds overrides out of first-class functions and closures. So the semantics of overrides and overlays can be recovered no matter what foundation is used, as long as it has some pretty popular building blocks.
Syntax-wise, Guix kind of benefits from being in an ecosystem of minimalistic syntax. One can always build comparable function-call-only abstractions in any language with first-class functions and closures. What you can get in the way of nicer syntax depends on which foundation you use, indeed.
The availability of on-disk derivations as tangible objects (at least when requested) allows building reasonable CLI tooling for some use cases. I guess you could have slightly better inspectability inside the evaluation instead? But then you somewhat lose the flexibility of choosing where to cut the integration interface.
If you actually design from scratch, you might want to reconsider in general that Nixpkgs is in fact a navigational database no less than a codebase. And maybe in a clean design with stereo-hindsight (comparing the Nix and Guix ecosystems) one could invent something nice in terms of which builds are considered known-same, and which pieces of evaluation can be split into separate parts, maybe reusing the logic for interdependent build parts. So e.g. where CA derivations go too far or not far enough, how dynamic derivations would be designed if not constrained by compatibility, etc. (Probably a good idea to look up all the discussions, not just the things widely deployed, and maybe talk to some of the people involved.)
Nix-build-ing a data file from an external package database, then committing it as a part of an expression
replaceDependency
Guix grafts (if I remember correctly the naming)
Guix package version archive
CA derivations
Intensional store
A few things I forget now
I think all of those are pretty unambiguously identifiable within the ecosystem. There are a lot of interesting things said about each of them, and some implementation work and varying amounts of deployment experience.
I have not in fact read everything useful about those; nor thought enough about what I have read to be confident in giving short overall summaries or proposing a few designs and writing pro/cons.
I suppose that’s true, but feasibility is one thing and good UX is another — we don’t all program in assembly or lambda calculus directly for a reason. And my remark about generators was with usability in mind, as that’s certainly a lot nicer of an interface than having to manually thunk everything into lambdas. Unless I misunderstood what you were referring to, in which case sorry about that.
I’ve been obsessing over the idea of some nix-thing (probably based off Racket) that is tightly integrated with a triple store, for exactly this reason. I think going the whole hog and integrating a database that can be tied to a particular snapshot of code could be insanely powerful (especially something like a triple-store inference layer). Unfortunately it remains a worthless idea, because I have zero time to spend on it.
A Nix package file is an explicitly written lambda, and JS syntax is not much worse. It could be auto-wrapped with overrides; using overrides would be just calling .override with an object literal.
Guile has the benefit of having both Guix and Guile-RDF if you want to try small steps. But yeah, some redesign benefits might require a lot of work at once.
The Nix team has continued work on things like RFC 134 and RFC 136, which aim to create layers of functionality. This way the store layer can be used on its own without the Nix language. Or one could create a new front-end as a flakes alternative, or tooling that wraps around Nix and makes it easier to use.
I am also really excited for things like the libnix cargo-nix-backend to ultimately be how many developers use Nix without really having to know it.
(Disclosure: I am on the same NLnet project as @Qknight, but the specific part of the project that that blog post describes is not my work. I don’t want to take credit for others’ good work here.)