I too really like nushell from my couple of attempts using it but I see two issues with it:
Hard to bootstrap. It needs rust which has quite a few dependencies itself and none of those could use nushell.
It’s quite a moving target still. We need something that has a stable syntax that isn’t invalidated every few years, requiring possibly thousands of packages to change their code.
Even if true (and I don’t know of any survey done to confirm it), that would only apply to current maintainers, and it shuts out new contributors who don’t know Rust. And it’s not the kind of language you pick up overnight; I can barely keep track of how to read it since they keep changing the language every 3 months, much less write it.
And on that note, its instability makes it quite unsuitable as well, as Rust releases often cause massive breakage in nixpkgs (see: rust 1.81). At least old Python versions are kept around in nixpkgs, giving maintainers a bit of time to catch up by pinning Python temporarily; with Rust you’re forced to use the current Rust, and all tooling stays broken until it’s rewritten appropriately.
Java and Swift don’t, to my knowledge, break existing syntax. Java in particular is known for its rabid backwards-compat, though it’s probably unsuitable for tooling- and overhead-related reasons.
What broke nixpkgs with 1.81? To my knowledge Rust has “editions” to prevent breaking changes to the syntax affecting legacy code.
I’ve certainly not had the experience you’ve had with Rust regarding constant change, in either case. Are you sure this isn’t from pre-2015?
Swift/Go would indeed be good alternatives. I suspect those languages see less adoption in the FOSS sphere because of their association with Apple/Google, rather than for any technical reasons.
Going back to the initial question; I think the main issue is that bash is an inherently bad programming language and thus people understandably try to avoid it as soon as something becomes even only a bit complicated.
Arguing whether Rust is a good fit won’t solve that problem, just as an RFC to define which languages are “allowed” would surely cause nothing more than endless discussions. Meanwhile the shell is still there as the “first” interface that’s being abandoned.
Replacing bash in the stdenv with something modern is IMO the only real long term solution.
And currently I only see 2 options. Nushell and Oils.
Given that unexpectedly many tools depend on /bin/sh (e.g. make), a legacy shell would still have to be there for the foreseeable future anyway.
With Oils, it could fulfill that role itself, by replacing bash in place while keeping the existing stdenv. People could then switch to the new features with shopt -s ysh:all anywhere bash is currently used.
With Nushell we could have bash (or e.g. dash which is quite a bit faster) available with a new stdenv. And people can switch to the new stdenv.
If the modern shell still isn’t enough, a language like Rust can then be used, but it wouldn’t happen very often.
Both options are valid. I obviously favor Oils, but it’s up for dispute (and I guess it also depends on which language becomes “stable” first?)
I too love nu; it has been my login shell for quite some time now.
With its fundamental focus on working with structured data transparently it also seems like a natural match for nix: nix at evaluation time, producing data for nu at runtime, with the potential to remove complexity in both — instead of (for example) emitting a lot of bash templated by that data in nix.
But these two issues are very real indeed. The second is well known, but the first is something I hadn’t particularly considered in this context.
I think at that time there was just nobody who really actually tried? Also, back then there was just Oil, the Python-interpreted (and thus very slow) version. Since then the project has been renamed to Oils (or oils-for-unix), with osh as the compatible shell and ysh as the shell with shopt -s ysh:all already set. And the C++ transpilation & GC have been finished.
What’s currently blocking this is a handful of bashisms not implemented in Oils, most notably let and declare -i, the latter being used in various places in the nixpkgs stdenv.
Either someone has to contribute the declare -i feature (and thus probably a new “bash integer” type) to Oils, or we have to remove these features from the nixpkgs codebase (and e.g. use ((x += 1)) instead of just x+=1 where declare -i is removed).
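For illustration, a minimal sketch of the two spellings (run under bash; the second form is the one that would survive the rewrite):

```shell
# With declare -i, a plain += assignment is evaluated arithmetically.
declare -i count=0
count+=1            # arithmetic append: count becomes 1, not the string "01"

# Rewrite without declare -i: make the arithmetic explicit instead.
count2=0
(( count2 += 1 ))   # standard arithmetic expansion, no integer attribute needed

echo "$count" "$count2"
```

Without the -i attribute, count+=1 would have been plain string concatenation, which is exactly why the attribute can’t simply be dropped without touching each call site.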
I’ve gone through the process of debugging such issues, storing patched files “out-of-tree” in the oily-nixpkgs repo, and discussing possible fixes with the Oils maintainer various times (e.g. the previously broken [[ -v myarr[element] ]] has been fixed in Oils). But I’ve not yet been able to get something compiled, though I believe I’m not far from having found all the bash incompatibilities.
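For context, the array test mentioned above looks like this in bash (“element” is just a placeholder key for illustration):

```shell
# Check whether a key is set in an associative array using the -v operator.
declare -A myarr=([element]=42)

if [[ -v 'myarr[element]' ]]; then
  status=present
else
  status=absent
fi
echo "$status"
```

Note that -v tests whether the key is set at all, which is different from testing whether its value is non-empty.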
The process of debugging templated bash snippets that come from anywhere and get eval’d, etc. is quite time-consuming.
One would think that Bash can be replaced “as the user frontend” (e.g. just the phases themselves) while the logic in the background is still Bash, but at least from what I’ve seen in the code I believe this would be quite a bit harder than just making the code Oils-compatible because there’s no clear distinction between “code” (as in the “stdenv” bash code) and “data” (as in user supplied bash code). I could be wrong, though.
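A tiny sketch of why that boundary is blurry: stdenv-style code routinely receives bash fragments as strings and evals them (the variable names here are hypothetical, not actual stdenv code), so what looks like “data” becomes “code” at runtime:

```shell
# Hypothetical: a user-supplied hook arrives as a plain string...
postHook='built=yes; echo "running post hook"'

# ...and the framework executes it with eval. There is no point at which
# the "frontend" could be swapped out without also interpreting this string
# as bash, because the string itself is bash code.
eval "$postHook"
echo "$built"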
It might be easier to just rewrite the ‘bash-stdenv’ code in Ysh, but I’d still be unsure how well the “surroundings” like all the compiler wrappers, etc. work. IIRC there’s a lot of magic happening when build helpers e.g. detect if a certain phase exists and act differently, etc.
Thanks for the explanation, and if you end up making any progress please be sure to report back; I’m sure I’m not the only person interested in hearing about it.
Note that the breaking change caused by Rust 1.80 is almost unprecedented in the history of Rust, and many people inside the project are very unhappy about the way it was handled. Indeed, newer versions of Rust have introduced a special warning for the breaking change, which goes to show how rare this kind of thing is.
It was definitely handled imperfectly and Rust will deservedly have to eat the reputational hit for stability, but 1.80 is not a good argument for Rust being cavalier about breaking changes in general, and I expect it will come up in Rust project discussion of any backwards‐incompatible changes for a long time to come. They’re far more thorough than us in general, e.g. with tools like Crater to test compiler changes against a huge swathe of the ecosystem.
Also, the reason it took so many commits is that our current Rust packaging story is bad. The vast majority of packages just needed their time crate dependency bumped, but we have no way of doing that in one place. Once I get around to Investigate packaging Rust crates separately · Issue #333702 · NixOS/nixpkgs · GitHub, a single commit bumping time would handle the vast majority of such breakage. The pain we experienced here was in large part self-inflicted.
It didn’t slip through at all. The team that decided on the change were aware that it was incompatible, had Crater run results that told them that thousands of crates would be broken, and then they made a bad decision that made people unhappy. That’s a problem, but it’s not a problem that looks like “they change the language so much that they can’t even tell when they accidentally break something”. Although Rust does pile on features fairly regularly, the kinds of language changes that change how existing type-checking works rather than simply adding new orthogonal features are rare.
To be clear: it’s perfectly legitimate to have opinions on the rate at which Rust becomes more complex, and it’s perfectly legitimate to have opinions on the 1.80 compatibility break. My objection is only that the story you are using to conflate the two is factually inaccurate.
I came across some really interesting developments in this regard yesterday actually:
If that project comes to fruition and is maintained going forward, it’d reduce the bootstrapping problem around nushell to the point where it could reasonably be solved, I think.
JavaScript tutorials, tools, and examples are everywhere, so I’d guess that (if it were allowed) LLM-generated JS code would be among the most production-ready;
Unlike other tool/toy languages, JavaScript is standardized as ECMA-262, so version changes can’t break the existing code;
What else handles JavaScript Object Notation more natively than JavaScript?
There are even many languages that compile to JS to work around its defects, so it won’t be too big a problem if someone (like me!) hates its syntax and/or semantics.
However, the built-in std and os modules of quickjs are too simple and rough to be used for scripting. So if Nixpkgs accepts it as an official scripting language, some additional utilities must be provided, just like the existing Bash helper functions.