State of Haskell Nix ecosystem (2024)

I would like to bring some clarity to the state of the Nix Haskell ecosystem and to discuss how we can make it better by making it simpler, reducing the maintenance burden, and joining efforts. Looking at https://discourse.nixos.org/t/nix-haskell-development-2020/6170/2?u=malteneuss there seem to be at least two large competing Nixifying approaches:

  1. Nixpkgs approach. The current “official” Nixpkgs haskellPackages (formerly haskell4nix), with some serious limitations:
  • Uses the cabal2nix Nix library, which parses but ignores the version constraints in a <package>.cabal file and uses whatever package versions happen to be selected within Nix. This means that what is built outside of Nix with cabal build and what is built within Nix can be quite different (see the sketch after this list).

  • Only one blessed stable package set, namely haskellPackages, for a fixed GHC compiler version (currently GHC 9.6). When I wanted to use a newer GHC version like 9.10 and a newer package set like haskell.packages.ghc910, builds failed all over the place, which required a lot of manual work to fix; as I understood it, that is one reason for having only one blessed package set. Basically, just enough Haskell support to package and provide single Haskell apps.

  • Doesn’t support cabal.project files for monorepo projects out of the box. At least I couldn’t find anything, so I had to use tools built on top like https://github.com/srid/haskell-flake/.

  • Probably no cross-compilation support (at least I haven’t found anything).

    The main benefit of this approach is that it doesn’t use import-from-derivation (IFD), which is forbidden in Nixpkgs. It also gives short build times, as the NixOS cache usually has all packages.

  2. IOHK “haskell.nix” approach. An independently developed Haskell Nix setup with different, but smaller, limitations:
  • Lives out of the Nixpkgs tree. Likely receives less attention and community support than it could.

  • Often long build times. When I tried it, it required several GHC versions under the hood, not all of which were cached. The documentation on what is and isn’t cached seems to be out of date.

  • A quite different Nix setup and different Nix values to work with.

  • Uses import-from-derivation.

  • Support comes mainly from a single company in the volatile crypto industry.

    The big upsides are:

  • Supports lots of GHC versions and arbitrary package sets, whatever is defined in a <package>.cabal file.

  • Is developed and tested in industrial use.

  • First-class support for cross-compilation. Quite important for GHCJS, WASM and RISC-V in the future.

  • Just works, at least in a few of my test setups.

  • The maintenance burden seems to be lower, although I don’t know whether this comes from the IFD setup or from the project members simply doing the manual work more eagerly.
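To make the first approach concrete, here is a minimal sketch of what a build expression based on the Nixpkgs infrastructure typically looks like. The package name and path are hypothetical; note that callCabal2nix itself uses IFD, so this pattern is for use outside of Nixpkgs, while packages inside Nixpkgs are generated ahead of time from Hackage instead.

```nix
# Minimal sketch of the Nixpkgs approach for a hypothetical package "myapp".
# cabal2nix parses myapp.cabal, but the build uses whatever versions the
# chosen package set already contains; the .cabal version bounds are not
# solved against Hackage.
let
  pkgs = import <nixpkgs> { };

  # The one blessed, cached package set, tied to one GHC version:
  hp = pkgs.haskellPackages;

  # Other sets exist, but are not guaranteed to build:
  # hp = pkgs.haskell.packages.ghc910;
in
hp.callCabal2nix "myapp" ./. { }
```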

To me it seems that the haskell.nix approach is the way to go, as @cdepillabout has sometimes mentioned. It seems to work better out of the box, scales (works with whatever Haskell setup you start with), and seems to reduce the maintenance burden.
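For comparison, a haskell.nix setup is roughly a flake of the following shape. The package, executable and compiler names are placeholders, and the exact attribute paths may differ between haskell.nix versions, so treat this as a sketch rather than a reference.

```nix
{
  inputs = {
    haskellNix.url = "github:input-output-hk/haskell.nix";
    nixpkgs.follows = "haskellNix/nixpkgs-unstable";
  };

  outputs = { self, nixpkgs, haskellNix }:
    let
      system = "x86_64-linux";
      pkgs = import nixpkgs {
        inherit system;
        overlays = [ haskellNix.overlay ];
      };
      # haskell.nix reads the .cabal/cabal.project files (via IFD) and
      # solves a build plan, much like cabal itself would.
      project = pkgs.haskell-nix.project' {
        src = ./.;
        compiler-nix-name = "ghc966"; # must be a GHC version known to haskell.nix
      };
    in {
      packages.${system}.default =
        project.hsPkgs.myapp.components.exes.myapp;
      devShells.${system}.default = project.shellFor { };
    };
}
```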

Can we allow IFD in Nixpkgs and make it the official way to Nixify Haskell code? If not, and it has to live out-of-tree, can we officially endorse it in the Nixpkgs Haskell docs?

If both aren’t possible, can we push the Haskell tools to change? It seems that Rust with its Cargo.lock file doesn’t need IFD and has a good official Nixpkgs Rust setup. If the deterministic lock file is the important part, maybe we can get Haskell developers to adopt this more widely (I know there’s the Cabal freeze file, but I have almost never seen it used).

What do you think? (Also, please correct me where I’m mistaken.)

6 Likes

So the thing about haskell.nix is that it uses IFD, and that’s not allowed in nixpkgs and probably never will be, because Hydra needs to be able to evaluate the packages without any builds taking place. So we are kind of blocked and cannot adopt the way haskell.nix does it. We also cannot endorse it officially, simply because we cannot cache anything, so the user would end up compiling GHC from source, and endorsing the IOHK binary cache is also (hopefully) not an option.

So in my opinion the only way we’re getting out of this is if we get dynamic derivations out of experimental and then rewire the Haskell infra in nixpkgs to make use of them.

4 Likes

Last time I talked with @fgaz he was a maintainer of Cabal, and maybe he has something to say about that.

Generally, helping upstream ecosystem tools to be easier to integrate with Nix seems like a good approach, but requires good ties to those communities.

2 Likes

Thanks for your responses.

What would we need from e.g. Cabal to avoid dynamic derivations/IFD? Is it a deterministic lock file? At least that’s what I think from experimenting with Rust and Python Poetry2nix.

Can you explain why this is a requirement for Hydra? Is it performance, or security?

Did you have a look into https://horizon-haskell.net/ ?

AFAIK it’s for both: performance, because Hydra first does an evaluation and then schedules the jobs according to the hydraJobs attribute (well, normally; nixpkgs is special, but it’s the same idea), and maybe also security, so that you can be reasonably certain that mere evaluation isn’t doing malicious network queries.
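To illustrate the evaluation/build split: Hydra evaluates an attribute set like hydraJobs purely, without building anything, and only then schedules the resulting derivations as jobs. A minimal, hypothetical flake sketch:

```nix
{
  inputs.nixpkgs.url = "github:NixOS/nixpkgs/nixos-unstable";

  outputs = { self, nixpkgs }:
    let
      pkgs = nixpkgs.legacyPackages.x86_64-linux;
    in {
      # Hydra evaluates this attribute set first (no builds), then
      # schedules each derivation it finds as a build job.
      hydraJobs = {
        pandoc = pkgs.haskellPackages.pandoc;

        # An IFD-based job would already require a build (running
        # cabal2nix on the source) just to finish this evaluation step:
        # myapp = pkgs.haskellPackages.callCabal2nix "myapp" ./. { };
      };
    };
}
```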

1 Like

@fricklerhandwerk I do in fact have Opinions about the whole nix haskell thing :smiley:

First of all, I think the current approach has its merits, and we should be aware of them when thinking about a replacement.

A single, globally consistent¹ package set has great value for a software distribution. Mainly, it means there is only one place where you have to perform package-specific fixes. Security fixes are especially important.

¹: Meaning that there is only one version of each package.

Only 147 packages from haskellPackages have multiple versions. Compare that to Rust, where every package can depend on many different versions of other packages… and this is what happens. Many people have written about the problem.

Haskell packages are, I think, particularly amenable to the construction of a consistent package set. This is probably because, unlike Rust, they suffer from the diamond dependency problem, which was actually embraced through Stackage.

What we are doing with Haskell packages is not possible for ecosystems like Go, Rust or npm. When it is possible (in Python for example), we are generally following the Haskell approach. Consistency is, in the context of nixpkgs, a desirable property.


Now, while this may be true for nixpkgs itself, external development is another story. As you write, you might have to use a different GHC version, or a different version of a package deep in the dependency tree. In that case, I think tools like haskell.nix are a perfectly fine solution. Like callCabal2nix (which is IFD), they could even be included in nixpkgs, though I don’t really see why: they work fine as external tools for external development.
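For the external-development case, a minimal sketch of one such IFD-based helper from nixpkgs; the project root and the tool list are placeholders.

```nix
# shell.nix -- fine outside of nixpkgs, where IFD is allowed.
# developPackage runs cabal2nix on the project's .cabal file at
# evaluation time (IFD) and builds it against haskellPackages.
{ pkgs ? import <nixpkgs> { } }:

pkgs.haskellPackages.developPackage {
  root = ./.;
  modifier = drv:
    pkgs.haskell.lib.addBuildTools drv [
      pkgs.cabal-install
      pkgs.haskell-language-server
    ];
}
```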

Regarding endorsement in the manual, I have no strong opinion.


But what if you still want to avoid the constraints of a globally consistent package set and IFD, for actual use in nixpkgs? I can see two options. Both are similar to the Rust setup you mentioned.

  1. Checking in some kind of lockfile. This can be, for example, plan.json, a conversion of plan.json to Nix, or something based on mitmproxy like deps.json from the new Gradle builder.
    • Equivalent to buildRustPackage with cargoLock.lockFile (see the sketch after this list)
    • Probably wouldn’t be accepted for general use in nixpkgs due to #327064. Even if upstream tools are changed to directly generate nix-digestible lockfiles, this won’t change.
  2. Using FODs. I have implemented this at nix-build-cabal-project: Alternative Haskell Nix infrastructure based on cabal and fixed-output derivations
    • Equivalent to buildRustPackage with cargoHash
    • Like buildRustPackage, it requires you to rebuild the full dependency tree from scratch for every package.
    • I can see something like this being used in nixpkgs. Probably for a limited number of packages (remember that this approach loses all the advantages of a consistent package set).
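For reference, here is a sketch of what those two Rust equivalents look like in nixpkgs today; the package names are placeholders and the hash would have to be filled in.

```nix
{ lib, rustPlatform }:

{
  # Option 1: a checked-in Cargo.lock, read at evaluation time.
  withLockFile = rustPlatform.buildRustPackage {
    pname = "myapp";
    version = "0.1.0";
    src = ./.;
    cargoLock.lockFile = ./Cargo.lock;
  };

  # Option 2: a fixed-output derivation holds the vendored dependencies,
  # pinned by a single hash; the whole dependency tree is rebuilt from
  # scratch for every package that uses this approach.
  withHash = rustPlatform.buildRustPackage {
    pname = "myapp";
    version = "0.1.0";
    src = ./.;
    cargoHash = lib.fakeHash; # replace with the real hash after the first build
  };
}
```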

Finally, there will soon be a third way: RFC 92 computed derivations / dynamic derivations. These will have the advantages of checked-in lockfiles… without the actual lockfiles. There is also potential for some interesting upstream integration. However, computed derivations still have a long way to go before they are usable in nixpkgs.

9 Likes

FWIW, Rust is also amenable to constructing a relatively consistent package set, and I plan to try and make it happen. We will have to package multiple major versions of packages, but I don’t think we will run into a combinatorial explosion like we might with, say, npm. @adisbladis has also expressed interest in applying this approach to Go.

8 Likes

That is great news! I thought it was close to the level of npm, where node-packages.nix is just… pain. Glad to know I was mistaken!

2 Likes

Is it worth looking at Ruby in nixpkgs? It provides a set of Ruby modules, but the derivations are version-independent. If Bundler wants version X, it gets version X. I really appreciated being able to fix Darwin issues in one place for Ruby, instead of having to copy and paste the fix into every affected package, as I have had to do with Rust. That’s the one thing I really dislike about the FOD approach.

2 Likes

One more reason from the aforementioned Hercules CI blogpost:

Using different platforms (typically Linux and macOS) during IFD is one of the reasons why upstream forbids IFD, since their evaluator is running on Linux and it can’t build for macOS.

3 Likes

This is of course a discussion which we Haskell maintainers have quite frequently; I will just drop a few random thoughts:

  • haskellPackages has cross-compilation support; it might not be as smooth as in haskell.nix, but it works, e.g. for ghcjs (see the sketch after this list).
  • I believe that the globally consistent package set approach is the way to go in nixpkgs. However, it would be very desirable to make it possible for consumers of nixpkgs to easily build Haskell build plans on top of that, i.e. to support that aspect of haskell.nix in nixpkgs. I believe this to be fairly straightforward, but someone has to invest the time.
  • Changing the infrastructure inside nixpkgs has a lot of constraints: backwards compatibility, eval times, generated file size, etc., and we are frankly not equipped to tackle this with the currently available contributor power. It would be absolutely awesome if anyone wanted to work on this, and I would be very happy to support all such efforts with discussions and reviews.
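A small, hypothetical sketch of what cross-compilation looks like on the nixpkgs side; whether a given package actually cross-builds depends on its dependencies, and the availability of the JavaScript backend depends on the nixpkgs and GHC version.

```nix
let
  pkgs = import <nixpkgs> { };
in {
  # A Haskell package built for aarch64-linux with a cross GHC:
  crossHello = pkgs.pkgsCross.aarch64-multiplatform.haskellPackages.hello;

  # The JavaScript backend is exposed the same way (if available):
  # jsHello = pkgs.pkgsCross.ghcjs.haskellPackages.hello;
}
```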
8 Likes

I wasn’t aware of this project, thanks. Looks like it uses a more convenient structure of patches to maintain a greater number of consistent package sets. Is the intention to try it out and merge it into nixpkgs at some point?

1 Like

Thanks everyone. I now have a much clearer understanding of the requirements that led to the current state in Nixpkgs:

It seems unavoidable to use a single consistent Haskell package set (with at most a small number of deviations, as @emily proposes) for all Haskell apps in nixpkgs, to satisfy maintenance and security demands as well as possible. Let’s call it a Haskell “Nixpkgs build plan”.

It should also be possible to support Haskell “Cabal/Stack build plans” (using package versions precisely as Cabal/Stack would without Nix) within Nixpkgs through approaches without IFD, such as checked-in lockfiles, fixed-output derivations or, in the future, dynamic derivations. This is what “haskell.nix” currently achieves through IFD.

As a piece of low-hanging fruit, I would like to document this and many of the helpful comments somewhere more visible, because forum posts like this can be hard to find. Would the Nixpkgs manual be a good place?

I’m looking forward to the initiatives by @fgaz, @emily and the others.

5 Likes