Comparing Nix and Conda

What’s next, Viper? Or Rattle? Another day, another package manager is invented.

The packaging king is dead, long live the packaging king!

Can someone admit that some language ecosystems and package managers are damaged beyond repair…

I can hear prebuilt Docker containers laughing from the shadows, ready to replace everything, to the detriment of everyone.


I’m pretty confident that in the long run Nix is a much better fit for scientific computing needs than Conda. Nix Flakes are a real game changer. However, I keep running into some pain points, and I don’t see Nix finding wide adoption until those are resolved (but maybe it’s just me getting things wrong).

I’ll use this thread as an excuse to mention some of them:

  • Integration with non-nixos ecosystems
    • Dynamic linking: nix run nixpkgs#poetry run python, pip install ..., python -m ipykernel install --user (JupyterHub with user kernels) all ultimately fail the moment you import a .so, because the interpreter won’t have the absolute RPATHs into /nix/store set. This may well be my biggest pain point to this day.
    • GL apps. There’s nixGLNvidia, but last time I checked it was still incompatible with Nix Flakes. Even if it were compatible, I couldn’t convince people to accept this kind of UX, and I have tried.
    • Nix is invasive.
      • The default instruction for installing Nix is curl ... | sh, which newcomers treat as fishy.
      • Nix wants /nix (although I see above linked posts on statically built nix and custom store locations)
      • dockerTools.buildImage is great, but it needs Nix on the host, as opposed to a Dockerfile, which one can just reference in a docker-compose.yml in a repo that non-Nix people can build.
      • I tried using Nix to provide Bazel and a toolchain, and to me it was great, but for the reasons above, instructing other (non-Nix) people on how to get it running on their systems has been quite frustrating.
        • And I couldn’t “just give them the .so binary”, because first I’d need to undo the RPATHs.
  • Documentation, observability
    • Nix is dynamically typed and uses rather complex mechanisms like overlays and callPackage, which give off an “ad hoc” vibe. Each time I run into a new function, I have to begin a lookup in all-packages.nix to figure out which file the name corresponds to, and even then I sometimes fail to infer the function’s signature. I keep doing this (until I find a better solution), but I can’t expect other people to commit to it.
  • Cache misses. At times a huge build triggers, and it can be hard to figure out how to adjust the inputs so that a cached derivation is used instead.
  • CUDA in Flakes means import nixpkgs { allowUnfree = true; ...} and it’s ad hoc again
  • UPD: I just tried fetching opencv-python with poetry2nix (which has been mentioned above) and 1) it didn’t try to fetch a ready-made wheel, running a full build instead, and 2) that build failed; the virtualenv way works with the same lock file. All of which is to say: the UX isn’t smooth enough just yet, and a lot of work needs to be done before we can expect scicomp people to widely switch to Nix.
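To make the CUDA point concrete: since flakes ignore the user’s nixpkgs config, the allowUnfree switch has to live inside the flake itself. A minimal sketch of the pattern (package names are illustrative):

```nix
{
  inputs.nixpkgs.url = "github:NixOS/nixpkgs/nixos-unstable";

  outputs = { self, nixpkgs }:
    let
      # Flakes do not read ~/.config/nixpkgs/config.nix, so unfree
      # packages such as CUDA must be allowed at import time.
      pkgs = import nixpkgs {
        system = "x86_64-linux";
        config.allowUnfree = true;
      };
    in {
      devShells.x86_64-linux.default = pkgs.mkShell {
        buildInputs = [ pkgs.cudatoolkit ];
      };
    };
}
```

Every flake that needs CUDA has to repeat this dance, which is what makes it feel ad hoc.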

Most of these can surely be worked around, and I wish I were more constructive/involved in improving things, but for now there’s just that…


Mach-nix from @DavHau has been working very well for me.


Mach-nix is superb! It would be even better if it supported PEP 517/PEP 518 (but then again, I haven’t done anything to help it support them).
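For anyone curious, typical mach-nix usage is only a few lines. A sketch, assuming the 3.3.0 tag; the package names are illustrative:

```nix
let
  # Pin mach-nix itself; any released tag should work similarly.
  mach-nix = import (builtins.fetchGit {
    url = "https://github.com/DavHau/mach-nix";
    ref = "refs/tags/3.3.0";
  }) {};
in
mach-nix.mkPython {
  # Requirements are resolved against nixpkgs and PyPI metadata,
  # preferring prebuilt wheels where available.
  requirements = ''
    numpy
    opencv-python
  '';
}
```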

I have been working on nix-portable, with the goal of making Nix non-invasive and available in restricted environments without user namespaces, etc.

It is basically just a fancy wrapper around proot + nix + channels + flake config.

I plan to distribute this via PyPI and Conda, so Python users can get Nix as easily as any Python package.


This is getting better. Various distributions ship Nix now.

That means that of the major distributions, only Fedora (and derivatives) does not have a package, though there are some attempts to package Nix in Fedora Copr. For Fedora, having a package is important, since multi-user installs currently do not work with SELinux.

At any rate, I think that Debian/Ubuntu shipping Nix (and it soon landing in stable/LTS versions) is really great, because people can ask sysadmins to install an official package, which they are more likely to do than run curl | bash. This also works better if they are using existing tools to manage machines (Canonical Landscape, etc.).

There were some other suggestions, but what also works is using buildFHSUserEnv to build an FHS environment and use Poetry inside it.
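A rough sketch of that approach, assuming the usual buildFHSUserEnv interface; the package list is illustrative:

```nix
with import <nixpkgs> {};

# Build an FHS-style environment (/usr/lib, /bin, ...) so that
# Poetry-created virtualenvs and manylinux wheels find their
# dynamically linked dependencies in the standard locations.
buildFHSUserEnv {
  name = "poetry-fhs";
  targetPkgs = pkgs: with pkgs; [
    python3
    poetry
    zlib        # a common wheel dependency; extend as needed
  ];
  runScript = "bash";
}
```

Running the resulting script drops you into a shell where pip/Poetry behave much as they would on a conventional distribution.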


Singularity is installed as root by the cluster managers, but can be used without root privileges. The container runs as a normal user, so you avoid all the security problems that Docker has. Singularity was designed specifically for HPC environments; therefore, I would presume it is installed in many HPC centers.


I discovered Nix in 2015 as I was looking for a user-space package manager. I was working at the Los Alamos National Laboratory (LANL) on a mix of desktops and supercomputers. Root access was not an option in any environment, and the installed Red Hat packages were limited and old. Working over the Christmas break, I was able to build the Nix packages I wanted on my home system with non-standard locations for the Nix store. I installed those builds at LANL, but I never figured out how to share those packages with colleagues. I used that arrangement myself until the system managers eliminated the locations where I’d put the Nix store.

Later, I tried Conda for the same purpose. I found that it sort of worked: I could share specifications of environments, but I couldn’t share the actual binaries, so everyone ended up with their own copies.

I met Travis Oliphant at one of his shows in Austin and asked him if he knew about Nix. I’ve forgotten how, but he dismissed it.

Since leaving LANL, I’ve run Nix on my home systems; I like it for the reasons others have mentioned. In particular, I like cleanly specified, isolated, reproducible build environments.

I think Nix would be good for serious computer programmers at LANL, but the learning curve is too steep for folks who are interested in “real science”, i.e., those who don’t have enough time to do reproducible work. While Conda’s learning curve is not as steep as Nix’s, I think Conda is not as good.


The presentation took place yesterday.

I have spent a good part of the last 2 decades standing in front of rooms full of people, explaining technical material in highly interactive ways. Inviting continuous questions and observing the audience’s body language were both a fundamental part of my approach. This was a remote presentation where everyone had cameras and microphones muted throughout, until the Q&A session at the end. So, compared to what I am accustomed to, this left me with absolutely no idea how it went!

Very brief summary of the presentation:

  • Software packaging/configuration is a nightmare: Nix solves some of the major problems.

  • Brief outline of major problems, along with non-technical description of how they are absent with Nix.

  • Brief, high-level description of underlying Nix theory

    • pure functions
    • immutability
    • content addressing
    • DAGs
  • Briefly link theory to aforementioned problems and their solutions.

  • Emphasize unprecedented reproducibility: important for science.

  • Why we shouldn’t move to Nix:

    • biggest problem: HPC clusters
    • complexity for the ‘Nix-admins’ of the project
    • Our Nix bus factor is currently 1
  • Why we should stick with Conda: ease of installation on HPC clusters.

  • Value proposition: If you install Nix on a machine, I can guarantee that
    you can forget about config woes on Nix-using projects.

  • Live demo, with direnv support, of:

    • cd project-with-complex-installation-procedure: dependencies magically appear!

    • git checkout commit-with-different-dependency-versions: dependencies magically update!

    • Ephemeral package installation with nix-shell -p (I insist on avoiding nix-env -i)

    • Edit declaration of required packages in project’s shell.nix: package magically installed!

    • cd out of the project: dependencies magically disappear, leaving an unadulterated environment.
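The shell.nix driving a demo like this can be tiny; with a one-line .envrc containing `use nix`, direnv re-evaluates it on every cd and git checkout. A sketch with illustrative package names:

```nix
# shell.nix -- picked up by direnv's `use nix`
with import <nixpkgs> {};

mkShell {
  buildInputs = [
    python3
    python3Packages.numpy
    git
  ];
}
```

Editing the buildInputs list and re-entering the directory is all the “package installation” the audience sees.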

I toyed with the idea of installing Nix and one of our projects, from scratch in a VM, in the demo. In the end I decided against it because:

  1. there’s a lot of stuff to download, and it would take too long,
  2. I didn’t want to risk the Nix downloads throttling the conference connection.


As I said earlier, I have no idea how this was received, because, throughout the presentation I had as much audience feedback as I’d get from a brick wall: for all I know, they all went for a coffee while I was talking.

The Q&A session afterwards didn’t suggest any obvious opposition.

From what little feedback I have, I (would like to?) think that the value proposition and the demo were convincing, and what went before gave some reason to believe that they weren’t just empty salesman’s claims.

Much of the discussion was devoted to understanding our HPC situation. It seems that Singularity is installed almost everywhere we care about. We have some machines which we control or where we have strong influence, which run CentOS 6, precluding a multi-user install of Nix. On some of these we’ll try Singularity. On the one we control fully, we’ll probably upgrade OS, but not for a while, as it is currently involved in important production runs.

macOS was also discussed, in the shadow of the bigger HPC issue; I guess it will take centre stage once we get over the HPC hurdle. The specific project in which I am proposing to replace the existing Conda reliance with Nix seems to work OK in terms of Nix-on-macOS. Further down the line, the biggest problem is probably going to be Geant4. As a non-Mac user, I have no idea where to start on this. I am utterly confused by the status/need of Xcode in Darwin support in nixpkgs.

Final outcome

We will, at leisure, try running Nix in Singularity and take it from there.

(There will be second-order issues to address down the line, such as macOS, MKL, CUDA, AVX, etc. but I think that getting over the first hurdle of Nix acceptance will be a big leap forward for the project, which will pay dividends in the long term.)


Thank you all for your input on this topic.

At the moment it looks like Singularity might be a life-saver! As I have zero experience with it, I have to reserve judgement, but if it works out then it will have simplified the task significantly, and may well turn out to be the single factor which makes Nix adoption possible in our case.


Thanks for sharing that. Sounds like the presentation was a real challenge. Great job and good luck with the trial!


I’m interested in your findings about nix+singularity! :grinning:
Especially which option is more feasible: building Singularity images using Nix (potentially on another machine, or inside a proot environment on the HPC), or having Nix run inside Singularity.


This whole thread is interesting to me, as I’m also working in a research environment where Conda is used, but it’s not really a standard, and people are unsure if it’s “good”. Also, recently the subject of reproducibility, or “linkability”, came up. People were interested in how software can be treated like a paper: how it can be linked and uniquely referred to (sort of like a DOI). I think Nix fits very well here, as even older software can theoretically be completely rebuilt from its Nix build information (if everything is properly pinned, of course).
We’re also tentatively using Singularity for some projects, so I’m very eager to hear if you get any more feedback from colleagues, and if your Singularity+Nix efforts are fruitful.

I might even give a similar presentation to my colleagues and see how they respond.


I think the way to go is to use Nix in Singularity, together with a local binary cache for fast downloads, when building the image, of packages that are not in nixpkgs. This will be much faster than using Conda, I think, and for the packages not in Conda that you have to build yourself, it will be instant compared to building them in Docker/Singularity each time.

Setting these things up is complicated for someone not deeply into how Nix and the Nix store work. If the Nix community wants adoption within science, a really good tutorial on how to add Nix inside a Singularity container is essential. Likewise, how to use a specific version of a tool. In the end, a Docker/Singularity container that could serve as the local binary cache would also be very helpful, and guidance on integrating all this into CI would be awesome.
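For the binary-cache part, the nix.conf fragment inside the container is short; the internal URL and key below are placeholders, and the cache.nixos.org entries are the standard upstream defaults:

```
# nix.conf inside the container -- internal URL/key are placeholders
substituters = http://cache.internal.example:8080 https://cache.nixos.org
trusted-public-keys = cache.internal.example:<base64-public-key> cache.nixos.org-1:6NCHdD59X431o0gWypbMrAURkbJ16ZPMQFGspcDShjY=
```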

The only ones who can increase adoption of Nix in HPC are the users, by telling the HPC administrators they need it to increase the reproducibility of their builds, plus all the other benefits we already know about. The most efficient way to get there is to make it super simple to use a specific version of a scientific tool from nixpkgs inside a Singularity container.


There have been moments when (mini)conda with the conda-forge channel was good and useful, but that was years ago.

Current issues make it almost useless:

(I have to check out mamba)


After reading this topic top to bottom again, I am trying to connect the dots, but I may be wrong in doing so…
By no means do I have any experience in HPC, but it seems that Flox (see the “Nix Friday” interview with floxdev on YouTube) might fit in nicely in an HPC environment?
As also mentioned in this topic, CentOS has been dropped by Red Hat, so now there is a window of opportunity for Nix(OS) as a great replacement for all those HPC environments running old CentOS versions!
And with all the focus on CUDA, aren’t we overlooking other options?
FOSDEM 2021 - GPU Computing Made Simple with the C++ Vulkan SDK & the C++ Kompute Framework (AMD, Qualcomm, NVIDIA & Friends)


People are on PyTorch/TensorFlow/etc. on CUDA. It’s great that there are alternatives, but it’s useless to tell people who are heavily invested in an existing framework that an alternative exists. If we can’t support the libraries that they use, plus CUDA, then they will just go somewhere else.

People will only switch if the alternative supports their existing frameworks and offers a replacement for CUDA that works out of the box and is in the same ballpark performance-wise.

ROCm/HIP on AMD GPUs/accelerators probably comes closest, but anyone who has used those in the real world will tell you that it is a miserable experience. You’ll encounter lots and lots of bugs, bug reports apparently go to /dev/null, and there are a lot of performance regressions compared to NVIDIA. My previous employer purchased some Radeon VIIs, but we ran into so many issues that even the ardent AMD fans in the group were swearing by our NVIDIA GPUs.

I think we should consider setting up a sort of SIG that collects money (or finds sponsors) to rent a build server and provide an additional binary cache for scientific computing. I am willing to put time into this, but I know that @tbenst and @zimbatm have done prior work on a scientific computing overlay, so I am hoping that they’ll also weigh in on this topic.


import nixpkgs { allowUnfree = true; …} and it’s ad hoc again

If you don’t want to allow all unfree packages, you can whitelist specific packages with allowUnfreePredicate, or licenses with whitelistedLicenses, as described in the Nixpkgs 20.09 manual.
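Concretely, something like this; the sketch imports nixpkgs once just to get lib, and the package names are illustrative:

```nix
let
  # First import only to obtain lib; the second applies the predicate.
  lib = (import <nixpkgs> {}).lib;
in
import <nixpkgs> {
  config.allowUnfreePredicate = pkg:
    # Allow only these specific unfree packages by name.
    builtins.elem (lib.getName pkg) [
      "cudatoolkit"
      "cudnn"
    ];
}
```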

Probably I don’t really understand what you mean by ”ad hoc“ here, but wouldn’t the only other solution be to allow all unfree packages by default?

I think the overlaying is really handy, but the mechanism, via fixed-point recursion mixed with the different callPackages and with the non-recursive .override* overriding, is confusing, also to many other people. Therefore, in order not to damage things one doesn’t fully understand, it is sometimes recommended to do things in a certain way, which then seems ad hoc to me, and somewhat defeats the argument for Nix that I understand what happens in general because it’s based on principles. In practice it makes no large difference whether things are indeterministic or only fragile, at least if one’s hobby is not to investigate such problems.


Thank you, I didn’t know about the allowUnfreePredicate!

What I had in mind saying “ad hoc” (and it may not have been exactly the appropriate language) was rather that flakes themselves discard the licensing info: there’s no way to mark a flake as importing nixpkgs with allowUnfree, nor is there a way to configure the system to allow building/installing “unfree” flakes on it.


Are you aware of anyone successfully running Nix inside Singularity? (So far, we [one other person on our project: I have not had the time to take a serious look yet] have failed to get it to work.)