Difference between "normal" and binary packages

This is a noob question, but here I go:

What is the difference between the “normal” and the binary version of a package, from the end user’s standpoint? Which one should I prefer if I have the choice?

E.g. I can choose between nodePackages.vscode-json-languageserver and nodePackages.vscode-json-languageserver-bin. What advantages does either one have over the other?


One is built by Hydra; the other is downloaded pre-built from some “official” upstream.

In theory, there should be no difference, as both should be built from the same source (well, not always, the adoptopenjdk-* packages are deliberately different versions, but you get the idea). In practice, things like the compiler version, library dependency versions, build host kernel versions, date, time, hostnames and all kinds of minor details of build environments mean that there will be differences.

The differences are typically more academic in nature, because they should never cause a functional difference, if only because the test suites try to ensure that.

They can sometimes have practical effects though. For a user, the question you should be asking is one about trust. Do you trust the “official” upstream or nixpkgs more? After all, one of them might have malware on their build hosts that makes the binaries you download malicious as well.

Personally, I prefer the non-bin packages, simply because that way I only need to trust the people who already build my OS, so there are slightly fewer places where things can go wrong. Nix is also especially good at keeping build environments consistent, so you can reproduce the upstream build to verify it’s all correct.

There is also the minor benefit that I can patch the sources much more easily, in case I want to fix a bug downstream or something. And in theory it means I can build everything with -march=native set, like on Gentoo (though have fun with that :wink: ).
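To make the “patch the sources” point concrete, here is a minimal sketch using an overlay and overrideAttrs; the package name and patch file are hypothetical:

```nix
# Hypothetical overlay: patch a source-built package downstream.
final: prev: {
  some-package = prev.some-package.overrideAttrs (old: {
    # Apply a local bugfix without waiting for upstream:
    patches = (old.patches or [ ]) ++ [ ./fix-my-bug.patch ];
    # Gentoo-style CPU tuning; note this forgoes the binary cache,
    # so the package (and anything depending on it) rebuilds locally:
    NIX_CFLAGS_COMPILE = "-march=native";
  });
}
```

None of this is possible with a -bin package, since there is no compilation step to hook into.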

That said, a lot of packages are not explicitly marked as “bin”, so in practice it’s not easy to ensure that you only use hydra-built binaries. Hence I tend to use whichever works, trying the non-bin variants first. It’s also just less important for non-native binaries like these JS scripts you mention.


The binary version starts from the binary provided by upstream, which Nix then patches so that all of its dependencies end up in /nix/store like those of regular packages. Normal packages, on the other hand, are compiled by Nix directly from the package’s source code; the resulting binaries don’t need to be patched, since Nix’s compiler toolchain already produces binaries with their dependencies in the appropriate locations.
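To illustrate the difference, here is a rough sketch of what a “-bin” derivation often looks like, using autoPatchelfHook to rewrite the pre-built binary’s interpreter and rpath (all names, URLs, and hashes here are placeholders):

```nix
# Hypothetical "-bin" package: fetch an upstream binary tarball and
# patch it so its dependencies resolve inside /nix/store.
{ stdenv, fetchurl, autoPatchelfHook, zlib }:

stdenv.mkDerivation {
  pname = "some-tool-bin";
  version = "1.0";
  src = fetchurl {
    url = "https://example.com/some-tool-1.0-linux-x86_64.tar.gz";
    hash = "sha256-AAAA..."; # placeholder
  };
  # autoPatchelfHook rewrites the ELF interpreter and rpath entries:
  nativeBuildInputs = [ autoPatchelfHook ];
  # libraries the pre-built binary links against:
  buildInputs = [ zlib ];
  installPhase = ''
    mkdir -p $out/bin
    cp some-tool $out/bin/
  '';
}
```

A source-built package would instead run the upstream build system in buildPhase, and the compiler wrapper would embed the right /nix/store paths from the start.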

Each approach has its pros and cons (and I have surely forgotten some):

Pro of binary over source package:

  • binaries can come with more dependencies: Nix usually does not pre-compile packages that depend on proprietary software. This can be an issue; for instance, I think Blender does not come with CUDA enabled by default, so you need the -bin package to get CUDA, or you have to compile it yourself.
  • sometimes simpler to package: it can be quite hard to build some software from source (Electron, Java…), and packaging the binary directly may save some time
  • quicker to build: since you skip the compilation, this can save time if your derivation is not in the NixOS cache for some reason
  • binaries provided by upstream can potentially be more optimized (EDIT: subject to debate, see below): for instance, Mozilla compiles Firefox with profile-guided optimization (PGO), which depends on how users typically use Firefox. If you don’t have the right profiling data, you can’t apply the same optimizations and may end up with a slower program. (Not sure about this, as Nixpkgs finally added support for PGO.)
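For the Blender example above, the source package can (if I remember correctly) also opt into CUDA via an override; the flag name may differ between nixpkgs versions:

```nix
# Hedged sketch: enable CUDA in the source-built Blender rather than
# falling back to a -bin package. cudaSupport is the flag I believe
# the derivation exposes; check your nixpkgs version.
{ pkgs }:
pkgs.blender.override { cudaSupport = true; }
```

Of course, this means compiling Blender locally, which is exactly the cost the -bin package avoids.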

Cons of binary over source package:

  • less transparent: you need to trust that the binary shipped by the company was actually built from the published sources, and it is hard to detect proprietary blobs
  • not as flexible: from source, you can choose precisely which compilation flags to enable and which revision of the source you want, you can test pull requests without waiting… and you can enable additional optimization options
  • building from source gives you a dev environment for free
  • supporting new architectures and cross-compilation is certainly easier in this setting, since you don’t need to wait for the upstream provider to ship binaries for the appropriate architecture
  • reproducibility: as pointed out above, Nix is quite good at reproducing builds (though this can’t be guaranteed)
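As a sketch of the “test pull requests without waiting” point: with a source package you can simply swap the src for the PR’s commit (owner, rev, and hash below are placeholders):

```nix
# Hypothetical overlay: build some-package from an unmerged PR commit.
final: prev: {
  some-package = prev.some-package.overrideAttrs (old: {
    src = prev.fetchFromGitHub {
      owner = "upstream-owner";
      repo = "some-package";
      rev = "abc123...";       # the PR's head commit
      hash = "sha256-AAAA..."; # placeholder, found by building once
    };
  });
}
```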

EDIT: sorry, I just saw your message, tlater; I hope I’m not saying anything wrong ^^


The opposite is really the case. I mean, either could in theory be “more optimized”, but Mozilla needs to build a binary that can run on more varied systems, so they can enable fewer optimization flags. Nixpkgs knows exactly where your package needs to be able to run, and with which libc, so in theory it could enable more aggressive optimizations.

In practice, on average you will probably never notice optimizations in either direction. But you can tune them yourself if you care enough and you use a source package.

Also I’m not sure a user cares about whether a package is easier to package ;p And as long as you’re on stable, you should never run into a situation where you’re building locally, so I wouldn’t call that a benefit. That’s just like saying the nixpkgs package can still be built if the Firefox website is down; technically a benefit, but not really something you’d expect in normal use. Just update another time.

Other than that, yeah, pretty good list of trade-offs :slight_smile:


When I wrote this sentence I had this issue in mind, where one year ago Firefox built from source was 20% slower than firefox-bin, because the binary version enabled profile-guided optimization (PGO), and many people argued that it was not trivial to enable it in Nix. PGO was only enabled in Nixpkgs roughly 6 months ago in this PR, and I’m not sure how the two versions compare now.

So I guess this point depends on the effort Nixpkgs and upstream packagers put into building their binaries ^^


Yep, that’s ultimately it. It’s another aspect of the “trust” question, for sure: who do you trust to create the most optimal package? I think this aspect converges, though; over time, downstream packages tend to catch up with upstream, because downstreams put more effort into building. It’s the thing they tend to know how to do really well, after all :wink:


Thank you all for your input! I will use the non-binary versions then, unless there is a special reason not to with a specific package.

Also, many “binary packages” are not disclaimed as such. Besides, many packages in nixpkgs cheat for practical purposes (e.g. 2nix tools and other builders in nixpkgs, like the Maven one): they only build the package itself from source, not its [transitive] dependencies. They use the fixed-output derivation (FOD) technique, which downloads pre-built dependencies (not via Nix, but from official repositories like npm, Maven, or crates.io) and puts them all in a derivation for Nix to use…
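For the FOD trick mentioned above, here is a rough sketch of a Maven-style dependency derivation (all values are placeholders; real builders also have to scrub timestamps and other sources of non-determinism):

```nix
# Hypothetical fixed-output derivation vendoring Maven dependencies.
# Declaring outputHash up front is what permits network access
# during the build.
{ stdenv, maven }:

stdenv.mkDerivation {
  pname = "my-app-deps";
  version = "1.0";
  src = ./.;
  nativeBuildInputs = [ maven ];
  # Download all [transitive] dependencies into $out:
  buildPhase = "mvn package -Dmaven.repo.local=$out";
  dontInstall = true;
  outputHashMode = "recursive";
  outputHashAlgo = "sha256";
  outputHash = "sha256-AAAA..."; # placeholder, found by building once
}
```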
