Is there much difference between using nix-shell and docker for local development?

I’m still quite new to the Nix scene, but I feel like I’ve finally got the language down and am managing all of my personal projects and machines with Nix to great success. My company uses PHP and we have TONS of small deployed projects, some of which are still on PHP 5.5.9 servers! I used to use brew to link between different PHP versions to match the deployment environment and, especially since brew’s support for PHP is quickly declining, it’s been a life-saver to have a shell.nix for each of my personal projects.
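Here’s roughly what one of those per-project shell.nix files looks like — a minimal sketch, where the php56 attribute is illustrative; old PHP series only exist in correspondingly old nixpkgs revisions:

```nix
{ pkgs ? import <nixpkgs> {} }:

pkgs.mkShell {
  # Pick the attribute that matches the deployment target; php56 is
  # illustrative and only exists in nixpkgs revisions old enough to
  # still carry that series.
  buildInputs = [ pkgs.php56 ];
}
```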

I was giving a small presentation to my team about how I keep my environment pristine between projects by using Nix. One of my best examples was a new project that made use of Redis and the PHP Redis extension. For everyone else, who already had that extension installed from previous projects, it was surprising when I warned them that even though it “worked on their machines” it would explode on the stage environment. I’d already set up a Nix shell file for the project, matched as closely as possible to the stage environment, and the unit tests failed inside that shell because it lacked the PHP Redis extension (I didn’t know about that requirement at the time). Adding the extension fixed the problem! Now I have a shell file for every project I work on, just in case.
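For that project, the fix looked something like the sketch below — assuming a recent nixpkgs where php.withExtensions exists; older revisions expose extensions differently (e.g. via phpPackages):

```nix
{ pkgs ? import <nixpkgs> {} }:

let
  # PHP with the redis extension enabled, so the shell fails the same
  # way stage would if the extension were missing.
  phpWithRedis = pkgs.php.withExtensions
    ({ enabled, all }: enabled ++ [ all.redis ]);
in
pkgs.mkShell {
  buildInputs = [ phpWithRedis pkgs.redis ];
}
```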

My co-workers like the idea of Nix environments, but they dislike the idea of learning a new, complicated tool to catch issues like this. One said, “I don’t see why we don’t just make a Dockerfile for each project to accomplish the same result,” and since I’m not really much of a Docker user I didn’t have a response. Is there really any meaningful difference for us as developers, when working on these separate projects, between running our tests and other tools through nix-shell or through a Docker container? I know that one issue is that of an immutable environment, and that a Docker machine can “drift” over time, but I’m not sure that’s really much of an issue if you rebuild the container every now and then.


You need to see a Docker image as a snapshot of a machine. It will stay as it is, and thus anything you do with it will keep working.

However, making changes to such an image is where nondeterminism seeps in. Docker images are built from shell commands; a Dockerfile usually contains commands like apt-get update, apt-get install ..., or wget .... These fetch files from outside sources, and because those sources change over time, different files get fetched at different times. In addition, there is no check that the files being fetched are actually the ones you intended your Docker image to contain.

So a Docker image will stay the same, but building it from the Dockerfile will produce a different result each time. No guarantees there.

Nix is different in the sense that the build process itself is restricted so that no outside source can change without that change being detected. Builds produce functionally the same thing each time they run. That also makes making changes safer, as a change requires a rebuild.
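To make the “detected” part concrete, here is a minimal sketch (the URL is hypothetical): Nix fetchers take a declared hash, and a mismatch fails the build — exactly the check a plain wget in a Dockerfile lacks.

```nix
{ pkgs ? import <nixpkgs> {} }:

pkgs.fetchurl {
  url = "https://example.org/some-source-1.0.tar.gz";  # hypothetical URL
  # If the contents served at this URL ever change, the build aborts
  # instead of silently picking up different files.
  sha256 = pkgs.lib.fakeSha256;  # replace with the real hash
}
```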

There are some other practical advantages:

  • Docker requires applications to run inside a container. With Nix you have the option to run inside a container, but you do not have to.
  • Docker requires a Linux virtual machine to run on OSX. Nix can build and run executables natively.
  • Nix has multi-user support, so builds run under your own user without needing a shared high-privilege user (like root).

There are probably other differences, because Docker is kind of a different beast.


Another area where nix-shell is better is composability. You can maintain common Nix expressions shared among your many projects, and the shared build dependencies will only need to be built once. This can greatly speed up your ability to work on a lot of small projects, if the alternative is building a separate Docker container for each one.

You can also safely cache these build dependencies, so they only have to be built once and then can be shared among your entire team.
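A sketch of the pattern, where ./common.nix is a hypothetical shared file (its shape is shown in the comment):

```nix
# shell.nix in each project — a sketch of sharing one common expression.
# ./common.nix is a hypothetical file along the lines of:
#   { pkgs }: { phpEnv = [ pkgs.php pkgs.phpPackages.composer ]; }
{ pkgs ? import <nixpkgs> {} }:

let
  common = import ./common.nix { inherit pkgs; };
in
pkgs.mkShell {
  # Shared dependencies are built once and reused by every project
  # importing common.nix from the same nixpkgs revision.
  buildInputs = common.phpEnv ++ [ pkgs.redis ];  # plus project extras
}
```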

With Docker you can share and cache the layers of containers, but you have to figure out how to order your layers to maximize sharing, and sometimes it is not possible to get all the sharing you need because two dependencies belong at the same layer.


See the blog post by Maarten Hoogendoorn for a nice comparison between Nix and Docker.


Awesome! Thanks, everyone. I’ve got more reading and tinkering to do, but I feel much better equipped to answer this particular question now.

I’ve never used Nix, but I use Docker on OSX every day for software development containers.

One thing I tried to do with Docker was to have different team members generate the same development image from the same Dockerfile, but I found this impossible, because, as you said, commands like apt-get update, apt-get install ..., or wget ... may fetch different files at different times.

I suppose also that some commands run during image generation produce files whose content depends on the time or even on random sources, making it impossible to generate byte-for-byte identical images on different machines.

The only solution I see for this with Docker is a central repository where the images generated by one reference team member are shared with everyone else.

What’s the Nix answer for this?

How does Nix handle time-sensitive commands when generating Nix images (I don’t know if that is the correct term)?

Thanks

apt-get etc. are not time-sensitive, they are source-sensitive.
The sources may change (newer versions of packages).
Nix works by defining an exact, checksum-verified source and exact build instructions.
This won’t necessarily result in reproducible builds (see https://reproducible-builds.org/) with bit-identical images, but it goes a long way towards them, because the software has an exact definition.
However, when pinning nixpkgs to a specific git commit you will never get a version change, as all sources and build instructions are exact and won’t change. That also means no security patches.
Nixpkgs unstable/master is the exact opposite: constantly updating versions.
A Nix channel (e.g. 18.09) is a sort of hybrid: it gets important updates, including security patches, during its lifetime (usually about 7 months after release) but is otherwise kept stable.
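For the record, a sketch of what such a pin looks like in a shell.nix — the commit hash and sha256 below are placeholders, not real values:

```nix
let
  # Pin nixpkgs to one exact revision; everyone who evaluates this
  # file gets identical package definitions.
  pinned = builtins.fetchTarball {
    url = "https://github.com/NixOS/nixpkgs/archive/<commit-hash>.tar.gz";  # placeholder
    sha256 = "0000000000000000000000000000000000000000000000000000";  # placeholder
  };
  pkgs = import pinned {};
in
pkgs.mkShell {
  buildInputs = [ pkgs.php ];
}
```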


Thanks for the reproducible-builds link. It seems that to get real reproducibility one has to follow a set of practices, like those described there, but following practices “by hand” is error-prone and susceptible to non-reproducibility.

It seems that Nix helps minimize this “by hand” effect, but it is not a full reproducibility solution by itself.

This makes me think that the only way to be sure of real reproducibility is to have a central repository to share previously built images.

That’s not reproducibility, that’s using a fixed set of prebuilt binaries.


I see your point: the generation process itself is not reproducible, you’re right.

The question is whether it is possible to create a truly reproducible process at all, especially when build tools (compilers and others) emit binaries with embedded timestamps and random values such as UUIDs.

Anyway, having a central repo with tagged images that everyone shares (e.g., DevelopImage_v1.0) ensures everyone is using exactly the same development environment, without the effort of creating and maintaining a truly reproducible process for generating those images. Central repositories still have their advantages.

As you pointed out, there is a difference between a reproducible build process and a reproducible build result. If you pin nixpkgs (FAQ/Pinning Nixpkgs - NixOS Wiki), your build process will be reproducible. Of course the build result will not necessarily be bit-for-bit identical, but probably close.

Nix and Docker operate at different levels of abstraction and seem to be designed for different purposes; I don’t know that there is a direct comparison between the two. You could view Nix as a collection of tools for building Linux systems, under the philosophy that configuration should be centralized and declarative, and state changes should be isolated. Docker is well suited to building collections of single-purpose systems, such as serving a web application or Elasticsearch queries, which can work together as micro-services to provide greater functionality. I haven’t used it yet, but my view is that NixOps is probably the closer functional analogue of Docker, whereas the closest approximation to Nix would be something like Vagrant, though even that isn’t a great comparison.


I think the biggest difference between the two is that running a Dockerfile twice doesn’t guarantee you get the same development environment, due to its imperative nature (most Dockerfiles start with an apt-get update), whilst a shell.nix file guarantees that two runs of nix-shell at different times produce the exact same development environment, because every dependency is finely tracked and pinned to a specific version.

I think there are many parallels between Nix and Docker. Both (try to) build images reliably, both have a distributed binary cache, and both package applications as the closure of their dependencies. Nix just does a better job at the reproducibility part.

I would also like to point out that Nix can build Docker containers (see the dockerTools section in the manual), which sort of gives you the best of both worlds: you get the reproducibility of Nix and the wide deployment options of Docker.
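A minimal sketch of what that looks like — the image name and contents here are illustrative; see dockerTools in the nixpkgs manual for the full interface:

```nix
{ pkgs ? import <nixpkgs> {} }:

pkgs.dockerTools.buildImage {
  name = "php-app";            # illustrative name
  tag = "latest";
  contents = [ pkgs.php ];     # store paths copied into the image
  config.Cmd = [ "${pkgs.php}/bin/php" "-a" ];
}
```

Building that with nix-build and piping the result into docker load gives you an image Docker can run like any other.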

Another blog post from a slightly different angle (reproducibility in research): Why Docker is not the answer to reproducible research, and why Nix may be.


This is why I was pointing out that they really are not the same thing at all. Running Nix commands could result in the creation of a machine with a GUI. Running Docker commands will never result in a machine with a GUI, but it might result in a network of headless machines.