Thousands of Lambdas

Found this article on HackerNews:

It borrows some ideas from Nix: derivations and lazy evaluation of dependencies. They also have some features we do not have:

  1. They use a “fake” gcc toolchain to watch the make process and create lazy thunks as it runs. This gives them fine-grained parallelism within a package, so they can easily split a package into smaller components.

Thought: there are some large packages that take a while to compile. With slightly finer resolution, we could split a package into phases or sub-phases and distribute the work better. Could we also “mock” out a particular tool and create sub-derivations in a similar manner?
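The “mock out a tool” idea could start as small as a PATH shim that records every compiler invocation before delegating to the real compiler. This is a toy sketch of the interception point only, not gg’s actual model substitution, and the `REAL_CC`/`BUILD_TRACE` variable names are made up:

```shell
# Create a toy "fake gcc" shim: it appends each invocation to a trace
# file, then delegates to the real compiler (REAL_CC, default gcc).
# gg additionally turns each invocation into a lazy thunk; this only
# shows where the interception happens.
mkdir -p ./shim
cat > ./shim/gcc <<'EOF'
#!/usr/bin/env bash
echo "$@" >> "${BUILD_TRACE:-/tmp/build-trace.log}"
exec "${REAL_CC:-gcc}" "$@"
EOF
chmod +x ./shim/gcc

# Prepend to PATH so `make` picks it up:
#   PATH="$PWD/shim:$PATH" make
```

After a run of make, the trace file lists every compile step, one candidate sub-derivation per line.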

  2. They have workers that can spin up on demand. For us this may be as simple as creating a cloud resource with Nix and the appropriate access. NixOps can do some of this, but not quite as user-friendly as spinning up Lambdas. They also do some interesting work where the workers exchange dependencies directly; I am not sure how mature that capability is, but at the very least we can consider creating workers on demand.

Can we borrow the splitting method to create sub-derivations automatically, so that we can use Nix as an incremental build tool?

Without having read the paper, I have heard of people trying to do something like “one derivation per file” to get more granular nix builds, but I don’t know anything about it.

An AWS Lambda function with 512 MB of memory costs $0.030024, compared to $0.0059 for an On-Demand server with the same stats. So when your CPU is fully utilized all the time, running on serverless may not be cost-efficient for your workload.
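Taking both of those as hourly rates (an assumption; the quoted figures don’t state the unit), the break-even point is easy to compute: the dedicated server wins once average utilization exceeds roughly 20%.

```shell
# Utilization threshold above which the on-demand server is cheaper
# than paying the Lambda rate only while busy (rates as quoted above).
awk 'BEGIN { printf "%.2f%%\n", 100 * 0.0059 / 0.030024 }'
# → 19.65%
```

Below ~20% average load, pay-per-use Lambda comes out ahead; above it, the server does.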

But if they did use a spot fleet as an engine, it would become much more interesting.

snack by @nmattia does the granular builds. It builds Haskell projects with one derivation per package.


They have the concept (as should we) of various backends: a standard fleet, a spot fleet, a spun-up NixOS-based fleet, Lambdas, etc. The build can and should be agnostic to which one is used. As we get closer to reproducible builds and the intensional store, this should become easier.

I had a thought for a tool: it would build your project with and without a patch, checkpointing between phases to see whether the patch actually changed anything downstream (undo the patch, hash the NAR, compare). It might automatically detect when a change really matters and offer a path to simpler/faster rebuilds for trivial changes. It could integrate well with the intensional store.
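A rough sketch of the compare step. In real Nix you would hash the NAR with `nix-store --dump $out | sha256sum`; since that needs a store, a deterministic tar stands in here, and the `fingerprint` helper name is made up:

```shell
# Fingerprint a build output directory deterministically (a stand-in
# for hashing the NAR): identical content gives identical fingerprints,
# regardless of timestamps or ownership.
fingerprint() {
  tar --sort=name --mtime=@0 --owner=0 --group=0 --numeric-owner \
      -cf - -C "$1" . | sha256sum | cut -d' ' -f1
}

# Build without and with the patch, then compare:
#   before=$(fingerprint result-before)
#   after=$(fingerprint result-after)
#   [ "$before" = "$after" ] && echo "patch did not change this output"
```

If the fingerprints match, the patch was a no-op for that output and the rebuild of everything downstream could be skipped.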



I have a POC of running Nix in AWS Lambda:
It involves several hacks and patched libraries, but it has basic, minimal functionality: given all the inputs for a simple derivation, it can build it.

Biggest hurdles:

  1. Lack of root/chroot.
  2. Lack of a pseudoterminal in the Lambda environment; Nix really wants one. LD_PRELOAD’ed past this with some horrible custom code.
  3. No access to /nix/store: worked around with nixrewrite.
  4. Nix kills builds too fast (or something…); perhaps patch it!
  5. A custom Go runtime runs a shell script, which runs nix-store. The expected input is a JSON string containing a base64 encoding of a NAR.
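For item 5, assembling such a payload might look like the following. The `nar` field name and JSON shape are my guesses, not the POC’s actual wire format; the real NAR would come from `nix-store --dump <store-path>`:

```shell
# Wrap a NAR file as a base64-encoded JSON payload for the Lambda.
# The "nar" field name is hypothetical. Produce the NAR itself with:
#   nix-store --dump <store-path> > build.nar
nar_payload() {
  printf '{"nar":"%s"}\n' "$(base64 -w0 < "$1")"
}
```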

Why? (oh dear god! why?!)

As others have pointed out, dedicated servers are far more economical for continuous use, but sporadic use may benefit from a Lambda approach. I also wanted to see what hurdles I would run into: I tested several techniques, learned a lot about LD_PRELOAD, and dug into Nix internals. There is potential to use Lambdas to process simply defined workflows for jobs in the vein of funflow, fractalide, etc.


Thanks for looking at Nix in an exotic environment!


I did a package for gg, if anyone is interested in trying it:

Unfortunately, I get an error when I try to run it.
I tried to build mosh with gg using this shell.nix:

{ pkgs ? import ./. {} }:

with pkgs;

let pkg = mosh; in

mkShell {
  buildInputs = [ gg ];
  inputsFrom = [ pkg ];

  shellHook = ''
    export src=${pkg.src}

    export GG_MODELPATH=${gg.wrappers}
    export GG_STORAGE_URI="s3://$AWS_REGION"

    unset SOURCE_DATE_EPOCH # unsupported by gg
  '';
}

And this session:

cd mosh-*
gg-infer make -j8
gg-force --jobs 100 --engine lambda src/frontend/mosh-server

I get this error:

$ gg-force --jobs 100 --engine lambda src/frontend/mosh-server
→ Loading the thunks...  done (8 ms).
↗ Uploading 526 files (46.0 MiB)... terminate called after throwing an instance of 'ssl_error'
  what():  SSL_write: error:00000005:lib(0):func(0):DH lib
terminate called recursively

I may have configured AWS environment variables incorrectly.

Something really strange happened: the package builds correctly on one machine (a remote builder, thus building exactly the same derivation), but not on my machine.
I get:

$ nix-build --builders '' -A gg --check
./ /build/source/toolchain/bin
/nix/store/xfghy8ixrhz3kyy6p724iv3cxji088dx-bash-4.4-p23/bin/bash: ./ /usr/bin/env: bad interpreter: No such file or directory
make[3]: *** [Makefile:809: toolchain.hh] Error 126

If I look at the temporary build directory, the file is patched to remove the python2 reference, so I don’t understand this error.
That may point to some impurity in Nix, or I may have missed something obvious.


I don’t know about your first error, but the second one looks like it’s trying to invoke /usr/bin/env, which isn’t in the sandbox. I’m not sure offhand how to fix this, but you could try using substituteInPlace to replace /usr/bin/env with ${coreutils}/bin/env.


I found myself having to add links to /bin/sh and /usr/bin/env from coreutils, as @lilyball suggested. My next goal is for gg to run Nix builds, not just make builds.



Off-topic: Ollie Charles is working on something similar, which uses Nix under the hood of GHC.

It would be interesting to see whether Nix with remote builders can scale to AWS Lambda the way gg does. That would add a lot of value to the projects above.


For that, the remote builder protocol would need to be much improved. For example, the SSH connection should stay open, and things should be properly pre-compressed and probably not copied over SSH.