Tweag Fellowship: Fuzzing Nix #0

It is with great pleasure that I announce that Tweag accepted my project proposal as part of their Open Source Fellowship program :partying_face:.

This is an awesome opportunity to give back to, and be part of, a community that has already brought me a lot.

The project

At a high level, the project aspires to be a contribution to the effort of pushing Nix/NixOS towards mainstream adoption.
It aims to offer better protection for end users by improving the security and resilience of the Nix software components.

In particular, the project proposes to implement state-of-the-art “fuzzing” techniques and tools to discover bugs in:
  • The evaluation and building process (nix build);
  • The daemon (nix daemon);
  • Garbage collection (nix store gc);
  • NAR serialisation and deserialisation;
  • And potentially much more!

The list is non-exhaustive, and several items already overlap.
Everything will likely evolve as I dive into the source code of Nix, and will be driven by the results we get.

What is fuzzing?

Fuzzing is a technique for finding bugs in software.
The basic idea is to have a process, the fuzzer, run the target program against a wide range of different inputs and observe its behavior.
Witnessing a failure in the target program means an input has been found that triggers erroneous behavior: a bug!

Fuzzing cannot guarantee that a target is bug-free, but it has proven very effective at picking the low-hanging fruit and raising the bar for attackers.
In that sense, the project aims to do both: find the bugs that are discoverable with “basic” fuzzing, and invest time and engineering in a solution that attackers would have to outmatch.


Estimating is hard, so we gave ourselves a general direction, and a couple of areas to explore, depending on our success and the needs we might discover along the way.

Here are the initial tasks planned:

  • Writing and compilation of a simple harness:
    We intend to use libFuzzer, an efficient in-process fuzzer that comes built into the Clang compiler.
    A harness is a software component that interfaces the fuzzer with the program under inspection.
    The complexity of this step reduces to compiling, with Clang, a C++ program that uses functions of the Nix codebase.
  • Fuzzing locally:
    Start with a very basic corpus (for example, a single simple valid Nix expression) to make sure everything works.
    Then, leverage the AFL tools for routine input management (mutation and minimisation).
  • Automation and packaging:
    Write scripts to avoid manual operation, and package everything using Nix for reproducibility, allowing anyone in the community to look for bugs, but also to contribute their tweaks and customisations.
    Ideally, the harnesses could be integrated directly into the Nix repository.

To go further, we also consider the following ideas:

  • Setting up a testing infrastructure:
    Our eyes are set on OSS-Fuzz, which lets open source projects run their fuzzers on Google’s computing power without too much hassle.
  • Measuring coverage:
    Coverage gives decisive insight into the effectiveness of a fuzzing session, and what is not measured cannot be improved.
  • Adding sanitizers:
    Compile the harnesses and the Nix components with sanitizers (for example AddressSanitizer or UndefinedBehaviorSanitizer), to make certain classes of bugs more apparent at runtime.
  • Creating a better corpus:
    Enrich the initial corpus with different inputs.
    For example, when targeting the parsing components, generate Nix expressions from a grammar (taking inspiration from dezgeg/fuzz-nix, which fuzzed the Nix interpreter with afl-fuzz), coupled with content that is known to be hard for programs to handle (for example minimaxir/big-list-of-naughty-strings, a list of strings with a high probability of causing issues when used as user input).
  • Monitoring, crashes management, and writing tools for ecosystem integration:
    Build a set of tools for dealing with the results of the fuzzing process: viewing and triaging crashes, and easing the necessary human involvement.
  • Harnessing and more testing:
    Harness other critical components and functionalities of the ecosystem (the daemon protocol, garbage collection, NAR deserialisation, etc.), and integrate them into the workflow built up until then.


I will work on the project for 12 weeks, starting on June 14th, 2021, and I intend to post regular updates about my progress on Discourse.

If the project sparks your curiosity or interest, or raises questions or remarks, feel free to share your thoughts and react to this thread: feedback is appreciated.
Also, you can contact me directly here, or on Matrix.

Thanks! :pray: