Nix shim as an alternate workload for Kubernetes?

PlatformCon 2023 is going on. During the first talk, the presenter explains how WebAssembly workloads can run on Kubernetes by replacing the default runc-based containerd shim with a wasm shim that runs wasm workloads directly (instead of containers).

Here is an idea I’ve had for a while and now want to discuss the viability of publicly: develop a Nix shim for containerd that replaces runc and “spins up” Nix workloads without any of the container semantics involved at all.

This seems like a very cool use case to propel Nix into the distributed computing world by using Kubernetes.
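To make it concrete: I’d expect wiring such a shim into containerd to look roughly like what the wasm shims do today, i.e. a runtime entry in the containerd config plus a Kubernetes RuntimeClass pointing at it. To be clear, everything below is hypothetical; neither a `containerd-shim-nix-v1` binary nor an `io.containerd.nix.v1` runtime type exists today.

```toml
# /etc/containerd/config.toml (sketch; the "nix" runtime and its shim are hypothetical)
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.nix]
  # containerd would resolve this to a binary named containerd-shim-nix-v1 on its PATH,
  # the same naming convention the runwasi wasm shims rely on.
  runtime_type = "io.containerd.nix.v1"
```

A RuntimeClass with `handler: nix` would then let individual pods opt into the Nix shim instead of runc.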

To play my own devil’s advocate, the first con that comes to mind is that Nix doesn’t provide any isolation (by default) the way containers do. However, if isolation is necessary for security reasons, there are ways to isolate processes much like containers do (but without the strict semantics of OCI containers with their layered images, etc.).
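For example, something along these lines already gives a container-ish degree of filesystem and namespace isolation for a Nix-built program. This is only a rough sketch to illustrate the kind of isolation I mean (the bubblewrap flags and the flake reference are illustrative, not what a shim would actually do):

```sh
# Build (or substitute) the program's closure, then run it in fresh namespaces
# with only the Nix store and a few pseudo-filesystems visible.
hello="$(nix build nixpkgs#hello --no-link --print-out-paths)"

bwrap \
  --ro-bind /nix/store /nix/store \
  --proc /proc \
  --dev /dev \
  --tmpfs /tmp \
  --unshare-all \
  "$hello/bin/hello"
```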

What do you think? Is this an idea worth exploring?

EDIT: fix containerd vs runc confusion


I’m not sure this is an area where Nix would be optimal with regard to Kubernetes.

I don’t understand this. Are we talking about Nix the package manager? Nix already sandboxes its builds (via Linux namespaces).
If you are talking about just using Nix to build things on demand and then running them the classical way, you may as well chain Nix and runc, and then it becomes clear that Nix is just not providing any real value, is it?

You can just have Nix for your build phase.

For the run phase, this seems akin to reinventing existing shims, IMHO.


I think your focus is really on building OCI containers and smarter ways to do it.

I believe something like this is what you are looking for, just for k8s instead of Nomad: GitHub - input-output-hk/nomad-driver-nix: A Nomad task driver for systemd-nspawn

Cool. That may be the general direction I’m thinking of.

@senorsmile Nix rather sits at the same functional layer of the system as OCI images:

  • OCI images assemble environments for runnables

  • Nix assembles environments for runnables

The difference is that the current OCI image standard works with (at most 127?) layers and resolves path conflicts through an overlay assembly spec: essentially a DSL that can express things like “remove that file” and similar operations that higher layers may apply to lower layers within the context of the FHS.

On the other hand, Nix never produces conflicts between “layers” thanks to unique store paths, so it can afford to work with a DAG closure (a set rather than a list; order doesn’t matter) instead of (list-like) layers. It is also not bound by the 127-layer limit, which IIRC was an arbitrary, performance-motivated cutoff.
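You can see that trade-off directly in nixpkgs’ dockerTools.buildLayeredImage, which maps the store paths of a closure onto OCI layers and has to cap them precisely because of that limit. A minimal sketch (attribute names are the usual ones from the nixpkgs manual; the cap of 100 is just a value chosen to stay under the ceiling):

```nix
{ pkgs ? import <nixpkgs> { } }:

# Each store path in hello's runtime closure ends up in (roughly) its own
# layer; maxLayers caps the total because of the OCI layer ceiling above.
pkgs.dockerTools.buildLayeredImage {
  name = "hello-layered";
  tag = "latest";
  contents = [ pkgs.hello ];
  maxLayers = 100;
  config.Cmd = [ "${pkgs.hello}/bin/hello" ];
}
```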


Regarding a potential implicit confusion in this thread: Nix is not NixOS, which, for the purpose of this argument, is “just” a wrapper around systemd.

I assume the question is how Nix relates to the OCI spec, not how systemd (or any other runtime jailing mechanism) relates to the OCI spec.

nix-snapshotter addresses exactly the problem space I envisioned: Nix-snapshotter: Native understanding of Nix packages for containerd
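For anyone finding this later: it plugs in as a containerd proxy snapshotter, roughly like the snippet below (from memory, so treat the socket path and option names as approximate and check the project’s README):

```toml
# containerd config.toml (sketch based on my recollection of the nix-snapshotter README)
[proxy_plugins]
  [proxy_plugins.nix]
    type = "snapshot"
    address = "/run/nix-snapshotter/nix-snapshotter.sock"

[plugins."io.containerd.grpc.v1.cri".containerd]
  # tell the CRI plugin to use the nix snapshotter instead of overlayfs
  snapshotter = "nix"
```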
