Repeating the talk’s title “sneaking in nix”: if it were supported through buildkit, a large part of the container ecosystem would gain immediate access to the advantages of Nix builds over dumber container specifications such as Dockerfiles. (Watch the talk!)
@grahamc What do you think? Is this possible and worth trying?
You may also be interested in the talk I gave about Nixery, which goes into some detail about the layering strategy used (and why it’s better than what Docker & friends do).
My current plan (barring time constraints, such as working on my other projects, or working on my actual job) is to extract the Nixery functionality into a CLI tool. (See nixery#73).
There are also ways of significantly reducing the dependency on the actual Nix binary (also a good idea for security reasons) if we restrict this to packages that have already been built, which would also make it much easier to integrate into other systems.
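To make that a bit more concrete, here is a minimal sketch (not Nixery code, just an illustration) of what “no Nix binary needed” could look like for already-built packages: the binary cache’s narinfo files expose everything required to locate a store path’s NAR over plain HTTP. The store-path hash below is a placeholder.

```go
// Sketch: resolving metadata for an already-built store path straight from a
// binary cache, without invoking the nix binary. Assumes the narinfo layout
// served by cache.nixos.org; the hash below is only a placeholder.
package main

import (
	"bufio"
	"fmt"
	"net/http"
	"strings"
)

func main() {
	// The narinfo for /nix/store/<hash>-<name> lives at <cache>/<hash>.narinfo.
	hash := "zzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzz" // placeholder store-path hash
	resp, err := http.Get("https://cache.nixos.org/" + hash + ".narinfo")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	// narinfo is a simple "Key: Value" text format; StorePath, URL, NarHash,
	// NarSize and References are enough to fetch and unpack the closure.
	info := map[string]string{}
	scanner := bufio.NewScanner(resp.Body)
	for scanner.Scan() {
		if k, v, ok := strings.Cut(scanner.Text(), ": "); ok {
			info[k] = v
		}
	}
	fmt.Println(info["StorePath"], info["URL"], info["NarSize"])
}
```

From there, fetching the NAR listed under URL and unpacking it is just HTTP plus decompression, which is easy to embed in other systems.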
One other change this depends on is moving the generation of manifests and other metadata to ggcr (go-containerregistry).
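For reference, a rough sketch of what manifest generation via ggcr could look like: assemble an image from pre-built layer tarballs and push it, letting the library compute digests and the manifest. The layer paths and registry reference are placeholders, not Nixery’s actual code.

```go
// Sketch: assembling an OCI image with go-containerregistry (ggcr) from
// pre-built layer tarballs instead of writing the manifest JSON by hand.
package main

import (
	"log"

	"github.com/google/go-containerregistry/pkg/name"
	v1 "github.com/google/go-containerregistry/pkg/v1"
	"github.com/google/go-containerregistry/pkg/v1/empty"
	"github.com/google/go-containerregistry/pkg/v1/mutate"
	"github.com/google/go-containerregistry/pkg/v1/remote"
	"github.com/google/go-containerregistry/pkg/v1/tarball"
)

func main() {
	// Each layer group would become one tarball; two placeholders here.
	var layers []v1.Layer
	for _, path := range []string{"layer-0.tar", "layer-1.tar"} {
		l, err := tarball.LayerFromFile(path)
		if err != nil {
			log.Fatal(err)
		}
		layers = append(layers, l)
	}

	// Start from the empty image and append layers; ggcr computes digests,
	// diff IDs and the manifest for us.
	img, err := mutate.AppendLayers(empty.Image, layers...)
	if err != nil {
		log.Fatal(err)
	}

	ref, err := name.ParseReference("registry.example.com/nixery/shell:latest")
	if err != nil {
		log.Fatal(err)
	}
	// A real push would add remote.WithAuthFromKeychain(authn.DefaultKeychain).
	if err := remote.Write(ref, img); err != nil {
		log.Fatal(err)
	}
}
```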
This probably isn’t quite on-topic. I don’t have the best handle on the tendrils of the virtualization ecosystem, but I spent some time recently on a frustrating search for any sane cross-platform abstraction layer I could use to sandbox a simple shell-script ~fuzzing tool.
I’d roughly prefer not to use Docker, but the gravity of anything usable cross-platform kept pulling me back there. At one point I stumbled on an interesting project (https://github.com/ottomatica/slim) for converting Docker containers into a VM that can supposedly run on hyperkit, kvm, and virtualbox. Here’s the description:
slim will build a micro-vm from a Dockerfile. Slim works by building and extracting a rootfs from a Dockerfile, and then merging that filesystem with a small minimal kernel that runs in RAM.
This results in a real VM that can boot instantly, while using very limited resources. If done properly, slim can allow you to design and build immutable unikernels for running services, or build tiny and embedded development environments.
It seemed like a dead-end for my case (on first read I assumed this would work with an arbitrary Docker container, but after getting an example working it seems that there are some specific demands on the container/Dockerfile), but it still felt like a novel/interesting approach so I’ve been keeping my eyes peeled for chances to mention it…
With Ignite, you pick an OCI-compliant image (Docker image) that you want to run as a VM, and then just execute ignite run instead of docker run. There’s no need to use VM-specific tools to build .vdi, .vmdk, or .qcow2 images, just do a docker build from any base image you want (e.g. ubuntu:18.04 from Docker Hub), and add your preferred contents.
@tazjin I’m catching up with the directions you’re pointing at, thanks!
In my eyes, it looks like moby/buildkit would require not only pluggable frontends but pluggable builders, too. I wouldn’t want to give up on the possibility of building custom stuff.
I wonder how buildkit’s LLB and Nix’s .drv compare conceptually - might look into that at some point.
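As a starting point for that comparison, here is a small, speculative sketch that builds a two-op LLB graph with BuildKit’s Go client and dumps its definition; the SourceOp/ExecOp pair lines up roughly with a derivation’s inputs and builder invocation, though the caching and addressing models differ. The image and command are arbitrary.

```go
// Sketch: a minimal BuildKit LLB graph, to compare against the fields of a
// Nix .drv (outputs, inputDrvs, inputSrcs, system, builder, args, env).
package main

import (
	"context"
	"log"
	"os"

	"github.com/moby/buildkit/client/llb"
)

func main() {
	// SourceOp (roughly: a fixed input) + ExecOp (roughly: the builder run).
	state := llb.Image("docker.io/library/alpine:latest").
		Run(llb.Shlex("apk add --no-cache curl")).
		Root()

	def, err := state.Marshal(context.Background(), llb.LinuxAmd64)
	if err != nil {
		log.Fatal(err)
	}
	// Pipe the output into `buildctl debug dump-llb` to inspect the op graph.
	if err := llb.WriteTo(def, os.Stdout); err != nil {
		log.Fatal(err)
	}
}
```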
As for caching: do you avoid in Nixery the problem of buildLayeredImage where, although the Nix builds themselves are cached, the created tar is not, resulting in rebuilds/reloads like this:
Yes, Nixery caches entire builds - that is, if the build is eligible for caching.
Or maybe it’s enough/better to implement nixd, a runner daemon (or rather a containerd plugin!) that scaffolds containers upon receiving a manifest from something like an IPFS-based Nix store. See Nix, Containerization, and SquashFS - with some CRFS features. One step further than what Nixery does - just throwing in loose ideas - but since k8s is getting RuntimeClass, why not take the shortcut? It would solve most of the interesting use cases in a kind of nix-native way, and smells a ton like “innovation”. CI output could just be used as the “registry”. If only I had the skills to do it.
I just wonder what would have to go into the image: field (or instead of it?). A handle to a CRD-induced nix build which nixd knows how to intercept?
Sure, if buildkit did the Nix builds instead of Nix itself, that wouldn’t hurt - as getting there would first have forced the Nix instantiate and build implementations to become more composable.
EDIT: the CRI protobuf API allows arbitrary metadata (read: Nix derivations or expressions) in the ImageSpec. Jackpot!
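To illustrate the point (purely hypothetically): the ImageSpec message in the CRI API carries an Annotations map alongside the image reference, so a Nix-aware image service could in principle be handed a derivation or store path there. The scheme and annotation key below are invented.

```go
// Sketch: smuggling a Nix reference through the CRI ImageSpec annotations.
// The "nix+drv" scheme and the annotation key are made up for illustration.
package main

import (
	"fmt"

	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	spec := &runtimeapi.ImageSpec{
		// A handle that a Nix-aware shim would know how to intercept.
		Image: "nix+drv://example",
		Annotations: map[string]string{
			"nix.dev/derivation": "/nix/store/zzzz-hello.drv", // hypothetical key
		},
	}
	fmt.Println(spec.Image, spec.Annotations)
}
```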
Very interesting, looks like nix-cri-imageservice did not come into existence, though. From the chat logs:
02:47 <dhess> shlevy: will this nix-cri-imageservice (grab what’s needed from the binary cache|build it) on demand?
02:47 <shlevy> The design I had in mind was grab from the binary cache
02:48 <dhess> sounds cool
02:48 <shlevy> In principle though you could have it build on demand
02:48 <shlevy> But I doubt that’s really desirable
02:48 <dhess> probably not
02:48 <shlevy> It would fetch from the cache when kubelet pulls the relevant image, and delete the gc root when kubelet deltes the image
Probably a better idea would be to make a SAN-based Nix store a by-product of the CI process, so that the shim only has to compose the root filesystem on demand.
Sorry, I should’ve mentioned: I reached out to Shea. In case Shea doesn’t see your ping, or doesn’t want to repeat themselves:
From the brief email thread, it sounds like it didn’t make it past the idea phase. The gist was an ImageService implementation/wrapper that ensures Nix store paths are realized in the store and GC-rooted for as long as the ImageService needs them, and then either images that request that the Nix store be mounted, or a wrapper runtime service that injects just the closure paths.
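Speculating about what that wrapper could look like in Go (not Shea’s design, just my reading of the description): intercept PullImage, realise the store path from the binary cache and register a GC root. The annotation key and gcroot directory are assumptions, and a full wrapper would also implement the remaining ImageService RPCs.

```go
// Sketch only: one way the described ImageService wrapper could look. A real
// wrapper would also implement ImageStatus, ListImages, RemoveImage and
// ImageFsInfo, delegating to the regular image service for non-Nix images.
package nixcri

import (
	"context"
	"os/exec"
	"path/filepath"

	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

type nixImageService struct {
	gcrootDir string // e.g. /nix/var/nix/gcroots/cri (assumed location)
}

// PullImage realises the requested store path from the binary cache and pins
// it with an indirect GC root for as long as the kubelet keeps the "image".
func (s *nixImageService) PullImage(ctx context.Context, req *runtimeapi.PullImageRequest) (*runtimeapi.PullImageResponse, error) {
	storePath := req.GetImage().GetAnnotations()["nix.dev/store-path"] // hypothetical key

	root := filepath.Join(s.gcrootDir, filepath.Base(storePath))
	cmd := exec.CommandContext(ctx, "nix-store", "--realise", storePath,
		"--add-root", root, "--indirect")
	if err := cmd.Run(); err != nil {
		return nil, err
	}
	// RemoveImage would delete the GC root again, mirroring the chat log above.
	return &runtimeapi.PullImageResponse{ImageRef: storePath}, nil
}
```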