Kubenix: call to rejuvenate

Can we make a joint effort to rejuvenate xtruder/kubenix (“Kubernetes resource builder using nix”)?

It seems like @offlinehacker has shifted priorities, yet this project is too valuable as a bridge between the nix ecosystem and the k8s ecosystem to be left without a strong maintainership commitment.

I dare to guess @offlinehacker would be pleased if anyone were willing to step up and take over ownership of this project, even if only temporarily. Interest is temporary; handing over repositories/projects should be a standard procedure.

Incidentally, this might be a good flagship project for raising a nix-focused company’s business profile, wouldn’t it?


What I’m doing / planning: I’ve started to merge some PRs on my private fork and plan to effectively “rejuvenate” kubenix. I defer to that fork’s issues for some of the todo lists (please open issues to share any ideas you might have). I also want to reach a state of native kubenix support within divnix/devos (cc @colemickens :smiley: ).

Where I stand: I explored kubernetes in depth last year, but was never able to harmonize it properly with nix. I hope kubenix can help mitigate this problem.

What I want: While I have a few bits of useful domain knowledge about k8s, I could really use some help with the nix bits, which I’m less familiar with and which are likely to slow me down (or might even stall me).

Why it’s important: Kubernetes is a great configuration database in the public domain. The overlap with what is deployable with nix is small, which makes k8s an ideal complement for us nixers. It also might introduce some high-profile users of the k8s ecosystem to ours and encourage some cross-pollination.

/cc @adrian-gierakowski @GTrunSec @zimbatm

It’s nice to see you making some progress on this. It’s a pity @offlinehacker has gone silent and has not been replying to any issues on github.

+1 for integrating kustomize. It would help migrate existing projects which currently use it.

personally I’d find these useful:

  1. hashed ConfigMap/Secret names (as done by kustomize or spinnaker), similar in spirit to xtruder/kubenix#14 (“Name configMaps after a path in Nix store, just like you do for docker images”)
  2. loading of CRD schemas from yaml files (xtruder/kubenix#19, “Loading custom resource definitions from yaml”)
  3. a CLI converter which takes a k8s yaml file and spits out nix code, which could then be used as a starting point for a kubenix module config. (I know kubenix can simply import yaml files on the fly, but if you want to fully migrate to kubenix, there should be an easy path to convert all your yaml files to nix for further editing/refactoring.)
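As a rough illustration of what point 3 could look like, here is a minimal Python sketch that renders an already-parsed manifest (a plain dict; a real converter would first load the YAML, e.g. with PyYAML) as a nix attribute set. The function name and output formatting are my own assumptions, not an existing kubenix tool:

```python
import json

def to_nix(value, indent=0):
    """Render a parsed k8s manifest (dicts/lists/scalars) as a nix expression."""
    pad = "  " * indent
    if isinstance(value, dict):
        body = "".join(
            f'{pad}  "{k}" = {to_nix(v, indent + 1)};\n' for k, v in value.items()
        )
        return "{\n" + body + pad + "}"
    if isinstance(value, list):
        items = " ".join(to_nix(v, indent) for v in value)
        return f"[ {items} ]"
    if isinstance(value, bool):  # check bool before int: bool is a subclass of int
        return "true" if value else "false"
    if isinstance(value, (int, float)):
        return str(value)
    # strings: JSON escaping is close enough to nix string syntax for simple cases
    return json.dumps(value)

manifest = {"apiVersion": "v1", "kind": "ConfigMap",
            "metadata": {"name": "demo"}}
print(to_nix(manifest))
```

Quoting every attribute name keeps keys like `app.kubernetes.io/name` valid; a more polished converter would only quote where nix requires it.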

I’ve got a working implementation of 1, which basically finds all ConfigMaps/Secrets in kubenix’s output and replaces references to their names with $name-$hash-of-data. It only does this for the resources/paths which were useful to me at the time I implemented it, but it should be trivial to add support for all possible places in which a ConfigMap/Secret ref can appear.
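For illustration, the renaming itself can be sketched like this (a minimal Python sketch; the real implementation walks the kubenix output tree, and the exact hash length/format here is an assumption, not the kustomize algorithm):

```python
import hashlib
import json

def hashed_name(name: str, data: dict) -> str:
    """Append a content hash to a ConfigMap/Secret name so that changed
    data yields a new resource name and forces dependents to roll."""
    # canonical serialization so the hash is stable across key orderings
    blob = json.dumps(data, sort_keys=True).encode()
    digest = hashlib.sha256(blob).hexdigest()[:10]  # short suffix, assumed length
    return f"{name}-{digest}"

# every reference to the ConfigMap (e.g. in a pod's volumes or envFrom)
# must then be rewritten to the hashed name as well
cm_data = {"settings.ini": "verbosity = 3"}
print(hashed_name("my-config", cm_data))
```

The point of the scheme is that a deployment referencing `my-config-<hash>` automatically restarts when the data (and hence the name) changes.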

I’ve also started on 2 and 3, but they’d require a bit more work.

I might be able to work on the above in about 2 weeks or so.


If it helps anyone in the decision to engage / invest [time], I have upstream-k8s interop extremely high in my priorities.

First of all, thank you for your previous work.

I only have a few things on my main expectation list.

  1. nickel support (cheap language)
  2. integrating kustomize (as above)
  3. a better testing experience

Given my lack of k8s experience, I have no further comments here, but I am willing to contribute and participate in the future project.


I advanced a little on this. It’s now possible to access the cluster via the absolutely marvelous k9s tool. Well, at least for the non-k3s-based clusters.

I’m now trying to finish off the unfinished test refactoring, which holds a very interesting implementation of kubetest Python tests. Since nixpkgs has already made the call for Python in its test driver, I don’t mind following that, too. I’d have had objections otherwise, I guess.

I still feel a little lonely in my endeavor. And I know I do not possess all the skills needed. Can we do something about that? :wink:


During the test refactoring, I stumbled on the need to analyze in depth how nixos tests themselves are actually set up. I opted to adopt the upstream k8s testing framework. However, I do not understand how to correctly weigh flannel against plain CNI, which I enabled specifically since it had been in use by kubenix before. Can anyone help me make sense of flannel vs the cni bridge in this context?

It almost appears as if the nixos k8s-flannel integration doesn’t use CNI at all. Is that correct?

Nice context: “Comparing Kubernetes CNI Providers: Flannel, Calico, Canal, and Weave” (SUSE Communities)

I’d also like to take the opportunity to thank @tomberek for his generous provision of office hours last Saturday, which helped me debug some parts and deepen my understanding of the test refactoring. Without his help, I probably wouldn’t be able to complete this. :smiley:


The k9s tool doesn’t work with k3s?

Correct, I wasn’t able to craft a single kubeconfig that would work with both k3s and nixos-k8s, since k3s inlines certificates while nixos-k8s references them on the file system.

The good news is that the test refactoring introduces a local runtime that interacts directly with any k8s API and registry of your choosing (and, since the kube API is exposed, also with k9s):


    set -e

    # argument parsing; the original flag names were lost in the paste,
    # so --kube-config and --skopeo-args below are assumptions
    while (( "$#" )); do
      case "$1" in
        --kube-config)
          KUBECONFIG="$2"
          shift 2
          ;;
        --skopeo-args)
          SKOPEOARGS="$2"
          shift 2
          ;;
        *)
          echo "unknown argument: $1" >&2
          exit 1
          ;;
      esac
    done

    echo "--> copying docker images to registry"
    ${testing.docker.copyScript} $SKOPEOARGS

    echo "--> running tests"
    ${testing.testScript} --kube-config=$KUBECONFIG
So I guess the way forward to run tests on k3s would be to use k3d plus the local runtime.


I just realized something and would put it forth for public consideration:

I think all users of kubenix, present and future, could benefit from a solution to the “NixOS VM test driver: filter logs” issues (both of them). Those are at the root of why nixos-VM k8s tests are basically “undebuggable”, as far as I can tell. Sure, k9s works around that, but it would be nice to have a somewhat better driver dashboard for VM tests anyway.

Sorry for being verbose here: I’m trying to turn this into something interactive, an opportunity to work together in a small focus group of interested parties on rejuvenating kubenix.
That model has worked absolutely impressively, as some of you know, with divnix/devos.
I’d be happy if we can replicate it here and gather 2–4 people with different expertise around this table.
