Declaring k3s with Helm and cluster config

What I have:

  • a NixOS server running k3s via a flake

What (I think) I want:

  • declare Helm charts within the flake
  • point k3s at a GitHub repo that holds the cluster YAMLs and have it update on changes
  • keep sealed secrets in GitHub and have them turned into Kubernetes secrets (ideally even manage the secrets inside services like databases)

So far I’ve collected some links:

Kubernetes on Nix Community Organization - #15 by superherointj
GitHub - hall/kubenix: Kubernetes management with Nix
GitHub - DevPalace/nix-toolbox
Helm and Helmfile - NixOS Wiki
helmfile - MyNixOS

but I am wondering if someone else is using a similar setup already and could share some recommendations or insights.

  • point k3s at a GitHub repo that holds the cluster YAMLs and have it update on changes

I don’t quite get this point. Do you mean GHA deployment on push? ArgoCD also has an integration for this.
This is what we have in nix-toolbox: Usage, and here is the implementation.

One would either create an action that does something similar on push, or an action that calls this particular action and tells it to apply certain deployments. Overall, it’s quite simple no matter the tool. I should also mention that there is an incomplete branch with a fancier GHA workflow.
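To make the push-to-apply idea concrete, a workflow along these lines would do it (a rough sketch, not the nix-toolbox action; the KUBECONFIG repository secret and the manifests/ directory are assumptions):

name: apply-manifests
on:
  push:
    branches: [main]

jobs:
  apply:
    runs-on: ubuntu-latest
    steps:
      # check out the repo that holds the cluster YAMLs
      - uses: actions/checkout@v4

      # write cluster credentials from a repository secret (assumed name)
      - name: Configure kubeconfig
        run: |
          mkdir -p ~/.kube
          printf '%s' "${{ secrets.KUBECONFIG }}" > ~/.kube/config

      # apply everything under manifests/ (assumed repo layout)
      - name: Apply manifests
        run: kubectl apply --recursive -f manifests/

The same skeleton works if the apply step instead calls the nix-toolbox action.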

  • keep sealed secrets in GitHub and have them turned into Kubernetes secrets (ideally even manage the secrets inside services like databases)

GH Secrets to deployment secrets does not sound nice; consider SOPS, but then again it’s annoying to implement either way. Though for a while I’ve wanted to add SOPS support to nix-toolbox, so let me know if you end up using it; that may give me an extra push to do it.

We currently deploy K8s secrets using Terraform from sops files. It seems to work nicely, but I admit it would be annoying if we had a lot of them.

FYI: most operators take care of secrets themselves. For instance, if you need a Postgres cluster, avoid spinning it up manually; consider using the Zalando postgres operator instead.
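For reference, a minimal cluster with the Zalando operator is just a small custom resource, and the operator creates the credential Secrets for the listed users itself (a sketch based on the operator's minimal example; names, sizes and the version are placeholders):

apiVersion: "acid.zalan.do/v1"
kind: postgresql
metadata:
  name: acid-minimal-cluster   # cluster name must be prefixed with the teamId
  namespace: default
spec:
  teamId: "acid"
  numberOfInstances: 2
  volume:
    size: 1Gi
  users:
    # the operator generates a Kubernetes Secret with this user's password
    zalando:
      - superuser
      - createdb
  databases:
    # database name -> owning user
    foo: zalando
  postgresql:
    version: "16"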

It’s basically just a GitOps workflow. I know ArgoCD and FluxCD are options.
Argo feels pretty heavy. Flux looks a little more interesting.

Right now we are using Comin for NixOS and have a similar setup for k3s that just pulls and runs kubectl. Very lightweight. What is still missing is Helm chart support and proper secrets management.

I’ll have a look.

Sorry if that was misleading. With sealed secrets on GH I meant, e.g., SOPS secrets in the repo.
GH secrets sound like a really bad idea 🙂
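For what it’s worth, "SOPS secrets in the repo" usually boils down to a .sops.yaml policy plus encrypted Secret manifests committed alongside the rest; roughly like this (the paths and the age recipient are made up for illustration):

# .sops.yaml at the repo root: only the data/stringData fields get encrypted
creation_rules:
  - path_regex: secrets/.*\.yaml
    encrypted_regex: ^(data|stringData)$
    age: age1...   # placeholder recipient public key

A plain Secret manifest under secrets/ is then encrypted in place with sops -e -i secrets/whatever.yaml before committing, and decrypted at deploy time by whatever applies it (Terraform with the sops provider, Flux’s kustomize-controller, etc.).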

Another path I was looking into was external secrets management (via ESO, for example).
But I could not find any external secrets manager that supports E2EE. 1Password might.

How come you are using Terraform for the secrets?
I am intrigued.

Our test cluster is all defined manually for now, but I would love to switch to something more self-contained. Via Helm that might be easier to maintain.

That’s also why I asked the question here 🙂
It would be nice to declare the Helm charts in the Nix config.
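One lightweight option for exactly this: k3s ships a bundled Helm controller that picks up HelmChart manifests from its auto-deploy directory (/var/lib/rancher/k3s/server/manifests/), so a chart can be declared as a plain YAML file that the NixOS config writes to that path. A sketch, with the chart, repo, version and values all placeholders:

# e.g. written to /var/lib/rancher/k3s/server/manifests/nginx.yaml by the NixOS config
apiVersion: helm.cattle.io/v1
kind: HelmChart
metadata:
  name: example-nginx
  namespace: kube-system      # where the k3s Helm controller watches for HelmChart objects
spec:
  repo: https://charts.bitnami.com/bitnami   # placeholder chart repository
  chart: nginx
  version: 18.1.0                            # placeholder version
  targetNamespace: default
  valuesContent: |-
    service:
      type: ClusterIP

On the Nix side this can be as little as writing that file into the manifests directory from the flake.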

You are using the Zalando postgres operator? Are you happy with it?
Based on this test I was thinking of looking into CNPG.

I am not a big fan of GHA, but I am slightly against other CI/CD solutions since they involve extra management, and GHA is so common that everyone understands it.

How come you are using Terraform for the secrets?
I am intrigued.

We run EKS, and we have basically two Terraform scripts: one sets up the hardware, and the other sets up k8s (installing all the operators, configuring ACM, EBS, Route53, etc.). Tf just feels like the right place to put them.

Here is an example of creating a k8s secret using tf:

resource "kubernetes_secret" "blockfrost" {
  for_each   = toset(var.namespaces)
  depends_on = [kubernetes_namespace.this]

  metadata {
    name      = "blockfrost"
    namespace = each.value
  }

  data = {
    "api-key" = jsondecode(data.sops_file.blockfrost.raw)[each.value]
  }

}

It would be nicer if we had nix-toolbox integration for sops so we would not need to explicitly define namespaces in tf, but it does not really cause any issues since we aren’t creating new namespaces often.

You are using the Zalando postgres operator? Are you happy with it?

It just gets the job done; that’s all there is to it. The only complaint from my side was earlier this year, when I was waiting for one feature to be released: it was in master, but they only do releases about twice a year…

I haven’t really looked into other operators since this one seems to be the most popular, but I would consider the Percona operator if it has PMM integration. Other than that, I don’t think it’s worth the effort to switch.