point k3s to a GitHub repo that holds the cluster YAMLs and have it update on changes
I don’t quite get this point. Do you mean GHA deployment on push? Also, ArgoCD has integration for it.
This is what we have in nix-toolbox. Usage is here, and here is the implementation.
One would either create an action that does something similar on push or an action that calls this particular action, which tells it to apply certain deployments. Overall, it’s quite simple no matter the tool. Also, I should mention that there is an incomplete branch with fancier GHA.
have sealed secrets in GitHub and have them turned into Kubernetes secrets (ideally even manage the secrets inside services like databases)
GH Secrets to deployment secrets does not sound nice; consider SOPS, but then again it’s annoying to implement either way. Though for a while I have wanted to add SOPS support to nix-toolbox, so let me know if you end up using it, this may give me the extra push to do it.
We currently deploy K8s secrets using Terraform from SOPS files; it seems to work nicely, but I admit it would be annoying if we had a lot of them.
FYI: most operators take care of secrets themselves. For instance, if you need a Postgres cluster, avoid spinning it up manually and instead consider using the Zalando postgres operator.
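To give an idea, a minimal cluster definition for the Zalando operator looks roughly like this (a sketch based on their example manifests, written as a Nix attribute set in the k3s manifest style that comes up later in this thread; all names, sizes and the version are placeholders):

services.k3s.manifests.postgres-cluster.content = {
  # sketch of a Zalando postgres-operator custom resource; the operator itself
  # must already be installed in the cluster
  apiVersion = "acid.zalan.do/v1";
  kind = "postgresql";
  metadata = {
    name = "acid-minimal-cluster"; # placeholder; the name is prefixed with the teamId
    namespace = "default";
  };
  spec = {
    teamId = "acid";
    numberOfInstances = 2;
    volume.size = "1Gi";
    postgresql.version = "16"; # placeholder version
    # the operator creates these roles/databases and stores their credentials
    # in Kubernetes Secrets for you
    users.app_user = [ "createdb" ];
    databases.app_db = "app_user";
  };
};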
It’s basically just a GitOps workflow. I know ArgoCD and FluxCD are options.
Argo feels pretty heavy. Flux looks a little more interesting.
Right now we are using Comin for NixOS and have a similar setup for k3s that just pulls and runs kubectl. Very lightweight. What is still missing is Helm chart support and proper secrets management.
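For reference, a minimal sketch of what such a pull-and-apply unit can look like on NixOS (repo URL, paths and the interval are placeholders, not our exact setup):

# hypothetical oneshot service that pulls a manifest repo and applies it with kubectl
systemd.services.k3s-manifest-sync = {
  description = "Pull cluster manifests and apply them";
  path = [ pkgs.git pkgs.kubectl ];
  environment.KUBECONFIG = "/etc/rancher/k3s/k3s.yaml";
  serviceConfig.Type = "oneshot";
  script = ''
    set -eu
    repo=/var/lib/cluster-manifests                      # placeholder path
    if [ ! -d "$repo/.git" ]; then
      git clone https://example.com/cluster.git "$repo"  # placeholder URL
    fi
    git -C "$repo" pull --ff-only
    kubectl apply --server-side -f "$repo/manifests/"
  '';
};

# run it every five minutes
systemd.timers.k3s-manifest-sync = {
  wantedBy = [ "timers.target" ];
  timerConfig.OnCalendar = "*:0/5";
};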
I’ll have a look.
Sorry if that was misleading. With sealed secrets on GH I meant e.g. SOPS secrets in the repo.
GH secrets sounds like a really bad idea
Another path I was looking into was external secrets management (via ESO for example).
But I could not find any external secrets manager that supports E2EE. 1Password might.
How come you are using Terraform for the secrets?
I am intrigued.
Our test cluster is all defined manually for now. But I would love to switch to something more self-contained. Via Helm that might be easier to maintain.
That’s also why I asked the question here
Would be nice to declare the helm charts in the nix config.
You are using the Zalando postgres operator? Are you happy?
According to this test, I was thinking of looking into CNPG.
I am not a big fan of GHA, but I am slightly against other CI/CD solutions since they involve extra management, and GHA is so common that everyone understands it.
How come you are using Terraform for the secrets?
I am intrigued.
We run EKS, and we have basically two Terraform scripts. One sets up the hardware, and the other one sets up k8s (like installing all the operators, configuring ACM, EBS, Route53, etc.). Terraform just feels like the right place to put them.
It would be nicer if we had nix-toolbox integration for SOPS so we would not need to explicitly define namespaces in Terraform. But it does not really cause any issues since we aren’t creating new namespaces often.
You are using the Zalando postgres operator? Are you happy?
It just gets the job done, that’s all there is to it. The only complaint from my side was earlier this year when I was waiting for one feature to be released; it was in master, but they do releases like twice a year…
I haven’t really looked into other operators since this one seems to be the most popular one, but I would consider using the Percona operator if it has PMM integration. Other than that, I don’t think it’s worth the effort to switch.
NixOS’s K3s module has the options services.k3s.manifests and services.k3s.charts that can be used for bootstrapping the cluster. You might run into some difficulties when trying to use them, but it’s possible.
In my experience, both FluxCD (for GitOps, automatically syncing YAML manifests to the K3s cluster) and SOPS (for Nix & K8s) work well.
I run a cluster of NixOS hosts and a K3s cluster. I use only Nix to provision the NixOS hosts and the K3s cluster. Nix has all the host information, which also dynamically generates an Ansible inventory [to be used in eventual upkeep like triggering custom tasks]. The Nix config injects whatever information is necessary for starting up the K3s cluster. No need for Terraform in my setup. Credentials are kept in SOPS for Nix and K8s. Ansible is used for triggering actions on a set of hosts.
The end result is that I can destroy the cluster and bootstrap it again automatically. FluxCD is bootstrapped and all manifests get synced automatically. Nix can go a long way before needing something else.
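On the Nix side, a minimal sketch of what the SOPS part can look like with sops-nix (file paths and the secret name are placeholders):

# assumes the sops-nix NixOS module is imported (inputs.sops-nix.nixosModules.sops)
sops.defaultSopsFile = ./secrets/cluster.yaml; # placeholder path
sops.age.keyFile = "/var/lib/sops-nix/key.txt";
sops.secrets."k3s-token" = { };

services.k3s = {
  enable = true;
  # hand the decrypted token to k3s instead of hardcoding it
  tokenFile = config.sops.secrets."k3s-token".path;
};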
Would be nice to declare the helm charts in the nix config.
This is possible with the k3s module; however, you need to create the manifest for the k3s Helm controller manually. For example, to deploy the Bitnami nginx chart:
services.k3s = {
  enable = true;
  manifests.nginx.content = {
    apiVersion = "helm.cattle.io/v1";
    kind = "HelmChart";
    metadata = {
      name = "nginx";
      namespace = "kube-system";
    };
    spec = {
      targetNamespace = "test";
      createNamespace = true;
      repo = "https://charts.bitnami.com/bitnami";
      chart = "nginx";
      version = "18.3.5";
      # configure the chart values like you would do in values.yaml
      valuesContent = ''
        replicaCount: 3
        tls:
          enabled: false
        metrics:
          enabled: true
      '';
    };
  };
};
The deployment can be updated by changing the configuration accordingly and doing nixos-rebuild, but be aware that removing the manifest from your config and rebuilding the system will not uninstall the chart. You need to have the chart in your configuration and additionally set services.k3s.extraFlags = [ "--disable nginx" ] to actively uninstall the chart.
NB: The above will download the chart at runtime. You can also prefetch the chart during build and place it on the filesystem so that k3s can pick it up:
services.k3s =
  let
    nginxChart =
      pkgs.runCommand "nginx-chart"
        {
          nativeBuildInputs = with pkgs; [
            kubernetes-helm
            cacert
          ];
          outputHashAlgo = "sha256";
          outputHash = "sha256-e4zlCaK9mioU9A0Wr2YqCxwzlnS+ssPG46ixvFVXOqk=";
        }
        ''
          export HOME="$PWD"
          helm repo add repository https://charts.bitnami.com/bitnami
          helm pull repository/nginx --version 18.3.5
          mv ./*.tgz $out
        '';
  in
  {
    enable = true;
    charts.bitnamiNginx = nginxChart;
    manifests.nginx.content = {
      apiVersion = "helm.cattle.io/v1";
      kind = "HelmChart";
      metadata = {
        name = "nginx";
        namespace = "kube-system";
      };
      spec = {
        targetNamespace = "test";
        createNamespace = true;
        # the chart name (bitnamiNginx) has to match the key that is used in services.k3s.charts
        chart = "https://%{KUBERNETES_API}%/static/charts/bitnamiNginx.tgz";
        valuesContent = ''
          replicaCount: 3
          tls:
            enabled: false
          metrics:
            enabled: true
        '';
      };
    };
  };
Similarly, you could prefetch container images and place them in services.k3s.images for k3s to pick them up, so you only need to download things at build time.
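For completeness, image prefetching could look roughly like this (the digest and hash are placeholders you would have to fill in yourself):

services.k3s.images = [
  # prefetch the image at build time so k3s does not need to pull it at runtime
  (pkgs.dockerTools.pullImage {
    imageName = "bitnami/nginx";
    imageDigest = "sha256:<digest of the image>"; # placeholder
    sha256 = lib.fakeSha256; # placeholder; the first build tells you the real hash
    finalImageName = "bitnami/nginx";
    finalImageTag = "1.27.0"; # placeholder tag
  })
];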
Edit: Be careful not to place secrets directly in valuesContent, as they will show up in the Nix store.
I finally got around to playing with this a bit more. I am trying to get FluxCD installed.
But I am still a little lost on how to configure it - especially in regards to SOPS support.
When I use manifests, all checks pass, but nothing gets installed.
When I use services.k3s.autoDeployCharts, it seems that option is no longer available?
When trying services.k3s.charts, it seems it needs different settings.
Does anyone have a pointer to actually working Helm charts that are installed via Nix?
I just need this one. After that, Flux can take over with the Kubernetes resources. Not that I am super eager to install Flux via a Helm chart, but it seems to be the most declarative way.
I also just found the flux-operator package in unstable, but I have no clue how that should work.
services.k3s.manifests lets you configure Kubernetes manifests in your Nix config. It converts the manifests to YAML and applies them in a manner similar to kubectl apply. For example, you can create a namespace with the following config:
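services.k3s = {
  enable = true;
  # creates a namespace called test-ns
  manifests.test-ns.content = {
    apiVersion = "v1";
    kind = "Namespace";
    metadata.name = "test-ns";
  };
};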
If nothing happens for you, search for the manifest name (test-ns in the example above) in the k3s logs (journalctl -u k3s). It will show you if there is a problem with the manifest.
services.k3s.charts isn’t useful on its own; it just makes charts accessible from inside the cluster.
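To tie this back to the Flux question: a sketch of installing Flux through the k3s Helm controller using the community flux2 chart (the namespace and the lack of a pinned version are assumptions on my part, not a tested config):

services.k3s.manifests.flux.content = {
  apiVersion = "helm.cattle.io/v1";
  kind = "HelmChart";
  metadata = {
    name = "flux2";
    namespace = "kube-system";
  };
  spec = {
    targetNamespace = "flux-system";
    createNamespace = true;
    # community-maintained chart that packages the Flux controllers
    repo = "https://fluxcd-community.github.io/helm-charts";
    chart = "flux2";
    # pin a chart version here once you have picked one
  };
};

Once Flux is running, it can take over syncing the rest of the repository as described above.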