Installing k3s unexpectedly opens firewall ports

I started playing around with k3s on my local machine today. It’s pretty straightforward, and I managed to get a Svelte app (built with the Node adapter; it’s just a web server listening on port 3000) running in a Deployment. Then I created a NodePort service that exposes that port 3000 as port 30005 on my localhost. Now I can reach my app at localhost:30005 as expected.
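For context, the Service manifest looks roughly like this (the name and labels are placeholders, not my exact config):

```yaml
# Minimal NodePort Service: container port 3000 exposed as 30005 on the node.
# metadata.name and the selector are illustrative placeholders.
apiVersion: v1
kind: Service
metadata:
  name: svelte-app
spec:
  type: NodePort
  selector:
    app: svelte-app
  ports:
    - port: 3000        # service port inside the cluster
      targetPort: 3000  # container port the app listens on
      nodePort: 30005   # exposed on the node's interfaces
```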

What I did not expect (and do not want without extra config) is that I can also reach that app at http://my-nixos-hostname:30005 from any device on my local network. That’s not cool, because I did not open that port in my firewall, and the config example for k3s (k3s docs) suggests that opening ports is a deliberate, manual step, since its first snippet includes the line networking.firewall.allowedTCPPorts.
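The kind of snippet I mean looks roughly like this (paraphrased from the docs; the exact ports are illustrative):

```nix
# What I'd expect to be required before 30005 is reachable from the LAN:
{
  services.k3s.enable = true;
  services.k3s.role = "server";
  # Explicit opt-in -- without this I'd expect the port to stay closed.
  networking.firewall.allowedTCPPorts = [ 6443 30005 ];
}
```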

Other ports, e.g. 5173, which is used when I run the app in dev mode outside of containerization, remain inaccessible from other machines, as I’d expect.

It feels like simply enabling k3s in the config also opens the firewall for the port range that k3s uses for its connections. This is not documented, was not expected (on my part), and a quick look into the package definition didn’t reveal where or how it happens.

Can somebody confirm that this is indeed the case, comment on whether that is desired behavior, and possibly point me to where in nixpkgs that’s actually set up and how I might counteract it?

Cheers

— Edit:
It’s not actually opening the entire port range. A service running on port 30006 is not accessible from the local network; apparently only the port specified in the Kubernetes Service is accessible. Which is even stranger, because that means a k3s-internal configuration is affecting my machine’s firewall. Should it be that way?

— Edit 2:
A similar issue seems to be known for Docker and Podman. This is really not cool. Any workarounds?


I haven’t tried this yet, but I was wondering if it would help to run the k3s cluster over WireGuard via the wireguard-native Flannel backend, as described here.
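If anyone wants to try it, I’d guess the switch looks something like this (untested; assumes a k3s version that ships the wireguard-native backend):

```yaml
# /etc/rancher/k3s/config.yaml -- untested sketch
# Encrypts inter-node pod traffic; whether it changes the
# firewall-bypass behavior is exactly what would need testing.
flannel-backend: wireguard-native
```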

Shoot, just signed up to say I have this exact problem with my self-hosted k3s cluster.
I just wanted to create a “secure” backyard cluster to test stuff out, but it’s really strange that it bypasses UFW rules.
I’m going to dive deeper, see what UFW does, and how Traefik is affected by this.

This guy also has the same problem: K3S and IPTables Woes: How to IP whitelist on firewall level? - k3s, k3OS, and k3d - Rancher Labs

Apparently only the port specified in the Kubernetes Service is accessible. Which is even stranger, because that means a k3s-internal configuration is affecting my machine’s firewall. Should it be that way?

I observed the same behavior. The problem is that k3s (more specifically kube-proxy) creates iptables rules that bypass the firewall configuration and allow traffic to blocked ports. For instance, when I create a Service of type LoadBalancer with port 9113, I can access the service even though the port is blocked in my firewall. I think this is “working as expected” and unfortunately just how Kubernetes routing works. After creating the service, I can grep for the port in the iptables rules, and the comments indicate that this is intentional.

iptables -L | grep 9113
MARK       tcp  --  anywhere             anywhere             /* rule to ACCEPT traffic from all sources to dest pods selected by policy name: nginx namespace test */ match-set KUBE-DST-TY2DQHK4KTYH26VJ dst tcp dpt:9113 MARK or 0x10000
RETURN     tcp  --  anywhere             anywhere             /* rule to ACCEPT traffic from all sources to dest pods selected by policy name: nginx namespace test */ match-set KUBE-DST-TY2DQHK4KTYH26VJ dst tcp dpt:9113 mark match 0x10000/0x10000

Maybe kube-proxy can be configured to not expose services in that way, but you need to be careful to not break inter-pod or inter-node communication.
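One knob that might be worth experimenting with, at least for NodePort services: kube-proxy’s nodeport-addresses setting limits which host addresses NodePorts listen on. An untested sketch via the k3s config file:

```yaml
# /etc/rancher/k3s/config.yaml -- untested sketch
# Restrict NodePorts to loopback so they aren't reachable from the LAN.
# Note: this only affects NodePort; LoadBalancer traffic (e.g. via
# klipper-lb) may need separate handling.
kube-proxy-arg:
  - "nodeport-addresses=127.0.0.0/8"
```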

Did you ever figure out the proper way to stop k3s from overriding external firewall rules?

I don’t want k3s / traefik / kube to override my firewalld rules.