Apparently only the port specified in the Kubernetes Service is accessible, which is even stranger, because it means that K3s's internal configuration is affecting my machine's firewall. Should it be that way?
I observed the same behavior. The problem is that k3s (more specifically kube-proxy) creates iptables rules that bypass the firewall configuration and allow traffic to blocked ports. For instance, when I create a service of type LoadBalancer with port 9113, I can access the service even though the port is blocked in my firewall. I think this is "working as expected" and unfortunately just how Kubernetes routing works. After creating the service, I can grep for the port in the iptables rules, and the comments indicate that this is desired:
iptables -L | grep 9113
MARK tcp -- anywhere anywhere /* rule to ACCEPT traffic from all sources to dest pods selected by policy name: nginx namespace test */ match-set KUBE-DST-TY2DQHK4KTYH26VJ dst tcp dpt:9113 MARK or 0x10000
RETURN tcp -- anywhere anywhere /* rule to ACCEPT traffic from all sources to dest pods selected by policy name: nginx namespace test */ match-set KUBE-DST-TY2DQHK4KTYH26VJ dst tcp dpt:9113 mark match 0x10000/0x10000
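For reference, the service I was testing with looked roughly like this (the nginx name and test namespace match what shows up in the iptables comments above; treat the selector and ports as placeholders for your own setup):

apiVersion: v1
kind: Service
metadata:
  name: nginx
  namespace: test
spec:
  type: LoadBalancer
  selector:
    app: nginx        # assumes a matching nginx deployment in the test namespace
  ports:
    - port: 9113      # the port that becomes reachable despite the host firewall
      targetPort: 9113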
Maybe kube-proxy can be configured not to expose services in that way, but you need to be careful not to break pod-to-pod or node-to-node communication.
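One knob I know of is kube-proxy's --nodeport-addresses flag, which restricts which node addresses NodePorts are served on; in k3s it can be passed through with kube-proxy-arg. This is only an untested sketch of the idea, and I'm not sure it covers the LoadBalancer case:

# /etc/rancher/k3s/config.yaml on the server node (untested sketch, adjust to your setup)
kube-proxy-arg:
  - "nodeport-addresses=127.0.0.0/8"   # only answer NodePort traffic on loopback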