Okay, this is a question + solution in case others run into the same issue. I had an infuriating problem where libvirt KVM guests couldn’t acquire a routable IPv6 address (only a link-local one). I struggled to track it down because I don’t believe I was flushing all of the iptables rules consistently, so the issue appeared intermittent (I’ve since switched to nftables, where flushing is simpler). A useful diagnostic tool was the `rdisc6` command from `nixpkgs#ndisc6`, which showed that router solicitations were failing while the docker rules were in place.
From the guest:

```
sudo rdisc6 -1 <ethernet interface>
```

resulted in “timed out” messages.
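In case it’s useful, one way to run it ad hoc on the guest without installing anything permanently (this assumes the flakes/new-CLI features are enabled, and `enp1s0` is just a placeholder for the guest’s interface):

```
# rdisc6 comes from the ndisc6 package; run it as root via a temporary nix shell
sudo nix shell nixpkgs#ndisc6 --command rdisc6 -1 enp1s0
```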
It turns out this is a common issue with docker, which blocks forwarding for traffic outside of its own bridge interface. The main workaround on the Arch wiki, where docker is configured to use the existing bridge, wasn’t an option for me because I’m using the bridge as my host interface (it carries the host’s IPs assigned by an upstream router).
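As far as I can tell, the mechanism is that docker (with its default iptables integration) sets the FORWARD chain policy to DROP, and since it also loads br_netfilter, bridged guest traffic gets dropped as well. You can check this on the host while docker’s rules are loaded:

```
# the first line printed is the chain policy, which docker sets to DROP by default
sudo iptables -S FORWARD | head -n 1
```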
Anyway, the workaround with nftables:

```nix
networking.nftables.enable = true;
networking.firewall = {
  filterForward = true;
  extraForwardRules = "iifname br0 accept";
};
```

where `br0` is the host bridge interface.
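For context, a stripped-down sketch of the relevant host configuration (`enp1s0` is just a placeholder for the physical NIC that `br0` enslaves):

```nix
{
  # bridge used as the host's main interface; the upstream router assigns its
  # addresses, and the libvirt guests are attached to it
  networking.bridges.br0.interfaces = [ "enp1s0" ];
  networking.interfaces.br0.useDHCP = true;

  # both of these run on the host, which is where the conflict comes from
  virtualisation.libvirtd.enable = true;
  virtualisation.docker.enable = true;
}
```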
Why this only affected IPv6 and not IPv4 I cannot say, given that the other discussions I found indicated both should be broken.
EDIT: One issue is that if docker is restarted, its own rules seem to take priority again until I also restart nftables.
EDIT 2: Okay, the workaround above doesn’t actually solve the problem. Restarting nftables clears all of the iptables compatibility rules created by docker, which is presumably why things appeared to work afterwards. I’m not sure whether my workaround is incorrect or whether the docker-created rules are taking priority over the nft rule.
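For anyone debugging the same thing, these are the commands I’d compare before and after restarting docker/nftables (the DOCKER-USER chain only exists while docker’s rules are loaded):

```
# full nftables view, including the iptables-nft compatibility tables
# ("table ip filter" etc.) that docker creates
sudo nft list ruleset

# the same docker rules seen through the iptables compatibility layer
sudo iptables -S FORWARD
sudo iptables -S DOCKER-USER
```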