For example, in a fresh installation there is no `virtualisation.docker` section in my configuration, so I have no Docker service, binary, or configuration.
Then, when I add `virtualisation.docker.enable = true;` to my configuration, I get everything.
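For context, a minimal sketch of what I mean (the file name and surrounding module boilerplate are just illustrative):

```nix
# configuration.nix (illustrative excerpt)
{ config, pkgs, ... }:

{
  # Pulls in the Docker daemon, CLI, systemd service, and default configuration.
  virtualisation.docker.enable = true;
}
```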
Now I try removing Docker, either by setting the enable option to false or by just removing the line from the configuration. Those two seem to give different results: the binaries are gone, but some configuration appears to be left behind and is still affecting other related tools.
To elaborate on this a little bit: the default value is false, so explicitly setting it to false yourself vs. letting it default to false is essentially the same. The one exception is if something else in your config tries to set it to true; then you get an eval error if you also set it to false at the same priority.
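As a rough sketch of that conflict case (the surrounding config is illustrative), and the usual way to resolve it:

```nix
{ lib, ... }:

{
  # Suppose some imported module or another part of the config sets
  #   virtualisation.docker.enable = true;
  # Then also writing `= false;` at the same priority fails at eval time
  # with a "conflicting definition values" error.
  # A higher-priority definition resolves it:
  virtualisation.docker.enable = lib.mkForce false;
}
```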
Do note that anything that isn’t managed via your NixOS config, e.g. any kind of app state, will not be removed by disabling Docker or anything else. But I’m also curious what you mean here.
I have run into two cases. Sorry, I cannot give the exact code; these happened a while ago.
I enabled LightDM, then removed that section and enabled SDDM. Some DE setup script and an X session file were still using LightDM, which led to errors. Explicitly setting LightDM’s enable option to false fixed it.
I enabled Docker, then removed that section and enabled containerd. Docker’s iptables rules were still there and caused containerd to behave strangely. I am not sure whether setting Docker’s enable option to false would have fixed it, since I purged the iptables rules manually. I also don’t know whether stopping and removing all containers before removing Docker would have prevented this.
That’s because lightdm is set to true by some other cases/modules IIRC. But it’s odd that enabling SDDM didn’t help; some specific code would be helpful.
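Roughly what I’d expect the switch to look like, from memory (option paths vary by release; on recent NixOS the SDDM option lives under `services.displayManager` instead):

```nix
{
  # Be explicit about the old display manager, in case another module
  # defaults it to true.
  services.xserver.displayManager.lightdm.enable = false;

  # Older releases: services.xserver.displayManager.sddm.enable
  # Newer releases: services.displayManager.sddm.enable
  services.xserver.displayManager.sddm.enable = true;
}
```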
I’d highly recommend using git to version control your configuration going forward. You don’t need to publish it but using git has so many advantages besides publishing, including being able to look at old revisions.
That shouldn’t happen. Disabling Docker should shut down its systemd service, and that should in turn remove any firewall rules.
If you can reproduce this, please open an issue in Nixpkgs with the repro steps.
It wouldn’t have. As mentioned, it has the exact same effect.
Ah, right, Docker is weird like that: its containers keep running when the daemon is stopped, or something along those lines. If we don’t handle the case where Docker becomes disabled while containers are still running, we should.
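If I remember correctly, that’s Docker’s live-restore feature, which the module exposed as an option at some point; a hedged sketch of turning it off so containers stop together with the daemon:

```nix
{
  virtualisation.docker = {
    enable = true;
    # From memory: live restore keeps containers running while the daemon
    # is down; disabling it means containers (and their iptables rules)
    # go away when the docker service stops. On newer releases this may be
    # configured via daemon.settings instead.
    liveRestore = false;
  };
}
```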
Just about Docker and containerd: I did a little test.
After removing all containers (so no port mappings or other network config were needed), I disabled containerd. The bridge and the related iptables rules created by containerd are still there. I guess in this case NixOS does not manage the “system-level” resources that packages create at runtime.
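For reference, roughly what my test toggled (details from memory):

```nix
{
  # Step 1: switch with this enabled, run and then remove all containers.
  virtualisation.containerd.enable = true;

  # Step 2: set it to false (or delete the line) and switch again.
  # The service and binaries disappear, but the bridge interface and the
  # iptables rules created at runtime are left behind.
}
```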
Definitely do create an issue for that. It’s useful to have a collection point defining the problem, so that someone who is sufficiently annoyed by it knows what to fix.