It seems like in the default configuration, sshd on NixOS disconnects clients after a timeout. Usually I would disable that by setting ClientAliveInterval in sshd_config, but there doesn't seem to be a corresponding option I can put into configuration.nix.
Is there a different preferred way to prevent ssh disconnections in NixOS?
I can't seem to find an entry for ssh or openssh in the wiki. Is there a configuration.nix key that lets me set arbitrary sshd_config entries?
I think there are several possible causes; it's actually pretty hard to achieve a stable SSH connection. Off the top of my head:
OpenSSH client picking a temporary IPv6 address. This has been an issue for 20 years, several patches have been proposed, but I guess the developers don’t care. I use this patch to fix this:
diff --git a/sshconnect.c b/sshconnect.c
index ebecc83..91364e3 100644
--- a/sshconnect.c
+++ b/sshconnect.c
@@ -26,6 +26,7 @@
#include <net/if.h>
#include <netinet/in.h>
#include <arpa/inet.h>
+#include <linux/ipv6.h>
#include <ctype.h>
#include <errno.h>
@@ -370,6 +371,11 @@ ssh_create_socket(struct addrinfo *ai)
if (options.ip_qos_interactive != INT_MAX)
set_sock_tos(sock, options.ip_qos_interactive);
+ if (ai->ai_family == AF_INET6 && options.bind_address == NULL) {
+ int val = IPV6_PREFER_SRC_PUBLIC;
+ setsockopt(sock, IPPROTO_IPV6, IPV6_ADDR_PREFERENCES, &val, sizeof(val));
+ }
+
/* Bind the socket to an alternative local IP address */
if (options.bind_address == NULL && options.bind_interface == NULL)
return sock;
The connection timing out if nothing is sent over it for a while. There are two ways to avoid this: TCP keepalives or SSH-level keepalive messages. I prefer the second, because network equipment can't mess with it. You can do this with ServerAliveInterval on the client (or ClientAliveInterval on the server).
The connection breaking when roaming between networks. Here there's not much you can do: SSH relies on TCP, which keeps track of connections using the infamous 4-tuple (server addr, server port, client addr, client port), so if any of these changes the connection breaks. Mosh was designed to allow roaming (among other things) by switching to UDP. It works very well, but since it uses a range of UDP ports it's annoying to set up behind NAT44, and it's more prone to being blocked by pesky corporate firewalls.
Probably more issues I'm not aware of that still manage to break the connection sometimes. ISP routers injecting TCP resets?
No idea why in your case, but one reason could be that the client is on Windows.
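For the keepalive point above, here is a minimal sketch of the client side on NixOS. The option path `programs.ssh.extraConfig` and the interval values are my assumptions, not something from the original posts; adjust to taste:

```nix
{
  # SSH-level keepalive from the client: send a probe every 60 s and
  # give up after 3 unanswered probes (~3 minutes of silence).
  # These lines end up in the system-wide ssh_config.
  programs.ssh.extraConfig = ''
    ServerAliveInterval 60
    ServerAliveCountMax 3
  '';
}
```

Because these are SSH protocol messages (not TCP keepalives), middleboxes can't strip or spoof them without breaking the encrypted session.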
Is there something like services.openssh.extraConfig for the server? Then I could set ClientAliveInterval and ClientAliveCountMax in there. This works on other servers I have that run plain Ubuntu.
Also, I think you’re making a rather large assumption in the premise of this thread. I’ve never experienced this problem so it almost certainly isn’t the default behavior. If you’re experiencing this, I would wager it’s caused by something else entirely.
Great! So far I have set extraConfig, but I will change it to use the actual options. I was not aware that other sshd_config settings are accessible via services.openssh.settings as well.
With services.openssh.extraConfig it worked last night, so my problem is solved, though I am still confused as to why setting ServerAliveInterval on the client alone is not enough.
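For reference, a sketch of what the server side could look like using the structured option instead of extraConfig. This assumes a recent NixOS where `services.openssh.settings` exists (older releases only had extraConfig), and the interval values are illustrative:

```nix
{
  services.openssh = {
    enable = true;
    settings = {
      # Server probes the client every 60 s; disconnects only after
      # 3 consecutive missed replies, instead of on the first timeout.
      ClientAliveInterval = 60;
      ClientAliveCountMax = 3;
    };
  };
}
```

Keys under `settings` are rendered into sshd_config, so this is equivalent to writing the same two directives via extraConfig, but with type checking.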