Today, while rebuilding and updating a server, I ran into a dramatically slow ssh transfer of the following file: ‘linux-firmware-20251111-zstd’.
Both systems, sender and receiver, are connected through the same gigabit switch, so it should be a fast transfer, right? Indeed, ‘nethogs’ showed me that everything looked fine on the network side…
Turns out the transfer itself was not the problem. After some digging and googling, the culprit seemed to be the receiver decompressing the file “on the fly” - that was the bottleneck.
I ran ‘top’ on the receiver and it showed a service pegged at 100% CPU. As soon as I stopped it, the file transfer went smoothly - the CPU could handle the decompression at a sane pace.
And here is where I need help! For today the problem is solved, but is it avoidable?
The receiver is an old 2016 machine I repurposed, maxed out on memory, for what it is worth. It mainly holds backups and hosts some homelab stuff. If a server is in the cloud, with limited resources, what would be the best approach?
Can I limit the CPU usage of the system overall and leave some breathing room for the update process? How would I set up the receiver so that it reserves, say, 10% of the CPU for the ssh transfer/decompression? Is this even the right approach to take?
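For context, the closest mechanism I have found so far is systemd’s cgroup CPU controls, which NixOS exposes under `systemd.services`. This is an untested sketch of what I mean - the service name is a placeholder for whatever unit was pegging the CPU:

```nix
# Untested sketch - "hungry-service" is a placeholder name.
systemd.services.hungry-service.serviceConfig = {
  CPUWeight = 20;    # smaller share under contention (systemd default is 100)
  CPUQuota = "80%";  # hard cap at 80% of a single core
};
```

Is something along these lines the idiomatic NixOS way to do it, or is there a better knob?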
Is it better not to send zstd-compressed files at all, to avoid the decompression overhead? Is that something that can be done, and is it configurable on NixOS?
Am I missing some obvious flag in my nixos-rebuild invocation that will get me shouted at?
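For reference, the only related knob I am aware of is `NIX_SSHOPTS`, which nixos-rebuild passes through to ssh - though I am only guessing that ssh-level compression is even the relevant layer here (the host name below is made up):

```shell
# Guesswork: keep ssh's own compression off on a fast LAN so the receiver's
# CPU is not spent inflating the stream. "backup-box" is a placeholder host.
NIX_SSHOPTS="-o Compression=no" nixos-rebuild switch \
  --target-host admin@backup-box
```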
I’m open to suggestions! Thank you in advance!