How can I see if my system is using my swap partition?

Operating System: NixOS 24.05
KDE Plasma Version: 6.0.5
KDE Frameworks Version: 6.2.0
Qt Version: 6.7.1
Kernel Version: 6.6.35 (64-bit)
Graphics Platform: Wayland
Processors: 16 × AMD Ryzen 7 3700X 8-Core Processor
Memory: 15.6 GiB of RAM
Graphics Processor: NVIDIA GeForce RTX 2060/PCIe/SSE2

Upon looking at hardware-configuration.nix I realized that swapDevices = [ ]; is empty, even though I have swap partitions on both of my drives. Is NixOS using said swap partitions? If not, how can I tell it to use them? Or should I instead create a swap file like the wiki says (https://nixos.wiki/wiki/Swap) and just remove the partitions that have been made?


Let me know if you need any more information about my computer.
Any help is appreciated!

lsblk will write [SWAP] next to the partition if it's in use:

NAME                       MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINTS
nvme0n1                    259:1    0 931.5G  0 disk
└─nvme0n1p2                259:3    0   931G  0 part
  └─root                   254:0    0   931G  0 crypt
    └─main-swap            254:8    0    32G  0 lvm   [SWAP]

Flipping it on if it's not working isn't particularly hard; just add the disk to the list: dotfiles/nixos-config/hosts/yui/hardware-configuration.nix at e4c24b0eb620628cfad95f7e73694856eed51488 · TLATER/dotfiles · GitHub
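
In other words, something along these lines in hardware-configuration.nix (a sketch; the UUID placeholder is hypothetical, substitute your actual swap partition, ideally by UUID):

swapDevices = [
  { device = "/dev/disk/by-uuid/<your-swap-uuid>"; }  # or e.g. { device = "/dev/nvme0n1p2"; }
];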

Normally nixos-generate-config should pick the partition up on installation and put it in hardware-configuration.nix, but the partition would need to be active for that. Rather than setting it manually, you can also just enable the swap partitions and try regenerating the config:

# swapon /dev/nvme0n1p2
# swapon /dev/sdb1
$ nix-shell -p nixos-install-tools
$ nixos-generate-config --show-hardware-config

Sorry for the late response.
This was it! Thank you again, TLATER!
That was super easy to turn on; you're the best!

I see the following in NixOS 23.11, which tells me swap is enabled, even if it is not really being used.

$ free
               total        used        free      shared  buff/cache   available
Mem:        65695032     6250452    52367984     3907120     7076596    54903484
Swap:      150994940           0   150994940

also

$ cat /proc/swaps
Filename				Type		Size		Used		Priority
/dev/dm-1                               partition	150994940	0		-2

also

$ cat /proc/meminfo | grep Swap
SwapCached:            0 kB
SwapTotal:      150994940 kB
SwapFree:       150994940 kB

You have a whopping 64GB of RAM, barely 10% of which is even being used, why would your system be relying on your swap partition?

You might see the kernel swap to disk very occasionally. As I understand it, the kernel may swap out shared libraries to free dirty pages rather than evicting them completely, since it's faster to read them back from swap than to re-read the files from the filesystem. But as long as you're not getting close to filling the actual memory over the current uptime, the kernel has no reason to evict pages, and as such no reason to swap; swap space is much slower than RAM, after all.
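
If you want to watch for that kind of occasional swap activity, the kernel's own counters make it visible; for example:

$ grep -E 'pswp(in|out)' /proc/vmstat   # cumulative pages swapped in/out since boot
$ vmstat 1                              # the si/so columns show ongoing swap-in/swap-out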


Yes, 64GB! :laughing: I often run lots of VMs (simulations) but clearly not when I did the captures. I wasn’t sure if @KeifferMonster also wanted a method to ‘see what swap the system was using’.

Personally, I have only recently replaced my old laptop with a new one. The old machine, with 32 GB of RAM and dual spinning disks, is still going strong (touch the proverbial wood).

Swap seems to be an area I need to review, given the increases in available memory and much faster disks. I recently did a small internal test of memory speed vs PCIe Gen 4 NVMe speed and there was still a significant difference, but I confess that when it comes to swap and partitioning I operated on auto-pilot and allocated 2.25 x memory as a swap partition, based on using hibernation and trying to run lots of VMs. It's a bit of a case of not wanting to hit a limit at run time that is easily avoided with a bit of disk space.

Trying to work out how much swap to allocate is interesting, as there are so many different use cases (servers to laptops, CPU-bound tasks vs memory-bound tasks, operating systems, physical specs), and I find most 'guides' are either too basic or too technical for me to grasp without spending a lot of time. Again, the easy answer is to just allocate the disk space… I sometimes wonder, though, whether the availability of swap means that an OS will use it even when not required. Ah, I see a rabbit hole looming… escape while you still can! ha ha ha

Re your point, I wonder how much difference there is between reading back from swap and reading from the file system. With the change to SSDs there no longer seem to be significantly faster areas of the disk, unlike spinning disks, or massive differences from keeping files contiguous. There also doesn't seem to be much advantage in using a separate physical disk, especially with the limited number of Gen 4 connections. :man_shrugging:


Running swapon seems to indicate what partition my system is using for swap!


Cool, I’d never tried that. :+1:

$ swapon
NAME      TYPE      SIZE USED PRIO
/dev/dm-1 partition 144G   0B   -2

I pretty much feel the opposite way about this. Even with fast NVMes, hitting swap can slow systems down to a relative crawl, and I'd rather my system crash than wear out disks by accidentally churning swap.

OTOH, maybe that’s just because the last time I hit a near-OOM scenario I was still running spinning rust, when that still meant effectively freezing up for a normal desktop. It’s hard to use as much memory as systems typically have these days, even with VMs.

I usually run memory == swap space, since even when hibernating it’s incredibly unlikely to actually need that much space for the image, and I don’t have systems with a lot of drives anymore, so storage paradoxically isn’t dirt cheap. I don’t even actually use the hibernation feature in practice. Opinions clearly differ!

I think it's not just file access speed, but decompression too? That said, I'm not very familiar with the decisions around memory management in the kernel; it's possible that's just archaic, or a feature specifically for the systems still running HDDs. All I know is that I've observed swap use on systems with ample memory, and this was the explanation I found for it.

Anyway, memory management doctrine in the kernel is a completely separate story, let me stop derailing this thread further :wink:

Coincidentally, I walked away from this computer having forgotten to plug it in, and came back to find it hibernated but with a very low battery. Plugged it in, gave it a few minutes, and it restored with 4 VMs running. :smiley:

This topic is very interesting to me. I have no idea what the real-world MTBF of modern NVMe drives is. I guess there are also time costs to slowing down and crashing! So many variables, so I decided to ask a couple of AI engines to see what they would say. The results are below, and they were pretty much a list of the things we had discussed. Perplexity did recommend something called zswap, which I had never heard of. I also like the idea of using a big swap partition containing a swap file, so it is easy to adjust the size but still keeps the swap all in one place.

Thanks for the discussion. :+1:

Question

I have a 2023 laptop with 64 GB of 3200 MHz DDR4 memory, a PCIe Gen 4 2TB Samsung 980 Pro SSD and an Intel i7-1260P CPU. The laptop has no dedicated GPU. I run NixOS with QEMU/KVM/libvirt and regularly 12 or more virtual machines, mostly with 3 GB of memory, running a variety of Operating Systems, including various Linux distributions and up to 2 Microsoft Windows domains. Can you please provide me an analysis of the pros and cons of various swap space configurations for the host, taking into account performance of the whole system, risk of crashes and impacts to the lifespan of the NVMe drive? Can you then provide a recommendation of how the swap space should be configured?

perplexity

Based on your system configuration and usage, here’s an analysis of various swap space configurations and a recommendation:

  1. Pros of larger swap space:
  • Increased ability to handle memory-intensive workloads, especially with multiple VMs running
  • Reduced risk of system crashes due to out-of-memory errors
  • Better support for hibernation, if needed
  2. Cons of larger swap space:
  • Increased wear on the NVMe SSD due to more frequent write operations
  • Potential performance degradation if the system relies too heavily on swap instead of RAM
  3. Pros of smaller swap space:
  • Reduced wear on the NVMe SSD
  • Potentially faster performance by forcing the system to use RAM more efficiently
  4. Cons of smaller swap space:
  • Increased risk of system crashes or VM failures if memory demands exceed available RAM and swap
  • Limited ability to handle unexpected memory spikes
Recommendation: Given your high-performance NVMe SSD (Samsung 980 Pro PCIe Gen4) and substantial 64GB of RAM, I recommend the following swap configuration:

  1. Set the swap space to 32GB (half of your physical RAM). This provides a good balance between having enough swap for memory-intensive operations and minimizing unnecessary SSD wear
  2. Adjust the swappiness value to a lower setting, around 10-20 (default is 60). This will encourage the system to use RAM more aggressively before resorting to swap, which is beneficial given your large RAM capacity
  3. Enable zswap, a compressed RAM cache that can help reduce the frequency of writes to the SSD while still providing swap functionality
  4. Consider using a swap file instead of a swap partition. This allows for easier resizing if needed in the future
  5. Place the swap file on a separate NVMe namespace or partition to minimize interference with the host OS and VM storage operations

This configuration should provide a good balance between performance, stability, and SSD longevity. It allows enough swap space to handle memory spikes and prevent crashes, while the lower swappiness and zswap usage will help minimize unnecessary writes to the SSD. The high-performance Samsung 980 Pro SSD should be able to handle the swap operations efficiently when needed. Remember to monitor your system’s memory usage and swap utilization over time. If you consistently see high swap usage, you may need to consider adding more physical RAM or reducing the number of concurrent VMs.
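
For the NixOS side of it, points 1, 2 and 4 translate roughly to something like this (an untested sketch; the option names are real NixOS options, but the values and the swap file path are illustrative):

swapDevices = [
  {
    device = "/swapfile";  # illustrative path; NixOS creates the file if it doesn't exist
    size = 32 * 1024;      # in MiB, i.e. the 32GB recommended above
  }
];
boot.kernel.sysctl."vm.swappiness" = 10;  # prefer RAM more aggressively, per point 2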

ChatGPT

Certainly! Given your setup and usage scenario, let’s analyze the pros and cons of different swap space configurations for your host system (NixOS), considering performance, system stability, and NVMe drive lifespan:

1. No Swap (Using Only RAM)

Pros:

  • No impact on NVMe drive lifespan due to swap writes.
  • Avoids potential performance degradation from swap usage.

Cons:

  • Risk of out-of-memory (OOM) crashes if physical memory (RAM) is fully utilized.
  • Can limit flexibility in handling memory-intensive tasks or spikes in memory usage.

Consideration: With 64GB of RAM, this setup can work well unless you frequently push the limits of your RAM capacity with your virtual machines.

2. Swap on NVMe SSD

Pros:

  • Provides additional virtual memory when RAM is fully utilized, potentially preventing OOM crashes.
  • Utilizes fast NVMe SSD speeds for swap operations.

Cons:

  • Increases wear on the NVMe drive due to swap writes, potentially reducing its lifespan over time (although modern SSDs are quite resilient).

Consideration: The Samsung 980 Pro is a high-performance SSD with good endurance, but frequent swapping could still impact its longevity, especially if your workload involves a lot of write-intensive operations.

3. Swap on a Separate HDD or SATA SSD

Pros:

  • Offloads swap operations from the NVMe SSD, preserving its lifespan.
  • Provides swap space without impacting the main SSD’s performance.

Cons:

  • Slower swap performance compared to NVMe SSD, which might impact VM performance if frequent swapping occurs.

Consideration: This approach can be beneficial if you are concerned about NVMe drive wear or if you prefer to segregate swap operations from your main storage.

Recommendation:

Based on the analysis, here’s a recommendation tailored to your setup:

Configure a moderate-sized swap partition on the NVMe SSD, but consider limiting its use to avoid excessive wear. Given your system’s RAM size and workload (running multiple VMs with varying memory demands), having a small amount of swap space (e.g., 8GB) on the NVMe SSD can act as a safety net for unexpected memory spikes without significantly impacting performance or risking drive longevity.

Additionally, monitor your system’s swap usage regularly. If you find that swap is being used frequently, consider adjusting your VM configurations (e.g., adjusting memory allocations, optimizing resource usage) to reduce the reliance on swap.

Key Points to Keep in Mind:

  • Monitor Swap Usage: Use tools like top, htop, or vmstat to monitor how much swap is being used regularly. This will help you adjust configurations as needed.
  • NVMe Endurance: While the Samsung 980 Pro is durable, frequent and heavy swap usage can still impact its lifespan over time. Keeping swap usage moderate and within reasonable limits is advisable.

By implementing a cautious approach with a small swap partition on the NVMe SSD, you can balance system stability, performance, and NVMe drive longevity effectively in your virtualization environment on NixOS.

Swapping onto disk should be avoided as much as possible because, as TLATER said, even NVMe drives are way slower than RAM.

Zswap means the swapped data gets compressed before it is written to the disk, so less is written to the disk and it causes less wear, at the cost of some CPU time.
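
On NixOS, zswap is a kernel feature that can be switched on with kernel parameters; a minimal sketch (the compressor and pool-size values are illustrative):

boot.kernelParams = [
  "zswap.enabled=1"            # turn zswap on
  "zswap.compressor=zstd"      # illustrative compressor choice
  "zswap.max_pool_percent=20"  # illustrative cap on RAM used for the compressed pool
];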

Even cooler, however, is ZramSwap: a compressed swap device that lives in RAM, giving you fast swap at the cost of CPU time.
This podcast episode talks about it if you're curious:

and below is my configuration of my ZramSwap setup:


Thanks for the link to the podcast, that was pretty interesting.

Zramswap sounds like a great choice for my VMs, which have less memory and don't use any hibernate-type function. I like the idea that it doesn't preallocate space out of memory, so it isn't until you start running low that it starts compressing and swapping. Sounds smart. I will give it a whirl over the coming days. The config will make it easier too. :+1:

For the host, though, I do use hibernate, so it's no good there.

zramSwap looks very cool.

I didn’t actually disable my existing swap when I tried this the first time.

zramSwap = {
  enable = true;
};

Before

$ swapon
NAME      TYPE      SIZE USED PRIO
/dev/vda1 partition 7.5G   0B   -2

After

$ swapon
NAME       TYPE      SIZE USED PRIO
/dev/vda1  partition 7.5G   0B   -2
/dev/zram0 partition 1.4G   0B    5
$ cat /proc/meminfo
MemTotal:        2982344 kB
MemFree:         1627684 kB
MemAvailable:    2361608 kB
Buffers:          107872 kB
Cached:           808416 kB
SwapCached:            0 kB
<snip/>
SwapTotal:       9355256 kB
SwapFree:        9355256 kB
Zswap:                 0 kB
Zswapped:              0 kB

Questions:
I noticed the priority column above, and it looks like not only can you have multiple swap partitions, but you can also choose the order they are used in with the zramSwap.priority option. Is this a get-out-of-jail-free option to prevent a machine crashing in an extreme memory usage case, which, combined with some form of monitoring, gives you a chance to adjust memory allocation? I guess I'm trying to decide whether it is a bad thing to leave a 'disk' swap on, and how I would monitor it.
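
For what it's worth, higher-priority swap areas are used first, and both priorities can be set declaratively, so something like this (an untested sketch; the device path is illustrative) would make the disk swap a pure overflow:

zramSwap = {
  enable = true;
  priority = 100;  # higher priority: used first
};
swapDevices = [
  {
    device = "/dev/dm-1";  # illustrative; your existing disk swap
    priority = 0;          # lower priority: only used once the zram device is full
  }
];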

It looks like there is also an option to send non-compressible pages to disk instead of memory (since there is no advantage in just moving them around in memory). Should zramSwap.writebackDevice point to a 'disk' swap partition or just a normal disk folder, and in that case would I need to have a normal swap partition anyway?
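
As far as I understand the kernel's zram writeback feature, it wants a raw, otherwise-unused block device rather than a folder, so a dedicated partition (not a mounted swap partition) would be the way to go; a hypothetical sketch:

zramSwap = {
  enable = true;
  # hypothetical partition reserved for writeback; must be a block
  # device, not a directory on a filesystem
  writebackDevice = "/dev/disk/by-partlabel/zram-writeback";
};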

With regards to zramSwap.algorithm, it looks like my kernel supports all the algorithms and has selected zstd.

$ cat /sys/class/block/zram*/comp_algorithm
lzo lzo-rle lz4 lz4hc 842 [zstd] 
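
Overriding that in NixOS would presumably just be a one-liner (sketch):

zramSwap.algorithm = "lz4";  # e.g. to prioritize speed over compression ratio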

Looking for a comparison of the algorithms I found the following;

Algorithm   Compress Ratio   Compress Speed   Decompress Speed   Memory Use
LZO         Good             Very Fast        Very Fast          Low
LZO-RLE     Good             Very Fast        Very Fast          Low
LZ4         Good             Extremely Fast   Extremely Fast     Low
LZ4HC       Very Good        Slow             Extremely Fast     Medium
842         Moderate         Fast             Fast               Very Low
ZSTD        Excellent        Fast             Fast               Medium

Key points about each algorithm:
• LZO: Offers a good balance between compression ratio and speed. It's very fast for both compression and decompression, making it suitable for systems where CPU performance is a concern.
• LZO-RLE: Similar to LZO but with additional run-length encoding. It can provide slightly better compression ratios for certain types of data while maintaining high speed.
• LZ4: Extremely fast compression and decompression speeds, often the fastest option. It provides good compression ratios and is an excellent choice for systems prioritizing speed over maximum compression.
• LZ4HC: A high-compression variant of LZ4. It offers better compression ratios than standard LZ4 but at the cost of slower compression speed. Decompression speed remains extremely fast.
• 842: Developed by IBM, it offers moderate compression with very low memory usage. It's a good option for systems with limited resources.
• ZSTD (Zstandard): Provides excellent compression ratios, often the best among these options. It offers a good balance between compression ratio and speed, making it a strong all-around choice.

What confused me was that ZSTD is described above as having an excellent compression ratio but also medium memory usage. :man_shrugging: Wouldn't an excellent compression ratio lead to low memory usage, or am I misunderstanding what this means? From a usage point of view, why should I care about compression ratio at all? I understand CPU time isn't shown here, but ultimately doesn't fast compression/decompression indicate less CPU time? I'm trying to work out why I wouldn't 'swap' :wink: to LZ4.

Thanks for any thoughts.

I reckon it is always possible to run out of memory.
You just make it less likely in normal scenarios.

That is certainly possible, yes.

I left mine enabled just in case, especially since it doesn’t really get in my way.
As for monitoring, there are tons of options to choose from.
Basically every monitoring software I know can monitor memory usage.
Interpreting the data depends a lot on the application running on your system.

If you're not short on RAM, it is probably better to leave it to the zramSwap, just because it is way faster.
Since you still have the normal swap active, stuff will get moved to it automatically when the fast pool gets used up.

This refers to the memory usage while compressing the data, not while storing it.

A better compression ratio allows you to store more data in the fast swap.
As you correctly stated, at the cost of a little CPU time.
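
Incidentally, if you want to see the actual ratio you are getting on a live system, zramctl from util-linux reports both sizes:

$ zramctl --output NAME,ALGORITHM,DISKSIZE,DATA,COMPR
# DATA is the original size of the stored pages, COMPR their compressed size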

After doing a bit of research myself I left it at ZSTD for my system.
Especially on a normal x86 system that seems to be okay.
On something like a Raspberry Pi it might make sense to use LZ4.
