Hello, I currently have the issue that my laptop freezes after being left powered on overnight or by the end of the day (20-30 tabs loaded in memory and about 50-100 unloaded ones). When launching Firefox I start out with about 1 GB of RAM usage, but over a day it creeps up to the 16 GB limit, which frankly is absurd, so I would like to figure out what I’m doing wrong here.
I am looking for help with tools I can set up to watch the memory changes, to see whether anything can be learned while leaving the machine unattended, or whether this is a known problem with Wayland/KDE/extensions/Firefox itself. As this has become a daily issue I am quite keen on getting it resolved, thanks in advance.
Tried:
Swapping to a later Firefox version (128esr → 130.0.1)
Removing all non-vital extensions.
Extensions:
7TV
close-inactive-tabs
Consent-O-Matic
Dark Reader
DeArrow
Enhancer for YouTube
I don’t care about cookies
NixOS Options Search Engine
NixOS Packages Search Engine
Plasma Integration
Sidebery
SponsorBlock for YouTube
uBlock Origin
Unhook
Unhook Twitch
If you’re a tab hoarder like me, though, you can easily reach a dozen or so GiB of tabs in a day without anything abnormal happening.
If you just have a bunch of tabs without any one using a ton of memory, you can trim the amount a little using about:unloads (or by, shudder, closing tabs).
It’s also a common firefox thing in my experience, I’ve had better results with the chromium family of browsers (though you get less privacy in some respects, so pick your poison).
I see that the thing I failed to mention is what I actually see happening: I open up a bunch of tabs, leave them around, and the memory allocation of Firefox keeps increasing, approaching the RAM limit of my device.
Whenever I resume and open 1-2 new tabs, it freezes, since I don’t have swap. I end up having to cut power to the computer and reset it.
I have been trying to use about:processes and about:unloads; both of these tools, however, lack the ability to tell me why the RAM usage keeps increasing over time without any new tabs being opened, just the existing loaded tabs growing larger.
I guess one way to work around it would be to make/find an extension that calls the same function that the Unload button in about:unloads does. I would rather find the source of the problem than hack in a fix, though.
Yeah, the Chromium project’s proximity to Google is a bit of a deal breaker for me, and sadly that just makes most other browsers a no-go for me as well.
As I said in my previous reply, I’d rather figure it out, since doing so might benefit the Firefox project if it turns out not to be just a single-user issue.
I’m looking for tools that can track memory usage over time and, if possible, give me information on what’s being allocated over that time. Any information gained this way should help me make an initial assessment of whether this is a me problem, a NixOS problem, or a Firefox problem.
Anything that would yield similar information by other means than my presumed path forward is of course also appreciated, and if you think I’m being stupid in my approach I’d love to hear about it (an alternative approach would be appreciated but is ultimately optional).
Like, I don’t know whether I could set up Valgrind to track an already-compiled program and use it to achieve what I want here; wondering if anyone knows anything.
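For concreteness, something as crude as the sketch below is roughly what I have in mind, unless a proper tool for this already exists. It just sums the RSS of every process with “firefox” in its command line (a heuristic that should also catch the content processes, since they share the binary path) and appends it to a CSV; the log path and the sampling interval are arbitrary choices of mine.

```python
#!/usr/bin/env python3
"""Crude Firefox memory logger: append total RSS to a CSV every minute.

Rough sketch only. It sums the RSS of every process whose command line
contains "firefox" (a heuristic that happens to catch the content
processes too, since they share the binary path); the log path and
interval are arbitrary choices.
"""
import csv
import time
from pathlib import Path

LOG = Path.home() / "firefox-rss.csv"   # arbitrary output location
INTERVAL = 60                           # seconds between samples


def firefox_rss_kib():
    """Return (total RSS in KiB, {pid: RSS in KiB}) for firefox-ish processes."""
    total, per_pid = 0, {}
    for proc in Path("/proc").iterdir():
        if not proc.name.isdigit():
            continue
        try:
            cmdline = (proc / "cmdline").read_bytes().replace(b"\0", b" ")
            if b"firefox" not in cmdline.lower():
                continue
            # VmRSS in /proc/<pid>/status is reported in kB
            for line in (proc / "status").read_text().splitlines():
                if line.startswith("VmRSS:"):
                    kib = int(line.split()[1])
                    total += kib
                    per_pid[proc.name] = kib
                    break
        except (FileNotFoundError, ProcessLookupError, PermissionError):
            continue  # process exited mid-scan or belongs to someone else
    return total, per_pid


if __name__ == "__main__":
    with LOG.open("a", newline="") as f:
        writer = csv.writer(f)
        while True:
            total, per_pid = firefox_rss_kib()
            # timestamp, total RSS (KiB), number of firefox processes
            writer.writerow([int(time.time()), total, len(per_pid)])
            f.flush()
            time.sleep(INTERVAL)
```

That would at least tell me when and how fast the usage grows, though not why; for the why, I’m guessing that saving an about:memory report at two points in time and using its “Load and diff” button gets me closer than Valgrind would.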
If it happens without your input and is that critical, you’ve likely got a website or two open that has a memory leak.
I’ve seen singular tabs of innocuous websites grow to many gigabytes of memory using the wonders of modern Javascript.
Keep an eye on about:processes (sorted by memory usage) to catch these.
You should have swap. Not just for OOM but in general.
Firefox has functionality built in to unload tabs automatically on low memory. I can’t remember the config flag off the top of my head, but you’d set it to the minimum amount of available memory in MiB below which Firefox starts unloading tabs by itself.
There’s also another config flag that specifies how long a tab must have been inactive before it can be unloaded this way; it defaults to 10 min, which is usually too long.
Setting this up correctly was critical when I was using an 8GB RAM laptop.
Makes a lot of sense, thank you for the insights. In particular this:
It tells me I should find that setting, and hopefully it will solve my problem.
Everything else has also been helpful, but more informative than actionable; keeping an eye on something is what I’ve been trying to avoid, as the problem is infrequent enough not to warrant it, and if I’m already committing myself to a routine action I’d rather just close down Firefox regularly than improvise a monitoring routine.
Great information, and I agree that the swap recommendation is outdated; I’m buying 16 GB of extra RAM in the future anyway.
I thank you both for the comments and will append any permanent solutions I find.
The ability to page out anonymous pages is not an “outdated requirement”. Contrary to popular belief, handling OOM situations more gracefully is not the primary purpose of swap.
browser.low_commit_space_threshold_mb is the setting you want to set to a few gigs and you probably want to lower the delay before a tab is able to be unloaded to a minute or so.
RAM is one of the best investments you can make as a chronic tab hoarder
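If you want to pin those prefs so they survive pruning the profile, a throwaway sketch along these lines appends them to user.js. The profile path and the 4 GiB threshold are placeholders, and I’m recalling the name of the inactivity-delay pref from memory, so confirm both in about:config before relying on them.

```python
#!/usr/bin/env python3
"""Append tab-unloading prefs to a Firefox profile's user.js.

Sketch only: PROFILE is a placeholder (look yours up in about:profiles),
the threshold is a guess for a 16 GiB machine, and the name of the
inactivity-delay pref is recalled from memory; verify both in about:config.
"""
from pathlib import Path

PROFILE = Path.home() / ".mozilla/firefox/xxxxxxxx.default"  # placeholder path

PREFS = {
    # Start unloading tabs when available memory drops below ~4 GiB
    "browser.low_commit_space_threshold_mb": 4096,
    # Allow unloading after 60 s of inactivity instead of the 10 min default
    # (pref name recalled from memory; double-check it exists on your build)
    "browser.tabs.min_inactive_duration_before_unload": 60_000,
}


def as_user_js(prefs):
    """Render a dict of prefs as user.js lines."""
    def literal(value):
        if isinstance(value, bool):
            return "true" if value else "false"
        if isinstance(value, int):
            return str(value)
        return f'"{value}"'
    return [f'user_pref("{name}", {literal(value)});' for name, value in prefs.items()]


if __name__ == "__main__":
    lines = as_user_js(PREFS)
    with (PROFILE / "user.js").open("a") as f:
        f.write("\n".join(lines) + "\n")
    print("\n".join(lines))
```

Since you’re on NixOS, you may prefer to declare the same prefs wherever you already manage your Firefox profile (e.g. through Home Manager’s profile settings) instead of editing user.js by hand.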
Don’t worry, I’ve read that one, it comes up every time swap is discussed. I fundamentally consider swap outdated nevertheless, as it does not reduce memory pressure in the scenarios where I would personally find it valuable, and simply leads to thrashing instead of hanging (or OOM-killing). At best you get a couple of extra seconds to manually kill the thing that would’ve been OOM-killed anyway - and you have to pray the thrashing isn’t so bad that you can still type pkill into a terminal quickly enough… And if the machine is unattended, swap is even less needed.
The reason I don’t value swap is that if thrashing occurs, it’s often already too late - disk I/O is orders of magnitude slower than RAM, so at that point the CPU is wasting more cycles shuffling pages around than actually doing whatever you wanted it to. Including, possibly, killing the offending process.
I’m not trying to hide my edits, just a bad habit of re-editing repeatedly…
I agree that swap is not about OOM, it’s about managing memory pressure, which doesn’t really work if your workload just needs more RAM. Swap is not RAM, and the CPU won’t treat it as such.
Sticking this in a collapsible block because this wall of text feels very OT now, admittedly on me here:
Having swap is a reasonably important part of a well functioning system. Without it, sane memory management becomes harder to achieve.
subjective statement, needs something to back this up, which I’m sure they’ll get to later in the article
Swap is not generally about getting emergency memory, it’s about making memory reclamation egalitarian and efficient. In fact, using it as “emergency memory” is generally actively harmful.
agreed with the last sentence, the first bit is a claim that needs support
Disabling swap does not prevent disk I/O from becoming a problem under memory contention. Instead, it simply shifts the disk I/O thrashing from anonymous pages to file pages. Not only may this be less efficient, as we have a smaller pool of pages to select from for reclaim, but it may also contribute to getting into this high contention state in the first place.
agreed
The swapper on kernels before 4.0 has a lot of pitfalls, and has contributed to a lot of people’s negative perceptions of swap due to its overeagerness to swap out pages. On kernels >4.0, the situation is significantly better.
I don’t recall what linux was like prior to 4.0, but back then I was using dinky systems with HDDs so I’ll take it at face value. I read this as “swapping used to be terrible and we’ve improved it”, but I wouldn’t exactly call that the same as “swap is better than no-swap”
On SSDs, swapping out anonymous pages and reclaiming file pages are essentially equivalent in terms of performance and latency. On older spinning disks, swap reads are slower due to random reads, so a lower vm.swappiness setting makes sense there (read on for more about vm.swappiness).
Reasonable, but doesn’t seem to provide a point for or against using swap, at best this is saying “having swap vs. no-swap is functionally equivalent because you’re writing to disk in both cases”.
Disabling swap doesn’t prevent pathological behaviour at near-OOM, although it’s true that having swap may prolong it. Whether the global OOM killer is invoked with or without swap, or was invoked sooner or later, the result is the same: you are left with a system in an unpredictable state. Having no swap doesn’t avoid this.
agreed
You can achieve better swap behaviour under memory pressure and prevent thrashing by utilising memory.low and friends in cgroup v2.
I’m not a cgroup expert so again I’ll take this at face value.
But I don’t think most people who are instructed/suggested to use swap know cgroups well enough either, so unless memory.* is preconfigured to some sane value, again I struggle to see how swap improves the situation.
For reference, I’m looking at the “Control Group v2” section of the Linux kernel documentation, and I find it a bit overwhelming to work out how I should be managing memory from it. I’m sure there’s a Stack Exchange question answering this in detail, but I’ll also admit I’m a bit too lazy to spend that much time on manually configuring memory management.
The suggestion appears to reduce to “get good at cgroups if you want swap to be useful”, at which point idk, swap just seems very… niche?
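That said, to make the memory.low suggestion concrete for myself: the raw interface seems to boil down to a handful of files under /sys/fs/cgroup, roughly as in the sketch below. The group name and the limits are made-up examples, it needs root or a delegated subtree, and on a systemd machine you’d presumably express this as MemoryLow=/MemoryHigh= on a unit or a systemd-run scope rather than poking the files directly.

```python
#!/usr/bin/env python3
"""Put Firefox into its own cgroup v2 group with soft memory limits.

Sketch of the raw cgroupfs interface only: needs root (or a delegated
subtree), assumes the memory controller is enabled in the parent's
cgroup.subtree_control, and on a systemd machine you would normally
express this as MemoryLow=/MemoryHigh= on a slice or a systemd-run
scope instead of poking the files directly. Group name and limits are
arbitrary examples.
"""
import sys
from pathlib import Path

CGROUP = Path("/sys/fs/cgroup/browser")  # example group name
MEMORY_LOW = 2 * 1024**3    # protect ~2 GiB of the browser's pages from reclaim
MEMORY_HIGH = 12 * 1024**3  # throttle and reclaim aggressively above ~12 GiB


def setup(pids):
    CGROUP.mkdir(exist_ok=True)
    (CGROUP / "memory.low").write_text(str(MEMORY_LOW))
    (CGROUP / "memory.high").write_text(str(MEMORY_HIGH))
    for pid in pids:
        # one PID per write; this migrates the process out of its old cgroup
        (CGROUP / "cgroup.procs").write_text(str(pid))


if __name__ == "__main__":
    # e.g.: sudo python3 firefox-cgroup.py $(pgrep -f firefox)
    setup(int(p) for p in sys.argv[1:])
```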
The rest of the article appears to make the case like this (and I read “memory pressure” and “memory contention” to mean the same thing here):
Some pages are unreclaimable due to a lack of backing store, and are forced to stay in-memory in the absence of swap
Under lower memory contention, swap can be used to improve cache hits by freeing up main memory
Under higher memory contention, thrashing may be avoidable due to random chance if swap is present; in its absence, the likelihood of thrashing increases as some pages are forced to stay in-memory as mentioned prior
Under memory usage spikes, thrashing increases with swap, and without swap the OOM killer sweeps in sooner
However, I would point out the following:
Under lower memory contention, with sufficient RAM, I don’t see how swap matters, as there’s enough RAM to just shove more pages in
Under higher memory contention, the article’s claim seems self-contradictory. If some memory was unreclaimable in the first place, then swapping it out will lead to it being swapped back in later on, again leading to thrashing. And if the memory was in fact reclaimable, then going without swap appears to allow it to be reclaimed instead of being swapped out only to be swapped back in later. I don’t see how swap improves the situation here.
Under memory usage spikes, they explicitly say that neither is objectively better nor worse, and though they rightly say that if the OOM killer shows up it’s probably too late, this just seems to be a special case of the “higher memory contention” mentioned above, so I’d make the same point here.
Their conclusion is “swap is a useful tool to allow equality of reclamation of memory pages”, and I fundamentally do not see how this is true (because as far as I can tell, proper reclamation is just as likely, if not more so, without swap).
If anything the article seems to make a much stronger case for using cgroup v2 (which I don’t disagree with, but that’s orthogonal to the swap discussion).