I’m attempting to package pyqt6 in order to finally update qutebrowser’s horribly out-of-date browser backend. I’m stuck now because I cannot get qt6.webengine to build successfully. The strange thing is that it fails because it apparently receives SIGTERM somehow, and it doesn’t just happen to me: it also happened on Hydra (at a different point in the build, no less).
Annoyingly, the build finishes just fine outside of the sandbox, and the only clue here is the message about receiving a termination signal. Does anybody have any nice theories on what could be sending this signal? I’m completely confused, and debugging is a massive pain since it takes over an hour just to reach the failure.
My only guess is that some kind of limit is being reached, but I had over 10 GB of free memory around the time the build failed.
After comparing my user settings with the build container, my best guess is that ulimit -n (open files) needs to be increased: it’s the only value that differs significantly, and its value (4096) could conceivably be too low for a project as massive as Chromium.
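If anyone wants to check the same thing on their machine, here’s a quick sketch (assuming a Linux host with nix-daemon running; the /proc lookup is just for illustration):

```shell
# Open-files limits of the current shell, for comparison with the sandbox.
ulimit -Sn   # soft limit
ulimit -Hn   # hard limit

# Builds inherit nix-daemon's limits; if it's running, inspect them directly.
pid=$(pidof nix-daemon 2>/dev/null | awk '{print $1}')
if [ -n "$pid" ]; then
  grep 'Max open files' "/proc/$pid/limits"
fi
```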
The service running Nix on Hydra will need LimitNOFILE increased, or this package will never build successfully. Does anyone know who can take care of that (or, if the code is on GitHub, where it lives)?
That’s what I was wondering as well, but for whatever reason Chromium has not been affected so far. I did successfully build it yesterday from this branch after increasing the limit in nix-daemon’s systemd service on my system. I was planning on uploading it to my Cachix later in case anyone else wants to use it in the meantime.
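For reference, by “increasing the limit in nix-daemon’s systemd service” I mean a drop-in override along these lines (the exact number is up to you; 65536 is just an example):

```
# /etc/systemd/system/nix-daemon.service.d/limits.conf
[Service]
LimitNOFILE=65536
```

followed by a `systemctl daemon-reload` and a restart of nix-daemon so running builds pick it up.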
edit
If anybody has been dying for a qutebrowser backend update, you can build it from the branch I linked above. I’ve pushed it to this Cachix. I’ll keep working to get this merged, but I have some hacks to fix first.
The build is now failing again after the bump to 6.4.0. It seems to fail toward the end, so it’s either the open-files limit again, or perhaps an OOM kill (another likely scenario, since the end of the build holds a ton of “jumbo” objects in memory). I don’t have the privileges on Hydra to do anything about it in either case, so I have to complain here again.
I do have a build of it in my personal cache, so the build itself is definitely not broken.
This is also suggested by the RAM graph of the machine that was running the build at the time: Grafana
The log shows only -j24 was passed. I wouldn’t expect that to have huge RAM requirements… unless the build system is trying to be clever, e.g. detecting the CPU count itself.
The only shorter-term approach I can see is some kind of tweak in the derivation to avoid requiring that much RAM (anyone should feel free to suggest one, ideally as a PR).
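One direction such a tweak could take (purely a sketch, not what the derivation currently does): scale the job count to available RAM instead of CPU count, since the link-heavy tail is where memory peaks. The ~2 GiB-per-job figure below is an arbitrary assumption, not a measured number:

```shell
# Rough heuristic: allow one build job per ~2 GiB of available RAM,
# so the jumbo-object tail doesn't trip the OOM killer.
avail_kib=$(awk '/MemAvailable/ {print $2}' /proc/meminfo)
jobs=$(( avail_kib / (2 * 1024 * 1024) ))
if [ "$jobs" -lt 1 ]; then jobs=1; fi
echo "would pass -j$jobs instead of -j24"
```

For local builds, simply passing `--cores` and `--max-jobs` to nix-build achieves the same trade of time for memory.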