How to work around the `patchelf: maximum file size exceeded` error?

While attempting to upgrade tensorflow (PR here), I ran into a strange patchelf error:

```
patchelf: maximum file size exceeded
```

This only happens when running patchelf on files built with the CUDA extensions; the CPU-only tensorflow builds without any issue.

Does anyone know what this means? How can I work around it to package tensorflow? I don’t have much control over the size of the built binaries after all…


Reading patchelf fails on big files · Issue #305 · NixOS/patchelf · GitHub, it looks like this is a limitation in patchelf itself: it hardcodes how much the original binary is allowed to grow during rewriting (the error message could be clearer).

Once Remove limits on output file size. by andrewla · Pull Request #256 · NixOS/patchelf · GitHub is merged, you could perhaps use the patchelfUnstable package as a workaround until a fixed version lands in stdenv.
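One possible sketch of that workaround (untested, and assuming tensorflow is built with `buildPythonPackage` and that entries in `nativeBuildInputs` end up earlier in `PATH` than stdenv's default patchelf) would be:

```nix
# Sketch, not a confirmed fix: pull in patchelfUnstable so the fixup
# phase's RPATH/shrink edits run the newer patchelf instead of the
# one bundled with stdenv. Attribute names here are assumptions.
{ buildPythonPackage, patchelfUnstable, ... }:

buildPythonPackage {
  pname = "tensorflow";
  # ... the rest of the derivation unchanged ...
  nativeBuildInputs = [ patchelfUnstable ];
}
```

Whether this actually shadows stdenv's patchelf depends on PATH ordering in the builder, so it is worth verifying with `patchelf --version` inside the build.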


I’m actually testing that patch right now, but I have not been able to get it to work. I’m still getting the same error message, which is very surprising, since after that patch the word “maximum” doesn’t even appear in patchelf.cc, AFAICT.

I documented everything in more detail starting here: Remove limits on output file size. by andrewla · Pull Request #256 · NixOS/patchelf · GitHub

It turns out this issue was somewhat subtle, but patching patchelf with that PR does fix it.
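For reference, one way to apply a not-yet-merged PR on top of the pinned patchelf is an `overrideAttrs` with `fetchpatch`; this is a sketch, and the `.patch` URL relies on GitHub's convention of serving any PR as a patch file:

```nix
# Hedged sketch: build patchelf with the PR #256 fix applied.
# lib.fakeSha256 is a placeholder; the first build failure reports
# the real hash to substitute in.
{ patchelf, fetchpatch, lib }:

patchelf.overrideAttrs (old: {
  patches = (old.patches or [ ]) ++ [
    (fetchpatch {
      url = "https://github.com/NixOS/patchelf/pull/256.patch";
      sha256 = lib.fakeSha256;
    })
  ];
})
```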

The output limit was addressed here: Remove limits on output file size. by andrewla · Pull Request #256 · NixOS/patchelf · GitHub
