I am trying to migrate my container infrastructure (both CI and production) to Nix-built images.
I managed to build an image using dockerTools.buildImage, but it turned out to be twice the size of the one crafted with Alpine Linux. After inspecting the image I found that it contains the whole LLVM package. Indeed, a dependency of the package I specify in buildImage.contents has nativeBuildInputs = [ llvm_8 ], but that is only there so the dependency can statically link against it; there is no need for the whole LLVM distribution, nor for llvm-lib, in the Docker image:
I understand that I can modify the Docker derivation to manually extract binaries from my app package, but I wonder if there is any way to get rid of the unneeded nativeBuildInputs in the first place, because I suspect llvm won't be garbage collected as long as a package whose transitive nativeBuildInputs contain it is installed.
My current docker derivation:
nativeBuildInputs = [ llvm_8 ] is present in one of myservice's dependencies.
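Roughly along these lines (a minimal sketch, not my literal code; `myservice` stands in for the real package attribute and the tag/Cmd values are approximate):

```nix
# Sketch of the image derivation; everything reachable from `contents`
# ends up in the image, including the full runtime closure of myservice.
{ pkgs ? import <nixpkgs> {} }:

pkgs.dockerTools.buildImage {
  name = "myservice";
  tag = "latest";
  contents = [ pkgs.myservice ];   # placeholder for the actual package
  config = {
    Cmd = [ "/bin/myservice" ];
  };
}
```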
myservice is nothing more than a derivation generated by crate2nix, with nativeBuildInputs overridden to satisfy the linker:
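Something like this (a sketch; the crate name and the `defaultCrateOverrides` pattern are assumptions about how the override is wired up):

```nix
# Sketch: crate2nix-generated Cargo.nix with nativeBuildInputs overridden
# so the linker can find LLVM. llvm_8 is only needed at link time, but its
# store path still ends up embedded in the output, dragging it into the closure.
(pkgs.callPackage ./Cargo.nix {
  defaultCrateOverrides = pkgs.defaultCrateOverrides // {
    myservice = attrs: {           # crate name is a placeholder
      nativeBuildInputs = [ pkgs.llvm_8 ];
    };
  };
}).rootCrate.build
```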
protobuf makes sense: it's likely only used to generate the language bindings, and afterwards Nix won't find any references to its store path, so it doesn't end up in the result.
For the others, you need to check the result outputs to see where the store paths of those packages appear. Sometimes it's debug info, sometimes it's a configuration file that contains the store path, and then Nix determines that it needs to include those paths in the closure.
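For example, something like this finds every file that embeds a given store path. The demo below fabricates a fake binary with a fake llvm store path so it's self-contained; against a real build you'd grep your `result/` output with the actual path:

```shell
# Illustrative demo: a binary that embeds a store path matches a plain
# grep even when ldd/readelf show nothing, because the path is just a
# string baked into the file, not a dynamic-linking entry.
mkdir -p result/bin
printf 'ELF\000/nix/store/0000000000000000000000000000000a-llvm-8.0.1\000' \
  > result/bin/myservice

# List every file under result/ that references an llvm store path.
grep -Rl --binary-files=text '/nix/store/[a-z0-9]*-llvm' result/
```

`--binary-files=text` forces grep to search files it would otherwise skip as binary.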
Indeed, grep shows that the myservice binary contains the llvm store path, but ldd run against it does not show llvm (it shows libxml, libffi, etc., which is expected since those aren't statically linked). readelf -a myservice doesn't list the llvm store path either.
Running grep against the stripped binary also matches the llvm store paths, both llvm-8.0.1 and llvm-8.0.1-lib.
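If the embedded path turns out to be a leftover string rather than a real runtime dependency, one workaround I'm considering is nixpkgs' `remove-references-to` tool, which rewrites the store-path strings so the closure no longer pulls llvm in (a sketch, untested; `myservice` is a placeholder):

```nix
# Sketch: scrub llvm store-path strings from the binary in a fixup step.
# Only safe if nothing actually dlopens or reads those paths at runtime.
myservice.overrideAttrs (old: {
  nativeBuildInputs = (old.nativeBuildInputs or []) ++ [ pkgs.removeReferencesTo ];
  postFixup = (old.postFixup or "") + ''
    remove-references-to -t ${pkgs.llvm_8} $out/bin/myservice
  '';
  # The llvm-8.0.1-lib output may need the same treatment.
})
```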
I'll get back when I've dug up more info.