I actually love to see people sharing when Nix didn’t work for them. These kinds of failure reports are important - they’re why solutions get created.
I wasn’t aware of the jOOQ approach that requires a running database server during the build. How does that work? Is the database server supposed to have a schema in it that the build generates code for? Seems like kind of a chicken-and-egg problem.
This mirrors my own experience of doing JVM stuff on NixOS. We were on AWS, so we had a nicer time though: fetching a FOD from S3 requires network access, and machine-level identity credentials are available via the metadata service, so the whole issue of managing credentials just goes away. We ended up quite happy with a build pipeline that doesn’t build the application using Nix, but instead uploads a fixed version to S3 and generates a NixOS module that fetches it at nixos-rebuild time and does the rest of the service setup.
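Very roughly, the generated module boils down to something like this sketch (bucket, artifact path, service name and the fetch details are all made up for illustration; the real pipeline also fills in the actual output hash):

```nix
# Sketch of a generated module: the jar is built outside of Nix, uploaded to
# S3, and fetched here as a fixed-output derivation at nixos-rebuild time.
{ config, pkgs, lib, ... }:
let
  # Fixed-output derivation: it gets network access during the build, and on
  # an EC2 instance the aws CLI can authenticate via the instance metadata
  # service, so no credentials appear in the expression.
  appJar = pkgs.runCommand "my-app.jar" {
    nativeBuildInputs = [ pkgs.awscli2 ];
    SSL_CERT_FILE = "${pkgs.cacert}/etc/ssl/certs/ca-bundle.crt";
    outputHashMode = "flat";
    outputHashAlgo = "sha256";
    outputHash = lib.fakeHash; # replaced with the artifact's real hash
  } ''
    aws s3 cp s3://my-artifacts-bucket/my-app/1.2.3/my-app.jar "$out"
  '';
in
{
  systemd.services.my-app = {
    description = "JVM service whose jar was built outside of Nix";
    wantedBy = [ "multi-user.target" ];
    serviceConfig = {
      ExecStart = "${pkgs.jre}/bin/java -jar ${appJar}";
      DynamicUser = true;
      Restart = "on-failure";
    };
  };
}
```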
About the authenticated resource problem: I had the same issue when trying to fetch resources from the GitLab API, which requires the token to be put in a header. The solution I settled on is to build the package in CI, passing the token through the environment:

NIX_CURL_OPTS="--header 'PRIVATE-TOKEN: mySecret'" nix build --impure .#my-authenticated-resource

Then I upload the result to a Nix cache and rely on that cache in other environments to fetch the resource.
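For what it’s worth, the flake output itself can stay completely token-free. A rough sketch of what .#my-authenticated-resource might look like (the GitLab URL and project id are invented for the example):

```nix
{
  inputs.nixpkgs.url = "github:NixOS/nixpkgs/nixos-unstable";

  outputs = { self, nixpkgs }:
    let
      pkgs = nixpkgs.legacyPackages.x86_64-linux;
    in {
      # No token appears here: CI injects it at build time via the
      # environment, as in the command above.
      packages.x86_64-linux.my-authenticated-resource = pkgs.fetchurl {
        name = "my-authenticated-resource";
        url = "https://gitlab.example.com/api/v4/projects/1234/packages/generic/tool/1.0.0/tool.tar.gz";
        hash = pkgs.lib.fakeHash; # replace with the real content hash
      };
    };
}
```

Since the output hash is pinned, any other environment that runs nix build just substitutes the path from the cache and never needs the token.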
Another solution I considered but haven’t tried is to run a (local) proxy which would take care of the authentication, a bit like polygon/nix-buildproxy on GitHub, which provides reproducible HTTP/HTTPS responders to builds.
It’s for sure too bad there are no clean ways to handle this…
You either configure jOOQ to set up the database by passing an init DDL script, or you configure something like Flyway to apply all your migration scripts to bring the database into the right state. The latter is common when applications migrate the database themselves.