VictoriaMetrics just uses buildGoModule, so maybe gomod2nix isn’t really needed?
I thought the advantage of gomod2nix was that you get Nix-level caching for each module, versus buildGoModule, which relies on Go’s own module caching, but maybe I misunderstand.
Yeah, gomod2nix is not really needed anymore. I also just use buildGoModule. I assume most things in Nixpkgs use buildGoModule.
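The stock pattern is enough for most things; here’s a minimal sketch (the names and hashes are placeholders, not a real package):

```nix
# Minimal buildGoModule sketch; pname, src, and hashes are placeholders.
{ lib, buildGoModule, fetchFromGitHub }:

buildGoModule rec {
  pname = "example-tool";
  version = "0.1.0";

  src = fetchFromGitHub {
    owner = "example";
    repo = pname;
    rev = "v${version}";
    hash = lib.fakeHash;   # replace with the hash the first failed build prints
  };

  # Hash over the vendored Go modules, filled in the same way.
  vendorHash = lib.fakeHash;

  subPackages = [ "cmd/example-tool" ];
}
```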
For using Go outside of Nixpkgs, if you want an example of a large scale project with a very intricate Go + Nix integration, you might find what Tailscale has done to be interesting. They also have a way to update the vendor hash from systems without Nix (See nardump and update-flake.sh) in addition to integrating with direnv.
Not necessarily highly Go-specific, but I’ve also taken to moving my “CI” tasks into flake checks, such as checking code formatting and running any tests (including NixOS tests, of course). I have several projects that do this, but here’s one.
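Roughly, the pattern looks like this (a sketch only: the system is hard-coded and `packages.x86_64-linux.default` is assumed to be the project’s Go package):

```nix
{
  # Sketch of running "CI" as flake checks.
  inputs.nixpkgs.url = "github:NixOS/nixpkgs/nixos-unstable";

  outputs = { self, nixpkgs }: let
    pkgs = nixpkgs.legacyPackages.x86_64-linux;
  in {
    checks.x86_64-linux = {
      # buildGoModule runs `go test ./...` in its checkPhase by default,
      # so building the package doubles as the unit-test run.
      build = self.packages.x86_64-linux.default;

      # Fail if any Go file is not gofmt-clean.
      gofmt = pkgs.runCommand "check-gofmt" { nativeBuildInputs = [ pkgs.go ]; } ''
        cd ${self}
        test -z "$(gofmt -l .)"
        touch $out
      '';
    };
  };
}
```

Then `nix flake check` runs the lot, both locally and in CI.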
The reason I’m curious about this is that I’ve been wondering how Nix could help make CI/CD faster for building Go binaries and putting those binaries into containers.
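For the container half, the building block I’ve been looking at is dockerTools; a minimal sketch, where `myGoApp` is a placeholder for whatever buildGoModule (or gomod2nix) derivation you have:

```nix
# Sketch: package a Go binary into an OCI image with dockerTools.
{ pkgs, myGoApp }:

pkgs.dockerTools.buildLayeredImage {
  name = "my-go-app";
  tag = "latest";
  contents = [ myGoApp ];
  config = {
    Cmd = [ "${myGoApp}/bin/my-go-app" ];
  };
}
```

You can `nix build` the image and `docker load < result` to try it locally, or push it with skopeo.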
To attempt to summarize, it sounds like buildGoModule is working well for people, with the downside being that there isn’t Nix-level caching of individual Go modules.
This leaves me wondering if gomod2nix makes sense for the “nix strategic direction” or not.
I guess collecting data about how long the alternative buildGoModule/gomod2nix methods take in reality will help inform the decision.
Today, I set up a little repo with a small Go program, and compared the speed and size of building the code into a container in various ways.
The summary is here, with detailed output in the README.md as well.
In theory, you can git clone this and run “make”, and it will build them all for you. Hopefully your results don’t differ much.
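For context, the gomod2nix variant in the repo is roughly this shape (not the exact code; it assumes the gomod2nix overlay is applied so that buildGoApplication is available in pkgs):

```nix
# Rough shape of the gomod2nix variant (names are placeholders).
{ pkgs }:

pkgs.buildGoApplication {
  pname = "hello";
  version = "0.1.0";
  src = ./.;
  # Generated by the gomod2nix tool; each Go module gets its own
  # fixed-output derivation, which is where the extra Nix-level
  # caching is supposed to come from.
  modules = ./gomod2nix.toml;
}
```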
Summary
Maybe I have something wrong with gomod2nix, because it’s not faster, and the container image is a lot larger. I suspect it’s larger because it has pulled in a lot more glibc-related dependencies, but I clearly need to debug this to bring the size down.
Maybe this isn’t a great test, because there aren’t very many modules, and I’m not building a whole bunch of go code, which would get better nix reuse between programs.
Not entirely Nix related
I’ve recently discovered Docker build caching, which is really great. If anyone is building Go within Docker containers, I strongly recommend it. The Docker cache makes building Go really fast!
Today I also tried out Athens GOPROXY caching, which is also pretty good. I’m definitely planning to add Athens caching to our k8s clusters now. However, after banging my head for a while, I couldn’t get nix build to use the GOPROXY, which is annoying. It turns out this is apparently deliberate, although I’m not sure I really understand why (I’m clearly missing something).
```nix
configurePhase = args.configurePhase or (''
  runHook preConfigure

  export GOCACHE=$TMPDIR/go-cache
  export GOPATH="$TMPDIR/go"
  export GOPROXY=off   # <---- DISABLED
  export GOSUMDB=off   # <---- Not sure why they do this. Scary!
```
It sort of looks like you might be able to set GOPROXY via impureEnvVars, but I couldn’t get this to work.
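For anyone who wants to poke at it, the shape of what I was fiddling with was roughly this (a hypothetical sketch: overrideModAttrs targets the fixed-output vendor derivation, which is the only place impureEnvVars can matter at all, attribute names may differ between nixpkgs versions, and this did not work for me):

```nix
# Hypothetical sketch of the override I was experimenting with -- it did NOT
# work for me, so treat it as a starting point, not a recipe.
pkgs.buildGoModule {
  pname = "myapp";               # placeholder
  version = "0.1.0";
  src = ./.;
  vendorHash = "sha256-...";     # placeholder

  # Only the fixed-output vendor derivation can honour impureEnvVars.
  overrideModAttrs = old: {
    impureEnvVars = (old.impureEnvVars or [ ]) ++ [ "GOPROXY" ];
    # Try to point the vendor fetch at the Athens instance (hypothetical URL).
    preBuild = ''
      export GOPROXY=http://athens.example.internal:3000
    '';
  };
}
```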
I also found this related Discourse thread, but again, I’m not clear on how Go being able to pull from a GOPROXY is really a problem. go.sum is supposed to catch any shenanigans.
Does gomod2nix disable cgo by default? If not, you should do that to reach parity with buildGoModule.
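Concretely, I mean something like this, assuming buildGoApplication forwards extra attributes to the underlying derivation the way buildGoModule does:

```nix
pkgs.buildGoApplication {
  pname = "hello";
  version = "0.1.0";
  src = ./.;
  modules = ./gomod2nix.toml;

  # Exported as an environment variable during the build; for a pure-Go
  # program this yields a statically linked binary, which should also keep
  # glibc out of the image closure.
  CGO_ENABLED = 0;
}
```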
It only caches the downloads independently, which really only makes a dent when you have many independent binaries that share the same dependencies. In this case, it makes sense that it didn’t make much of a difference, as there’s virtually no delta between the work either approach has to do.