I use Nix to describe everything relevant to my Go project: how to build it, how to deploy it on NixOS, and the exact tools used for development.
There is, however, one thing I struggle with: writing tests. I have a couple of unit tests written in Go. These work fine, as they are executed as part of the build process (and they are fast, so that is acceptable). I don’t have complex integration tests yet, but they are not a problem either, since there are lots of articles about writing those (provisioning virtual machines running the services and plumbing it all together), with a lot of support in nixpkgs.
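For context, the kind of integration test those articles describe is a NixOS VM test, roughly like this sketch (the service name and port are made-up placeholders, not anything from my actual flake):

```nix
# A hypothetical NixOS VM test; `my-service` only stands in for
# whatever the flake actually ships.
checks.x86_64-linux.integration = pkgs.nixosTest {
  name = "my-service";
  nodes.machine = { ... }: {
    services.my-service.enable = true;
  };
  testScript = ''
    machine.wait_for_unit("my-service.service")
    machine.wait_for_open_port(8080)
  '';
};
```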
There are still a lot of tests that fit in neither category: all those “utility” tests we know from classical CI systems, like checking the code style, checking that the code coverage is high enough, and many more.
How do I do these in the Nix flakes world? Flakes define a `checks` output, but there is no documentation apart from “put the default package here to check that it can be built”.
I encountered nixt, which looks very promising, but it also lacks documentation, and it isn’t clear how it integrates with flakes.
> How do I do these in the Nix flakes world? Flakes define a `checks` output, but there is no documentation apart from “put the default package here to check that it can be built”.
Basically, you could put a different package there to check that this other package can be built. For example, a package that takes the source as an input, runs all the utility checks, puts a brief log into the output path, and fails if any of the tests fail. Presumably you want to output the details at least on failure, and maybe in all cases, so that the Nix build log of the failed test package contains what you want to know about the test failures.
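A minimal sketch of that idea, assuming an x86_64-linux flake and using gofmt as a stand-in for the utility check (adapt the tool and system to taste):

```nix
{
  inputs.nixpkgs.url = "github:NixOS/nixpkgs/nixos-unstable";

  outputs = { self, nixpkgs }:
    let
      pkgs = nixpkgs.legacyPackages.x86_64-linux;
    in {
      # `nix flake check` builds every derivation under `checks`.
      checks.x86_64-linux.gofmt = pkgs.runCommand "check-gofmt"
        { nativeBuildInputs = [ pkgs.go ]; } # gofmt ships with the Go toolchain
        ''
          cd ${self}
          # Keep the list of offending files as the check's output/log...
          gofmt -l . | tee $out
          # ...and fail the check if that list is non-empty.
          [ ! -s $out ]
        '';
    };
}
```

Because the check is an ordinary derivation, a passing run is cached like any other build and only re-runs when the source changes.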
You might want to look at nixpkgs/pkgs/tools/text/link-grammar/default.nix to see how it defines passthru.tests, just as an example of what a test-running pseudo-package might look like.
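The idiom there, reduced to a hypothetical sketch (`mytool` is made up; the real file is more involved):

```nix
pkgs.stdenv.mkDerivation (finalAttrs: {
  pname = "mytool";
  version = "1.0";
  src = ./.;

  # The package carries its own test derivation; build it with
  # `nix-build -A mytool.tests.version` (or wire it into flake checks).
  passthru.tests.version = pkgs.runCommand "mytool-test-version" { } ''
    ${finalAttrs.finalPackage}/bin/mytool --version > $out
  '';
})
```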
Thanks. I was thinking about something like that, but I didn’t know about runCommand, and I was also unsure whether this was the way to go. Sometimes I can produce a meaningful result, like a coverage report, but mostly the script will look like `theCommand && echo "ok" > $out`.
This is not that bad in the end. Maybe one more question: should I include the_linter in buildInputs, or should I just reference it as `${pkgs.the_linter}/bin/the_linter`?
It works both ways; the difference is only whether the_linter ends up in buildInputs or only in the derivation’s inputDrvs. Which one is better?
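Concretely, the two spellings side by side, keeping the hypothetical `the_linter` from above:

```nix
# Variant 1: buildInputs puts the tool on PATH inside the builder
# (nativeBuildInputs would be the strictly correct attribute for a
# build-time tool, though for a native build both work).
pkgs.runCommand "lint" { buildInputs = [ pkgs.the_linter ]; } ''
  the_linter ${self} && echo "ok" > $out
''

# Variant 2: interpolating the store path records the tool in the
# derivation's inputDrvs without putting anything on PATH.
pkgs.runCommand "lint" { } ''
  ${pkgs.the_linter}/bin/the_linter ${self} && echo "ok" > $out
''
```

Either way the linter becomes a build-time dependency of the check, so the choice is mostly a matter of style.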
Thank you. I have already read that article, and while it is very interesting and useful for integration tests, it is overkill for simple tasks like linting.