Allow internet access during checkPhase?

I’m trying to package a library (pytorch-lightning) whose central feature is retrieving models & datasets from the internet and running them locally. A common workflow in distributed ML training is to save model checkpoints to a centralized server so that training can resume if a worker fails or is killed.

The build itself is pure, but the py.test suite understandably requires internet access; after all, the code that uses requests is exactly what is being tested.

To package for nixpkgs, I could manually disable all tests that require network access (21 tests). This makes me unhappy for two reasons:

  • the maintenance burden is high
  • I actually want these tests to be run to verify correctness

It seems to me that forbidding network access during checkPhase is an anti-pattern. I think the intention is to prevent committers from intentionally or inadvertently introducing impurities into /nix/store, but it seems we throw the baby out with the bathwater. Many packages across nixpkgs run no tests because of this restriction, particularly webservers (e.g. gunicorn), which is concerning when you consider that some people are using NixOS in production for webservers with source code modifications & no tests.

What do you think of the prospect of allowing network access during checkPhase? Perhaps there’s a way to do this that also maintains sandbox integrity, e.g. running checkPhase with read permissions but not write permissions to $out?

> What do you think of the prospect of allowing network access during checkPhase? Perhaps there’s a way to do this that also maintains sandbox integrity, e.g. running checkPhase with read permissions but not write permissions to $out?

  1. Ideally, upstream deployment changes would not break our tests… (for grabbing the models, I guess it would be nice to download the models with a fetchurl, then serve them under the proper URLs from a local lighttpd (or whatever) instance and test against that server)

  2. You can write a fixed-output derivation that runs the networked tests and then either fails or produces «${version} tested OK» as its output. So the successful output is known in advance, but success or failure is determined by actually running the tests (with network access)
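A minimal sketch of that pattern (hypothetical package and attribute names; the outputHash would be the hash of the exact success string written below):

```nix
# Hypothetical sketch: a fixed-output derivation that is granted network
# access, runs the networked tests, and produces a known output on success.
{ stdenv, pytorch-lightning }:

stdenv.mkDerivation {
  name = "pytorch-lightning-networked-tests";

  outputHashMode = "flat";
  outputHashAlgo = "sha256";
  outputHash = "..."; # hash of the «${version} tested OK» file

  buildCommand = ''
    # run the tests that need the network here; any failure aborts the build
    # (actual test invocation elided -- it depends on the package)
    echo "${pytorch-lightning.version} tested OK" > $out
  '';
}
```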

I don’t see how this would help in the gunicorn case? Also seems like a lot of work—every time the test includes a new model, I’d have to manually add this.

This sounds promising, thanks! Are you aware of an example? I didn’t fully grok what you were describing.

> I don’t see how this would help in the gunicorn case? Also seems like a lot of work—every time the test includes a new model, I’d have to manually add this.

Well, it is not even clear from gunicorn’s description why it would need remote networking.

But yeah, the ideal case is only feasible if the upstream package is clean enough to let us generate the fetchers (as we do for the dependencies in the LibreOffice case).

> This sounds promising, thanks! Are you aware of an example? I didn’t fully grok what you were describing.

Well, does reading fetchTarball code help?

I guess you need something like http://profpatsch.de/talks/nix/nixpkgs-tests/ to compose stuff.

Doing that with a fixed-output derivation is likely to give you problems. It doesn’t matter which build inputs change; the fixed-output derivation is never rebuilt unless the hash in the Nix code changes.

Not if the result, and ergo the output hash, depends on something like the source hash.

Nope. Consider an example:

{ stdenv }:

rec {
  # deliberately shared between both derivations below
  sha256 = "...";

  foo = stdenv.mkDerivation {
    name = "foobar"; # same name as bar
    src = ./foo;

    outputHashMode = "recursive";
    outputHashAlgo = "sha256";
    outputHash = sha256;
  };

  bar = stdenv.mkDerivation {
    name = "foobar"; # same name as foo
    src = ./bar;

    outputHashMode = "recursive";
    outputHashAlgo = "sha256";
    outputHash = sha256;
  };
}

This might make two .drv files, but since they share both an outputHash and a name, they will have the same outPath. Obviously if they’re different, at least one would fail since the sha256 will be wrong for it; but if the other one builds successfully, attempting to build the failing one afterward will falsely succeed, because its outPath is already in the store. When Nix sees a fixed-output derivation with the same name and declared hash as one that it’s already built, it just assumes both derivations are meant to produce identical outputs and chooses not to build the new one, even if its inputs are different and might fail to build.

The consequence is that if you try to develop rapidly against a fixed-output derivation, you have to manually falsify the hash every time you want to rebuild, so that Nix doesn’t treat it as a duplicate of the last build. E.g., assuming foo in the above example is the correct one, doing rm ./foo/Makefile and then building foo again without changing its outputHash value will instantly succeed, even though the derivation itself is no longer buildable.

I was thinking more along the lines of putting src.drvPath instead of version in the output file.
Of course the output file hash needs to be recomputed whenever src changes.
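If I read the idea correctly, a sketch might look like this (hypothetical names; pkgToTest stands for the derivation under test):

```nix
# Hypothetical sketch: bake the tested derivation's .drv path into the
# fixed output, so the declared hash is tied to pkgToTest's inputs.
{ stdenv, pkgToTest }:

stdenv.mkDerivation {
  name = "networked-test";

  outputHashMode = "flat";
  outputHashAlgo = "sha256";
  outputHash = "..."; # would have to be recomputed whenever pkgToTest changes

  buildCommand = ''
    # run the networked tests against pkgToTest; fail on any test failure
    echo "${pkgToTest.drvPath} tested OK" > $out
  '';
}
```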

How would that be any different in the example I described?

rec {
  pkgToTest = ...;
  test = ''
    # do stuff
    echo ${toString pkgToTest} succeeds > $out
  '';
  outputHash = ...;
}

That code doesn’t look like a derivation, but if you meant what I think you meant, then no: you still have to manually change the outputHash value to get Nix to attempt a rebuild, even though pkgToTest may have changed. Try it out. When Nix sees an outputHash it has already built, it always skips the build, even if the build inputs are different.
