Restricted nix copy

Hi everyone:

First of all, a disclaimer: I’ve never used Discord or this forum before, so let me know if there is a better place to ask this, or any other advice about where to go; apologies in advance if this should be asked elsewhere.

Anyway, I work at a company that’s using nix as a build system and for deployment. We want to be able to use nix copy to move programs between hosts, but also to have some centralized control over what is allowed to be copied. For instance, it’s pretty easy to end up with source code in your runtime closure if you compile C without -ffile-prefix-map, or if you compile with debugging symbols. We can freely copy source around internally, but we are not allowed to copy it to external machines, and nix copy makes it too easy to accidentally copy more than you bargained for.
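To make that first example concrete, here’s roughly what I mean (the package and source are made up): a build that compiles straight from a store source path will record that path in its -g debug info, turning the source into a runtime reference, unless you map the prefix away:

```nix
# Made-up example: compile directly from the store source.  Without the
# prefix map, -g records the /nix/store/...-hello source path in DWARF,
# which makes the source a runtime dependency of the output.
{ stdenv }:

stdenv.mkDerivation {
  pname = "hello-internal";   # made-up name
  version = "0.1";
  src = ./hello;              # a local directory, copied into the store

  dontUnpack = true;

  buildPhase = ''
    cc -g -ffile-prefix-map="$src"=/src -o hello "$src/hello.c"
  '';

  installPhase = ''
    install -Dm755 hello "$out/bin/hello"
  '';
}
```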

I have a plan, which I’ll describe below, but first: is this something that has been discussed or planned before? Perhaps there are RFCs for this kind of thing, or previous discussion, or perhaps it’s even possible currently and I just don’t know how? Supposing I go ahead with my plan, it would be nicer to get it upstreamed than to keep maintaining a branch, so advice about how to get involved in that process would be appreciated.

That said, here is the (very preliminary) plan. I looked into path signing, but it seems backwards for this: it’s oriented around the copy-to side rejecting paths because it doesn’t trust them, not the copy-from side refusing to send paths because it doesn’t trust the copy-to side. So I don’t think it can be made to do what I want.

It looks like copyStorePath in store-api.cc is in the right place to intercept copies from all nix commands and be able to make a per-path decision. We wind up with two problems:

  1. Differentiate destinations, so we can apply different rules based on the destination.

    • An easy hack I thought of was to overload the protocol version. The idea is that a trusted host will have been given the patched nix, which will support a higher protocol version, and copyStorePath already has visibility on the protocol, so it can just look at the version to decide whether the host is trusted. One problem with this is that it’s only advisory (a patched client could claim whatever version number it likes); we don’t mind, because this is meant to prevent mistakes, not to actually be secure. Another problem is that it’s implicit and overloads a number intended for, well, protocol versions, and of course the protocol will keep changing for its own reasons and we’d have to keep up with that, so I suspect the general nix community will not love the idea. But it’s quick and easy.
    • A more proper solution would be some kind of host-key check. As far as I know (and I’m no expert), we’d need to propagate some ssh metadata up from the transport layer; by the time we see the destination in copyStorePath it’s just a generic data sink.
    • Or perhaps there’s some other host-level metadata we could latch onto, or create: something like the protocol version, but without the baggage of being intended for something else.
  2. Differentiate paths, to know which ones are ok to copy and which are not. I’d probably implement this as asking an external process, so it can use whatever mechanism it wants to decide. That said, here are ideas about what it could use:

    • Use the name: internally we have central control over derivation names, and we enforce a naming scheme based on the source directory, so we can reliably make decisions from the name, or even just keep a hardcoded list of allowed names. Since anything out of nixpkgs is fine to copy, we can blanket-allow any path that doesn’t match the internal naming scheme.

    • Put a file inside the output, e.g. nix-support/public. Then individual builds can opt in, without a centralized list. It won’t work for single-file outputs, of course. (A sketch of this is below, after the list.)

    • Some other metadata associated with the path, other than the name. I don’t know what’s available, but signatures seem to be one option; I assume they live in the sqlite db. This would allow per-build opt-in without cluttering the output with a dummy file, but it seems a lot more complicated and less visible.
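To make the nix-support/public idea concrete, here is a minimal sketch; the file name, and the idea that anything reads it, are our own convention, not something Nix knows about:

```nix
# Sketch: a build opts in to being copied externally by dropping a marker
# file.  The marker means nothing to Nix itself; only our (hypothetical)
# copy checker would look for it.
{ stdenv }:

stdenv.mkDerivation {
  pname = "copyable-tool";   # made-up name
  version = "0.1";
  dontUnpack = true;         # no real sources in this sketch

  installPhase = ''
    runHook preInstall
    mkdir -p "$out/bin"
    echo "hello" > "$out/bin/placeholder"   # stand-in for the real program
    runHook postInstall
  '';

  # The opt-in marker; a real package would add just this postInstall.
  postInstall = ''
    mkdir -p "$out/nix-support"
    touch "$out/nix-support/public"
  '';
}
```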

So my conclusions were: patch store-api.cc, use the protocol-number hack (unless something better pops up), and add a “verify-copy” type option which points to a program that takes store paths as input and returns which ones are not ok to copy. Internally it will just use the path name, but it could also look inside the paths to make its decision (e.g. is there source in there?).
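As a rough sketch of what that checker might look like (the “internal-” naming scheme, the marker file, and reading paths on stdin are all just assumptions for illustration):

```nix
# Sketch of a policy checker: reads store paths on stdin and prints the
# ones that are NOT ok to copy.  The naming scheme and the
# nix-support/public marker are our conventions, not Nix's.
{ writeShellScript }:

writeShellScript "verify-copy" ''
  while read -r path; do
    name=$(basename "$path")
    name=''${name#*-}                  # strip the <hash>- prefix
    case "$name" in
      internal-*)
        # Internal builds may only leave if they opted in with a marker.
        if [ ! -e "$path/nix-support/public" ]; then
          echo "$path"
        fi
        ;;
      *)
        # Anything else (e.g. plain nixpkgs paths) is fine to copy.
        ;;
    esac
  done
''
```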

A whole other tack on this is to prevent the unwanted runtime deps in the first place. It’s not sufficient, because in the case of debug symbols they are in fact wanted runtime deps; we just don’t want to copy them to some places. But I did manage to get checkOutputs to work, while disallowedRequisites seemed to have no effect. Also, I could find no way to override the runtime dependency detection mechanism, but I wrote a binary patcher that simply zeroes out the /nix/store path hashes at the end of the build, which does work. Of course, if the deps were legitimate after all, you now have something that may break at runtime, but that’s the deal.

Thoughts? Thanks in advance!

For a derivation-level approach, you could look into allowedReferences, allowedRequisites, disallowedReferences, and disallowedRequisites (see the Nix Reference Manual).

Basically, they let you blacklist or whitelist certain dependencies, either as direct dependencies or in the entire closure.
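For example, a rough sketch (made-up package; the idea is just to make the build fail if the source sneaks into the closure):

```nix
# Sketch: fail the build if the source path ends up anywhere in the
# runtime closure of the output.
{ stdenv }:

let
  src = ./.;
in
stdenv.mkDerivation {
  pname = "no-source-leak";   # made-up name
  version = "0.1";
  inherit src;

  # Nothing in the closure of $out may refer back to the source.
  disallowedRequisites = [ src ];

  # Or, to be stricter, allow-list the direct references instead:
  # allowedReferences = [ "out" ];
}
```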


That’s the thing that didn’t work, but checkOutputs did. Apparently they’re deprecated in favor of checkOutputs, though presumably the not-working part is accidental.

Anyway, it’s not quite right because I don’t want to prevent the deps… I only want to prevent copying the deps to certain places.

That’s a pretty substantial bug if disallowedRequisites didn’t work. Definitely open an issue with steps to reproduce.


But you do want to prevent the dep. You don’t want your build to depend on a path that isn’t going to be there.

The most straightforward way to do this would be to prevent the dep. You can create some functions that make it easy to describe the full set of disallowed dependencies: mkDerivationWithoutSources or similar.
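A rough sketch of what such a helper could look like; the name comes from the suggestion above, and keying off args.src is just one possible policy:

```nix
# Sketch: a mkDerivation wrapper that forbids the source from appearing
# anywhere in the runtime closure of the result.
{ stdenv, lib }:

args:
stdenv.mkDerivation (args // {
  disallowedRequisites =
    (args.disallowedRequisites or [ ])
    ++ lib.optional (args ? src) args.src;
})
```

Internal packages would then call mkDerivationWithoutSources wherever they would normally call stdenv.mkDerivation.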

Running an alternate version of Nix for this use case may solve the issue, but it will most likely introduce more fragility and complexity than alternate solutions to the original problem of accidental copying of sources. I’d suggest fully exploring alternate solutions, or putting in a bit of work to understand/fix why the disallowedRequisites approach didn’t work.

It sounds like an example of pushing complexity around. Solving this with a patched Nix now means that you have to be careful that you are running that patched Nix everywhere, along with the headache of maintaining that fork in perpetuity.


Not exactly… the dep is in fact correct, and I do require it. What I want is to prevent that dep from being copied to certain places. Copying to unrestricted places is fine; that’s the key here. So the policy does not depend on the derivation alone, but on the combination of the derivation and the place we are deploying it to. Therefore it must be done at copy time, not at build time. Think of the use case for signed paths, but reverse the direction of trust.

Of course, trying to copy something without one of its runtime deps should abort the whole copy, to avoid breaking store invariants, which is fine. The other option would be to produce a broken install, which might be useful too, but only if it’s explicitly marked as such and won’t upset GC, repair-path, or whatever else expects the invariants to hold.

I’m totally open to the idea of alternate solutions, if there are any!

If you know the full set of derivations that need to be restricted, you can pre-populate dummy versions of them in the “restricted store”. The copy would then skip them because the paths are already present. Ugly, problems with GC, etc…

Another approach would be to not use nix copy directly, but to wrap nix with something that does the checking on behalf of the user. It could examine the input closure (checking for source code, secrets, size, etc.) against a centrally managed policy.

The next step up in precision would be to place this into store-api.cc as you suggest, but I fear the complexity all of this introduces starts to outweigh the risk of accidental copies. (Though I don’t have the full context of the usage patterns or situation. Are we only mitigating the risk? How severe is a “spillage”? Do you have control over the receiving end?)

Another mechanism may be to use an alternate store path for your internal network (I’d have to double-check that this works the right way), and then only transfer things from the internal store to the external one through a path-rewriting mechanism that also checks policy. It removes the chance of accidental spillage, but it increases the cost of compatibility and of transferring things at all.

Then there are LD_PRELOAD tricks…

^^^ all of this is just thinking aloud…

How is it that the derivation requires the dep, but it’s ok for the dep to not be present in certain locations?

For 1, that’s clever but it seems pretty implicit and dangerous. One global repair and now it wants to download that stuff. And it’s a good basic principle that any solution must involve the sender not sending, not relying on the receiver choosing to reject.

For 2, we actually already do this. It’s not suitable in general because it relies on you running that checker first. Though I suppose we could make it automatic by actually wrapping the nix command and intercepting when argv[1] == “copy”. That’s not bad actually! It wouldn’t necessarily catch all copy code paths, and it’s a bit wonky because it has to pre-parse the command line, but the nix command line is not that complicated. It would also have to figure out what the copy will copy, but nix-store --requisites does that. It could even do host-key checking, by separately asking ssh whether the host is who it says it is. (A rough sketch of such a wrapper is below.)

For 3, I haven’t tried it, but it doesn’t seem so bad. Parse a new option, call a subprocess, possibly throw an exception… what could go wrong? Well, I guess a lot could go wrong messing with protocol numbers, and it would be no good if it required modifying every place that makes an assertion about the protocol. And yes, only mitigating is acceptable, since people can still manually scp; a spillage would be bad but not an immediate security leak, it would just remove a layer of protection. And yes, we have control over the receiving end, though it’s better to fail safe and not rely on the receiver being configured a certain way.

For 4, I don’t understand. Do you mean using e.g. /nix/internal and /nix/external and explicitly copying from one to the other, while patching binaries in place? If so, it does seem really complicated and odd, and that copy+patch step is just the original problem again.

I like 2; it seems like a nice “stupid simple” compromise that avoids messing with nix internals. Excellent idea! I’ll probably try that one first!
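Here’s roughly what I have in mind for that wrapper; everything specific (the trusted-host pattern, the verify-copy policy script, the very crude argument parsing) is made up for the sketch:

```nix
# Sketch: a "nix" wrapper that intercepts `nix copy`, checks the closure
# that would be sent against a policy script, and only then hands off to
# the real nix.  It only understands `nix copy --to ssh://HOST PATHS...`.
{ writeShellScriptBin, nix }:

writeShellScriptBin "nix" ''
  set -eo pipefail
  real=${nix}/bin

  if [ "''${1:-}" = copy ]; then
    orig=("$@")
    to=""
    paths=()
    shift
    # Very rough parse; the real `nix copy` command line has more forms.
    while [ $# -gt 0 ]; do
      case "$1" in
        --to) to="$2"; shift 2 ;;
        -*)   shift ;;
        *)    paths+=("$1"); shift ;;
      esac
    done

    case "$to" in
      ""|ssh://*.internal.example) ;;   # local or trusted: no check
      *)
        # Everything nix copy would send is the closure of the named paths.
        bad=$("$real"/nix-store --query --requisites "''${paths[@]}" \
              | verify-copy)            # hypothetical policy checker
        if [ -n "$bad" ]; then
          echo "refusing to copy restricted paths to $to:" >&2
          printf '%s\n' "$bad" >&2
          exit 1
        fi
        ;;
    esac
    exec "$real"/nix "''${orig[@]}"
  fi

  exec "$real"/nix "$@"
''
```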

On further thought, I think the “unexpected code paths” issue is the most dangerous thing about it. For instance, a remote build will also trigger a copy, so the wrapper would also need to intercept remote-build command lines; and of course people can theoretically set remote builders in nix.conf, so it’s not so simple. We could ask nix show-config, but you can see how this gets complicated…

:wave: @elaforge !

An alternate solution, if you can afford it, is to use some kind of “bastion store” and force every copy from the internal network to the external one to go through this bastion. The bastion could be exposed as an HTTP binary cache with a small proxy that forbids uploading unwanted store paths (or it could just be a proxy over your internal binary cache that forbids downloading them).

The advantage is that the binary cache protocol is infinitely simpler and easier to hack on than the broader store protocol (see for example edolstra/nix-serve, a standalone Nix binary cache server, for a <100 LoC implementation of a read-only cache on top of an existing store).
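For reference, a minimal NixOS-flavoured sketch of the plumbing only; the host name and key path are made up, and the interesting part, the policy filter that rejects restricted paths, is not shown (it would live in or behind the proxy):

```nix
# Sketch (NixOS configuration): expose the bastion's store as a binary
# cache with nix-serve, fronted by nginx.  The policy filter that would
# actually reject restricted paths is left out.
{ config, ... }:

{
  services.nix-serve = {
    enable = true;
    bindAddress = "127.0.0.1";
    port = 5000;
    secretKeyFile = "/var/cache-priv-key.pem";   # made-up path
  };

  services.nginx = {
    enable = true;
    virtualHosts."cache.internal.example" = {    # made-up host name
      locations."/".proxyPass =
        "http://127.0.0.1:${toString config.services.nix-serve.port}";
    };
  };
}
```

Clients on the external side would then point their substituters setting at that cache instead of copying over ssh.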

Hi there, long time no see!

I guess the idea is to have separate internal and external binary caches, right? Then the external one is only accessible via a proxy, which can enforce its rules. That makes sense, and it would be a good solution for software distribution in general, if we could convince the external world to accept nix as a distribution mechanism… which we probably can’t. But still, it’s worth keeping in mind for locking down internal distribution.

However, basic nix copy works by just sshing someplace and looking for a nix-store command, right? So as long as that someplace has nix installed, and we have ssh access, nix copy will do what it always did, which is copy everything. So I think we would still need to hobble nix copy, because we still have the “force every copy to go through the bastion” problem.

Hi, just to follow up: I implemented this internally quite a while back, and it’s been working fine. I looked into a wrapper that inspected the command line, but it was getting complicated, so I went back to the plan of modifying nix-store.cc. It was much simpler than I expected, because I can just hardcode the destinations that are ok and restrict all the others, so no messing with protocols or anything was needed. Just in case anyone was curious.
