Using NixOS in an isolated environment

Hi! I will work at a scientific base in the Kerguelen Islands for a full year starting next November. There, internet access is provided through a low-bandwidth VSAT connection and reserved for professional activities, so I won’t be able to use it to install software for my personal projects.

I want to use this constraint as an opportunity to study the use of NixOS in an isolated environment. My goal is to be able to update my configuration, install new software and deploy NixOS machines without relying on internet access.

What I already know

So far, I know that setting up a local NixOS channel is as easy as getting its nixexprs.tar.xz locally. I can also copy the corresponding binary cache by running:

curl -L https://nixos.org/channels/nixos-19.03/store-paths.xz \
    | xz -d \
    | xargs nix copy \
        --from https://cache.nixos.org \
        --to file:///path/to/nixos-19.03/

Now, that works great if I want to install something that is in the binary cache. And this is where my questions start.

What I want to know

  1. What exactly is included in the binary cache for a full channel, like nixos-19.03?

    On this first question, I know that variants of a derivation (one with modified compile flags, for instance) are not included. However, their sources are cached so I can build them myself, so it’s not an issue.

  2. For what is not included, are the sources cached?

    If a source is cached, I can build it in an isolated environment, so that’s great.

  3. How can I cache the full set of sources?

    This would make it possible to install anything that is referenced in nixpkgs, which would be pretty neat. However, I don’t know whether it is possible, nor how much space it would require.

  4. What about the different architectures?

    How are the different architectures, like x86_64 vs aarch64, managed? I would like to be able to set up aarch64 machines too if I want.

Any insight on these subjects is well appreciated :slight_smile:


I would be really interested in how big that cache would be (without sources and with sources).

The sources are also just derivations, and as such they are also cached. One possible starting point could be to write a Nix function that walks over all derivations/attributes in the package set and extracts src.outPath from each of them. You can then create a list and feed that to your nix copy solution above. However, this might miss some fetch* statements that are not directly placed as src attributes.
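A rough, untested sketch of such a function, assuming `<nixpkgs>` points at the channel checkout (note that the resulting paths still have to be realised, e.g. with nix-store --realise, before nix copy can export them):

```nix
# Hypothetical sketch: collect src.outPath for every top-level package,
# skipping attributes that have no src or that fail to evaluate.
with import <nixpkgs> {};
let
  srcOf = _name: pkg:
    let r = builtins.tryEval
      (if builtins.isAttrs pkg && pkg ? src
       then toString pkg.src.outPath  # forced here, inside the tryEval
       else "");
    in if r.success then r.value else "";
in builtins.filter (p: p != "") (lib.mapAttrsToList srcOf pkgs)
```

Evaluating this with `nix-instantiate --eval --strict --json` would print the list as JSON, ready to be fed to the nix copy pipeline above.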

Another possibility may be to generate a list of the packages that nixpkgs contains and then, for each package, evaluate it (without building, read-only) with a modified fetchurl function that outputs the url and the hash of the source. You can then fetch these paths with a script.
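The fetchurl replacement could be sketched as an overlay along these lines (hypothetical and untested):

```nix
# Hypothetical sketch: import nixpkgs with fetchurl stubbed out, so that
# evaluating a package yields its url/hash pairs instead of a fetched path.
import <nixpkgs> {
  overlays = [
    (self: super: {
      fetchurl = args: {
        url    = args.url or (builtins.head args.urls);
        sha256 = args.sha256 or null;
      };
    })
  ];
}
```

Packages that expect fetchurl to return a real store path would of course fail to evaluate, so this only works for the read-only listing described above.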

I’ve finished simple nix copy runs for nixos-19.03-small and nixos-19.03; the sizes below show it is not that big:

NAME                                    REFER  COMPRESS  RATIO
helios/test/cache                       1,11M       lz4  1.01x
helios/test/cache/nixos-19.03           71,0G       lz4  1.01x
helios/test/cache/nixos-19.03-small      572M       lz4  1.02x

However, I don’t know (yet) the proportion of what is not included.

@tilpner sent me some scripts, and notably this repository. I’ll dig into it; maybe there are some interesting things there.


:open_mouth: 71 GB sounds pretty small, but those are all xz-compressed NAR files. Looking at the output paths, it seems these are only final build products, with no sources (which should be sufficient to install packages on a running system?).

I am actually in a similar situation (I want to set up a local cache for a location with a horrible internet connection). I wondered if I should mirror the sources as well. What would be the preferred method for that?


I think it would be good to have one indeed. The full set must be really big, though: what we need is only the tarballs for a given version of nixpkgs. For this, we want to list all the external files referenced by all the derivations we want to cache, then download them.

I haven’t worked on it lately as I was on vacation, and now I still have a lot of other things to do. I’m trying to advance on this subject in parallel with the other parts of my preparation.

The blocking points for me right now are:

  1. For a given derivation, how do we know its external dependencies?
  2. What is the set of derivations that we want to include? For instance, many packages in texlive.combined.scheme-full are not in the binary cache, but this is typically something that would be interesting to have in the tarballs. Same for beam.packages.erlangRxx.*: not everything is pre-compiled with every erlang version available here, but I’d like to have the sources so I can compile them offline.
  1. For a given derivation, how do we know its external dependencies?

I would do a nix-store -q -R to find all the dependencies, then do nix-store -q -b outputHash to check which ones are fixed-output. Hopefully Nixpkgs checkout + all fixed-output dependencies should be enough.
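As a shell sketch (untested; assumes the system derivation was instantiated as ./drv):

```
# Walk all .drv files in the closure and keep the fixed-output ones,
# i.e. those that define an outputHash.
for drv in $(nix-store -qR ./drv | grep '\.drv$'); do
    if nix-store -q -b outputHash "$drv" > /dev/null 2>&1; then
        echo "$drv"
    fi
done
```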

  2. What is the set of derivations that we want to include? For instance, many packages in texlive.combined.scheme-full are not in the binary cache, but this is typically something that would be interesting to have in the tarballs. Same for beam.packages.erlangRxx.*: not everything is pre-compiled with every erlang version available here, but I’d like to have the sources so I can compile them offline.

I guess you would need to write a more aggressive version of release.nix… Basically, you tryEval Nixpkgs itself (this should succeed) and find out all the attribute names in the package set. Then, for every attribute, you tryEval it and check if there is an outPath (if yes: one more derivation found, most likely) and also list all its subattributes to check later. Rinse, repeat. Of course, you’d better check that the values are attribute sets before recursing.
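One level of that walk could look like this (rough, untested sketch; the “rinse, repeat” recursion into subattributes is left out for brevity):

```nix
# Hypothetical sketch: collect the outPath of every top-level attribute
# that looks like a derivation, ignoring evaluation failures.
with import <nixpkgs> { config.allowBroken = true; };
let
  outPathOf = _name: value:
    let r = builtins.tryEval
      (if builtins.isAttrs value && value ? outPath
       then toString value.outPath
       else "");
    in if r.success then r.value else "";
in builtins.filter (p: p != "") (lib.mapAttrsToList outPathOf pkgs)
```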


Until one needs a new package, for example to set up a VPN to get access to the network (I guess that is exactly the case in the recent Rebuild NixOS Offline post, as I have hit this problem a few times as well).

Finding ALL possible fixed-output derivations (even not included to release.nix) looks doable. Challenging but doable.

And besides solving the problem with isolated environments, it would be useful to have a bot to check whether fixed-output derivations still have a valid hash.
(I bet that fetchurl { url =; sha256 = ... } does not, as it is updated monthly. And there must be many others like this. It would be useful to have a list of problem spots, published on Hydra or via ryantm-bot.)


Adding to the idea above of using tryEval over nixpkgs:

  1. use nix-instantiate to instantiate all derivations. This should give a list of all relevant derivations.
  2. iterate with nix-store -qR over all drv files to get all input/dependency paths (which includes all sources).
  3. Download all of the store paths from step 2 with nix-store --realise from the cache.

That should put all the derivations in the store needed to rebuild all packages offline(?)
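Assuming the attribute names from such a walk have been collected into a file (file names here are hypothetical), the three steps could be sketched as:

```
# 1. Instantiate every attribute into a .drv file.
xargs -I{} nix-instantiate '<nixpkgs>' -A {} < attrs.txt > drvs.txt

# 2. Collect every input/dependency path, sources included.
xargs nix-store -qR < drvs.txt | sort -u > paths.txt

# 3. Fetch everything the cache can substitute.
xargs nix-store --realise < paths.txt
```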


I’m told the sum total of derivations in the caches is 180 TB. It’s never been garbage collected. :smiley:
Don’t forget to have a checkout of the nixpkgs repository too!

  2. iterate with nix-store -qR over all drv files to get all input/dependency paths (which includes all sources).
  3. Download all of the store paths from step 2 with nix-store --realise from the cache.

Erm, you really want to select only the fixed-output derivations — there are quite a few long-build packages that are also not in the cache.

Actually, it is a good question whether one wants to include the sources of previously-working broken packages…

I am afraid a single nix-instantiate run would not be enough; some FODs may be excluded by conditionals like
if stdenv.isAarch64 then fetchpatch {}
if config.use_pulseaudio then fetchpatch {}

The task is closer to test coverage or fuzzers


That would address one of the original questions: how do you get everything into your store so that you can rebuild a random package of your choice? For that I need all FODs, right?


OK, so even this one may still miss some FODs.

sidenote: this could be useful in high security contexts where you don’t want to ever connect a machine to the network?


So what’s the status here currently?


I’m working hard on a specific mirroring tool. It’s not ready yet, and there is no 100% certainty that it solves the task fully. For example, this comment makes me think it’s far from done. Currently, the README does not reflect the current state; there have been lots of changes since its last update! Also, mirroring is no longer the project’s main focus.


Hello everyone!

After more than six months using NixOS in an isolated environment, let me share my experience and new questions.

Initial setup

Last November, I created a mirror for the current nixos-19.09 channel and nix copy’d the associated store paths to build a local binary cache. This was achieved using this script:


#!/bin/sh
set -e

if [ $# -ne 3 ]; then
    echo "usage: $0 <channel> <local_path> <local_binary_cache>"
    exit 1
fi

channel=$1
mirror=$2
mirror_cache=$3

cache=$(curl -sSL "$channel/binary-cache-url")

printf "\n\e[32m=> Create a local channel in $mirror...\e[0m\n\n"

# Build the local channel.
mkdir -p "$mirror"
curl -L "$channel/git-revision" > "$mirror/git-revision"
curl -L "$channel/nixexprs.tar.xz" > "$mirror/nixexprs.tar.xz"
curl -L "$channel/store-paths.xz" > "$mirror/store-paths.xz"
printf "file://$mirror_cache" > "$mirror/binary-cache-url"

printf "\n\e[32m=> Copy the binary cache to $mirror_cache...\e[0m\n\n"

# Build the local binary cache.
mkdir -p "$mirror_cache"
xzcat "$mirror/store-paths.xz" | xargs nix copy --from "$cache" --to "file://$mirror_cache"

printf "\n\e[32m\e[1mThe mirror is up to date!\e[0m\n\n"

Once this mirror was ready on my machine, I switched my NixOS channel to it and added this to my configuration.nix:

# Nix configuration for offline mode.
nix = {
  binaryCaches = lib.mkForce [ "file:///data/Mirroirs/nixpkgs/cache" ];
  gc.automatic = false;
};
Then, I rebuilt my system, and was ready to go offline for a year.

What works out of the box

Everyday software installation on an already-working system

Each time I needed new software that is already present in nixpkgs, rebuilding my system using the local binary cache worked smoothly. I’m sure there are some edge cases where internet access would be required, but I have not encountered any of them yet.

New server-like system setup

Once I arrived in Kerguelen, I copied my local mirror to a server to make it easily available on our network. Installing a NixOS server instance for test purposes was pretty straightforward: I just needed to configure the channel and substituters in the installer, and everything worked as expected.

When things are missing

Things became more complicated when I wanted to set up machines with a desktop environment: some store paths are missing from the local binary cache. For some reason, Nix wants to rebuild some things from source and then needs the sources and patches for OpenSSL, Gettext and a few other packages.

As a workaround, I use nix-serve to share my Nix store, which contains the missing paths, allowing me to deploy new desktop machines, but this feels highly hacky.
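For reference, the workaround amounts to something like this on NixOS (the host name below is hypothetical):

```nix
# On the machine whose store contains the missing paths:
services.nix-serve.enable = true;  # serves the store over HTTP, port 5000 by default

# On the machines being deployed, list both caches in configuration.nix:
# nix.binaryCaches = lib.mkForce [
#   "file:///data/Mirroirs/nixpkgs/cache"
#   "http://my-server:5000"
# ];
```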

I have then a few questions regarding these missing paths:

  1. How to get a list of what store paths are needed by a configuration but are missing from a given binary cache?

  2. Why are some commonly-needed store paths missing from the nixos-xx.xx channel? I mean, a configuration with a desktop environment is quite common, and I don’t remember building anything from source when connected to the internet: that means the missing paths do exist somewhere, but are not part of a channel. Is this possible?
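For the first question, here is a small sketch that should work against a file:// binary cache: a cache contains a store path when it has a .narinfo file named after the path’s 32-character store hash. The helper name and file layout are my own assumptions:

```shell
# Hypothetical helper: print the store paths listed in file $1 that are
# missing from the local binary cache directory $2.
missing_paths() {
    paths_file=$1
    cache_dir=$2
    while IFS= read -r path; do
        # A cache holds a path when a .narinfo named after the
        # 32-character hash prefix of the path's base name exists.
        hash=$(basename "$path" | cut -c1-32)
        [ -e "$cache_dir/$hash.narinfo" ] || printf '%s\n' "$path"
    done < "$paths_file"
}
```

The input file could come from nix-store -qR --include-outputs on the system derivation.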

Before going any further, I would like to be able to create a binary cache containing everything needed to set up new « standard » desktop machines. Currently almost everything is there, but these tiny missing things are a blocker.

Another case that needs my Nix store as a complementary binary cache is the setup of the first generation for home-manager. I can set up a local mirror for its channel so I can install it on machines without internet, but some things are missing to build the first generation. Afterwards, everything works fine with my incomplete binary cache.

System upgrades

I am able to get new versions of nixpkgs and my binary cache mirror every few months, when the Marion Dufresne brings us postal mail. Last time was in April; next time will be in September, but I need to prepare everything for the parcel to be shipped from Metropolitan France before July 15th. I have access to a distant NixOS machine through our very slow, high-latency VSAT link, from which I am able to build new versions of my binary cache.

Given that my initial binary cache was incomplete, I did several things when preparing for my first update in February:

  1. I created a mirror for the latest available version of NixOS 19.09, as incomplete as the first one,

  2. I built my configuration on the distant NixOS machine, using the mirror channel, and exported its output closure:

    # Add the mirror channel.
    sudo nix-channel --add file:///path/to/mirror mirror
    # Build and export the output closure.
    nix build -f '<mirror/nixos>' system --arg configuration path/to/configuration.nix
    nix-store --export $(nix-store -qR result) > path/to/export.deps
  3. To be sure I would not miss anything, I even built a full cache, including the build derivations:

    # Instantiate the configuration and build the cache.
    nix-instantiate '<mirror/nixos>' -A system --arg configuration path/to/configuration.nix --add-root drv --indirect
    nix-store --realise $(nix-store -qR --include-outputs drv)
    nix-store --export $(nix-store -qR --include-outputs drv) > path/to/export.cache
  4. I repeated this for my home-manager configuration and for some custom default.nix files for development project environments.

    NIX_PATH=nixpkgs=/nix/var/nix/profiles/per-user/root/channels/mirror:/nix/var/nix/profiles/per-user/root/channels HOME_MANAGER_CONFIG=$PWD/jpc.nix home-manager build
    nix-store --export $(nix-store -qR result) > path/to/export.deps
    nix-store --realise $(nix-store -qR --include-outputs $(nix-store -qd result))
    nix-store --export $(nix-store -qR --include-outputs $(nix-store -qd result)) > path/to/export.cache

In April, when the SSD containing the new mirror and these store exports arrived in Kerguelen, I first tried to rebuild my system using only the updated local binary cache. Things were missing, as expected, so I did a sudo nix-store --import < path/to/export.deps, and then everything worked.

This ability to build a configuration somewhere, export it and import it in some other store kind of amazes me. This is a really powerful feature of Nix. However, this is still a workaround for my problem, and it works only when you know in advance which configurations are present in the isolated environment.

I’d prefer to build a binary cache that already contains what is needed for most desktop configurations, and try this enhanced binary cache in September to see what is still missing, hopefully only edge cases related to my specific configuration.


Ah, store-paths.xz.

As I maintain the TUNA Nix mirror (I maintain the download script, TUNA provides the server), I also need to copy entire channels. I have also used store-paths.xz, but as you have seen, it includes almost every path, not really all of them. For this reason in the help page

It seems that store-paths.xz is generated by Hydra, corresponding to the jobs of a channel, and posted by this script, whereas the cache itself consists of every(?) store path Hydra produces.

I haven’t dug much further than that though.