How to secure permissions on sops-nix age key file?

I am finally setting up sops-nix for my dotfiles. However, I am confused about how best to secure my age key so only root or a “sops” group may look at it. Here is my current sops.nix to explain:

{ config, inputs, ... }:
let
  ageKeyFilePath = "/home/${config.defaultUser}/.config/sops/age/keys.txt";
in
{
  imports = [
    inputs.sops-nix.nixosModules.sops
  ];

  config = {
    sops = {
      defaultSopsFile = ../secrets/main.yaml;
      age = {
        # Do not auto-derive from SSH key, please.
        generateKey = false;
        keyFile = ageKeyFilePath;
      };
    };

    # Ensure no one but root can read and edit... 
    # TODO: how to just allow `sops` binary to read this? 
    systemd.tmpfiles.rules = [
      "f ${ageKeyFilePath} 0770 root root"
    ];
  };
}

The reasoning behind restricting ~/.config/sops/age/keys.txt is that I do not want just any program I run to have access to this file. I only want sops (or the root user/group) to be able to read it. I think this is a reasonable take? I have yet to see this anywhere though, so maybe I am doing something weird…

Here is the error I receive when trying to run sops secrets/main.yml to edit it now:

> sops secrets/main.yml
Failed to get the data key required to decrypt the SOPS file.

Group 0: FAILED
  age17muvhm5l3khzd2n5m64rge329n672sg20jad0u58rugvtwswhp6s6dfmd3: FAILED
    - | failed to load age identities: failed to open file: open
      | /home/x/.config/sops/age/keys.txt: permission denied

Recovery failed because no master key was able to decrypt the file. In
order for SOPS to recover the file, at least one key has to be successful,
but none were.

Before I go further, should I be using a different approach? How do I make sure that other programs can’t just steal my keyfile?

I could not find info about keeping keys safe in the age README.

I am thinking this is a silly question, but I am not sure what to read or look at next. I am in general confused about how to keep private keys secure while logged in (not at rest! disk encryption is next!). I went through all this setup, but now any small program I run could potentially access this filepath and steal my age key :worried:.

My best guess for moving forward is to make sops a setgid wrapper in a “sops” group, then make the keyfile readable only by that group. Then, when I run the wrapped sops, it can access the key for decryption, but nothing else I run can read the file directly.
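
Concretely, I imagine the setgid-wrapper idea would look roughly like this in NixOS (completely untested; the “sops” group and the key location are placeholders I would have to set up myself):

{ pkgs, ... }:
{
  users.groups.sops = { };

  # A setgid copy of sops is placed in /run/wrappers/bin (which comes first in
  # PATH); whoever runs it temporarily gains the "sops" group, the only group
  # allowed to read the key.
  security.wrappers.sops = {
    source = "${pkgs.sops}/bin/sops";
    owner = "root";
    group = "sops";
    setgid = true;
  };

  systemd.tmpfiles.rules = [
    # placeholder path: wherever the key actually lives
    "f /var/lib/age/keys.txt 0640 root sops"
  ];
}

Of course any program I run could still execute that wrapper itself, so this would only stop programs from copying the raw key file, not from decrypting things through sops on my behalf.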

chown and chmod. Although that’s moot: that key is in your home folder, which is only readable by your user.

You shouldn’t be using that key for runtime decryption, though, use your host’s keys instead.
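
With sops-nix that is essentially one line, assuming your host has the usual ed25519 key (sops-nix derives an age identity from it at activation time):

sops.age.sshKeyPaths = [ "/etc/ssh/ssh_host_ed25519_key" ];

The matching age recipient for your .sops.yaml can be printed with ssh-to-age < /etc/ssh/ssh_host_ed25519_key.pub.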

2 Likes

Yup, agreed. Host keys for decryption and a user key for yourself for maintenance and editing. I’ve also moved to a GPG key on a YubiKey and have been happy with that.

1 Like

Thank you. It being in my home directory was limiting me. This config works as I want:

{ config, inputs, ... }:
{
  imports = [
    inputs.sops-nix.nixosModules.sops
  ];

  config = {
    environment.sessionVariables = {
      SOPS_AGE_KEY_FILE = config.sops.age.keyFile;
    };

    sops = {
      defaultSopsFile = ../secrets/main.yaml;
      age = {
        # TODO: follow Emergent Mind tutorial to set this up
        generateKey = false;
        keyFile = "/var/lib/age/keys.txt";
      };
    };

    systemd.tmpfiles.rules = [
      "f ${config.sops.age.keyFile} 0640 root root"
    ];
  };
}

Now, random programs can no longer access my keyfile :raised_hands:! I can also edit the secrets using sudo sops secrets/main.yaml. However, you say “use your host’s keys instead” for runtime decryption. That seems like a much better solution. After re-watching this video, it finally makes sense.
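
For anyone reading along later, consuming a secret at runtime then looks roughly like this (the secret name and the consuming service are made-up examples; the entry would have to exist in main.yaml):

sops.secrets."wg-private-key" = {
  # decrypted at activation time to /run/secrets/wg-private-key,
  # readable only with the owner/mode set here
  owner = "root";
  mode = "0400";
};

# a service then just points at the decrypted path:
networking.wireguard.interfaces.wg0.privateKeyFile =
  config.sops.secrets."wg-private-key".path;

With that in place, my personal age key is only ever needed for editing the secrets, not for running the machine.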

So the age key file should live somewhere like a USB stick, and probably be used sparingly or kept as a backup, like the video suggests at 21:43 for GPG keys.

Thank you very much @waffle8946 and @Michael-C-Buckley!

I have a second YubiKey coming in soon, so I am excited to set that up, too.

How you store it is your business / up to your threat model, but it’s ideally only used when you need to add/modify your secrets, yeah.

Yup, this. You can use a USB if you use something like a YubiKey with a GnuPG key on it. For me it’s more about convenience than anything else, since I bounce between several computers and don’t want excess keys floating around.

Chiming in a few days late, but it seems nobody has actually talked about that part of the original issue.
I fully agree that moving to host keys for encryption is better (it’s what I do), and it does solve the issue here, although concerns about key lifetimes are valid IMHO.
However, the idea of “I want only specific programs to have access to this file”, which was the original approach, does deserve a bit of attention too (again IMHO).

The problem here is the permission model of unixoid systems (though it’s not exclusive to them).
This permission model is concerned about users and groups, rather than individual applications.
Any program you run as your user is, for all intents and purposes, your user: any access the program makes, it makes on your behalf, so it has the same permissions as any other program run that way.
This is fundamentally pragmatic; if you run cat ~/my/secret/file in one terminal and run the same command in a different terminal, in both of which you are logged in as your user, you would generally want the output to be the same.
There is no real distinction between any single program here.

Programs like sudo (usually referred to as setuid or suid binaries) which allow privilege escalation are much the same in that regard. The difference is that such programs are marked to be “executed as the user owning the file”, a special kernel feature for exactly that purpose; sudo then looks at who called it, what command they want to run, and what its configuration allows. The resulting execution still runs as the user you specified (root by default in sudo’s case), so there is again no distinction based on which program is being run; sudo merely checks what you want to run before running it. This is why allowing someone to run “only” sudo vim /some/file is fundamentally broken: vim allows arbitrary subcommands (e.g. r! cat /etc/shadow), which are then also run as that other user (a concrete sketch follows below). That’s why sudoedit exists as a wrapper: it copies the file using higher privileges, edits the copy as your own user, and copies it back using the higher privileges, so the editor part never runs as a privileged user.
So what’s usually done is not to disallow one command or process access to something, but to move that entire thing to a different user (a dedicated one for the service, such as nginx running under the nginx user, or just root), and then give users who need access a small, controlled way to get just that one thing done.
One thing sudo cannot do, either, is restrict access based on which program invoked it.
If you have passwordless sudo, then Firefox could technically just shell out and run sudo cat /etc/ssh/ssh_host_ed25519_key; you just trust that it won’t.
The only thing you can do about that is require a password for sudo and make sure nothing can sniff your keyboard input (i.e. use Wayland rather than X11), but even that has its limits.
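
To make the vim example concrete, here is that “broken” rule expressed as a NixOS sudo rule (user name and file path are made up, for illustration only):

security.sudo.extraRules = [{
  users = [ "alice" ];
  commands = [{
    command = "/run/current-system/sw/bin/vim /etc/some-config";
    options = [ "NOPASSWD" ];
  }];
}];

From inside that vim session, r! cat /etc/shadow still runs as root; sudoedit /etc/some-config avoids this by copying the file with privileges, running the editor as the calling user, and copying the result back.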

With that chunk of somewhat related information out of the way, there are mechanisms to allow or forbid access to files based on more than just who you are.
These are security frameworks such as AppArmor and SELinux (plus, at the syscall level, seccomp): kernel-level mechanisms that let you influence permission decisions beyond the plain user/group model. They are a pain to set up properly for such a “minor” use case though, since you have to make sure they apply to any and every program except those you choose; they are usually used to confine an entire container or service to its own devices, which is a much more general thing. They also, by design, make it easier to take some access away from one program than to disallow everything and then carve out exceptions for only some programs.
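
That said, systemd exposes a friendlier slice of this kind of per-service sandboxing, and NixOS passes the options straight through, so for a single daemon you can get quite far; a hypothetical example (service name and path are made up):

systemd.services.some-service.serviceConfig = {
  ProtectHome = true;                      # /home and /root disappear for this unit
  InaccessiblePaths = [ "/var/lib/age" ];  # this path is masked out entirely
  NoNewPrivileges = true;                  # and no setuid escalation from inside it
};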

Now that I’ve mentioned containers, let’s talk about that for a moment; namespaces.
I personally use them heavily since cgroups and namespaces were becoming a thing about the time I started using Linux, and I kinda grew up with the whole OCI thing evolving around me, so I’m familiar with them.
What they allow me to do is to run a process in a separate “mount” namespace, giving it its own view of the filesystem.
I use this for sandboxing, for instance when I want to make sure that testing out a program doesn’t leave a bunch of files sitting in my home directory somewhere.
However, it suffers from the same issue as the security frameworks: I’d have to set up the sandboxing for each process I run individually. If I didn’t want Firefox to have access to that age key, I’d have to run Firefox in a namespace itself instead of… like… just running it.
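
Purely as a hypothetical sketch, such a wrapper could be expressed in Nix around bubblewrap like this; Firefox gets a throwaway home directory and therefore never sees ~/.config/sops/age/keys.txt (a real wrapper needs more binds: Wayland/X11 sockets, dbus, fonts, a downloads directory, and so on):

pkgs.writeShellScriptBin "firefox-sandboxed" ''
  exec ${pkgs.bubblewrap}/bin/bwrap \
    --ro-bind /nix /nix \
    --ro-bind /etc /etc \
    --proc /proc \
    --dev /dev \
    --tmpfs "$HOME" \
    --unshare-all \
    --share-net \
    ${pkgs.firefox}/bin/firefox "$@"
''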

This works pretty well though if you generally trust your desktop (i.e. your file manager, task manager, terminal emulators, etc.) and shove every “desktop application” into a namespace; that’s basically what Flatpak does (it’s essentially a package manager around bubblewrap, which I also use for sandboxing myself).
Since Flatpak does the heavy lifting of making sure every application runs in a sandbox, I can use its global settings to forbid access to some things: basically disallow access to my home directory globally and then let the xdg portals handle granting access. The file choosers run outside the namespace, let me pick a file, and that file alone is then transparently exposed to the underlying namespace.

There’s also firejail, which does essentially the same thing but is tailored purely towards sandboxing: it doesn’t do the package management, and running Firefox through firejail will automatically expose only the binaries and files it needs access to, so it does the “only what I give you access to” thing very well by default (it also interacts well with special APIs like the graphics card).
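
On NixOS that can look roughly like this (untested here; the profile ships with the firejail package):

programs.firejail = {
  enable = true;
  wrappedBinaries = {
    firefox = {
      executable = "${pkgs.firefox}/bin/firefox";
      profile = "${pkgs.firejail}/etc/firejail/firefox.profile";
    };
  };
};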

However, all of those still mean “your user” has access to everything, and you have to take extra steps to “detach” an application from your user (in the case of mount namespaces: also use a user namespace, which you need anyway to do this as an unprivileged user, map it to a subuid which has no permissions on the regular host, and then mount into that namespace only the things it should have access to, with appropriately mapped permissions).

One thing I’ve personally been very excited about is capability-based systems. Linus Torvalds himself has often said that Linux isn’t the pinnacle of operating systems, and that in a few decades (well, he said this a few decades ago) something with a different paradigm will come around and supplant Linux eventually.
Capability-based systems are probably just that, and we’ve seen with Android that they are in fact “a thing”.
Think of an Android smartphone for a second; it does exactly what you described: there is your user, logged in, but every application has access only to its private share of the storage, and any access outside of that runs through a permission system that allows you to say no (well, technically it could work that way for everything, but Google probably has reservations about letting users actually micro-manage app permissions; if I could allow or deny every single network access by an application, I could effectively turn off Google’s main source of revenue, ads and tracking, and besides, doing that in a user-friendly way would be a challenge).
Android still runs Linux under the hood, but with heavy modifications in lots of ways so that’s not really practical for desktop use.
However, and no matter how much I hate Google personally, they build interesting stuff; Fuchsia for example.
Taking a different, capability-based kernel (Zircon) instead of Linux does kind of allow such a new paradigm.
In that model every process inherits its access rights in every way from its parent (though the parent can of course restrict those permissions at time of execution of a different process).
This means that things are no longer bound to your user, instead everything is now bound to the process itself, making the “I do not want just any program I run to have access to this file” possible.
It still works much in the same way as namespaces, but a lot less cumbersome since it’s baked into the system, and also more generically.
Suddenly you can restrict things not just in terms of the filesystem hierarchy, but also in terms of what programs are able to see otherwise; a parent process can inject a proxy DNS resolver into a child process, making it possible to run “this one application with a different DNS resolver” without having to modify the application to use a different hard-coded address.

However, that’s a future we’re still pretty far from, and given the sheer size of Linux or Windows in terms of driver code and edge-case handling, we’re probably not getting there soon, at least not on the diverse ecosystem of desktops (smartphones are different since they are a walled garden of sorts where users barely have any control over the system itself, so there is less surface to cover).

So, to sum it up: restricting individual applications from accessing a certain file is technically possible on Linux, but it requires a lot of setup that usually doesn’t pay off, because the usual way of handling these issues, “create a new user and give only that user access”, does usually work, and in your case it was easy to take the file away from your own user that way.
It is possible though, and a lot of people are very excited about capability-based systems, which would basically make this approach the default (okay, maybe it’s just me).

Sorry for chewing your ear off, everyone who’s subscribed to this thread ^^"

2 Likes