How to secure NixOS after the attack on PyTorch?

Many PyTorch users were compromised by a malicious dependency associated with PyTorch that stole data from $HOME. It’s not something specific to NixOS or Nix; it affects any PyPI user. This attack is explained by PyTorch and in a Hacker News discussion. It’s an interesting read.

How can we secure our systems against this and other attacks? I think NixOS could fare better than other systems against similar attacks, as NixPkgs is declarative and centralized. Furthermore, some security options are convenient enough that users are likely to turn them on, but we should document this better.

I have been intending to write an article about this topic. Any ideas welcome. Some suggestions below:

  • Sandbox all packages with e.g. bubblewrap or similar. Integration with Nix is not great right now?

  • Use a host-based firewall, e.g. opensnitch. This is well integrated, just a matter of setting a few options.
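    For reference, enabling OpenSnitch on NixOS should be roughly this in configuration.nix (a sketch; the UI package name may differ depending on your channel):

    ```nix
    { pkgs, ... }:
    {
      # Host-based application firewall daemon
      services.opensnitch.enable = true;

      # GUI that shows the allow/deny prompts per application
      environment.systemPackages = [ pkgs.opensnitch-ui ];
    }
    ```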

  • Only install packages from NixPkgs, avoid secondary package managers. How much auditing is done on NixPkgs? What auditing tools exist?

  • Use AppArmor or SELinux. AppArmor is simpler; there’s some support in NixOS, but it’s not well documented.
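For AppArmor, something like this is the minimal starting point as far as I can tell (a sketch; useful profiles still have to come from somewhere):

```nix
{
  # Load the AppArmor kernel module and userspace tools
  security.apparmor.enable = true;

  # Kill processes that have a profile available but started unconfined
  security.apparmor.killUnconfinedConfinables = true;
}
```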


Well, as far as I understand, Nix is not vulnerable to this attack, as it exploits the fact that pip prioritizes PyPI over a private dependency… but in Nix all dependencies are explicitly pinned, and I can’t see any malicious dependency in PyTorch’s derivation. Am I missing something? Of course, if one bypasses Nix and directly uses pip to install their software, this argument doesn’t apply, but pip is quite hard to use on NixOS, and in that case Nix can’t do anything.

That said, I guess it is always a good idea to try to improve security. I actually like the approach of Qubes (and the NixOS alternative Spectrum OS) that uses virtualisation to segment the system into groups of applications, so that if an application is compromised it can only affect applications/data in the same group. It would be great to provide this directly in NixOS (of course, each user should be able to choose whether they want to use it, and for which part of their system). See also this discussion: Qubes-lite with KVM and Wayland - Thomas Leonard's blog


This is impractical and creates weird, hard-to-debug errors if you don’t know about it.

Source code auditing? None.

The module could also be improved in various places, which I don’t remember right now.

Practical things we can do:

  • don’t fetch sdists, because people can upload whatever they want
  • implement a tool which checks for yanked sdists
  • improve vulnix

What is the alternative then? I am not suggesting sandboxing by default, but this use case should be more streamlined.

I was using Firejail, before migrating to NixOS, to sandbox many applications and this prevented at least one attack in my organization. We ran a malicious binary, but since the binary was sandboxed and could not read most directories in $HOME, the attack was not successful.

Clearly the current security model of Linux is not enough. Defense in depth is important. If the attacker bypasses several control mechanisms, a sandbox is valuable as a last resort. I know Firejail is not ideal because it has a large attack surface. Perhaps either bwrap or other options should be offered to run things inside a nix shell with some enhanced security policies.
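To illustrate what such a policy could look like, here is a rough bubblewrap invocation that runs a Python script with the Nix store read-only, only the project directory writable, and no network. This is a sketch, not a vetted policy; the script name is made up, and real programs may need extra binds (e.g. /etc/resolv.conf, X11 sockets) to work:

```shell
# Run a script with: Nix store read-only, only $PWD writable,
# a fresh empty $HOME, and no network access at all.
bwrap \
  --ro-bind /nix /nix \
  --bind "$PWD" "$PWD" \
  --tmpfs "$HOME" \
  --dev /dev \
  --proc /proc \
  --unshare-net \
  --die-with-parent \
  python3 train.py
```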

AFAIK, Guix has introduced a similar mechanism.


There is none that can universally be applied; otherwise users could not edit or open files in their home directories.
The Nix store is read-only, so at least it can’t change system files.

That has problems: for example, /etc/nixos does not appear unless --try-bind is added, and the directory just disappears for the application. Really hard to debug if you don’t know about it.

The problem is that all those tools cannot know if the file I am trying to write into home is malicious or not.
This is easier for services and we already apply a lot of hardening to them. Compared to Debian, etc. we are miles ahead in secure defaults.

They also need to do this differently because they don’t use systemd. We can just leverage systemd and namespaces for most of this. This should save us a lot of time.
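For services this is indeed mostly a matter of setting serviceConfig options in the module; a hedged example (the service name is invented, and the exact set of directives depends on what the service actually needs):

```nix
{
  systemd.services.my-python-worker.serviceConfig = {
    # Stock systemd sandboxing knobs, no extra tooling required
    DynamicUser = true;
    ProtectSystem = "strict";
    ProtectHome = true;
    PrivateTmp = true;
    PrivateDevices = true;
    NoNewPrivileges = true;
    RestrictAddressFamilies = [ "AF_INET" "AF_INET6" ];
  };
}
```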

I think you don’t understand how e.g. Firejail works or my points.

I am not suggesting making things impractical, just offering the possibility of limiting what programs can do. For example, if you are developing a certain project with Python, it makes sense to run e.g. the python binary so that it sees only a limited part of $HOME.

In fact, Firejail comes with profiles that require almost no user intervention. For example, the default profile for Firefox limits what Firefox can see to a downloads directory.
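On NixOS, wiring Firejail’s stock profiles up looks roughly like this, assuming I am reading the module correctly (wrapping Firefox with its shipped profile so it is always confined):

```nix
{ pkgs, ... }:
{
  programs.firejail = {
    enable = true;
    # Replace the firefox command with a firejail-wrapped version
    wrappedBinaries = {
      firefox = {
        executable = "${pkgs.firefox}/bin/firefox";
        profile = "${pkgs.firejail}/etc/firejail/firefox.profile";
      };
    };
  };
}
```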


Somehow funny that I found this NixOS thread after reading more into MicroOS/Btrfs (“the new ALP Prototype is Evolving, Proof of Concept Expected in Fall” | SUSE Communities)
and Kinoite using (RPM)OStree (Fedora Kinoite User Guide :: Fedora Docs)

Somehow this hasn’t gotten the attention it deserves, I think. Sure, it needs some polishing, but the idea is great AND declarative!


This is indeed an interesting read, and discusses the problem I mentioned in my first post.

It’s important to have an easy way to run software with limited privileges, e.g. let it see only some directories, or forbid use of network devices.

Guix recently introduced a shell command, similar to nix shell, but with a --container flag. The key here is that containers are isolated: they have no access to network devices and do not see any of your directories, unless you give them permission via the --network and --share flags.
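For comparison, the Guix invocation looks roughly like this (a sketch based on the flags described above; package and path names are examples):

```shell
# Fully isolated shell: no network, none of your directories visible
guix shell --container python python-requests

# Explicitly grant network access and share one directory into the container
guix shell --container --network \
  --share=$HOME/project=/project \
  python -- python3 /project/train.py
```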

This would limit any damage done by compromised executables. What’s the easiest way to achieve the same in Nix, systemd-nspawn?


Probably bwrap with fewer directories than nixpkgs/pkgs/build-support/build-fhs-userenv-bubblewrap at master · NixOS/nixpkgs · GitHub

So uploading a file from anywhere other than the Downloads directory would no longer work in Firefox? Also, things like firejail/firefox-common.profile at 9cfd0921fd41dbda35eecad73d53984372078142 · netblue30/firejail · GitHub make this an unattractive default.
Also, if the debugging experience is anything like systemd’s service hardening, where you just need to know which setting could cause the error (or sometimes not even that, and things simply don’t work) instead of receiving a transparent and informative error message or system log, then this is not a good default.
If we could set this as a default, we could protect the most systems and users, whereas an opt-in will probably only reach a small percentage.

If you are developing a service, I would suggest just using systemd’s options. For most tools run by users, limiting what the tool or interpreter can see in the home directory is not practical.

This kind of attack has been known for a while, and similar events happen on PyPI almost monthly. Using fixed-output derivations (FODs) and having no network in the build sandbox is already a good protection to start with.

I guess Linux distributions are not really designed to deal with malicious applications… In my opinion, Android provides better isolation of applications: each application must request access to a resource (camera, files, …) when it needs it. As I understand it, on Linux the closest alternative is a model like Qubes/Spectrum, where you need to explicitly copy files from one sandbox to another, but that adds some complexity, is less flexible than Android (as far as I know, you can’t authorise access to a resource like the camera while the program is running), and it can have some overhead.

macOS strikes a great balance here (to a degree). When an app that is new to macOS tries to access e.g. your Documents or Photos, you get a pop-up asking the user for permission. The app’s IO blocks until the popup is dealt with. If the user accepts, the program is able to access the directory. If not, it gets an IO error.

For example, if I ran fd . foo via bash, macOS would prompt me whether I want to allow access to Documents, Downloads, Photos, etc. as fd recurses through the directories.

It’d be great if we could restrict access to (user-configurable) “important” directories by default and then allow granting permissions in such an ad-hoc fashion similar to how macOS does it. If you wanted to upload a file from some protected location through Firefox, you’d be prompted and could grant it. Ideally, you’d be able to grant permission “only once” like Android’s “Open with…” dialogue.
(In the case of uploading a file, that should actually be handled via the file picker portal; which we’d obviously allow to access the whole home directory by default.)

That would be great to have in Linux (or to enable on a per-application basis)! Do you know how this is implemented, and how easy it would be to port to Linux? (Do you need to rewrite the kernel, add a kernel module, or use existing functionality provided by the kernel, like ptrace?)

I have no idea but I’d assume you could build something like that on seccomp and the like.
