Can't get Distrobox to work

Hi there! I am struggling with Distrobox because I am not able to run its containers. I use Podman, but this also happens with Docker. I can run normal containers like Arch without any problems; this is only an issue with Distrobox. I am not very experienced with containers or NixOS, so any ideas are very much appreciated!

[simon@nixos:~]$ distrobox-create --name Arch --image docker.io/library/archlinux:latest  
Creating 'Arch' using image docker.io/library/archlinux:latest   [ OK ]  
Distrobox 'Arch' successfully created.  
To enter, run:  
  
distrobox enter Arch  
  
Arch  
  
[simon@nixos:~]$ distrobox enter Arch  
Container Arch is not running.  
Starting container Arch  
run this command to follow along:  
  
podman logs -f Arch  
  
Error: unable to start container "a37c4c77c8b14f4970cf3bea01432fa14da79f0e14fc1925fc7fa1ae1b397731": crun: make `/home/simon/.local/share/containers/storage/vfs/dir/a6d27ced181b26285884257ffec9104071f3d13703de0c5e3a14a2b9dfed30ed` private: Permission denied: OCI permission denied
  
[simon@nixos:~]$

Check your permissions on ~/.local/share/containers. I tried it on both a NixOS system and a Fedora system using Nix (flakes) and it worked in both places.
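
For a quick check, something along these lines (just a sketch; adjust paths as needed) should surface anything in that tree with unexpected ownership:

# List entries under the storage tree not owned by the current user.
# Entries owned by subuid-range UIDs (e.g. 100000) are expected with
# rootless Podman; stray root-owned files are the usual culprit.
find ~/.local/share/containers -not -user "$USER" -exec ls -ld {} + | head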

You may also have to reset podman if you’ve changed the storage backend (be careful you don’t lose any volumes that you have data in).

Indeed it was a problem with permissions somewhere in ~/.local/share/containers. Thank you very much ❤️

What are the permissions supposed to be? I compared the ~/.local/share/containers permissions on my NixOS box against those on my Arch box and they are functionally the same (the user name differs). I’m getting the same error, but the permissions seem okay to me.

Actually, though the error is the same, the output is not. I’m not getting as far as the OP.

$ distrobox enter Alpine
Error: unable to start container "bb882c012935c611a0bdf5498b96f8ee283778dec608a2780bc09098ea7d5557": crun: make `/home/nas/.local/share/containers/storage/overlay/5d9b7670ea46907dae3c4a833c614eb501ba533945483ccd2cee55c4d3513e3b/merged` private: Permission denied: OCI permission denied

Oh, I see. There are some bits deeper in that are owned by user 100000. However, fixing that still hasn’t helped:

$ sudo chown -R nas:users /home/nas/.local/share/containers/
$ distrobox enter Alpine
Error: unable to start container "bb882c012935c611a0bdf5498b96f8ee283778dec608a2780bc09098ea7d5557": crun: make `/home/nas/.local/share/containers/storage/overlay/5d9b7670ea46907dae3c4a833c614eb501ba533945483ccd2cee55c4d3513e3b/merged` private: Permission denied: OCI permission denied

Checking again, it seems as though distrobox enter Alpine changes the ownership of some things to 100000.
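
For reference, I’m spotting that with something like this (a sketch):

# Show which paths under the storage tree are now owned by UID 100000
find ~/.local/share/containers -uid 100000 2>/dev/null | head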

I tried making an Arch container, hoping that might work.

$ distrobox create -i quay.io/toolbx/arch-toolbox:latest -n Arch
Image quay.io/toolbx/arch-toolbox:latest not found.
Do you want to pull the image now? [Y/n]: 
Trying to pull quay.io/toolbx/arch-toolbox:latest...
Getting image source signatures
Copying blob 6cc55ea46ffd done   | 
Copying blob 444f393ce858 done   | 
Copying blob bad1a154f7d9 done   | 
Copying blob 86584fa290cd done   | 
Copying blob 14942c50b750 done   | 
Copying blob 2ae6caf89940 done   | 
Copying blob 7da672e31a5a done   | 
Copying config cbee9ee815 done   | 
Writing manifest to image destination
cbee9ee815e47512003cdac0e9351e91a824e66308611060a6e1a5b0d928c58f
Creating 'Arch' using image quay.io/toolbx/arch-toolbox:latest	 [ OK ]
Distrobox 'Arch' successfully created.
To enter, run:

distrobox enter Arch

$ distrobox enter Arch
Error: unable to start container "f0035f9d00807ac6fa352339137a09c52e4978572613f4c811db6811c67f8787": crun: make `/home/nas/.local/share/containers/storage/overlay/8b0bb6268fcf71c125b70c1901a2f05ac48c9f7b147ba726aa2e8289c4f7ea4e/merged` private: Permission denied: OCI permission denied

$ sudo chown -R nas:users /home/nas/.local/share/containers/

$ distrobox enter Arch
Error: unable to start container "f0035f9d00807ac6fa352339137a09c52e4978572613f4c811db6811c67f8787": crun: make `/home/nas/.local/share/containers/storage/overlay/8b0bb6268fcf71c125b70c1901a2f05ac48c9f7b147ba726aa2e8289c4f7ea4e/merged` private: Permission denied: OCI permission denied

It did not.

Reading about how to set up Distrobox and Podman on a Steam Deck (https://github.com/89luca89/distrobox/blob/b58202f649cccd6edc92c7c5072e7eac043b8d63/docs/posts/steamdeck_guide.md), it seems

$ sudo usermod --add-subuids 100000-165535 --add-subgids 100000-165535 nas

would work (that’s the guide’s command with my user in place of deck). However, I’d like to do it the NixOS Way™.

https://search.nixos.org/options?show=users.extraUsers.<name>.sub*idRanges

However, last time I messed with my user settings, I rendered my VM unusable. Unlike my NAS, it didn’t have a root account for me to log into to fix that, but still, I’d rather not bork my user account.

Is something along those lines what I’m looking for? Something like:

users.extraUsers.nas = {
  subUidRanges = [
    { count = 1; startUid = 1000; }
    { count = 65535; startUid = 100000; }
  ];
  subGidRanges = [
    { count = 1; startGid = 1000; }
    { count = 65535; startGid = 100000; }
  ];
};
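
If that’s right, I assume it would then just be a matter of rebuilding and checking that the ranges actually landed (I haven’t tried this yet):

# Apply the configuration, then confirm the ranges were written out
sudo nixos-rebuild switch
cat /etc/subuid /etc/subgid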

I decided to try this in the NixOS VM I use for experimenting. For some reason, I can’t reproduce this issue; distrobox isn’t setting ownership to user:group 100000.

I use Distrobox occasionally to run non-native NixOS things, and various folders in my .local/share/containers tree are owned by 100000. I never set up any subuids, so that’s probably not the problem.

Have you tried a podman system reset (this is destructive; back things up if you have pet containers)?
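
If there’s anything in there you care about, you can stash it first, e.g. (a sketch; substitute your own image and container names):

# Save an image and export a container's filesystem before resetting
podman save -o arch-toolbox.tar quay.io/toolbx/arch-toolbox:latest
podman export Arch -o arch-container.tar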

> I use Distrobox occasionally to run non-native NixOS things, and various folders in my .local/share/containers tree are owned by 100000. I never set up any subuids, so that’s probably not the problem.

Might you have a /etc/subuid file?

> Have you tried a podman system reset (this is destructive; back things up if you have pet containers)?

I haven’t tried that command yet, but will in the morning. Thanks.

I do, but I didn’t create it manually or via any non-default NixOS configuration. NixOS defaults autoSubUidGidRange to true for isNormalUser users with no manually assigned subuid ranges.
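
In other words, the default behaves as if this had been set explicitly (using your user as an example):

# NixOS default for normal users with no manually assigned ranges
users.users.nas.autoSubUidGidRange = true;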

Huh, interesting. I had neither /etc/subuid nor /etc/subgid. I’ll still run the podman reset so as to start fresh there, but I think that’s where the breakage is.

Actually, that isn’t true. I did have those files and they are set appropriately. Hmm… Well, I intended to try podman system reset anyway.

$ podman system reset
WARNING! This will remove:
        - all containers
        - all pods
        - all images
        - all networks
        - all build cache
        - all machines
        - all volumes
        - the graphRoot directory: "/home/nas/.local/share/containers/storage"
        - the runRoot directory: "/run/user/1000/containers"
Are you sure you want to continue? [y/N] Y

$ distrobox create -i quay.io/toolbx-images/alpine-toolbox:latest -n alpine
Image quay.io/toolbx-images/alpine-toolbox:latest not found.
Do you want to pull the image now? [Y/n]: 
Trying to pull quay.io/toolbx-images/alpine-toolbox:latest...
Getting image source signatures
Copying blob b6f1a98f98c5 done   | 
Copying blob 35645b35d07f done   | 
Copying config bab733b2c9 done   | 
Writing manifest to image destination
bab733b2c9892129af245246de4a5cb1991b1be6cb0151230140aa90fda62dc0
Creating 'alpine' using image quay.io/toolbx-images/alpine-toolbox:latest	 [ OK ]
Distrobox 'alpine' successfully created.
To enter, run:

distrobox enter alpine


$ distrobox enter alpine
Error: unable to start container "dcaaee77701b53f9c97c32801e2c9972f7a32a535f32290d33f41610d7a8ca0b": crun: make `/home/nas/.local/share/containers/storage/overlay/0432a57eec1bbc1396092586bb278b0d57a6acef23f279f606e7c598c9c689bd/merged` private: Permission denied: OCI permission denied

$ cat /etc/sub*id
nas:100000:65536
nas:100000:65536

podman logs -f alpine returns nothing.
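
Another sanity check is the UID map of podman’s rootless user namespace; if the subuid range is being picked up, the output should include a line mapping container UIDs onto host UID 100000:

# Print the UID mapping in effect for rootless podman
podman unshare cat /proc/self/uid_map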

I don’t think this is a podman issue:

$ ls -l ~/.local/share/containers/storage/overlay
drwx------ 1 100000 100000  66 Feb 17 12:27 0432a57eec1bbc1396092586bb278b0d57a6acef23f279f606e7c598c9c689bd
...
$ ls -l ~/.local/share/containers/storage/overlay/0432a57eec1bbc1396092586bb278b0d57a6acef23f279f606e7c598c9c689bd/
ls: cannot open directory '/home/nas/.local/share/containers/storage/overlay/0432a57eec1bbc1396092586bb278b0d57a6acef23f279f606e7c598c9c689bd/': Permission denied

Unless I’m mistaken, I should be able to look inside that directory. (Yes, I am mistaken.)
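
(If I understand rootless Podman correctly, those entries belong to its user namespace, so they are only readable from inside it:)

# Re-enter the rootless user namespace, where host UID 100000 maps
# back to an in-namespace UID, so the directory becomes readable
podman unshare ls -l ~/.local/share/containers/storage/overlay/0432a57eec1bbc1396092586bb278b0d57a6acef23f279f606e7c598c9c689bd/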

In case it matters:

users.users.nas = {
  isNormalUser = true;
  description = "NAS";
  extraGroups = [ "networkmanager" "wheel" ];
  packages = with pkgs; [];
};

$ podman version
Client:       Podman Engine
Version:      4.7.2
API Version:  4.7.2
Go Version:   go1.21.5
Built:        Mon Dec 31 19:00:00 1979
OS/Arch:      linux/amd64

  virtualisation = {
    podman = { # Needed for distrobox
      enable = true;
      dockerCompat = true;
      defaultNetwork.settings.dns_enabled = true;
    };
  };

$ uname -rv
6.7.4 #1-NixOS SMP PREEMPT_DYNAMIC Mon Feb  5 20:17:12 UTC 2024

distrobox enter --additional-flags "--privileged" alpine doesn’t work either.

Any success with Distrobox? I’m having this same issue and nothing works. I did everything you suggest in this conversation, and I’m still getting the permission message.

Of the three machines I’ve tried this on, only one ever gave me trouble. It works fine without hassle on my desktop and in my testing VM. But on my NAS? Nope. I have no idea why. I try every now and then, but without any success.

Me too. On my office PC it’s working fine with the same flake.

The real kicker is that the NAS is the cleanest and simplest of the three. The VM is a mess, since that’s what I use for testing stuff; it breaks, and it breaks hard. My desktop is my daily driver and more complicated, if only by virtue of not being headless, to say nothing of the coding and gaming on it.

I don’t know if it will help someone, but I found a solution in my case. This problem only occurs if I use the GUI installation from the installer and then run the flake to reinstall from inside the OS. To solve this, I use nixos-generate-config to install, then reinstall using the flake after rebooting into the OS.
I was doing this because, if I installed directly from the flake, the kernel couldn’t find the disk.
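
Roughly this sequence, in case it helps anyone (a sketch; the flake path and host name are placeholders for your own):

# From the installer: generate a standard config and install with it
sudo nixos-generate-config --root /mnt
sudo nixos-install

# After rebooting into the installed system: switch to the flake
sudo nixos-rebuild switch --flake /etc/nixos#myhost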