Distrobox NVIDIA

For the life of me I can't seem to get a GPU-accelerated distrobox going. The GPU works fine on the host, but I have a proprietary crapplication that needs to run in an Ubuntu environment. Chitubox, for those interested.

I've tried the distrobox package from both stable (23.11) and unstable.

$ distrobox --version
distrobox: 1.6.0.1

$ glxinfo | grep renderer
OpenGL renderer string: NVIDIA GeForce GTX 1070 Ti/PCIe/SSE2

Test distrobox created with:
$ distrobox-create --nvidia --name test --image ubuntu:22.04

$ distrobox-enter test

$ sudo apt install mesa-utils
$ glxinfo | grep renderer
OpenGL renderer string: llvmpipe (LLVM 15.0.7, 256 bits)

Possibly relevant bits of config:

virtualisation.docker.enable = true;
virtualisation.docker.enableNvidia = true;
users.users.me.extraGroups = [ "networkmanager" "wheel" "video" "docker" ];

Thanks for your eyes 🙂

I believe that something is broken with the --nvidia flag at the moment on NixOS (23.11).

Yeah, the --nvidia flag is broken. Following the directions to use the NVIDIA Container Toolkit seems to work, though.

Just to add to this: the instructions there might be outdated. I cannot seem to pull the nvidia/cuda image for the life of me, but I was going to want to run an Arch container anyway.
For anyone who might have the same issue, what seems to have solved it for me (I can now use offloading environment variables with container binaries to make it work as it should) was adding "hardware.nvidia-container-toolkit.enable = true" to enable the container toolkit for NVIDIA (also adding the nvidia-container-toolkit package for the commands), then following the Podman instructions in Installing the NVIDIA Container Toolkit — NVIDIA Container Toolkit 1.15.0 documentation. Since I'm using distrobox with podman, I assume the same will work if you follow the Docker instructions instead, meaning you'd follow the CDI instructions in Support for Container Device Interface — NVIDIA Container Toolkit 1.15.0 documentation.
Then, to make a container with it working, I also did as they recommend with the --additional-flags flag, making it match the flags used for the sample workload in NVIDIA's guide, like so: --additional-flags "--device nvidia.com/gpu=all --security-opt=label=disable". I assume Docker should work all the same if you're using that.
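For reference, the NixOS side of the above could look roughly like this. This is a sketch, assuming a channel where the hardware.nvidia-container-toolkit module exists; option and attribute names may differ on older releases:

```nix
# Sketch: enable the NVIDIA Container Toolkit and expose its CLI.
# Assumes a recent nixpkgs where this module and package exist.
{ pkgs, ... }: {
  hardware.nvidia-container-toolkit.enable = true;

  # Provides the nvidia-ctk command used in the steps below.
  environment.systemPackages = [ pkgs.nvidia-container-toolkit ];
}
```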

TL;DR
For podman:

  • add "hardware.nvidia-container-toolkit.enable = true" to your config file
  • add the nvidia-container-toolkit package to your config file and rebuild
  • run "sudo nvidia-ctk cdi generate"
  • create a container for what you want with the proper flags: distrobox create --name [whatever you want, I did arch] --image [whatever you want, I used Quay] --additional-flags "--device nvidia.com/gpu=all --security-opt=label=disable"
  • try nvidia-smi in the container
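The podman steps above as one shell transcript, hedged: the container name and image here are just examples (quay.io/toolbx/arch-toolbox is one Arch image on Quay; substitute your own), and this needs the NVIDIA driver loaded on the host:

```shell
# Generate the CDI spec. NVIDIA's guide writes it to /etc/cdi/nvidia.yaml;
# without --output, nvidia-ctk prints the spec to stdout.
sudo nvidia-ctk cdi generate --output=/etc/cdi/nvidia.yaml

# Sanity-check that the GPU device names were registered.
nvidia-ctk cdi list

# Create the distrobox, passing the CDI device through podman.
distrobox create --name arch --image quay.io/toolbx/arch-toolbox:latest \
  --additional-flags "--device nvidia.com/gpu=all --security-opt=label=disable"

# Verify from inside the container.
distrobox enter arch -- nvidia-smi
```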

For docker (I didn’t test this, I can only assume it should work, someone correct me if it doesn’t):

  • add "hardware.nvidia-container-toolkit.enable = true" to your config file
  • add the nvidia-container-toolkit package to your config file and rebuild
  • run "sudo nvidia-ctk runtime configure --runtime=docker" and rebuild (I only say that because restarting services in NixOS was always hit and miss for me, but you can also try "sudo systemctl restart docker")
  • create a container for what you want with the proper flags: distrobox create --name [whatever you want, I did arch] --image [whatever you want, I used Quay] --additional-flags "--runtime=nvidia --gpus all"
  • try nvidia-smi in the container
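And the Docker variant as a transcript, with the same caveat as above (untested; image and name are just examples):

```shell
# Register the NVIDIA runtime in docker's daemon config,
# then restart the daemon so it takes effect.
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker

# Create the distrobox with docker's GPU flags.
distrobox create --name arch --image quay.io/toolbx/arch-toolbox:latest \
  --additional-flags "--runtime=nvidia --gpus all"

# Verify from inside the container.
distrobox enter arch -- nvidia-smi
```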