Looking for help setting up VirtualGL with TurboVNC

Well, after almost 2 months of working on this I have found a solution. So to anyone else out there reading this: this has been a rabbit hole. I even attempted to hire a developer twice to try and help solve this issue, but in both cases they gave up after a few days of trying themselves.

Here is the configuration.nix solution:

    environment.systemPackages = with pkgs; [
      turbovnc
      virtualgl
      pkgsi686Linux.virtualgl
    ];

    environment.sessionVariables = {
      LD_LIBRARY_PATH="/run/opengl-driver/lib/:/run/opengl-driver-32/lib:${pkgs.virtualglLib}/lib:${pkgs.pkgsi686Linux.virtualglLib}/lib";
    };

    boot.extraModprobeConfig = ''
      options amdgpu virtual_display=0000:01:00.0,1
    '';

    services.displayManager.autoLogin = {
      enable = true;
      user = "testuser";
    };

    hardware.opengl.enable = true;
    hardware.opengl.driSupport32Bit = true;

    networking.firewall.allowedTCPPorts = [ 5901 ];

Now to go over what this is doing, and to reference the sources that finally got me to the solution.

    environment.systemPackages = with pkgs; [
      turbovnc
      virtualgl
      pkgsi686Linux.virtualgl
    ];

Mostly self-explanatory, but notice the “pkgsi686Linux.virtualgl” package, as that is required in order to run any 32-bit programs with this VirtualGL setup. If you are only running programs of the same architecture as your system, you can leave it out.

    environment.sessionVariables = {
      LD_LIBRARY_PATH="/run/opengl-driver/lib/:/run/opengl-driver-32/lib:${pkgs.virtualglLib}/lib:${pkgs.pkgsi686Linux.virtualglLib}/lib";
    };

This part is required so that VirtualGL can find its libraries correctly. Without it, vglrun will fail immediately, complaining that it cannot find the required libraries via LD_LIBRARY_PATH.
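
As a quick sanity check after a reboot or re-login, you can print the variable and confirm the driver and VirtualGL paths are all present. This is just an illustrative check, not part of the setup:

    # Should list /run/opengl-driver/lib, /run/opengl-driver-32/lib and both virtualglLib paths
    echo "$LD_LIBRARY_PATH" | tr ':' '\n'

If any of those entries are missing, double-check the sessionVariables block above and make sure you have logged in again since rebuilding.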

    boot.extraModprobeConfig = ''
      options amdgpu virtual_display=0000:01:00.0,1
    '';

Now this was the biggest hurdle. Since I need this to work on a headless system, I can’t just plug a dummy plug into a computer I don’t have access to. I had attempted to go down a significantly more complex path of creating a virtual screen using Xorg configs, but the room for error was extremely high and I did not achieve any success blindly trying to get it to work; nearly all examples I searched through were for Nvidia GPUs, whose configs obviously wouldn’t work for me without significant changes, and I’m not familiar enough with Xorg configs to know how to adapt them. After weeks of searching I managed to find this blog post: Setting up GPU accelerated remote desktop with sound on headless Linux using NoMachine – mightynotes. It wasn’t as helpful as I hoped, but there is an extremely enlightening note halfway through. To quote it:

“If you’re using an AMD GPU with AMDGPU driver, consider yourself lucky. AMDGPU supports virtual display out of the box. Edit /etc/modprobe.d/amdgpu.conf and add the line options amdgpu virtual_display=xxxx:xx:xx.x,y to it. Where xxxx:xx:xx.x is the PCIe device ID (the thing you get from lspci) and y is the screen ID (starting from 1) attached to the card.”

That is the key piece to all of this: per the link they provided, the AMDGPU driver supports attaching virtual displays. To quote the documentation here:

"virtual_display (charp)

Set to enable virtual display feature. This feature provides a virtual display hardware on headless boards or in virtualized environments. It will be set like xxxx:xx:xx.x,x;xxxx:xx:xx.x,x. It’s the pci address of the device, plus the number of crtcs to expose. E.g., 0000:26:00.0,4 would enable 4 virtual crtcs on the pci device at 26:00.0. The default is NULL."

How do we get that PCIe ID? Run the following and look for a VGA device; that will give you part of the ID code.

    lspci -v

Then confirm the full address using:

    ls -la /sys/bus/pci/devices

From there you should be able to see the full PCIe address to attach the virtual screen to. Change the boot options from before to match the address you find here.
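
As a purely hypothetical example (the addresses below are placeholders; yours will differ), the mapping from the lspci output to the modprobe option looks like this:

    # Hypothetical addresses -- substitute what lspci shows on your machine
    $ lspci -v | grep VGA
    01:00.0 VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI] ...

    $ ls /sys/bus/pci/devices/ | grep 01:00
    0000:01:00.0

    # The full address (domain included) plus the number of virtual screens goes into the option:
    #   options amdgpu virtual_display=0000:01:00.0,1

With that in place, the amdgpu driver exposes a virtual CRTC on the headless card at boot.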

    services.displayManager.autoLogin = {
      enable = true;
      user = "testuser";
    };

This is also required, as you need an active Xorg session in order for VirtualGL to be able to utilize the GPU properly and leverage its background processes. If no user is signed in and the machine is still at the login screen, VirtualGL cannot utilize the GPU yet. So enabling auto-login for an existing user is enough to allow VirtualGL to work. This was inspired partly by How to run X + Sunshine + Steam on headless server - #6 by Crafteo, thank you @Crafteo, as they had arrived at a similar solution leveraging Sunshine/Moonlight for the headless display setup. Unfortunately my remote clients do not have hardware rendering, so I cannot use the Moonlight client for my remote users, whereas VNC clients work just fine. Pulling together the pieces from their work was extremely helpful, as that is where the auto-login realization was made.
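
A quick way to confirm the auto-login actually produced a graphical session (just a sanity check, not part of the configuration):

    # Should show an active session for the auto-login user (testuser in this example)
    loginctl list-sessions

If no session shows up for that user, VirtualGL will not be able to reach the GPU-backed Xorg display.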

    hardware.opengl.enable = true;
    hardware.opengl.driSupport32Bit = true;

This just enables OpenGL at all, and again, if you are not running 32-bit programs, the driSupport32Bit line can be left out.

    networking.firewall.allowedTCPPorts = [ 5901 ];

Open the network port for the remote client; port 5901 corresponds to VNC display :1, so change this to whatever you need.
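
Once the VNC server from the next step is running, a quick check from the remote side is possible. The hostname below is a placeholder:

    # From the remote machine: confirm TCP 5901 is reachable through the firewall
    nc -zv headless-host 5901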

Now finally, to actually run something after all this setup. Reboot first, so that the new environment variables are set and the boot parameters for the virtual screen take effect, then run:

    vncserver -fg -geometry 800x600 -securitytypes none

This is an insecure way of doing it, but for testing purposes it will let you confirm whether everything works. Then, from a remote machine, open the VNC session and a terminal. From there run:

    vglrun glxinfo | grep OpenGL
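
If everything is wired up correctly, the output should reference the AMD card rather than the software rasterizer. Roughly like this, though this is illustrative only and the exact strings depend on your GPU, Mesa, and driver versions:

    OpenGL vendor string: AMD
    OpenGL renderer string: AMD Radeon Graphics (radeonsi, ...)
    OpenGL core profile version string: 4.6 (Core Profile) Mesa ...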

You should see your actual GPU listed here, and not llvmpipe anymore. If you are trying to run a 32-bit program with Wine, you need to explicitly declare:

    WINEARCH=win32 vglrun wine $programpath

or else Wine will not find your GPU correctly.
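
One caveat worth noting: WINEARCH is only honored when a Wine prefix is first created, so if your default prefix is already 64-bit you may need to point WINEPREFIX at a fresh directory. A sketch of what that could look like, with the prefix path as a placeholder:

    # WINEARCH only applies at prefix creation, so use a fresh 32-bit prefix
    WINEPREFIX="$HOME/.wine32" WINEARCH=win32 vglrun wine "$programpath"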

To anyone out there trying to do this, I hope this was helpful to you, because there is nothing worse than digging for something extremely specific and finding a forum post from 10 years ago that says “I fixed it.” without any solution added.
