Looking for help setting up VirtualGL with TurboVNC

I’m trying to create virtual desktop workspaces for a small number of users, but the applications being used need OpenGL hardware acceleration to not be terribly slow. I haven’t had any luck getting this to work on a NixOS system, so any help would be appreciated. This is being done on an AMD GPU, so there shouldn’t be any Nvidia driver issues here.

The relevant parts, I think, in my configuration.nix are:

    environment.systemPackages = with pkgs; [
      turbovnc
      virtualgl
    ];

    hardware.opengl.enable = true;
    hardware.opengl.driSupport32Bit = true;

    services.xserver.enable = true;

    services.xserver.displayManager.lightdm.enable = true;
    services.xserver.desktopManager.xfce.enable = true;

    networking.firewall.allowedTCPPorts = [ 5901 ];

I’m testing by creating a workspace with:
vncserver -fg -geometry 800x600 -vgl -securitytypes none
output:

Desktop 'TurboVNC: hostname:1 (username)' started on display username:1

Starting applications specified in /nix/store/ag2k6m8dr633i0dhxl02810j3pqizzqx-turbovnc-3.1/bin/xstartup.turbovnc
(Enabling VirtualGL)
Log file is /home/username/.vnc/username:1.log

I can successfully log into it to test, but there’s no actual hardware acceleration; when I run glxinfo I get this:

vglrun glxinfo
name of display: :1
Authorization required, but no authorization protocol specified

[VGL] ERROR: Could not open display :0.

Trying to run an OpenGL app also has it running with llvmpipe instead of my GPU.
I can’t find any settings in the NixOS options search for turbovnc or virtualgl, so I can’t tell if it’s even enabled. Is anyone able to test this, or able to point me toward some documentation on what else I’m missing? Maybe an option named something special?

Thank you, and please let me know if any extra logs are needed, or what else I can provide to help.

EDIT
If I just run glxinfo | grep OpenGL (without the vglrun part) from within the VNC session, I get this output:

glxinfo | grep OpenGL
OpenGL vendor string: Mesa
OpenGL renderer string: llvmpipe (LLVM 17.0.6, 256 bits)
OpenGL core profile version string: 4.5 (Core Profile) Mesa 24.0.7
OpenGL core profile shading language version string: 4.50
OpenGL core profile context flags: (none)
OpenGL core profile profile mask: core profile
OpenGL core profile extensions:
OpenGL version string: 4.5 (Compatibility Profile) Mesa 24.0.7
OpenGL shading language version string: 4.50
OpenGL context flags: (none)
OpenGL profile mask: compatibility profile
OpenGL extensions:
OpenGL ES profile version string: OpenGL ES 3.2 Mesa 24.0.7
OpenGL ES profile shading language version string: OpenGL ES GLSL ES 3.20
OpenGL ES profile extensions

Here it shows llvmpipe as the renderer, but how can I make it use the hardware renderer?

EDIT 2
In case anyone finds this thread in the future, I’ve gotten this to partially work.

    environment.systemPackages = with pkgs; [
      turbovnc
      virtualgl
    ];

    hardware.opengl.enable = true;
    hardware.opengl.driSupport32Bit = true;

    services.xserver.enable = true;

    services.xserver.displayManager.lightdm.enable = true;
    services.xserver.desktopManager.xfce.enable = true;

    environment.sessionVariables = {
      LD_LIBRARY_PATH="/run/opengl-driver/lib/:${pkgs.virtualglLib}/lib";
    };

    networking.firewall.allowedTCPPorts = [ 5901 ];

With this addition to configuration.nix, and after running this command on a machine with a logged-in user in an X11 session:
vncserver -fg -geometry 800x600 -vgl -securitytypes none

The VNC session will have VirtualGL active and working when I connect from another machine. However, this still doesn’t work on a headless server, which is my original goal, but it is progress. If anyone knows more about these packages, I’d love some guidance on getting this working on a headless server.

EDIT 3
Well, after almost 2 months of working on this, I have found a solution. So to anyone else out there reading this: this has been a rabbit hole. I even attempted to hire a developer twice to help solve this issue, but in both cases they gave up after a few days of trying.

Here is the configuration.nix solution:

    environment.systemPackages = with pkgs; [
      turbovnc
      virtualgl
      pkgsi686Linux.virtualgl
    ];

    environment.sessionVariables = {
      LD_LIBRARY_PATH="/run/opengl-driver/lib/:/run/opengl-driver-32/lib:${pkgs.virtualglLib}/lib:${pkgs.pkgsi686Linux.virtualglLib}/lib";
    };

    boot.extraModprobeConfig = ''
      options amdgpu virtual_display=0000:01:00.0,1
    '';

    services.displayManager.autoLogin = {
      enable = true;
      user = "testuser";
    };

    hardware.opengl.enable = true;
    hardware.opengl.driSupport32Bit = true;

    networking.firewall.allowedTCPPorts = [ 5901 ];

Now to go over what this is doing, and to reference the sources that finally got me to the solution.

    environment.systemPackages = with pkgs; [
      turbovnc
      virtualgl
      pkgsi686Linux.virtualgl
    ];

Mostly self-explanatory, but notice the “pkgsi686Linux.virtualgl” package; it was required in order to run any 32-bit programs under this VirtualGL setup. If you are only running programs of the same architecture as your system, you can leave it out.
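As an aside, VirtualGL ships a small demo that prints the active OpenGL renderer on startup, which makes for a quick pass/fail test (assuming the nixpkgs package exposes the glxspheres64 binary):

vglrun glxspheres64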

    environment.sessionVariables = {
      LD_LIBRARY_PATH="/run/opengl-driver/lib/:/run/opengl-driver-32/lib:${pkgs.virtualglLib}/lib:${pkgs.pkgsi686Linux.virtualglLib}/lib";
    };

This part is required so that VirtualGL can find its libraries correctly. Without it, vglrun will fail immediately, complaining that it cannot resolve its libraries via LD_LIBRARY_PATH.
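A quick sanity check is to confirm the variable is actually set in the session you launch the VNC server from:

echo $LD_LIBRARY_PATH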

    boot.extraModprobeConfig = ''
      options amdgpu virtual_display=0000:01:00.0,1
    '';

Now this was the biggest hurdle. Since I need this to work on a headless system, I can’t just plug a dummy plug into a computer I don’t have access to. I had attempted to go down a significantly more complex path of creating a virtual screen using Xorg configs, but the room for error was extremely high and I did not achieve any success blindly trying to get it to work. Nearly all the examples I searched through were for Nvidia GPUs, whose configs obviously wouldn’t work for me without significant changes, and I’m not at all familiar enough with Xorg configs to know how to adapt them. After weeks of searching I managed to find this blog post: Setting up GPU accelerated remote desktop with sound on headless Linux using NoMachine – mightynotes. It wasn’t as helpful as I hoped, but there is an extremely enlightening note halfway through. To quote it:

“If you’re using an AMD GPU with AMDGPU driver, consider yourself lucky. AMDGPU supports virtual display out of the box. Edit /etc/modprobe.d/amdgpu.conf and add the line options amdgpu virtual_display=xxxx:xx:xx.x,y to it. Where xxxx:xx:xx.x is the PCIe device ID (the thing you get from lspci) and y is the screen ID (starting from 1) attached to the card.”

There is the key piece to all of this: per the link they provided, the AMDGPU driver supports attaching virtual displays. To quote the documentation:

"virtual_display (charp)

Set to enable virtual display feature. This feature provides a virtual display hardware on headless boards or in virtualized environments. It will be set like xxxx:xx:xx.x,x;xxxx:xx:xx.x,x. It’s the pci address of the device, plus the number of crtcs to expose. E.g., 0000:26:00.0,4 would enable 4 virtual crtcs on the pci device at 26:00.0. The default is NULL."
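For example, following the format from that quote, exposing two virtual CRTCs on one device would look like this in NixOS (the PCI address here is just my device’s; finding yours is covered next):

    boot.extraModprobeConfig = ''
      options amdgpu virtual_display=0000:01:00.0,2
    '';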

How do we get that PCIe ID? Run this and look for a VGA device; that will give you part of the ID:

lspci -v

Confirm the full ID using

ls -la /sys/bus/pci/devices

From there you should be able to see the full PCIe ID to attach this virtual screen to. Change the boot options from before to match the ID you find here.
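Two optional checks that may help here, assuming a pciutils lspci with the -D flag and the usual sysfs layout for module parameters:

    # Print VGA-class devices with the full domain-qualified address,
    # which is the exact format the amdgpu option expects (e.g. 0000:01:00.0)
    lspci -D | grep -i vga

    # After rebooting with the new option, confirm it was applied
    cat /sys/module/amdgpu/parameters/virtual_display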

    services.displayManager.autoLogin = {
      enable = true;
      user = "testuser";
    };

This is also required, as you need an active Xorg session for VirtualGL to be able to utilize the GPU properly and leverage its background processes. If no user is signed in and the machine is still sitting at the login screen, VirtualGL cannot use the GPU yet, so enabling auto-login for any existing user is enough to allow VirtualGL to work. This was inspired partly by How to run X + Sunshine + Steam on headless server - #6 by Crafteo; thank you @Crafteo. They had reached a similar solution leveraging Sunshine/Moonlight for the headless display setup. Unfortunately my remote clients do not have hardware rendering, so I cannot use the Moonlight client for my remote users, whereas VNC clients work just fine. Pulling together the pieces from their work was extremely helpful, as that is where the auto-login realization came from.
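As a quick check that auto-login actually produced a live session before starting the VNC server, standard systemd tooling should list a graphical session for that user:

loginctl list-sessions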

    hardware.opengl.enable = true;
    hardware.opengl.driSupport32Bit = true;

This just enables OpenGL at all; again, if you are not running 32-bit programs, the driSupport32Bit line can be left out.

    networking.firewall.allowedTCPPorts = [ 5901 ];

Open the network port for the remote client; change this to whatever you need.
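Note that TurboVNC display :N listens on TCP port 5900 + N, so display :1 means port 5901. If you run sessions for several users you could open a range instead, or skip opening the port entirely and tunnel over SSH; both sketches below are assumptions to adapt, not part of my tested setup:

    networking.firewall.allowedTCPPortRanges = [
      { from = 5901; to = 5910; } # VNC displays :1 through :10
    ];

    # Or, on the client, tunnel the VNC connection over SSH
    # and point the viewer at localhost:1
    ssh -L 5901:localhost:5901 username@server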

Now, finally, to actually run something after all this setup. Reboot first, so that the new environment variables are set correctly and the boot parameter for the virtual screen is applied, then run

vncserver -fg -geometry 800x600 -securitytypes none

This is an insecure way of doing it, but for testing purposes it will let you confirm that everything works. Then, from a remote machine, open the VNC session and a terminal. From there run

vglrun glxinfo | grep OpenGL

You should see your actual GPU listed here, not llvmpipe anymore. If you are trying to run a 32-bit program with Wine, you need to explicitly declare

WINEARCH=win32 vglrun wine $programpath

or else Wine will not find your GPU correctly.
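One caveat, from general Wine behavior rather than anything specific to this setup: WINEARCH is only honored when a prefix is first created, so if your default prefix is already 64-bit you may need a dedicated 32-bit prefix, e.g.:

WINEPREFIX=$HOME/.wine32 WINEARCH=win32 vglrun wine $programpath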

To anyone out there trying to do this, I hope this was helpful to you, as there is nothing worse than digging for something extremely specific and finding a forum post from 10 years ago that says “I fixed it.” without any solution added.
