Get qemu guest integration when running nixos-rebuild build-vm

For some reason I can’t manage to configure QEMU to share the clipboard when I run a system in a VM with

$ nixos-rebuild build-vm --flake .#demo

Even worse, I get an error (I made a minimal example in this gist):

$ nixos-rebuild build-vm --flake 'git+https://gist.github.com/tobiasBora/c9ec77de0c3c50d433707d3323720cd6?ref=main#demo' --refresh
building the system configuration...
error: The option `virtualisation.qemu' does not exist. Definition values:
       - In `/nix/store/rz4aglicqrjcfzq2a78s3fn8fp2gwh9w-source/flake.nix':
           {
             guestAgent = {
               enable = true;
             };
           }

My flake basically only contains:

                # Trying to get copy/paste working
                virtualisation.qemu.guestAgent.enable = true;
                services.qemuGuest.enable = true;
                services.spice-vdagentd.enable = true;
                # GUI
                services.xserver.enable = true;
                services.xserver.libinput = {
                  enable = true;
                };
                services.xserver.displayManager.sddm.enable = true;
                services.xserver.desktopManager.plasma5.enable = true;

                system.stateVersion = "22.11"; # Did you read the comment?

Ok, I finally found the answer… It is a bit more complicated than expected: to get the virtualisation.qemu options I need to import the qemu-vm module:

                imports = [
                  # We need to import that to make it work.
                  "${nixpkgs}/nixos/modules/virtualisation/qemu-vm.nix"
                ];

and then we need to configure QEMU to start a SPICE server. With SPICE, QEMU is no longer in charge of displaying the screen; that is delegated to a client on the host. QEMU is just a gateway between vdagent in the guest and the SPICE client. Note that QEMU will open a socket (and will appear frozen, since it is just a server) and the client will connect to it. To start the client automatically, see the dirty trick below. You can use different clients (see the Arch Linux wiki page).

So the configuration looks like

                services.spice-vdagentd.enable = true;
                virtualisation.qemu.options = [
                  "-vga qxl"
                  "-device virtio-serial-pci"
                  "-spice port=5930,disable-ticketing=on"
                  "-device virtserialport,chardev=spicechannel0,name=com.redhat.spice.0"
                  "-chardev spicevmc,id=spicechannel0,name=vdagent"
                ];

After you start the VM (QEMU will appear to hang), run remote-viewer spice://127.0.0.1:5930 and it will start displaying the VM.
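
Concretely, assuming the VM script above, the two steps look like this (the port must match the one in the -spice option):

$ ./result/bin/run-nixos-vm &
$ remote-viewer spice://127.0.0.1:5930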

I also created a script to launch the viewer automatically (QEMU also has -display spice-app, but it is harder to pack into a derivation since it relies on MIME associations), which requires a new derivation. The whole flake is now:

{
  description = "Configuration";

  # To easily generate a derivation per architecture
  inputs.flake-utils.url = "github:numtide/flake-utils";
  
  outputs = { self, nixpkgs, flake-utils }: flake-utils.lib.eachDefaultSystem (system:
    let
      pkgs = nixpkgs.legacyPackages.${system};
    in
      {

        packages.nixosConfigurations.demo = nixpkgs.lib.nixosSystem {
          system = system; # flake needs to know the architecture of the OS
          
          modules = [
            (
              {config, lib, pkgs, ...}:
              {
                imports = [
                  # We need to import that to make it work.
                  "${nixpkgs}/nixos/modules/virtualisation/qemu-vm.nix"
                ];
                # Trying to get copy/paste working
                # virtualisation.qemu.guestAgent.enable = true;
                # services.qemuGuest.enable = true;
                ## With spice qemu is not anymore in charge of displaying the screen but it is delegated
                ## to a client on the host. Qemu is just a gateway between vdagent in the host and the spice client.
                ## Note that qemu will open a socket (and will appear frozen since it is
                ## just a server) and the client will connect to it.
                ## To start it automatically see the dirty trick below
                ## You can use different clients (see arch linux page).
                services.spice-vdagentd.enable = true;
                virtualisation.qemu.options = [
                  "-vga qxl"
                  "-device virtio-serial-pci"
                  "-spice port=5930,disable-ticketing=on"
                  "-device virtserialport,chardev=spicechannel0,name=com.redhat.spice.0"
                  "-chardev spicevmc,id=spicechannel0,name=vdagent"
                ];
                
                
                # GUI
                services.xserver = {
                  enable = true;
                  desktopManager.xfce = {
                    enable = true;
                    enableScreensaver = false;
                  };
                  displayManager = {
                    defaultSession = "xfce";
                    autoLogin = {
                      enable = true;
                      user = "me";
                    };
                  };
                };

                users.users.me = {
                  isNormalUser = true;
                # In theory one should use hashedPassword, but who cares, the password is public anyway
                  password = "me";
                  extraGroups = [ "wheel"  ]; # Enable ‘sudo’ for the user.
                };

                system.stateVersion = "22.11"; # Did you read the comment?
              }
            )
          ];
        };

        ## Build it with "nix build .#demo-with-automatic-vm-integration" and run the script in `result/bin/run-nixos-vm`.
        packages.demo-with-automatic-vm-integration = pkgs.writeShellApplication {
          name = "run-nixos-vm";
          runtimeInputs = [ pkgs.virt-viewer ];
          text = ''
            ${self.packages.${system}.nixosConfigurations.demo.config.system.build.vm}/bin/run-nixos-vm & PID_QEMU="$!"
            sleep 1 # give QEMU time to open the SPICE socket; some viewers have a wait option
            remote-viewer spice://127.0.0.1:5930
            kill "$PID_QEMU"
          '';
        };

      });
}

and the above gist contains the latest version. To build the VM with automatic startup of the client:

$ nix build .#demo-with-automatic-vm-integration
$ ./result/bin/run-nixos-vm 

I would love to see an option for this directly in nixpkgs; this is not really an “easy” approach…

Or with the gist

$ nix build 'git+https://gist.github.com/tobiasBora/c9ec77de0c3c50d433707d3323720cd6?ref=main#demo-with-automatic-vm-integration' --refresh
$ ./result/bin/run-nixos-vm  

Yes, this module is not in the default includes because it has no enable option and is incompatible with non-VM configurations.
We could probably define a boot.isVirtualMachine, similar to boot.isContainer, and gate the module behind that.
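
Something like this hypothetical declaration, mirroring how boot.isContainer is declared (a sketch only; boot.isVirtualMachine does not currently exist):

{ lib, ... }:
{
  options.boot.isVirtualMachine = lib.mkOption {
    type = lib.types.bool;
    default = false;
    description = "Whether this configuration is built to run as a QEMU VM.";
  };
  # qemu-vm.nix could then be included by default, with all of its
  # config gated behind `config.boot.isVirtualMachine`.
}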

These options are somewhat arbitrary, so I don’t think it’s possible to cover most use cases with a single option. For example, I don’t use QXL, but I enable the WebDAV file sharing.

The problem is that QEMU flags are insanely low-level and not meant to be typed interactively. In this module I basically had to write a friendlier high-level interface (sharedDirectories, forwardPorts, resolution, …) for QEMU; it is nice to use, but the complexity is already seriously high (over 1000 lines of code, with 100 just for starting QEMU).
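
For illustration, here is a sketch of how those high-level options are used (the values are made up; the option shapes follow the qemu-vm.nix documentation):

virtualisation.sharedDirectories.my-share = {
  source = "/home/me/shared";  # host directory
  target = "/mnt/shared";      # mount point in the guest
};
virtualisation.forwardPorts = [
  # expose guest port 22 as host port 2222 (ssh -p 2222 me@localhost)
  { from = "host"; host.port = 2222; guest.port = 22; }
];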

What would you suggest?

Yes, this module is not in the default includes because it has no enable option and is incompatible with non-VM configurations.

Any idea why I can’t load it using imports = [ "${inputs.nixpkgs}/nixos/modules/virtualisation/qemu-vm.nix" ]; in a module instead? It fails with infinite recursion. And in the modules argument of nixpkgs.lib.nixosSystem I cannot really do any sort of conditional import like modules = [ … ] ++ nixpkgs.lib.optional (config ? virtualisation.qemu) "${nixpkgs}/nixos/modules/virtualisation/qemu-vm.nix". I would love to know how to enable this conditionally, so that I don’t need to write two derivations, one for the QEMU VM and one for the bare system.
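
For what it’s worth, the infinite recursion is expected: the imports determine which options exist, so they cannot depend on config, which is the result of evaluating those very modules. A sketch of the usual workaround, choosing modules with a plain boolean at the call site (isVm and ./common.nix are hypothetical names):

# Decide with an ordinary value, not with `config`:
nixpkgs.lib.nixosSystem {
  inherit system;
  modules = [ ./common.nix ]
    ++ nixpkgs.lib.optional isVm "${nixpkgs}/nixos/modules/virtualisation/qemu-vm.nix";
}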

We could probably define a boot.isVirtualMachine, similar to boot.isContainer, and gate the module behind that.

Yes, it would be great to have a way to check whether we are running in a VM or not. For now I test whether I’m building a VM using

boot.initrd.kernelModules = lib.mkIf (!(config ? virtualisation.qemu)) [ "vc4" "bcm2835_dma" "i2c_bcm2835" ];

but to be honest I’m not even sure I understand why it works, since this option seems to be set by the module nixos/modules/virtualisation/qemu-vm.nix, which is not loaded by default even when building a VM (things seem to be a bit different for NixOS tests, which do load this module by default). Or is it because nixos/modules/virtualisation/qemu-vm.nix is not imported as a module but is still called (also here) from nixos-rebuild switch, which is enough to set this variable? The really weird thing is that I am allowed to read the variable but not to set it… I guess that’s what happens when you call a module file without importing it explicitly… In any case there is a bit of dark magic here that I don’t understand.

So for use cases where I want to compile slightly different configs for the system and for the VM built with build-vm (for instance, to enable the SPICE agents), how can I do that? Ideally I would prefer to avoid defining multiple configurations: I would like to change only the flag (switch vs. build-vm) and not the name of the derivation, for simplicity (maintaining thousands of different names is not very practical).

These options are somewhat arbitrary, so I don’t think it’s possible to cover most use cases with a single option. For example, I don’t use QXL, but I enable the WebDAV file sharing.

Well, they are the options given by the documentation (I just want to change them to pick an available Unix socket at run time). If someone prefers their own options, we should provide an interface for that as well, but it would be good to have decent default values (I spent a few hours understanding what I was supposed to do to make it work). Actually, if the options potentially differ between users, an option may be to add parameters or an environment variable to the run-nixos-vm script that overrides the default SPICE arguments. In any case it would be better than what we have so far.

What would you suggest?

Hum, I’m not an expert, but I would say that we could keep "${nixpkgs}/nixos/modules/virtualisation/qemu-vm.nix" for backward compatibility and create a new module (loaded by default) that you can enable/disable with an option. Then the easiest would be to change build.vm so that instead of calling toplevel it uses a different derivation that explicitly says we are in a VM, so that we can easily enable QEMU modules conditionally. For the SPICE options, I would do something as explained above, where the user gets basic default options that can be changed even after the build via parameters or environment variables. To keep both reproducibility and customization, by default it can use the above options, and if the user runs result/bin/run-nixos-vm --custom-qemu-integration then it would look into variables like NIX_RUN_VM_OPTIONS and NIX_QEMU_VIEWER. Pseudo-code could look like:

# Baked-in defaults:
__NIX_RUN_VM_OPTIONS="-vga qxl -spice port=…"
__NIX_QEMU_VIEWER="remote-viewer spice://127.0.0.1:5930"

# Run-time overrides, only honoured with --custom-qemu-integration:
if custom-qemu-integration:
  if NIX_RUN_VM_OPTIONS is set:  __NIX_RUN_VM_OPTIONS="$NIX_RUN_VM_OPTIONS"
  if NIX_QEMU_VIEWER is set:     __NIX_QEMU_VIEWER="$NIX_QEMU_VIEWER"

qemu … all default options $__NIX_RUN_VM_OPTIONS &
$__NIX_QEMU_VIEWER

Did you figure this out?

I ended up writing a function that outputs the different configurations: https://github.com/cwi-foosball/foosball/blob/3f68fa2da74b07e756f86e689216dd627d66e065/lib/mkNixosConfigAndIntegratedVm.nix#L2-L79; the function takes as arguments the different modules to load in each case.
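
The helper is roughly shaped like this (a simplified sketch of the linked function; the names are mine, not from the repository):

# Build a bare system and a VM variant from the same shared modules.
mkSystemAndVm = { system, modules, vmModules ? [ ] }: {
  system = nixpkgs.lib.nixosSystem { inherit system modules; };
  vm = nixpkgs.lib.nixosSystem {
    inherit system;
    modules = modules ++ vmModules;  # e.g. qemu-vm.nix plus the SPICE options above
  };
};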

Try

{ modulesPath, pkgs, ... }:

{
  imports = [ (modulesPath + "/virtualisation/qemu-vm.nix") ];
}
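
(This works because modulesPath is a standard argument passed to every NixOS module; it points at the nixos/modules directory of the nixpkgs in use, so the import does not need to reference the flake input at all.)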