Libvirt: installing qemu hook

I have a libvirt qemu hook I wish to install. On Arch Linux, this file lives at /etc/libvirt/hooks/qemu. There doesn’t seem to be an option relating to hooks in the libvirt Nix module, so I figure I must somehow inject the file into the derivation. I’m unsure how I would do that, and would appreciate some help.

Cheers.

This seems to be the case; I tried the following:

environment.etc."libvirt/hooks/qemu" = {
  source = pkgs.writeText "qemu" ''
    #!/usr/bin/env bash

    ${pkgs.libnotify}/bin/notify-send "It works!"
  '';

  mode = "0755"; # make it executable and world (?) readable
};

I get a notification when I run the script directly, but not when I start a QEMU VM through libvirt (virsh or virt-manager).

Maybe you should add an option to the module to provide that. You could also increase the verbosity and see where libvirt looks for the hooks; it might be a path in the store.
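For reference, a minimal sketch of raising libvirtd’s log level through the NixOS module. This assumes virtualisation.libvirtd.extraConfig is available on your channel (option names have moved around between releases); log_level and log_outputs are ordinary libvirtd.conf settings.

# Sketch: debug-level logging for libvirtd, to see where it searches for hooks.
# Assumes virtualisation.libvirtd.extraConfig exists on your NixOS version.
virtualisation.libvirtd.extraConfig = ''
  log_level = 1
  log_outputs = "1:file:/var/log/libvirt/libvirtd.log"
'';

With debug logging enabled, the hook lookup path should show up in /var/log/libvirt/libvirtd.log (or in journalctl -u libvirtd).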

Sounds like a plan. I’ll probably look into this during the weekend.

According to the libvirt documentation, hooks are located in the directory $SYSCONFDIR/libvirt/hooks/. On NixOS, this is /var/lib/libvirt/hooks/.

Note that while hooks can be installed in this directory, there is no shell in PATH when libvirt runs them. I had to supply #!/run/current-system/sw/bin/bash at the top of my script.
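For anyone following along, a sketch of generating such a hook from Nix with an absolute shebang; the binding name is arbitrary and the notify-send body just reuses the example from earlier in the thread.

# Sketch: a hook script with an absolute shebang, since libvirt runs hooks
# with an essentially empty PATH. pkgs.writeScript keeps the shebang as-is.
qemuHook = pkgs.writeScript "qemu-hook" ''
  #!/run/current-system/sw/bin/bash
  # libvirt passes: $1 = guest name, $2 = operation
  # (prepare/start/started/stopped/release), $3 = sub-operation.
  ${pkgs.libnotify}/bin/notify-send "Hook ran for guest $1 ($2)"
'';

The resulting store path can then be copied into /var/lib/libvirt/hooks/qemu.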

I’m somewhat confused.

To remedy the lack of bash in the systemd unit’s PATH, I can add systemd.services.libvirtd.path = [ pkgs.bash ]; to configuration.nix, but if I add it to a custom module, which is both imported and enabled, it does not work. The configuration builds, but bash is not in the PATH, judging by the output of systemctl cat libvirtd. Am I somehow applying it incorrectly in the module below?

{ config, pkgs, lib, ... }:

with lib;

let
  cfg = config.pciPassthrough;
  edge = import (fetchTarball "https://github.com/NixOS/nixpkgs/archive/master.tar.gz") { };

  cpuset = pkgs.python2Packages.buildPythonApplication rec {
    name = "cpuset-${version}";
    version = "1.5.7";

    src = pkgs.fetchurl {
      url = "https://github.com/lpechacek/cpuset/archive/v1.5.7.tar.gz";
      sha256 = "32334e164415ed5aec83c5ffc3dc01c418406eb02d96d881fdfd495587ff0c01";
    };
  };
in
{
  ###### libvirt qemu hook
  # options.virtualisation.libvirtd.qemuHook = mkOption {
  #   type = types.lines;
  #   default = ''
  #     touch /tmp/qemu-hook-works
  #   '';
  #   description = ''
  #     Contents written to the QEmu hook file.
  #   '';
  # };

  ###### interface
  options.pciPassthrough = {
    enable = mkEnableOption "PCI Passthrough";

    cpuType = mkOption {
      description = "One of `intel` or `amd`";
      default = "intel";
      type = types.str;
    };

    pciIDs = mkOption {
      description = "Comma-separated list of PCI IDs to pass-through";
      type = types.str;
    };

    libvirtUsers = mkOption {
      description = "Extra users to add to libvirtd (root is already included)";
      type = types.listOf types.str;
      default = [];
    };
  };

  ###### implementation
  config = (mkIf cfg.enable {

    boot.kernelParams = [
      "pcie_acs_override=downstream"
      "${cfg.cpuType}_iommu=on"
      "kvm.ignore_msrs=1"
    ];

    # These modules are required for PCI passthrough, and must come before early modesetting stuff
    boot.kernelModules = [ "vfio" "vfio_iommu_type1" "vfio_pci" "vfio_virqfd" ];

    boot.extraModprobeConfig = "options vfio-pci ids=${cfg.pciIDs}";

    environment.systemPackages = with pkgs; [
      virtmanager
      qemu
      OVMF
      pciutils
    ];

    virtualisation.libvirtd.enable = true;
    # virtualisation.libvirtd.qemuPackage = pkgs.qemu_kvm;
    virtualisation.libvirtd.qemuPackage = ((edge.qemu.overrideAttrs (old: {
      # Some yet-to-be-merged audio fixes. Highly recommended.
      # src = pkgs.fetchurl {
      #   url = "https://github.com/qemu/qemu/archive/v2.12.0.tar.gz";
      #   sha256 = "c724d5744fc4e4d5fdc0d8cd1860d43280850a84d2a450a18515c14efe6c1a99";
      # };

      src = ./qemu-audio-patched.tar.gz;
      patches = [ ./no-etc-install.patch ];

      # src = pkgs.fetchFromGitHub {
      #   owner = "spheenik";
      #   repo = "qemu";
      #   rev = "5670eb16efcee3e5b214227810a3e42ca06e28d3";
      #   sha256 = "1cqixbfsbsif4i22bf3lvcmykg08ksrpy25a7cn4c8jmc8hyxwx4";
      # };

    })).override {
      hostCpuOnly = true;
    });

    users.groups.libvirtd.members = [ "root" ] ++ cfg.libvirtUsers;

    # Add bash and other utilities to PATH, so that the libvirt hooks can be executed.
    systemd.services.libvirtd.path = [ pkgs.bash cpuset ];

    virtualisation.libvirtd.qemuVerbatimConfig = ''
      user = "tmplt"
      nvram = [
      "${pkgs.OVMF}/FV/OVMF.fd:${pkgs.OVMF}/FV/OVMF_VARS.fd"
      ]
    '';
  });

}

Until I (or someone else) properly adds this to the libvirt module, here’s how I’ve temporarily fixed it:

systemd.services.libvirtd.preStart = ''
    mkdir -p /var/lib/libvirt/hooks
    chmod 755 /var/lib/libvirt/hooks

    # Copy hook files
    cp -f ${qemuHookFile} /var/lib/libvirt/hooks/qemu

    # Make them executable
    chmod +x /var/lib/libvirt/hooks/qemu
'';
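In case it helps, here is a hedged sketch of how the qemuHookFile referenced above might be defined; the hook body just reuses the touch example from earlier, and writeShellScript (available in reasonably recent nixpkgs) bakes the store path of bash into the shebang, so the hook does not depend on a shell being in libvirtd’s PATH.

{ pkgs, ... }:

let
  # Hypothetical definition of the qemuHookFile used in preStart below.
  qemuHookFile = pkgs.writeShellScript "qemu-hook" ''
    # $1 = guest name, $2 = operation (prepare/start/started/stopped/release)
    touch /tmp/qemu-hook-works
  '';
in
{
  systemd.services.libvirtd.preStart = ''
    mkdir -p /var/lib/libvirt/hooks
    chmod 755 /var/lib/libvirt/hooks
    cp -f ${qemuHookFile} /var/lib/libvirt/hooks/qemu
    chmod +x /var/lib/libvirt/hooks/qemu
  '';
}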

All programs that the hooks use (shell included) must be added to the systemd unit’s PATH, i.e. appended to systemd.services.libvirtd.path. If the list of packages grows long, you can bundle them all into a single pkgs.buildEnv.
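As a sketch, assuming the hooks need bash plus a few utilities (the package list here is purely illustrative), that could look like:

# Sketch: bundle everything the hooks call into one buildEnv and put that
# single derivation on libvirtd's PATH.
systemd.services.libvirtd.path = [
  (pkgs.buildEnv {
    name = "libvirt-hook-env";
    paths = with pkgs; [ bash coreutils libnotify ]; # whatever the hooks use
  })
];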

Hooks are run as root, by default.

Info on hooks at: libvirt: Hooks for specific system management. I use them to dynamically shield a set of CPU cores for exclusive use by a GPU Passthrough VM.
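A hedged sketch of that kind of hook, following the qemuHookFile pattern above; the guest name "win10" and the core range 4-7 are placeholders, and cset is found on PATH because cpuset was added to systemd.services.libvirtd.path earlier.

# Sketch: shield cores 4-7 while the guest "win10" runs; guest name and
# core range are examples only.
qemuHookFile = pkgs.writeShellScript "qemu-hook" ''
  guest="$1"
  op="$2"

  if [ "$guest" = "win10" ]; then
    case "$op" in
      prepare)
        # Move host tasks (and kernel threads) off cores 4-7 before the guest starts.
        cset shield --cpu 4-7 --kthread on
        ;;
      release)
        # Hand the cores back once the guest has been torn down.
        cset shield --reset
        ;;
    esac
  fi
'';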


Seems like it has not been done yet. Thanks for the workaround!

EDIT: I opened a PR based on the proposed solution: libvirtd: added QEMU hook option by anpin · Pull Request #247679 · NixOS/nixpkgs · GitHub