I am trying to set up legacy drivers for my GeForce GTX 750 Ti (yes, I know it’s ancient ;_;). The legacy drivers included in nixpkgs don’t seem to support this model, so I was wondering how I might package the drivers I need myself. I am new to overlays, kernel modules, and anything like that, so this is somewhat beyond my current capability, but I am open to a challenge. I tried to find documentation on this but came up empty.
I found a driver on NVIDIA’s website which is allegedly compatible with my model, but this appears to be an installation script. I suppose my initial question would be, how would I normally package a graphics driver with nix, and how does this installation script fit into / compare with this process?
I am vaguely familiar with the following concepts:
Linux kernel modules are things that exist to allow drivers to talk to the kernel. These need to be compiled into the kernel? (My knowledge of ABIs, dynamically linked libraries, etc. is surface level at best)
Overlays are used to add custom packages to nixpkgs (I have packaged the GNU hello, but not yet added a custom package to an overlay)
Nix flakes can be used to pass version locked build inputs via a conventional interface with convenience commands.
I believe the beta drivers have already dropped support for this card; whenever the 590 drivers become the new stable you’ll have to switch to the as-yet-nonexistent legacy580 driver, but for now you should be able to use the nvidia module out of the box.
I’d recommend this config:
# nvidia.nix
#
# *Specifically* for a GTX 750 Ti in a system *without* an iGPU.
{
  services.xserver.videoDrivers = [ "nvidia" ];
  hardware.nvidia = {
    # the open kernel module only supports Turing and newer;
    # a Maxwell card like the 750 Ti needs the proprietary one
    open = false;
    powerManagement.enable = true;
  };
}
If you do have an iGPU, things look a bit different. Don’t read the wiki, it’s worthless and my attempts to fix the nvidia page are constantly thwarted.
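For the record, an iGPU setup usually means adding PRIME configuration along these lines. Treat this as a sketch, not a drop-in config: the bus IDs below are placeholders, and you’d look up your own with `lspci | grep -E 'VGA|3D'`.

```nix
# Hypothetical PRIME offload setup for a machine with both an iGPU
# and the nvidia card. Bus IDs are placeholders; substitute your own.
{
  services.xserver.videoDrivers = [ "nvidia" ];
  hardware.nvidia = {
    prime = {
      offload.enable = true;
      intelBusId = "PCI:0:2:0";   # placeholder
      nvidiaBusId = "PCI:1:0:0";  # placeholder
    };
  };
}
```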
The “normal” way would be to submit a PR against the official Linux kernel. You’re asking how to do kernel development, which is not exactly trivial.
Nvidia want special treatment, in any case, and develop their kernel module out-of-tree. This is not “normal”.
The installation script should practically never be used on NixOS; just ignore its existence. Even NVIDIA’s own documentation says as much.
It’s pretty clear that your knowledge is surface-level, so let me go through your points:
Linux kernel modules are effectively chunks of kernel code that can either be compiled into the kernel proper, or loaded at a later stage as an object file. The kernel docs can tell you all you need. This has nothing to do with dynamic linking; as with everything kernel-related, these modules must be statically compiled - dynamic linking doesn’t work if there is no kernel underneath to do it.
This isn’t why kernel modules are a topic in this context, though; they’re mostly intended as a way to optimize boot times.
However, since the Linux kernel is GPLv2 licensed, if you want to write a driver for your hardware and “properly” add it to the kernel, you have to submit it upstream and open-source your code. Many hardware vendors don’t like this, and dodge it by instead writing a closed-source kernel module, releasing only the object file (usually with a thin open-source glue layer) and forcing you to load it. Whether loading proprietary modules like this is actually compatible with the GPL is a long-standing legal gray area, and hardware vendors - including nvidia - have arguably been on the wrong side of it for years.
Historically, this is what nvidia did. They’ve recently started releasing their module as an open-source module (this is what the `open` option is about), but the module still lives outside of the kernel source tree. There are also some other, FOSS, out-of-tree modules (such as zfs).
In either case, modules from third-party trees effectively need to be pulled into the kernel build and compiled against the exact kernel they’ll be loaded into.
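On NixOS, this “build against the matching kernel” dance is what the `boot.kernelPackages` set is for: out-of-tree modules are packaged per-kernel, so taking one from that set guarantees it was compiled against the kernel you actually boot. A concrete (non-nvidia) example:

```nix
{ config, ... }:
{
  # v4l2loopback is an out-of-tree module packaged in nixpkgs;
  # referencing it via config.boot.kernelPackages ensures it is
  # built against the same kernel the system boots.
  boot.extraModulePackages = [ config.boot.kernelPackages.v4l2loopback ];
}
```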
Note that this is actually pretty problematic. Kernel modules can do anything kernel code can; they are simply part of the kernel. What’s worse, since the nvidia module is out-of-tree, there’s no way to guarantee that compiling it into a Linux kernel it wasn’t developed for doesn’t set your GPU on fire.
This is why using linux_latest is very stupid if you’re using nvidia, but many people don’t understand any of this and naïvely go “newer is better”. Luckily the result is usually compilation failure, and not fire.
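Concretely, on NixOS that means leaving `boot.kernelPackages` at its default, or pinning it explicitly rather than chasing `linuxPackages_latest`:

```nix
{ pkgs, ... }:
{
  # stick with the default kernel rather than pkgs.linuxPackages_latest;
  # out-of-tree modules like nvidia's are far more likely to build against it
  boot.kernelPackages = pkgs.linuxPackages;
}
```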
No, actually. Overlays are used to change nixpkgs. You can use them to add packages, since that’s a totally valid change, but honestly this is stupid and people mostly do it because they don’t understand what overlays do, and if all you have is a hammer everything looks like a nail…
My point is, no, you don’t need to add a package to a nixpkgs instance to use it, this is not the purpose of overlays.
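For example, you can use a local package definition directly, with no overlay involved (assuming a `./hello.nix` derivation like the one you wrote for GNU hello):

```nix
{ pkgs, ... }:
let
  # build a local derivation directly; nixpkgs itself is untouched
  my-hello = pkgs.callPackage ./hello.nix { };
in
{
  environment.systemPackages = [ my-hello ];
}
```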
Yeah, but this is pretty irrelevant in this context. Flakes do a lot more than this; I’d recommend using other mechanisms (e.g. simple fetch* calls) until you have a good reason to buy into the whole flake ecosystem.
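As an illustration of what I mean by fetch* calls, here is a version-pinned source without any flake machinery. The `rev` and `hash` are placeholders: you pick a commit, and nix reports the correct hash on the first (failing) build.

```nix
{ pkgs ? import <nixpkgs> { } }:

# pin an exact source revision; no flakes required
pkgs.fetchFromGitHub {
  owner = "NixOS";
  repo = "nixpkgs";
  rev = "...";          # placeholder: the commit you want to pin
  hash = "sha256-...";  # placeholder: nix tells you the real hash
}
```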
Wow… thank you for that in-depth explanation! This is incredibly helpful and it’s clear that I’ve got a lot of reading to do
I’ve had a read through your other post and that definitely matches the experience I’ve had with the current state of NixOS documentation. I’ve been trying to get into NixOS for about a year now, but I keep running into undocumented options with no clear guidance, which makes it pretty hard to commit. What does exist seems to be scattered and buried across a myriad of unofficial docs, tutorials, and wikis, with no clear thread to tie everything together. I’m currently trying to recruit people for a rewrite of the QEMU docs for similar reasons, as I have a hypothesis that this is a very powerful use case for NixOS which is currently untapped.
I’ll try your recommended config and see how that goes!