How to use zram as tmpfs on NixOS

Hello everyone!

I was wondering: is there a way to configure zram instead of a tmpfs device (as a file system, not just as swap) on NixOS? I saw this post and it seemed fairly easy, but surely NixOS is no other distro, and as far as I can see the reproducibility options (and even the zram-init package) are not available.

I would appreciate anyone’s help on how to do it


I’d do something simpler like

  1. mounting a tmpfs somewhere
  2. use fusecompress (GitHub: FS-make-simple/fusecompress, a transparent compression FUSE filesystem) on top of it

so the underlying FS of fusecompress is on ram, but things stored in it are compressed.
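The tmpfs half of that is easy to express in NixOS options; here is a minimal sketch (the mount point and size are made-up examples, and the fusecompress layer would still have to be packaged and run separately):

```nix
# Hypothetical backing mount for fusecompress: a plain tmpfs in RAM.
fileSystems."/var/ramdisk" = {
  device = "none";
  fsType = "tmpfs";
  options = [ "size=2G" "mode=755" ];  # cap RAM usage at 2 GiB
};
```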


It is nicer to let the kernel do that directly, though. Why involve userspace if the kernel can already do it out of the box?

I think the zramSwap module is a bit misnamed, it looks like it will allow creating zram devices without assigning them to swap by default: zramSwap.numDevices

Flipping on that module, and then configuring fileSystems to mount the created swap device(s) as actual file systems would probably work with some tinkering.
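As a sketch of what that tinkering could look like, assuming the module also exposes zramSwap.swapDevices (mentioned further down in the thread) and that the extra device ends up as /dev/zram1 (mount point and filesystem choice here are made up; the device would also still need a size and a filesystem before the mount can succeed):

```nix
# Hypothetical: two zram devices, only the first used as swap.
zramSwap = {
  enable = true;
  numDevices = 2;   # creates /dev/zram0 and /dev/zram1
  swapDevices = 1;  # only zram0 becomes swap
};

fileSystems."/scratch" = {
  device = "/dev/zram1";  # the device not claimed by swap
  fsType = "ext4";
};
```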


sounds like you need ZFS (or BTRFS, or any filesystem with builtin compression) :sweat_smile:

As for the actual question yes, you can use the zramSwap option to make multiple zram devices, and use them for whatever you want, not just swap.

nope, because if you create a raw disk image in a tmpfs directory, the underlying filesystem (like BTRFS or ZFS) won’t reclaim free blocks when you delete files :upside_down_face: (or at least not easily)


I saw the same and suspected that! But it’s still not clear to me how it can be used. I’ll give it a closer look anyway, so I may be able to do something with it; I’ll try and report the result.

But I think, on a higher level, this needs to be a GitHub issue, because neither this nor @Solene’s solution is a real solution to the matter; they are rather hacks to get roughly the final effect of what I want (it’s either bad naming and/or not quite working in your approach, or doing something else entirely because of Nix’s limits in Solene’s approach). I think there should be a better way to do this! If anyone can, please open an issue on GitHub about it; I may not be the best person to open an issue more serious than a package request, but I would certainly come and upvote it!

I assume it’s not going to be a hard fix.

Also, I think zram should actually be easier than mounting a tmpfs and using FUSE, and it would reduce the chances of messing things up, being a system-wide solution rather than a userspace filesystem like FUSE. Even with all that aside, I can’t find any package or option to deploy the mentioned GitHub repo in a Nix-y way, so back to the question: how should I even do this reproducibly, without the hassle of packaging it myself?

zram was fundamental enough to earn a place in the Linux kernel, so why shouldn’t this feature be included in full glory on NixOS (e.g. a per-device size limit for zram, a clearer way to use it as a filesystem, …)? One of its best points is that the choice isn’t limited to tmpfs: btrfs, zfs, reiserfs, etc. can be used like normal storage in RAM, which sounds very interesting (maybe with some random problems, but most reviews of zram I’ve heard were positive compared to tmpfs). A lot of optimization work goes into those filesystems, and wrapping them in FUSE layers one by one may not be the best idea. (Still, I’m absolutely not an expert on this matter; correct me if I’m wrong on any point.)

Do you mean that with zram plus a filesystem (e.g. BTRFS), RAM wouldn’t be immediately freed after deleting a file?

I’ve decided to try to adapt code from the zram module to allow using these devices not just as swap, but to mount them. Here’s a flake that worked for me with just nix run (I’m running on an M1 Mac; adjust systems accordingly):

{
  inputs = {
    nixosvm.url = "github:YorikSar/nixos-vm-on-macos";
    nixosvm.inputs.nixpkgs.follows = "nixpkgs";
  };
  outputs = { self, nixpkgs, nixosvm }: {
    nixosModules.zramMount = { config, pkgs, ... }: {

      # Copied from <nixpkgs>/nixos/modules/config/zram.nix
      system.requiredKernelConfig = with config.lib.kernelConfig; [
        (isModule "ZRAM")
      ];

      boot.extraModprobeConfig = ''
        options zram num_devices=1
      '';
      boot.kernelModules = [ "zram" ];

      services.udev.extraRules = ''
        KERNEL=="zram[0-9]*", ENV{SYSTEMD_WANTS}="zram-init-%k.service", TAG+="systemd"
      '';

      # Adjusted from the zram module to not format the device as swap
      systemd.services."zram-init-zram0" = {
        after = [ "dev-zram0.device" ];
        requires = [ "dev-zram0.device" ];
        before = [ "mkfs-dev-zram0.service" ];
        requiredBy = [ "mkfs-dev-zram0.service" ];
        unitConfig.DefaultDependencies = false; # needed to prevent a cycle
        serviceConfig = {
          Type = "oneshot";
          RemainAfterExit = true;
          ExecStop = "${pkgs.runtimeShell} -c 'echo 1 > /sys/class/block/zram0/reset'";
        };
        script = ''
          ${pkgs.util-linux}/sbin/zramctl --size 1G --algorithm zstd /dev/zram0
        '';
        restartIfChanged = false;
      };

      # Format and mount as /tmp in VM
      virtualisation.fileSystems."/tmp" = {
        fsType = "btrfs";
        device = "/dev/zram0";
        autoFormat = true;
      };
    };

    nixosModules.aarch64VM = {
      virtualisation.host.pkgs = nixpkgs.legacyPackages.aarch64-darwin;
    };

    nixosConfigurations.zram = nixpkgs.lib.nixosSystem {
      system = "aarch64-linux";
      modules = [
        nixosvm.nixosModules.default
        self.nixosModules.zramMount
        self.nixosModules.aarch64VM
      ];
    };

    packages.aarch64-darwin.default = self.nixosConfigurations.zram.config.system.build.vm;
  };
}

The zram module documentation suggests that it should provide clean zram devices, not just swap ones, but it doesn’t.


What are you trying to achieve?

Tmpfs gets swapped out if there’s memory pressure, and if you’ve got zram swap, it’ll be swapped out to zram.

It totally does, at least in nixos unstable. I just made one the other day. There is zramSwap.numDevices to set how many of these devices you want to create, and zramSwap.swapDevices to set how many should be used as swap. If the former is greater than the latter, then you will have extra zram devices to use for whatever you want (e.g. format as a particular filesystem and mount it, if you so desire).


@nrdxp How may I use it as a filesystem? I don’t know what went wrong, maybe my fault, but the system panicked when I used zram1 with the Nix fileSystems API. Could you show a snippet that works (e.g. making a fat32 filesystem on zram)?

Also, how can we set two different limits per zram device? I think this is my core problem with the current NixOS API, along with its lack of naming clarity.

But I confirm: using the method mentioned by @TLATER / @nrdxp, it is possible to make an extra zram device.

When I try replacing the zramMount module in my example above with this:

    nixosModules.zramMount' = {
      zramSwap = {
        enable = true;
        numDevices = 1;
        swapDevices = 0;
      };

      # Format and mount as /tmp in VM
      virtualisation.fileSystems."/tmp" = {
        fsType = "btrfs";
        device = "/dev/zram0";
        autoFormat = true;
      };
    };

the VM fails to boot, because the zram module doesn’t create the zram-init-zram0.service unit that udev calls, and the zram kernel module is not loaded, because zram-reloader is only triggered by the zram-init services. The module also only calls zramctl together with mkswap on a device, so you can only configure devices that you want to become swap.

If you run:

❯ zramctl /dev/zram1
NAME       ALGORITHM DISKSIZE DATA COMPR TOTAL STREAMS MOUNTPOINT
/dev/zram1 lzo-rle         0B   0B    0B    0B      12

We can see that the problem is that the reported disk size is 0B right after the device is created, so you can’t write a partition table or format the drive. The trick is to run sudo zramctl -s 2G /dev/zram1 (or similar) to give it a size so that formatting programs can work. From there it’s just like mounting any other filesystem.

FWIW, I agree the interface is awkward; there should probably just be a zram module with zram.swapDrive as an option. Also, it may be worth giving the disk some minimal size by default.


Also, as far as I saw, the default NixOS installer always prefers to refer to devices by UUID. How can I get the UUID of a zram device, if that’s even possible?

Also, after that 2G trick, should something like this work, or am I making another mistake?

fileSystems."/vol" = {
  device = "/dev/zram1";
  fsType = "btrfs";
  options = [ "defaults" "size=2G" "mode=755" ];
};

(It’s basically my tmpfs config with the device changed from none to the zram device, and the fsType from tmpfs to btrfs.)
Also, just wondering: can I automate running that zramctl command with a flake, or is it better to simply use a post-run script?
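One way the automation could be sketched in NixOS itself, assuming a spare /dev/zram1 and that the /vol mount corresponds to a vol.mount systemd unit (the service name, size, and algorithm here are illustrative guesses, not a tested recipe):

```nix
{ pkgs, ... }: {
  # Oneshot service that sizes and formats /dev/zram1 before /vol is mounted.
  systemd.services.zram1-setup = {
    before = [ "vol.mount" ];
    requiredBy = [ "vol.mount" ];
    serviceConfig = {
      Type = "oneshot";
      RemainAfterExit = true;
    };
    script = ''
      ${pkgs.util-linux}/bin/zramctl --size 2G --algorithm zstd /dev/zram1
      ${pkgs.btrfs-progs}/bin/mkfs.btrfs /dev/zram1
    '';
  };
}
```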

Can you share zramctl output?

With the zram module not loaded, it’s empty. You can check it out in a VM yourself using the flake that I posted above.

During that check I found yet another flaw of the NixOS zramSwap module.

Even in @nrdxp’s output we can see another big problem with not having a device-based interface!

Not only do I have to explicitly set the size, as @nrdxp also mentioned, but even the compression algorithm differs between the NixOS default and the extra devices’ default (lzo-rle for the extra devices vs. zstd for the NixOS default), and I see no way to change it, so they are inconsistent as well.

Something like zram."devnumber"."zram options", plus an extra boolean property like setswap, would let us get rid of the swap device count. And if someone wants many swap devices (for very ancient kernels or something like that), there could be zram.swapcount and zram.swapsetting: setting swapcount to n would apply swapsetting to devices 0 through n-1.
I’ve seen similar mechanisms in Nix; even fileSystems works somewhat like this. I think the zramSwap name reflects the fact that the authors really didn’t target anything other than swap (and changing it is probably not even recommended, due to lack of support and inconsistencies like this).
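To make the proposal concrete, the option tree could look something like this (purely hypothetical; no such options exist in NixOS today):

```nix
# Hypothetical per-device zram interface, not an existing NixOS API.
zram.devices = {
  "0" = { size = "4G"; algorithm = "zstd";    swap = true;  };
  "1" = { size = "2G"; algorithm = "lzo-rle"; swap = false; };  # free for any filesystem
};
```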

Can we open an issue or a pull request or something? For those who know Nix and NixOS mechanisms well, it’s probably half an hour of work or less!

While it wouldn’t be hard to implement, I don’t think much will happen until someone who is really interested in this feature takes it upon themselves to do it. I personally don’t really see the benefit other than having fun with kernel subsystems. Is zram more or less performant? What are the limits of its performance? How does it compare to tmpfs? I would be interested in the results of such research, but not in actually implementing this.

I also think that this approach might be misguided. Using tmpfs with swap on zram should be more flexible and performant. tmpfs is designed to run in memory; ext4, btrfs, and whatever else aren’t. tmpfs directly maps areas of memory to the file cache; any other fs doesn’t. If you don’t have enough space in tmpfs, swapping to zram should do pretty much the same as having a filesystem on top of zram, without the overhead of a filesystem designed for block devices. Just think about what path a blob of data you want to write or read would have to take in each case.
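In plain NixOS options that alternative amounts to something like this (the /tmp option name depends on the NixOS release; older ones call it boot.tmpOnTmpfs):

```nix
# tmpfs for /tmp, with compressed zram swap absorbing memory pressure.
boot.tmp.useTmpfs = true;
zramSwap.enable = true;
```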


I don’t think implementing it needs that level of skill; we’re not rewriting it, it just needs a small modification to the Nix options interface and a bit more support behind it.

If no one else does, I’ll give it a try.

I think everything that needs to be changed is in this

@Solene FWIW if you turn on ZFS autotrim, it will TRIM blocks as they become unneeded by ZFS. If zram implements freeing memory when blocks are TRIM’d, then it should be fine.
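For anyone trying that, autotrim itself is a pool property (set with zpool set autotrim=on <pool>) rather than a NixOS option, but NixOS does expose periodic TRIM as an option; a minimal sketch:

```nix
# Periodic `zpool trim` runs via the NixOS ZFS module.
services.zfs.trim.enable = true;
```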