How do I mount multiple bcachefs devices on boot?

I’ve combined /dev/sda and /dev/sdb by running the command

bcachefs format /dev/sda /dev/sdb --replicas=1

which worked fine, and I can mount them manually with

mount -t bcachefs /dev/sda:/dev/sdb /home/user/mount

But I can't seem to get them to mount on boot in my Nix configuration. I've tried the following:

fileSystems."/home/user/.SSD" =
  { device = "/dev/sda:.dev/sdb";
    fsType = "bcachefs";
    options = [ "noatime" "nodev" "nosuid" "noexec" "nofail" ];
  };

But that doesn't work. Most of the time the rebuild fails, but sometimes the rebuild succeeds and on reboot it's not mounted. I've also tried using the UUID, including the external UUID of the drive, but that doesn't work either.

Thanks

Oh a new filesystem!

For some reason most of the documentation I found (Arch, Gentoo) doesn’t tell me what to put in /etc/fstab. Which is sad :cry: … however that should be straightforward, usually, from the mount command.
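For the record, judging from the mount command above, the fstab line would presumably look something like this (untested; paths taken from the post above):

```
# /etc/fstab — colon-joined device list, same as on the mount command line
/dev/sda:/dev/sdb  /home/user/mount  bcachefs  noatime,nofail  0  0
```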

However, from what I did see in your snippet, this line device = "/dev/sda:.dev/sdb" sure looks like it contains a . where it should contain a /, doesn't it?


That must have been a typo when creating the post, this is what it looks like

fileSystems."/home/user/.SSD" =
  { device = "/dev/sda:/dev/sdb";
    fsType = "bcachefs";
    options = [ "noatime" "nodev" "nosuid" "noexec" "nofail" ];
  };

I did a rebuild and it mounted, but on reboot it wasn’t mounted, so I did another rebuild and this time it didn’t mount, weird.

yeah, weird. But it's also a filesystem I never touched before. And some years back, it appears, people were using cron jobs to get it mounted at boot. So no idea where we're at right now.

What’s the advantage of that one over something like btrfs or zfs?

Maybe I'll just go back to mounting them as separate drives; that worked fine. I think performance is one advantage, and I think there are others too, but they go beyond my area of expertise.

This is just a limitation in systemd currently. There’s also a nixpkgs issue about it.

Thanks for the info! Quite interesting!

But “currently” appears to be at almost 6 years now.

Maybe it’s a good idea not to use this with multiple devices in the meantime?

Well, there are workarounds. You can make an imperfect equivalent to a mount unit, but with multiple devices, like this:

# foo-bar.service
# simulates foo-bar.mount, i.e. mounting at /foo/bar
[Unit]
Description=/foo/bar
BindsTo=dev-sda.device dev-sdb.device
After=dev-sda.device dev-sdb.device
RequiresMountsFor=/foo
DefaultDependencies=no
Before=umount.target
Conflicts=umount.target
After=local-fs-pre.target
Before=local-fs.target

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/nix/store/.../bin/mount -t bcachefs /dev/sda:/dev/sdb /foo/bar
ExecStop=/nix/store/.../bin/umount /foo/bar

So I wouldn’t say “it’s a good idea not to use this with multiple devices”. You just have a little work to do if you do :slight_smile:


For the UUID syntax, unless I’m mistaken, you have to use mount.bcachefs for now.
Make sure to use the external UUID.
i.e., UUID=${bcachefs-uuid}
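If that works, the corresponding Nix entry would presumably look something like this (the UUID below is a placeholder; the external UUID should be visible in the output of bcachefs show-super):

```nix
fileSystems."/home/user/.SSD" = {
  # placeholder UUID — substitute the filesystem's *external* UUID here
  device = "UUID=12345678-1234-1234-1234-123456789abc";
  fsType = "bcachefs";
  options = [ "noatime" "nofail" ];
};
```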

I mean I am not gonna use bcachefs, unless I see some real major benefit over ZFS and btrfs. :slight_smile:

I tried to find some data, but it sure doesn't look like a revolutionary change.

But thanks for posting this workaround @ElvishJerricco , maybe it helps @guzz46 to set something up on his end?

So how exactly do you create a custom systemd unit on nixos? I assume you don’t just put that in configuration.nix?

The nix equivalent of that unit would be

# configuration.nix
{ pkgs, ... }: {
  systemd.services.foo-bar = {
    description = "/foo/bar";
    bindsTo = ["dev-sda.device" "dev-sdb.device"];
    after = ["dev-sda.device" "dev-sdb.device" "local-fs-pre.target"];
    before = ["umount.target" "local-fs.target"];
    conflicts = ["umount.target"];
    unitConfig = {
      RequiresMountsFor = "/foo";
      DefaultDependencies = false;
    };

    serviceConfig = {
      Type = "oneshot";
      RemainAfterExit = true;
      ExecStart = "${pkgs.util-linux}/bin/mount -t bcachefs /dev/sda:/dev/sdb /foo/bar";
      ExecStop = "${pkgs.util-linux}/bin/umount /foo/bar";
    };
  };
}

I can't get that to work either. The systemd unit builds fine and is enabled, but after reboot it's still not mounted. When running systemctl start foo-bar.service nothing happens for a while, and then I get the error

foo-bar.service: Job foo-bar.service/start failed with result 'dependency'.

This is what the unit looks like

systemd.services.foo-bar = {
  description = "/foo/bar";
  bindsTo = ["dev-sda.device" "dev-sdb.device"];
  after = ["dev-sda.device" "dev-sdb.device" "local-fs-pre.target"];
  before = ["umount.target" "local-fs.target"];
  conflicts = ["umount.target"];
  unitConfig = {
    RequiresMountsFor = "/foo";
    DefaultDependencies = false;
  };

  serviceConfig = {
    Type = "oneshot";
    RemainAfterExit = true;
    ExecStart = "mount -o noatime,nodev,nosuid,noexec -t bcachefs /dev/sda:/dev/sdb /home/user/.SSD";
    ExecStop = "umount /home/user/.SSD";
  };
};

I tried using ${pkgs.util-linux}/bin/mount and mount.bcachefs, but those don't work either.

Oh, my bad; to have it run at boot you also need wantedBy = ["local-fs.target"];
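i.e., something like this added to the unit from before:

```nix
systemd.services.foo-bar = {
  # ... rest of the unit as before ...
  # pull the unit into the boot transaction
  wantedBy = [ "local-fs.target" ];
};
```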

I’m not sure why it would fail with “dependency”. Are those the right device names for your system? What does systemctl --failed print?

When doing a rebuild it hangs for about a minute and a half after setting up tmpfiles. systemctl --failed says "0 loaded units listed".

And systemctl status foo-bar.service just says foo-bar.service: Job foo-bar.service/start failed with result 'dependency'.

It doesn't say why. They're the right device names, because it mounts fine if I run the command in the terminal: mount -o noatime,nodev,nosuid,noexec -t bcachefs /dev/sda:/dev/sdb /home/user/.SSD

OK I finally found a solution

systemd.services.mount-ssd = {
  description = "mount SSD";
  script = "/run/current-system/sw/bin/mount -o noatime,nodev,nosuid,noexec -t bcachefs /dev/sda:/dev/sdb /home/user/.SSD";
  wantedBy = [ "multi-user.target" ];
};

When I do a rebuild now I get a warning from the unit about the device or resource being busy, but the rebuild completes.
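If that busy warning is annoying, one option (just a sketch, same paths as above) is to skip the mount when the target is already mounted; mountpoint ships with util-linux and is available under /run/current-system/sw/bin on NixOS:

```nix
systemd.services.mount-ssd = {
  description = "mount SSD";
  # only mount if /home/user/.SSD isn't a mountpoint already, so
  # re-running the unit on a rebuild doesn't hit "device busy"
  script = ''
    if ! /run/current-system/sw/bin/mountpoint -q /home/user/.SSD; then
      /run/current-system/sw/bin/mount -o noatime,nodev,nosuid,noexec -t bcachefs /dev/sda:/dev/sdb /home/user/.SSD
    fi
  '';
  wantedBy = [ "multi-user.target" ];
};
```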