How to treat your SSD

{ lib, ... }:
{
  environment.etc."lvm/lvm.conf".text = lib.mkForce ''
    devices {
      issue_discards = 1
    }
    allocation {
      thin_pool_discards = 1
    }
  '';
  services.fstrim.enable = true;
}

Makes me wonder why there’s no mention that lvm doesn’t let discards through to the disk by default!

EDIT: It seems like I had another misconfiguration that was solved at the same time, according to other posts. Will dig deeper :slight_smile:

Enjoy :slight_smile:

Huh, that’s interesting. I guess, to add to this, if your root FS is on LVM, you probably need to find a way to add this sort of thing to stage 1?
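
If you're on the systemd-based stage 1, a hedged sketch of what I'd try (untested; it assumes boot.initrd.systemd.contents accepts etc-style text entries) would be:

{
  boot.initrd.systemd.enable = true;
  # Ship an lvm.conf into the initrd so stage-1 LVM sees the same settings.
  boot.initrd.systemd.contents."/etc/lvm/lvm.conf".text = ''
    devices {
      issue_discards = 1
    }
  '';
}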

Even though I am on ZFS+LUKS I checked and I didn’t have TRIM enabled… oof, thanks for the post!

EDIT: talked to some ZFS people over on IRC; they recommended not enabling autotrim, but instead running a weekly zpool trim POOL timer.
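
For anyone else doing the same on NixOS, there's a module for exactly that; a minimal sketch (I believe "weekly" is already the default interval, spelled out here for clarity):

{
  # Periodic zpool trim for the pools via a systemd timer,
  # rather than setting the autotrim pool property.
  services.zfs.trim.enable = true;
  services.zfs.trim.interval = "weekly";
}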

note that you also need to enable discards on the LUKS device, in case you weren’t aware.
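
On NixOS that would look something like the following sketch (the name cryptroot is just a placeholder for whatever your LUKS mapping is called):

{
  # Let TRIM/discard requests pass through the LUKS layer.
  # Note that allowing discards on encrypted devices can leak
  # some information about which blocks are unused.
  boot.initrd.luks.devices."cryptroot".allowDiscards = true;
}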

I’m curious why they recommended against autotrim. I remember when trim was added, they did some benchmarks and showed that it doesn’t have a performance cost. A regular zpool trim is still recommended, because autotrim doesn’t trim everything possible. But it can only help, as far as I was aware.

According to the Arch wiki, this doesn’t quite do what you described, and may not be desirable: Solid state drive - ArchWiki

It seems that normal discards from the fs are passed through fine by default.

Consider adding some appropriately general version of that LVM config to <nixos-hardware/common/pc/ssd>?

I was specifically asking about stability; manual trim is apparently safer.

I can’t imagine why it would be safer?

My guess would be that it doesn’t happen randomly, and since it happens more in bulk, if there is a fault you’re more likely to notice and be able to roll back the uberblock in time? Not sure tbh

The way I discovered this was by checking the log of fstrim (which I’m running through services.fstrim.enable): it said there were no filesystems to TRIM other than the ones that aren’t mounted via LVM.

If you’re referring to

Warning: Before SATA 3.1 all TRIM commands were non-queued, so continuous trimming would produce frequent system freezes.

If any poor soul is still using stuff older than SATA 3.1, then I’m sorry for the advice :stuck_out_tongue:

I don’t know, these configuration options don’t look like they do what you say they do:

devices/issue_discards - Issue discards to PVs that are no longer used by an LV.
allocation/thin_pool_discards - The discards behaviour of thin pool volumes.

The former seems to be described as only discarding full physical volumes, rather than individual files/blocks (which may be sensible, but I think that’s a rare case, and I’d assume there’s a reason this isn’t true by default), and the latter applies only to thin pools, which aren’t the typical lvm setup.

The latter isn’t supposed to be a boolean, either, so I wonder if that setting is even correct. At best I’d guess you’re setting it to nopassdown, which sounds like it just doesn’t pass discards to the underlying device, thereby doing the exact opposite of what you wanted:

       --discards passdown|nopassdown|ignore
              Specifies how the device-mapper thin  pool  layer  in  the
              kernel  should  handle  discards.   ignore causes the thin
              pool to ignore discards.  nopassdown causes the thin  pool
              to  process discards itself to allow reuse of unneeded ex‐
              tents in the thin pool.  passdown causes the thin pool  to
              process  discards  itself  (like  nopassdown) and pass the
              discards to the underlying  device.   See  lvmthin(7)  for
              more information.

It’s also set to passdown by default, which should properly handle discards sent by fstrim according to the lvmthin man page:

       passdown: Process discards in the thin pool (as with nopassdown),
       and pass the discards down the the underlying device.  This is
       the default mode.
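
For reference, if you did want to pin it explicitly anyway, my guess at a corrected version of that part of your snippet (untested, reusing the environment.etc approach from your post) would be:

{ lib, ... }:
{
  environment.etc."lvm/lvm.conf".text = lib.mkForce ''
    allocation {
      # Must be "passdown", "nopassdown" or "ignore"; "passdown" is already the default.
      thin_pool_discards = "passdown"
    }
  '';
}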

Are you sure this wasn’t caused by misconfiguration elsewhere?

Edited post, I’ll test further! :slight_smile:

Oh… I do have something on SATA 2; I should probably check and make sure it’s one of the spinning-rust boxes instead of an SSD… today I learned something again!