Split brain and RAID1 with mdadm or LVM

Recently I created a RAID1 using LVM.
However, after removing first one disk and then the other, and then re-adding both, I noticed that disk1 was almost silently mirrored onto disk2, which is bad if disk1 only comes up intermittently.
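For reference, a minimal sketch of the kind of setup I mean (device, VG, and LV names are placeholders, not my exact setup):

```sh
# Sketch only: create a two-disk LVM RAID1 (names are examples).
pvcreate /dev/sdb /dev/sdc
vgcreate vg_test /dev/sdb /dev/sdc
# -m 1 keeps one mirror copy, i.e. the data lives on both PVs.
lvcreate --type raid1 -m 1 -L 1G -n lv_test vg_test
mkfs.ext4 /dev/vg_test/lv_test
```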

Then I asked on the LVM mailing list, where I was pointed to a thread mentioning a --no-degraded flag.
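If that refers to mdadm's assembly flag, then the idea, as I understand it, would be something like:

```sh
# Sketch: refuse to start arrays unless all expected member devices are present.
# With --no-degraded, scan assembly will not start a degraded array,
# so a lone disk cannot silently become "the" authoritative copy.
mdadm --assemble --scan --no-degraded
```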

First I tried to create a test case for NixOS.
The issues I ran into (https://dpaste.com/GC9MFK8R6):

  1. /dev/vda seems to already have an ext2 filesystem (see the wipefs sketch after this list).
  2. Creating the filesystem on md0 failed (write errors?).
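For issue 1, my guess is that the test disk still carries a stale signature; something like this (a guess, not verified against the actual test) should clear it before the test runs:

```sh
# Sketch: wipe any stale filesystem/RAID signatures from the test disk
# so an old ext2 superblock doesn't confuse the test (device name is an example).
wipefs -a /dev/vda
```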

Then I tried to create a test script I can run from the console without a VM, because it’s faster: https://mawercer.de/tmp/tmp/mdadm-raid1-test.sh
But I cannot reproduce the results. Sometimes I get the ls output, sometimes I don’t.

Now maybe using losetup is the issue? I thought it was smart because it’s fast and doesn’t require a reboot.
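The loop-device part looks roughly like this (a simplified sketch of what the script does; file names and sizes are examples):

```sh
# Sketch: two file-backed loop devices as stand-ins for real disks.
truncate -s 100M disk1.img disk2.img
LOOP1=$(losetup --find --show disk1.img)
LOOP2=$(losetup --find --show disk2.img)

# Build the mirror on the loop devices.
mdadm --create /dev/md0 --level=1 --raid-devices=2 "$LOOP1" "$LOOP2"

# ... simulate failing/removing one disk, writing, swapping, re-adding ...

# Cleanup.
mdadm --stop /dev/md0
losetup -d "$LOOP1" "$LOOP2"
```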

My goal is to have the RAID report an error if both disks were used independently and are then joined again, because nobody wants a system that overwrites updates randomly.
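One thing I’m experimenting with (a sketch, not a confirmed solution) is comparing each member's metadata before assembly, since mdadm records an event counter in every member's superblock:

```sh
# Sketch: inspect each member's superblock before assembling.
# If both halves were written to independently, their event counters
# (and update times) diverge, which hints at a split-brain situation.
mdadm --examine /dev/loop0 | grep -E 'Events|Update Time'
mdadm --examine /dev/loop1 | grep -E 'Events|Update Time'
```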

Any help appreciated.

I’d like to create the same test for LVM and btrfs RAIDs.
Then it can serve as a future reference for all users on how to set up, use, and recover from disk failures with RAIDs.