Based on my own experimentation, and on Tobias Holst's report here:
If you re-add a drive that was removed from a raid5 array, likely after the array has been rebalanced (but maybe just modifying it without a rebalance is enough), two bad things happen:
1) As soon as my drive re-appeared on the bus, btrfs noticed it and automatically re-added it. This is unwelcome and dangerous, since the drive is out of sync with the rest of the array.
2) In my experience, the filesystem regressed to an earlier state, and a directory I had added while the drive was gone disappeared after I re-added the drive.
That said, fixing #1 should be enough: once a drive has been kicked out of an array and anything has been written to the array afterwards, the array should carry a newer generation number, and the drive with the older generation number should not be accepted back unless it is re-initialized. The one exception would be a special recovery option for rebuilding an array with too many missing drives, where re-adding a device that is older than wanted is still better than nothing at all.
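The check proposed above can also be sketched from userspace: `btrfs inspect-internal dump-super` prints each device's superblock, including its `generation` field, and comparing those numbers across members reveals a stale drive before it is re-added. A minimal sketch, assuming the `generation` line format of current btrfs-progs output; the device paths and the helper names are illustrative, not an existing tool:

```python
import re
import subprocess


def superblock_generation(dump_super_output: str) -> int:
    """Extract the 'generation' field from `btrfs inspect-internal
    dump-super` output. Anchored at line start so fields like
    'chunk_root_generation' are not matched by mistake."""
    m = re.search(r"^generation\s+(\d+)", dump_super_output, re.MULTILINE)
    if m is None:
        raise ValueError("no generation field found in dump-super output")
    return int(m.group(1))


def find_stale_devices(outputs: dict) -> list:
    """Given {device_path: dump-super output}, return the devices whose
    generation lags behind the newest generation seen in the array."""
    gens = {dev: superblock_generation(text) for dev, text in outputs.items()}
    newest = max(gens.values())
    return [dev for dev, gen in gens.items() if gen < newest]


def dump_super(device: str) -> str:
    """Fetch the superblock dump for a real device (needs root and
    btrfs-progs installed)."""
    return subprocess.run(
        ["btrfs", "inspect-internal", "dump-super", device],
        check=True, capture_output=True, text=True,
    ).stdout
```

With real hardware you would call `dump_super` on each member and refuse to re-add anything returned by `find_stale_devices` without wiping and replacing it first.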
Any update on this bug?
Still current as of Linux 4.2?
On Mon, Nov 16, 2015 at 12:14:00PM +0000, firstname.lastname@example.org wrote:
> Any update on this bug?
> Still current as of Linux 4.2?
I have no idea; I haven't used swraid5 on btrfs in a long time.
You can try it out and report back.
I tested this with RAID6 and can confirm that this bug is still present in kernel 4.4 (Ubuntu 16.04 LTS).
After a disk became unavailable, I took it out and reinserted it into the slot. BTRFS then automatically started using the disk again as if nothing had happened. After running a scrub, the entire filesystem became corrupted beyond repair.
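Why a scrub makes things worse can be seen with a toy XOR-parity model (an illustration of RAID5-style parity in general, not actual btrfs code): once one member is stale, the stripe's parity no longer matches, and a "repair" that trusts the stale member rewrites good data with garbage.

```python
def xor_bytes(a: bytes, b: bytes) -> bytes:
    """XOR two equal-length byte strings, as RAID5 parity does per stripe."""
    return bytes(x ^ y for x, y in zip(a, b))


# Two data strips and their parity, all in sync.
d0 = b"new data strip.."
d1 = b"another strip..."
parity = xor_bytes(d0, d1)

# While everything is in sync, reconstructing d0 from d1 and parity works.
assert xor_bytes(d1, parity) == d0

# Now d1's device drops out and is silently re-added with stale contents,
# while d0 and parity have moved on.
stale_d1 = b"old stale strip."

# The parity check now fails: the stripe is internally inconsistent.
assert xor_bytes(d0, stale_d1) != parity

# If a repair reconstructs d0 from the stale member plus parity, it
# overwrites the good copy of d0 with garbage.
bad_d0 = xor_bytes(stale_d1, parity)
assert bad_d0 != d0
```

The same logic, applied stripe by stripe across a whole filesystem during a scrub with one out-of-sync member, is consistent with the "corrupted beyond repair" outcome described above.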
On IRC it became clear that quite a few people have experienced this. Afterwards the wiki was updated to reflect the current state of RAID5/6 and the following text was added:
"The parity RAID code has multiple serious data-loss bugs in it. It should not be used for anything other than testing purposes."
If it is scrub that screws up your btrfs, I think this patch would help a bit.
Assuming we're using the default copy-on-write mode: unless the failed disk pretends to complete every write sent to it (i.e. it never reports an error to the upper layer), the stale data would not be found, because an error during writeback means the related metadata is never updated to point at it. On the other hand, if the filesystem's own metadata hit errors while being written to disk, the filesystem would (or should) flip to read-only.
I can confirm this bug is still present in kernel 5.10.84. In this version, btrfs will occasionally refuse to mount if it detects an older generation number, but there still doesn't seem to be a way to add the disk back to the array without wiping the disk and using `btrfs device replace`.