Commit a91a2785b200864aef2270ed6a3babac7a253a20 breaks mounting of my root filesystem (on LVM on a RAID10).
Reverting the above commit on the latest available kernel (2.6.38-08817-g45699a7) resolves the problem.
See bug #24012 for details about my configuration (and a previous good dmesg). In the next few hours I will also post the good and bad dmesg output.
Created attachment 52362 [details]
dmesg for good kernel
Created attachment 52372 [details]
dmesg for bad kernel
On the console I saw:
mdadm: failed to RUN_ARRAY /dev/md0: Input/output error
Failure: failed to start /dev/md0
Volume group "cateee" not found
Skipping volume group cateee
Unable to find LVM volume cateee/root
Then the "usual" sulogin prompt.
Commit a91a2785b20086 changed drivers/md/raid10.c (and the other md_integrity_register() callers) to check and propagate md_integrity_register()'s return value; previously the return was ignored.
In contrast to DM, MD now requires that a device be integrity capable, as checked via bdev_get_integrity().
It is fine for md_integrity_register() to fail with an error, but that should _not_ prevent the MD array from starting.
Long story short, it seems the elevated importance of md_integrity_register()'s return value needs to be reverted across all MD callers.
Well, I'd rather handle the none-of-the-devices-are-capable case correctly.
I'll conjure up a patch...
(In reply to comment #4)
> Well, I'd rather handle the none-of-the-devices-are-capable case correctly.
> I'll conjure up a patch...
Sure, I was suggesting reverting the error propagation as a stop-gap. A real fix is even better ;)
First-Bad-Commit: a91a2785b200864aef2270ed6a3babac7a253a20
Fixed by commit 89078d572eb9ce8d4c04264b8b0ba86de0d74c8f ("md: Fix integrity registration error when no devices are capable")
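Per the title of the fixing commit, the idea is that md_integrity_register() should treat "no devices are integrity capable" as success (the array simply runs without an integrity profile), and only report an error when the devices' profiles are inconsistent. Below is a minimal userspace sketch of that decision logic, not the actual kernel code: the struct names and the integrity_register_sketch() helper are hypothetical stand-ins, with -22 standing in for -EINVAL.

```c
#include <stddef.h>

/* Hypothetical stand-in for a device's block integrity profile. */
struct blk_integrity { const char *name; };

/* Hypothetical stand-in for an MD member device. */
struct rdev {
    struct blk_integrity *integrity;  /* NULL if not integrity capable */
};

/* Sketch of the fixed decision logic:
 *  - no devices capable  -> 0: array starts, no integrity profile
 *  - all devices capable -> 0: a common profile would be registered
 *  - only some capable   -> -22 (-EINVAL): inconsistent profiles
 */
int integrity_register_sketch(const struct rdev *devs, size_t n)
{
    size_t capable = 0;

    for (size_t i = 0; i < n; i++)
        if (devs[i].integrity)
            capable++;

    if (capable == 0)
        return 0;       /* the fix: not an error anymore */
    if (capable < n)
        return -22;     /* mixed capable/incapable members */
    return 0;           /* all capable: register the profile */
}
```

With this behavior, an array of ordinary disks (the reporter's RAID10) gets return value 0 and assembles normally, instead of RUN_ARRAY failing with an I/O error.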