Most recent kernel where this bug did not occur:
SiL 3112 SATA controller
Seagate 80GiB SATA x 2
Very basic install, no software installed except the bare gentoo system.
Grub as boot loader
kernel is built with no modules and module support turned off
kernel is built with support for raid0, raid1, md, automounter4, scsi, SiL3112,
I have two SATA drives and I want my root to be a software RAID-0 (striped).
I created a partition layout like so:
/dev/sda1: 100MiB TYPE 83 (linux)
/dev/sda2: 8GiB TYPE 82 (swap)
/dev/sda3: 70GiB TYPE FD (linux MD autodetect)
/dev/sdb1: 100MiB TYPE 83 (linux)
/dev/sdb2: 8GiB TYPE 82 (swap)
/dev/sdb3: 70GiB TYPE FD (linux MD autodetect)
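For the record, a layout like this can be written down as an sfdisk input file (a hypothetical sketch; sfdisk input syntax varies between util-linux versions, and the size/type shorthand below assumes a reasonably modern sfdisk):

```
# layout.sfdisk -- apply with: sfdisk /dev/sda < layout.sfdisk
# (repeat for /dev/sdb); fields are: start,size,type
# 100M type-83 boot, 8G type-82 swap, remainder type-fd (MD autodetect)
,100M,83
,8G,82
,,fd
```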
I prepared a boot partition by making a file system on /dev/sda1
I prepared my root partition as an md device like so:
mdadm -A -a yes /dev/md0 -l 0 -n 2 /dev/sda3 /dev/sdb3
#(ASSEMBLE, create missing device file automatically, raid level 0, number of devices = 2, device list)
I verified that it worked with cat /proc/mdstat and built a filesystem on top
of it. I proceeded with mounting it and installing Gentoo as usual.
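For comparison, the whole preparation could be sketched as the following root-only command sequence (not runnable as-is; device names, the mount point, and the ext3 filesystem type are assumptions -- and note that creating an array is -C/--create, while -A/--assemble only starts an existing one):

```shell
mdadm -C /dev/md0 -a yes -l 0 -n 2 /dev/sda3 /dev/sdb3   # create the striped array
cat /proc/mdstat                                          # verify it is running
mkfs.ext3 /dev/md0                                        # filesystem type assumed
mount /dev/md0 /mnt/gentoo                                # mount and install as usual
```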
my grub.conf says:
kernel /vmlinuz root=/dev/md0
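A fuller grub.conf stanza for this layout might look like the sketch below (GRUB legacy notation; the title, timeout, and kernel file name are assumptions, and (hd0,0) corresponds to the /dev/sda1 boot partition):

```
default 0
timeout 10
title Gentoo Linux (md0 root)
root (hd0,0)
kernel /vmlinuz root=/dev/md0
```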
Everything went smoothly until reboot.
md: Autodetecting md devices
md: Autodetection complete
(indicating that no md devices were found)
A few lines later I get:
md: unknown device "md0" or unknown block device (9,0)
VFS: Kernel panic (something about root= parameter not set right).
I have done the EXACT same install procedure successfully several times before,
so this puzzled me. After immense digging and troubleshooting (and many
reinstallations) I found a hint on the internet that this might be caused by
the latest mdadm switching from v0.90 to v1.0 superblocks, and that the latest
kernel might not discover them at boot. I found that this is handled in the /usr
Hopefully this will be fixed so that I may 1. avoid using an initrd and 2. avoid
reinstalling with 0.90 superblocks.
Steps to reproduce:
I discovered a few typos.
First, I didn't use the assemble command with mdadm to create the array; I used
-C for create.
Second, the error message just before the kernel panic is from memory.
Nonetheless, all the important data is there.
I have the same problem after updating the kernel from 2.6.14.x to 18.104.22.168...;
with 22.214.171.124 there were no RAID problems (but there were security-related ones).
1- in-kernel autodetect does not work with version-1 metadata. This is by design.
You should be using 'mdadm -As' or similar in an initrd if you are using
version-1 metadata.
2- mdadm does not default to version-1 metadata unless it is told to. It can be
told to by a 'CREATE' line in mdadm.conf. If you are using Debian, you
probably have that CREATE line, but you should also get a kernel with
an initrd that does the right thing.
3- Upgrading the kernel should have no effect here. If it really does, more
details are needed.
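For point 2, the CREATE line in mdadm.conf looks roughly like the fragment below (the values shown are illustrative; per mdadm.conf(5), metadata= sets the default superblock version for newly created arrays, and 0.90 is the version the in-kernel autodetect understands):

```
# /etc/mdadm/mdadm.conf (path varies by distribution)
# Defaults applied when mdadm creates a new array:
CREATE owner=root group=disk mode=0660 auto=yes metadata=0.90
```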
I suggest you "get with the program" and use an initrd. There are instructions
in the mdadm release for building an ultra-simple initrd just for md.
Or you can read the "mkinitramfs" (or whatever) doco for your distribution
and do it that way. It really isn't very hard.
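The ultra-simple initrd amounts to a static mdadm (or mdassemble) plus an /init script along these lines (a hypothetical sketch, not runnable outside an initramfs; the /newroot mount point and switch_root assume a busybox-style environment, and /dev/md0 as the real root is an assumption):

```shell
#!/bin/sh
# /init inside the initramfs -- assemble md arrays, then hand over to the real root
mount -t proc none /proc
mdadm -As                       # assemble everything listed in mdadm.conf
mount -o ro /dev/md0 /newroot   # mount the real root (device assumed)
umount /proc
exec switch_root /newroot /sbin/init
```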
"mdadm --examine /dev/hdb1
Magic : a92b4efc
Version : 00.90.00
UUID : 39006600:1cbd44af:086b834c:a4495afa
Creation Time : Sun Jun 25 13:53:28 2006
Raid Level : raid1
Device Size : 32000 (31.26 MiB 32.77 MB)
Array Size : 32000 (31.26 MiB 32.77 MB)
Raid Devices : 2
Total Devices : 2
Preferred Minor : 0
Update Time : Wed Jan 24 05:30:39 2007
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
Checksum : 35f3b3d5 - correct
Events : 0.466
   Number   Major   Minor   RaidDevice State
this     0       3      65        0      active sync   /dev/hdb1
   0     0       3      65        0      active sync   /dev/hdb1
   1     1      22       1        1      active sync   /dev/hdc1"
If I read this info correctly, I have version 0.90 metadata!?
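As an aside, the metadata revision can be pulled out of the --examine output mechanically; in the sketch below a variable stands in for a real mdadm --examine run (which needs root):

```shell
# Sample of the output quoted above; a real run would be: mdadm --examine /dev/hdb1
examine_output='         Magic : a92b4efc
       Version : 00.90.00
    Raid Level : raid1'
# Grab the Version field -- 00.90.x means a v0.90 superblock, which the
# in-kernel autodetect code does understand
version=$(printf '%s\n' "$examine_output" | awk -F' : ' '/Version/ {print $2}')
echo "$version"   # prints 00.90.00
```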
The problem I reported above doesn't exist any more; with vanilla kernel
126.96.36.199 it works again.
Thanks for this information.