[lug] more RAID0 on /

Lee Woodworth blug-mail at duboulder.com
Sat Nov 20 15:24:48 MST 2004


D. Stimits wrote:
> Lee Woodworth wrote:
> 
>> D. Stimits wrote:
>>
>>> Lee Woodworth wrote:
>>
>>
>>
>> .....
>>
>>>> You didn't say whether you are unplugging the IDE drives when you 
>>>> are switching from IDE to SCSI mode. If the IDEs are still present, 
>>>> they could affect what is considered /dev/md0.
>>>
>>>
>> This is the error from /boot/vmlinuz... which is using an initrd? I 
> 
> 
> Vmlinuz has already loaded at this stage and a number of things have 
> already succeeded, including proof that /boot/ has been read fine and 
> that vmlinuz has in fact loaded. Yes, it uses an initrd, but all MD 
> functionality is hard coded into the kernel...those modules do not even 
> exist in the IDE install either, it works fine, plus the kernel was 
> compiled with newer features to build in the config settings...I can 
> absolutely guarantee that this kernel has device mapper and IDE hard 
> coded. I have decompressed and mounted the initrd on loopback and can 
> guarantee scsi is available that way, plus the /boot/ *is* on SCSI and 
> it would have never read grub or loaded vmlinuz if it had not. I 
Having /boot be accessible only means GRUB could read the disk (via the
BIOS disk services) and that GRUB recognized the partition and
filesystem. The kernel's own drivers haven't been used at that point; in
fact, on my system /boot isn't mounted at any time during the boot
process. The MBR loaded /boot/grub/stage1, which loaded
/boot/grub/xfs_stage_1_5, which loaded /boot/grub/stage2, which then
loads /boot/vmlinuz-xxxx.
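For reference, that chain ends at an ordinary menu entry; a sketch of
what it might look like (the device name (hd0,0), kernel version, and
root= value here are illustrative assumptions, not your actual setup):

```shell
# /boot/grub/grub.conf (GRUB legacy) -- illustrative only
# MBR stage1 -> xfs_stage1_5 (reads XFS) -> stage2 -> this entry
title Linux 2.6.9
    root (hd0,0)                      # partition GRUB reads /boot from
    kernel /vmlinuz-2.6.9 root=/dev/md0
    initrd /initrd-2.6.9.img
```

Everything up to the "kernel" line is done with BIOS calls, so it proves
nothing about what drivers the loaded kernel contains.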
> guarantee absolutely that it has device mapper, IDE, SCSI, and MD 
> available as it boots. Not possible with how far it has succeeded to not 
> be.
> 
>> suspect that the md driver isn't in your kernel or hasn't been loaded 
>> from an initrd. I have a 2.6.8 kernel with the device-mapper 
>> compiled-in (CONFIG_BLK_DEV_MD=y, same flag for RAID Support) and the 
>> device 
> 
> Same here. Also the device mapper config is "y" not "m".
CONFIG_BLK_DEV_MD=m means the driver is compiled as a loadable module
rather than statically linked into the kernel (y is for static linking).
Maybe the kernel build scripts are broken and it got statically linked
anyway. Look in /lib/modules/2.6.9-*/kernel/drivers/ for dm_mod.ko to
see whether the driver was built as a loadable kernel module.
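A quick way to check both places at once (the paths here assume a
typical 2.6.9 source tree and module layout; adjust the version string
to match yours):

```shell
# If dm/md were built as modules, .ko files will show up here and the
# drivers must then be loaded from the initrd before the root mount.
find /lib/modules/$(uname -r)/kernel/drivers/md -name '*.ko' 2>/dev/null

# Cross-check the build configuration: =y is statically linked,
# =m is a module.  (.config path is an assumption; some distros
# expose it as /proc/config.gz instead.)
grep -E 'CONFIG_BLK_DEV_(MD|DM)=' /usr/src/linux/.config
```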

Running nm on /usr/src/linux-2.6.9-*/vmlinux (the uncompressed,
unstripped build result) I find:
   c02ba3fd t .text.lock.dm
   c02bec29 t .text.lock.dm_ioctl
   c02bbb29 t .text.lock.dm_table
   c02bc0a5 t .text.lock.dm_target
   c02b89eb t .text.lock.md
for the statically-linked device mapper and raid drivers.
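The symbol check above can be reproduced with a one-liner (the vmlinux
path is an assumption; point it at your own build tree). Any hits mean
those drivers are statically linked into that kernel image:

```shell
# List the lock sections nm emits for the dm/md drivers; a match
# means device-mapper / md are built into this vmlinux.
nm /usr/src/linux-2.6.9/vmlinux | grep -E '\.text\.lock\.(dm|md)'
```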

I can't think of anything else that might be happening. From my dmesg 
(gentoo with udev and LVM, no initrd, no raidtools or raidstart):

....
md: linear personality registered as nr 1
md: raid0 personality registered as nr 2
md: raid1 personality registered as nr 3
md: md driver 0.90.0 MAX_MD_DEVS=256, MD_SB_DISKS=27
device-mapper: 4.3.0-ioctl (2004-09-30) initialised: dm-devel at redhat.com
....
md: Autodetecting RAID arrays.
md: autorun ...
md: ... autorun DONE.
....
XFS mounting filesystem dm-0
Ending clean XFS mount for filesystem: dm-0

I am using LVM (which uses device mapper) so there are no raid arrays
found. But the issue is the same: the /dev/md* devices need to be set
up before the root fs is mounted.
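That ordering is visible in the log itself: the md/device-mapper
registration lines have to appear before the root filesystem mount
message. One way to pull them out (a sketch; the exact prefixes match
the dmesg excerpt above):

```shell
# Show md and device-mapper initialization lines from the boot log.
# These must precede the "XFS mounting filesystem" line for the
# root device to exist at mount time.
dmesg | grep -E '^(md|device-mapper):'
```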
