[lug] more RAID0 on /

D. Stimits stimits at comcast.net
Sat Nov 20 18:00:04 MST 2004


Lee Woodworth wrote:
> D. Stimits wrote:
> 
>> Lee Woodworth wrote:
>>
>>> D. Stimits wrote:
>>>
>>>> Lee Woodworth wrote:
>>>
>>> .....
>>>
>>>>> You didn't say whether you are unplugging the IDE drives when you 
>>>>> are switching from IDE to SCSI mode. If the IDEs are still present, 
>>>>> they could affect what is considered /dev/md0.
>>>>
>>> This is the error from /boot/vmlinuz... which is using an initrd? I 
>>
>> Vmlinuz has already loaded at this stage, and a number of things have 
>> already succeeded, which proves that /boot/ has been read fine and 
>> that vmlinuz has in fact loaded. Yes, it uses an initrd, but all MD 
>> functionality is hard coded into the kernel; those modules do not 
>> even exist in the IDE install either, and it works fine. The kernel 
>> was also compiled with the newer option that builds the config 
>> settings into the image, so I can absolutely guarantee that this 
>> kernel has device mapper and IDE hard coded. I have decompressed and 
>> mounted the initrd on loopback and can guarantee SCSI is available 
>> that way; plus /boot/ *is* on SCSI, and it would never have read 
>> grub or loaded vmlinuz if it had not. I 
> 
> Having /boot be accessible means the GRUB SCSI driver loaded and that 
> GRUB recognized the partition and fs. The kernel drivers haven't been 
> used yet; in fact, on my system /boot isn't mounted at any time during 
> the boot process. The MBR loaded /boot/grub/stage1, which then loaded 
> /boot/grub/xfs_stage_1_5, which then loaded /boot/grub/stage2, which 
> then loaded /boot/vmlinuz-xxxx.
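
(As a sketch, the chain described above corresponds to a GRUB legacy 
setup roughly like the following; the device names and file names here 
are assumptions, not taken from the actual system:

    # /boot/grub/grub.conf -- hypothetical entry for a RAID0 root
    default 0
    timeout 5
    title Linux 2.6.9 (RAID0 root)
        root (hd0,0)                         # partition holding /boot
        kernel /vmlinuz-2.6.9 root=/dev/md0  # root on the RAID0 array
        initrd /initrd-2.6.9.img             # carries the scsi + ext3 modules

Each stage in that chain only has to be able to find the next file on 
the /boot partition.)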
> 
>> guarantee absolutely that it has device mapper, IDE, SCSI, and MD 
>> available as it boots. Given how far the boot has succeeded, it is 
>> not possible for them to be missing.
>>
>>> suspect that the md driver isn't in your kernel or hasn't been loaded 
>>> from an initrd. I have a 2.6.8 kernel with the device-mapper 
>>> compiled-in (CONFIG_BLK_DEV_MD=y, same flag for RAID Support) and the 
>>> device 
>>
>> Same here. Also the device mapper config is "y" not "m".
> 
> CONFIG_BLK_DEV_MD=m is supposed to mean it is compiled as a module and 
> not statically linked into the kernel (y is for static linking). Maybe the 

Yes, I intentionally static linked them so that they would be available 
regardless of initrd loading. The device mapper, RAID0, and ext2 are 
statically available at all times. The only modules provided in the 
initrd are scsi and ext3, and those are without any doubt in the initrd 
(take, for example, the ability to mount /boot/: this would not even be 
possible if the scsi driver were not available in the initrd, and under 
the IDE install ext3 works as well; the error message is always one of 
an invalid device and NOT a bad filesystem type).
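
For reference, the loopback check mentioned above can be done roughly 
like this (the image name and paths are assumptions for a Red Hat 
style compressed-ext2 initrd):

    # decompress and loop-mount the initrd to verify its contents
    cp /boot/initrd-2.6.9.img /tmp/initrd.gz
    gunzip /tmp/initrd.gz                  # leaves /tmp/initrd
    mkdir -p /mnt/initrd
    mount -o loop /tmp/initrd /mnt/initrd
    ls /mnt/initrd/lib                     # the scsi and ext3 modules live here
    cat /mnt/initrd/linuxrc                # shows what gets loaded at boot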

> kernel build scripts are broken and it is statically linked into the 
> kernel. Look in /lib/modules/2.6.9-*/kernel/drivers/ for dm_mod.ko to 
> see if the driver was built as a loadable kernel module.
> 

Doesn't exist; it was hard wired into the kernel. I have played with 
device mapper functionality from the IDE drive, so I know it passes the 
acid test: it works.
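
Anyone wanting to double check the same thing can look in two places 
(the paths assume the build tree is still around):

    # a module build would leave .ko files here; no output means static
    find /lib/modules/2.6.9-*/kernel/drivers -name 'dm_mod.ko' -o -name 'raid0.ko'
    # and the config symbols themselves:
    grep -E 'CONFIG_BLK_DEV_MD|CONFIG_MD_RAID0|CONFIG_BLK_DEV_DM' \
        /usr/src/linux-2.6.9-*/.config
    # expected for static linking:
    #   CONFIG_BLK_DEV_MD=y
    #   CONFIG_MD_RAID0=y
    #   CONFIG_BLK_DEV_DM=y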

> From nm /usr/src/linux-2.6.9-*/vmlinux (the uncompressed, unstripped 
> build result) I find:
>   c02ba3fd t .text.lock.dm
>   c02bec29 t .text.lock.dm_ioctl
>   c02bbb29 t .text.lock.dm_table
>   c02bc0a5 t .text.lock.dm_target
>   c02b89eb t .text.lock.md
> for the statically-linked device mapper and raid drivers.
> 

All of these are present in vmlinux.
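
(The equivalent one-liner, for anyone repeating the check:

    # list the statically linked dm/md lock sections in the unstripped kernel
    nm /usr/src/linux-2.6.9-*/vmlinux | grep -E '\.text\.lock\.(dm|md)'

If those symbols show up, the drivers are in the kernel proper rather 
than in modules.)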

> I can't think of anything else that might be happening. From my dmesg 
> (gentoo with udev and LVM, no initrd, no raidtools or raidstart):
> 
> ....
> md: linear personality registered as nr 1
> md: raid0 personality registered as nr 2
> md: raid1 personality registered as nr 3
> md: md driver 0.90.0 MAX_MD_DEVS=256, MD_SB_DISKS=27
> device-mapper: 4.3.0-ioctl (2004-09-30) initialised: dm-devel at redhat.com
> ....
> md: Autodetecting RAID arrays.
> md: autorun ...
> md: ... autorun DONE.

It does all this fine when I boot to IDE. If I use KRUD in rescue mode, 
I have to manually raidstart /dev/md0 first; in that case it fails to 
autodetect (no errors are given, it just doesn't seem to even try).
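
The manual sequence in rescue mode looks roughly like this (raidstart 
is what I actually use; the mdadm line is an untested alternative, and 
the member partitions are assumptions):

    # from the KRUD rescue shell: assemble the array by hand, then mount
    raidstart /dev/md0                     # uses /etc/raidtab for the members
    # or, where only mdadm is available:
    # mdadm --assemble /dev/md0 /dev/sda2 /dev/sdb2
    mount /dev/md0 /mnt/sysimage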

> ....
> XFS mounting filesystem dm-0
> Ending clean XFS mount for filesystem: dm-0
> 
> I am using LVM (which uses device mapper), so there are no raid arrays 
> found. But the issues are the same: the /dev/md* devices need to be 
> set up before the root fs mount.
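
(One hedge worth noting here: in-kernel autodetection only fires for 
member partitions marked with type 0xfd, and an array can also be 
forced together from the boot loader with the md= kernel parameter. A 
sketch, with the member partitions assumed:

    # GRUB kernel line: assemble md0 from named partitions before root mount
    kernel /vmlinuz-2.6.9 root=/dev/md0 md=0,/dev/sda2,/dev/sdb2

That bypasses autodetection entirely.)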
> 

Just curious, is this LVM2? Initially I figured that plain RAID0 
without LVM would be simpler... I still believe that if I can't boot 
RAID0, I won't be able to boot RAID0 inside of LVM, though it might 
need more user space tools in the initrd. The thing that worries me 
about additional technologies like LVM or EVMS is that rescue might be 
complicated by them, adding special requirements to rescue media.
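
For what it is worth, LVM2 rescue mostly adds a couple of user space 
commands after the array is assembled; a sketch with assumed volume 
names:

    # after raidstart/mdadm has brought up /dev/md0:
    vgscan                               # find volume groups on the array
    vgchange -ay                         # activate all logical volumes
    mount /dev/vg0/root /mnt/sysimage    # vg0/root is an assumed name

The catch is the one above: the rescue media has to ship those tools 
and a kernel with dm support.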

D. Stimits, stimits AT comcast DOT net


