[lug] SW Raid question
Lee Woodworth
blug-mail at duboulder.com
Tue Jul 24 21:59:17 MDT 2007
dio2002 at indra.com wrote:
>> dio2002 at indra.com wrote:
>> [...]
>>> Let's say I go ahead and enable the compatibility mode right now so I can
>>> install my sw raid / lvm setup and get up and running. Lesser performance
>>> isn't going to kill me right now.
>> Can grub boot off LVM now? It didn't use to. Or do you /boot elsewhere?
>
> I've been combing the net for the last couple of days about grub, raid,
> and lvm. What I do know is that grub can boot md raid, but the filesystem
> must be ext3. Not sure about lvm; I don't think so.
>
> For my example, I intended to have lvm on top of a separate md-raided root
> partition. And I am actually thinking about scrapping lvm altogether. Not
> sure it really buys me much since my system is limited to 2 drives anyway,
> and I wouldn't be able to add any PVs.
What LVM buys you is being able to grow a file system. Think about what it
would take to grow your root fs without it. You can use LVM for the non-root
file systems easily. A potential issue is your distro's start sequence:
whether the raid tools get started before the lvm ones or vice versa.
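For instance, growing one of the data file systems on our file server (shown
below) is just a couple of commands. This is only a rough sketch (I'm assuming
ext3 and an offline resize; adjust the resize tool to whatever filesystem you
actually use):

  umount /storeg/data1
  lvextend -L +2G /dev/vgmd0/data1   # grow the LV by 2G
  e2fsck -f /dev/vgmd0/data1         # resize2fs wants a clean fs first
  resize2fs /dev/vgmd0/data1         # grow the ext3 fs to fill the LV
  mount /dev/vgmd0/data1 /storeg/data1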
Sean would be able to say more about sw raid. If you make the raid partitions
the correct type (fd, Linux raid autodetect), then the kernel can automatically
assemble the raid devices during disk detection at boot. This creates the
/dev/mdX devices for the arrays. Then you can mount LVM volumes from volume
groups built out of physical volumes that use /dev/mdX.
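Roughly what the setup commands look like (a sketch, not exactly what I typed;
check the mdadm and LVM man pages for your versions, and the size and
mkfs.ext3 below are just examples):

  # hda7/hdc7 already set to type fd (Linux raid autodetect) with fdisk
  mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/hda7 /dev/hdc7
  pvcreate /dev/md0                  # make the array an LVM physical volume
  vgcreate vgmd0 /dev/md0            # volume group on top of the mirror
  lvcreate -L 4.5G -n data1 vgmd0    # carve out a logical volume
  mkfs.ext3 /dev/vgmd0/data1
  mount /dev/vgmd0/data1 /storeg/data1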
Gentoo's default start order is:
RC_VOLUME_ORDER="raid evms lvm dm"
Our file server:
Partitions:
Device      Start    End      Blocks     Id  System
/dev/hda7    3190  16563   107426623+   fd  Linux raid autodetect
/dev/hdc7    1975  15348   107426623+   fd  Linux raid autodetect
SW RAID (mirror), with the automatically created device /dev/md0:
/dev/hda7: device 0 in 2 device active raid1 /dev/md0.
/dev/hdc7: device 1 in 2 device active raid1 /dev/md0.
Physical volume:
PV        VG     Fmt   Attr  PSize    PFree
/dev/md0  vgmd0  lvm2  a-    102.45G  8.45G
Volume Group:
VG     #PV  #LV  #SN  Attr    VSize    VFree
vgmd0    1   19    0  wz--n-  102.45G  8.45G
Logical Volumes:
LV     VG     Attr    LSize
data1  vgmd0  -wi-ao  4.39G
data2  vgmd0  -wi-ao  4.39G
data3  vgmd0  -wi-ao  4.39G
data4  vgmd0  -wi-ao  4.39G
data5  vgmd0  -wi-ao  4.39G
Mounted file systems:
Filesystem               1K-blocks     Used  Available  Use%  Mounted on
/dev/mapper/vgmd0-data1    4597760  4095392     502368   90%  /storeg/data1
/dev/mapper/vgmd0-data2    4597760  4097976     499784   90%  /storeg/data2
/dev/mapper/vgmd0-data3    4597760  3719376     878384   81%  /storeg/data3
/dev/mapper/vgmd0-data4    4597760  3306280    1291480   72%  /storeg/data4
/dev/mapper/vgmd0-data5    4597760  3049128    1548632   67%  /storeg/data5
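To keep an eye on the mirror, /proc/mdstat and mdadm show the array state, and
the volumes get mounted through ordinary fstab entries. The fstab line below
is only a sketch (I'm assuming ext3 and default mount options):

  cat /proc/mdstat
  mdadm --detail /dev/md0

  # /etc/fstab
  /dev/mapper/vgmd0-data1  /storeg/data1  ext3  defaults  0 2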