[lug] RAID redundancy with boot sectors?

Dan Ferris dan at usrsbin.com
Sat Nov 25 21:04:07 MST 2006


With Red Hat-based distros, if you decide to use software RAID, the /boot 
partition can only go on a RAID 1; the installer won't accept any other 
RAID level for it.  When Red Hat goes to format the partitions, it will 
install grub on both drives that make up the /boot RAID 1 mirror.

I've had it screw up as well.  When that happens, you just use dd like this:

dd if=/dev/sda of=/dev/sdb bs=512 count=1
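
Note that those first 512 bytes contain both the boot code (the first 446 
bytes) and the partition table (the next 64 bytes, plus a 2 byte 
signature).  If the second drive already has its own partition table and 
you only want to copy the boot code over, you can limit the copy:

dd if=/dev/sda of=/dev/sdb bs=446 count=1

That leaves /dev/sdb's partition table alone.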

If the boot drive dies, you just go into the BIOS and tell the PC to 
boot off of the second drive and you should be good to go.  It works 
especially well with SATA drives.  I've done this procedure twice: once 
when Fedora didn't install grub on both drives and I had to install it 
manually, and a second time when it worked flawlessly.  Since your 
/boot is mirrored, the kernel images will all be where you expect; you 
just have to make sure grub is installed on the drive you're booting from.
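
If you ever do have to install grub on the second drive by hand, 
something like this should do it from the grub shell (assuming /boot is 
the first partition on /dev/sdb; adjust the device and partition to 
match your layout):

grub> device (hd0) /dev/sdb
grub> root (hd0,0)
grub> setup (hd0)
grub> quit

The device line makes grub treat /dev/sdb as the first BIOS drive, 
which is exactly what it becomes once the primary drive is gone.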

This should work even if you are using RAID 5 for the rest of the 
system, since /boot itself can only be mirrored.
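
For example, a common three disk layout (hypothetical device names) 
would look like:

/dev/md0  RAID 1  sda1 + sdb1 + sdc1  ->  /boot
/dev/md1  RAID 5  sda2 + sdb2 + sdc2  ->  / (or an LVM physical volume)

The BIOS only ever has to read the MBR and /boot, and both of those 
live outside the RAID 5.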

Incidentally, this is a good reason to always have a separate /boot partition.

Another thing: if you have a RAID 1 mirror and a drive dies, you can 
swap the bad one out, copy the partition table from the good drive to 
the new drive, and rebuild the mirror very rapidly.
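
A sketch of that, assuming /dev/sda is the surviving drive, /dev/sdb is 
the replacement, and /dev/md0 is the degraded /boot mirror (adjust the 
names to your setup).  I'm using sfdisk here instead of dd, since it 
copies just the partition table:

# clone the partition table from the good drive to the new one
sfdisk -d /dev/sda | sfdisk /dev/sdb
# add the new partition back into the mirror and let it resync
mdadm /dev/md0 --add /dev/sdb1
# watch the rebuild progress
cat /proc/mdstat

Then reinstall grub on the new drive as described above.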

Dan Ferris

D. Stimits wrote:
> I'm curious about something with software RAID on linux. You can 
> easily make partitions redundant by using something like RAID 1/5/10 
> instead of just partitioning directly on a drive. What's the right way 
> to back up the boot sector itself? Using software RAID 10 as an 
> example, I believe that since the BIOS and standard non-RAID 
> controllers do not have any knowledge of RAID, that there is no 
> possibility of automatically including the boot sector within the 
> RAID. Would installing the boot record onto the MBR of the first 
> physical drive, then doing a dd of a 512 byte block to all other RAID 
> drives at least make it possible to yank the first drive out and move 
> another drive into its place in order to get boot ability back? Or 
> would that also mess up the RAID itself (and assuming LVM is being 
> used, LVM too)? I'm just thinking that part of the RAID data or LVM 
> data could make the MBR itself different from one physical drive to 
> the next. I could imagine simply keeping a 512 byte dd copy of the MBR 
> of each drive on a floppy and restoring via dd if the drive ever got 
> replaced.
>
> D. Stimits, stimits AT comcast DOT net

-- 
#Consensual Unix
# unzip ; strip ; touch ; finger ; mount ; fsck ; more ; yes ; umount ; sleep


There is a time and a place for use of weapons. Miyamoto Musashi, 1645



