[lug] LVM and disk failure

Nate Duehr nate at natetech.com
Sun Jan 8 07:44:58 MST 2006


Daniel Webb wrote:
> I've been Googling for the answer to this and failing, so:
> 
> What happens when you have a 2-disk LVM volume group and disk 1 fails?
> Obviously this will depend on the filesystem you put on top of the volume,
> right?  So which filesystems will recover gracefully if you chop them in half
> like that?
> 
> It's a little disturbing that in all the documentation I've read on LVM this
> is never mentioned, and yet it seems to destroy the main purpose of lvm: to be
> able to add and remove disks to a volume easily.  Each physical volume you add
> makes it that much more likely that you'll lose the whole thing.  Sure, you
> can put it on top of RAID, but now you lost your size flexibility because RAID
> isn't so easy to resize (or is it?).  The snapshots feature is nice, that's
> all I'll use it for until I find a satisfactory answer to this question.
> 
> I also was checking out evms and it looks very interesting.  Any impressions
> from those who have used it?  Is it stable/reliable?  I didn't see anything in
> their docs either about recovering when one disk in a volume fails.

I'm not sure why this is a surprise to you...

LVM is similar to RAID-0 in this light... if you lay a filesystem on top
of two disks instead of one and you lose either disk, you lose your
filesystem.
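For example, a layout roughly like this (device names and sizes are just
placeholders) puts one filesystem across two disks, and losing either
disk takes the whole filesystem with it:

  # two physical volumes, one volume group, one logical volume spanning both
  pvcreate /dev/sdb1 /dev/sdc1
  vgcreate vg0 /dev/sdb1 /dev/sdc1
  lvcreate -n data -L 500G vg0    # bigger than either disk alone
  mkfs.ext3 /dev/vg0/data
  # 'pvs' will show extents allocated on both disks; there's no redundancy,
  # so one dead disk means restoring /dev/vg0/data from backup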

Backups... backups... backups.

LVM is a convenience when you can't find a single disk big enough for
your data and HAVE to have it all in the same filesystem.  The actual
NEED for LVM to do that is fairly rare.

Its real strength is in resizing filesystems, if the filesystem itself
can handle it.  When you have a large disk but aren't sure how big each
of the mountpoints should be...
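Just as a sketch (assuming ext3 and made-up names/sizes), growing a
mountpoint later looks roughly like:

  # give /home another 20G from free space in the volume group...
  lvextend -L +20G /dev/vg0/home
  # ...then grow the filesystem to fill the larger LV
  resize2fs /dev/vg0/home
  # growing ext3 can be done online if the kernel supports it; shrinking
  # is the riskier direction and needs the filesystem unmounted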

But it's definitely never a replacement for fault-tolerance tools like
RAID-1 and RAID-5... and mostly it's just an abstraction layer to allow
you to treat your physical disks in whatever fashion you like.
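If you want both, the usual approach (sketched here with placeholder
device names) is to build the redundancy first and hand the resulting
md device to LVM as its physical volume:

  # mirror two disks, then put LVM on top of the mirror
  mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
  pvcreate /dev/md0
  vgcreate vg0 /dev/md0
  lvcreate -n data -L 100G vg0
  # a single disk failure is now the RAID layer's problem, and you keep
  # LVM's resizing and snapshots on top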

LVM is only PART of the disk admin's toolkit!  :-)

Nate


