[lug] RAID installation on Fedora 6 Zod
Sean Reifschneider
jafo at tummy.com
Mon May 21 03:03:15 MDT 2007
On Wed, May 16, 2007 at 09:21:32PM -0600, Nate Duehr wrote:
>You seem very reluctant to admit that Linux software RAID up until
Oh, that's easy to explain. I'm reluctant to admit that because I don't
agree with you. There are a lot of very crappy RAID implementations out
there. The DPT controllers a decade ago had incredibly crappy performance,
basically no tools for management directly under Linux, and even today the
current generation of that hardware isn't much better, particularly in the
driver and software side. Another Adaptec controller I recently had to
replace a drive in was so obtuse that it took significant time just to
figure out how to get it to accept the replacement drive. 3ware has, in
the recent past, required that if a drive falls out of the array you
replace it with a drive with a different serial number; not even wiping
the existing drive will let you re-use it in that array. And then there
was the system where replacing a drive required the machine to sit in the
BIOS until it had rebuilt the array...
Setting up arrays is not something that is at all integrated into any of
the installers I've ever seen, except for Linux software RAID.
Most important to having a reliable production system: the software RAID
is the only one I know of that will send you an alert by e-mail in the
default installation of any Linux distribution, even today. We've had
plenty of calls from people who installed RAID but didn't have time to
set up the monitoring tools, so the first notification they got was when
the second drive in the array failed and the system stopped working.
>Today, it's better -- but it just "got there" only recently. People
I've been running production systems using Linux software RAID
successfully for 6+ years now. In my experience, it's been totally usable
for many production tasks for quite some time.
>5 years ago, you couldn't have said anything good other than it was
>cheap, about Linux software RAID. You also couldn't get (easily
Not true. 5 years ago software RAID was even better than most hardware
RAID as far as software tools for manipulating it, getting status under the
OS, and integration with the installer. It also has better performance
than a lot of hardware implementations, especially when you don't have them
set for "highest performance but I'll probably eat your data" mode. For
example, 3ware doesn't seem to use the second drive in RAID-10 pairs for
reads unless the primary drive is offline.
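To put a point on the "getting status under the OS" part, these are the
tools I mean (device names are just examples):

    cat /proc/mdstat            # one-glance state of every md array
    mdadm --detail /dev/md0     # members, failed/spare devices, rebuild progress
    mdadm --examine /dev/sda1   # the md superblock on an individual member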
>No offense to you or your organization, but most companies today
>would prefer not to have to hire talent to set up RAID 5. They'd
Then they're probably even better off with software RAID, because their
most important requirement is probably that they can set up the array and
automatically get notified when a drive dies. Otherwise they'll only get
notified when the second one fails -- by their users.
So, yeah, maybe they can get a NetApp filer that will call NetApp when, or
possibly even before, a failure occurs. But spending $50k on a hardware
solution so they won't have to hire talent to set up RAID is still hiring
talent to set up RAID.
>But here's the kicker... that box was a Solaris 8 box. The OS and
>all the commands to do that were available in February of 2000.
>Linux software RAID in February of 2000 was atrocious.
If you say so. The problem you describe sounds very much like how it
would have been handled with Linux software RAID, except that the spare
drives you mention probably would have been set up as hot spares or just
another mirror, in which case zero commands would have been needed to get
to the end state you mentioned.
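For what it's worth, a hot spare under Linux software RAID is a one-liner,
either at creation time or later -- a sketch, with illustrative device
names:

    # mirror plus a hot spare at creation time:
    mdadm --create /dev/md0 --level=1 --raid-devices=2 --spare-devices=1 \
        /dev/sda1 /dev/sdb1 /dev/sdc1

    # or add a spare to an existing array:
    mdadm /dev/md0 --add /dev/sdc1

When a member fails, md starts rebuilding onto the spare on its own;
nothing has to be typed at failure time.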
Now, if you were using ATA drives, there may have been problems with the
system becoming unresponsive due to the drive failure. But I bet your Sun
box was using SCSI disks, so it may not have been a problem under Linux
software RAID either...
>We're not saying Linux software RAID is "bad", or "hasn't gotten
I guess I define "atrocious" a bit more harshly than you do or
something...
>better" -- we're saying we trust what we've been using (and has a
Yep, there you go. I've been using Linux software RAID for over a decade
and it's worked quite well. As far as Linux software RAID being a moving
target, I have no idea what you're thinking of. It doesn't feel like it
has changed that much since I started using it. Sure, it recently got
RAID-10 support directly in the tools; before that you had to use LVM to
get RAID-10 or RAID-50 equivalence. I wouldn't exactly call that a moving
target, though.
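To illustrate, here are both forms -- device names are just examples:

    # native md RAID-10 (the newer way):
    mdadm --create /dev/md0 --level=10 --raid-devices=4 \
        /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1

    # the older equivalent: two RAID-1 pairs, striped together on top
    mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
    mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sdc1 /dev/sdd1
    # ...then stripe /dev/md1 and /dev/md2 with LVM (or a RAID-0 md) on top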
What it gets down to is that I don't think Linux software RAID is nearly
as bad as I interpret you to be saying it is, and my experience has been
that hardware RAID systems are far from trouble-free and, in many
situations, no better than Linux software RAID.
There are definitely some problems with Linux software RAID. The biggest
is probably that it works on a partition-by-partition basis instead of a
drive-by-drive basis. In particular, the boot sector isn't automatically
mirrored, and Red Hat Enterprise 4 wasn't properly installing the boot
blocks on both drives, let alone re-installing them if you replaced one
of the drives.
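The usual workaround is to put the boot blocks on the second drive by
hand -- a sketch with GRUB legacy, assuming /boot is a mirror of
/dev/sda1 and /dev/sdb1:

    grub
    grub> device (hd0) /dev/sdb
    grub> root (hd0,0)
    grub> setup (hd0)
    grub> quit

...and remember to do it again whenever a drive gets replaced.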
Your data is going to be safe, and the failure probably won't even impact
operation, but during your maintenance window to test the rebuilt array
(a window you may not even need with a hardware array), you may have to
do some additional work. Of course, in an environment where they would
have spent $50k on an "entry level RAID box", they probably have
operations guides for these things anyway and have tested and documented
the procedures for this fairly likely occurrence...
>And ZFS is flat-out brilliant. It's really too bad Sun's so
>mismanaged these days... they still put out a very nice OS and lots
>of tools from people that really understand a zero-downtime mentality.
I agree, ZFS is brilliant. It's also been quite buggy, but at this point
it seems to be largely usable. Its performance isn't as good as what I
hear people were expecting; in particular, multi-user access seems to
block fairly hard on something like kernel locks. And the "verify my data
is all consistent" operation seems to not work very well on more than
trivial amounts of data. Around 1TB it takes a very long time to do the
verify, and then it tends to get basically all the way through and just
start over from scratch instead of completing, without indicating why or
whether there's a problem.
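For anyone following along, the verify operation I mean here is
presumably the scrub ("tank" is just an example pool name):

    zpool scrub tank        # start a full consistency check of the pool
    zpool status -v tank    # shows scrub progress and any checksum errors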
I was tempted to try ZFS for our recent rebuild of mirrors.tummy.com,
because it would have been nice to take a snapshot of a distro before
starting the rsync from an upstream and use that snapshot for what is
served publicly. It also would be nice to keep some snapshots of older
states of the distros, to keep stuff around that is getting rolled off by
the upstreams... But, of course, Solaris doesn't support the controller
in that box, and I'm not really sure I want to maintain a publicly
accessible Solaris box. We're a Linux company and don't have (and
probably don't want) the experience to maintain Solaris on the public
Internet.
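Roughly, the flow I was picturing would have been something like this --
a sketch, with made-up pool and dataset names:

    # freeze the current state and serve a clone of it while syncing:
    zfs snapshot mirrors/fedora@pre-sync
    zfs clone mirrors/fedora@pre-sync mirrors/fedora-public

    # pull from the upstream into the live dataset:
    rsync -a rsync://mirror.example.org/fedora/ /mirrors/fedora/

    # once the new tree checks out, drop the clone and then the snapshot
    # (the clone depends on the snapshot, so it has to go first):
    zfs destroy mirrors/fedora-public
    zfs destroy mirrors/fedora@pre-sync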
ZFS is definitely very cool; I'm just waiting for it to show up in
Linux...
Sean
--
If I wrote a file-system, it would have a super-duper block.
-- Sean Reifschneider, 2003
Sean Reifschneider, Member of Technical Staff <jafo at tummy.com>
tummy.com, ltd. - Linux Consulting since 1995: Ask me about High Availability