[ale] ATL Colocation and file server suggestions
Ken Ratliff
forsaken at targaryen.us
Tue Jan 20 00:59:13 EST 2009
> However, software RAID 1, 10 is excellent and performance comparable
> with a hardware card.
I still prefer to do RAID 10 on hardware. I've found software RAID to
be pretty finicky: drives drop out of the array for no good reason,
and you don't notice until the slight performance hit from the rebuild
makes you go 'hm.'
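That silent-degradation problem is at least easy to catch. A minimal
sketch, assuming Linux md RAID; the sample /proc/mdstat line below is
illustrative, not from a real box:

```shell
# In /proc/mdstat, a failed member shows as '_' in the status
# brackets: [U_] instead of [UU]. Sample line for illustration only.
mdstat_line='md0 : active raid1 sdb1[1] sda1[0] ... [2/1] [U_]'
case "$mdstat_line" in
  *_*) echo "degraded" ;;
  *)   echo "healthy"  ;;
esac
```

On a live system the same check is just `grep '_' /proc/mdstat`, and
mdadm has a monitor mode (`mdadm --monitor --scan --mail=root
--daemonise`) that mails you when a member drops, so you find out
before the rebuild does.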
I actually don't like RAID 10 at all. I'd rather toss the 4 drives
into a RAID5 and get more space. Sure, a RAID 10 will allow you to
survive 2 dead drives, as long as it's the right 2 drives. I've seen
both drives of one mirror fail in a RAID 10 a few times, and that has
pretty much the same result as 2 dead drives in a RAID 5.
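The space trade-off is easy to put numbers on. A quick sketch with
four drives; the 500 GB size is an arbitrary example, not from this
thread:

```shell
# Usable capacity and worst-case fault tolerance for 4 drives.
DRIVE_GB=500
N=4
echo "RAID 10 usable: $(( N / 2 * DRIVE_GB )) GB (survives 2 failures only if they hit different mirrors)"
echo "RAID 5  usable: $(( (N - 1) * DRIVE_GB )) GB (survives exactly 1 failure)"
```

Same four spindles, and RAID 5 buys you 50% more usable space; the
price is the parity rebuild window the rest of this thread complains
about.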
Software RAID 1 I have no problem with, though. It's quick, it's easy,
the performance hit is negligible unless something is really pounding
the disk I/O, and as someone else mentioned, being able to split the
mirror and use the halves as fully functional drives does occasionally
have its uses.
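For the mirror-splitting trick, a hedged command fragment (not
runnable here: the device names are hypothetical, it needs root, and
it assumes old 0.90-style md metadata at the end of the device, so
each member holds a directly mountable filesystem):

```shell
mdadm /dev/md0 --fail /dev/sdb1      # mark one half of the mirror failed
mdadm /dev/md0 --remove /dev/sdb1    # detach it from the array
mount -o ro /dev/sdb1 /mnt/snapshot  # the detached half is a complete copy
# Rejoin later with: mdadm /dev/md0 --add /dev/sdb1  (kicks off a resync)
```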
> At a previous job where many, MANY drives were installed in many
> <confidential number> machines, it was determined that RAID 5 was a
> detriment. 2 failed drives was system death. The recovery time on
> large (>500GB) drives was problematic enough that second drive
> failure was far from a negligible probability, given that all drives
> were the same make and model.
Yeah, we found out the hard way that software RAID 5 is a very, very
bad idea, especially if you're running it on a high-activity web
server. After enough times of having a drive in software RAID 5 die
before you're done rebuilding from the previous drive failure, you
kind of learn that maybe this isn't such a good idea (or you tell the
night crew to turn Apache off so that the array can rebuild in peace,
but that's not something properly spoken of in public!). The
performance hit alone makes software RAID 5 not worth implementing.
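The night-crew trick has a politer equivalent: md's resync speed is
tunable at runtime, so you can cap the rebuild's impact on live I/O
instead of stopping Apache. A sketch; the KB/s values are illustrative,
not a recommendation:

```shell
sysctl dev.raid.speed_limit_max=10000  # cap resync bandwidth per device (KB/s)
sysctl dev.raid.speed_limit_min=1000   # ...but guarantee forward progress
cat /proc/mdstat                       # watch the rebuild percentage
```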
Now, with that being said, no form of RAID is truly safe. I had a
server today drop both drives in one of its RAID 1s. They were older
36 GB SCSI drives, so it was about time anyway, but losing both of
them meant I got to spend time flattening the box and reinstalling it.
This is also why I try to avoid using drives from the same
manufacturer and batch when building arrays. If you don't, you'd
better pray to God that the rebuild completes before the next one
dies. It's said that RAID is no substitute for a proper backup, and
that's true. (And my life being somewhat of an essay in irony, the box
that dropped both drives in the mirror today was being used as a
backup server.)
(Also, I'm not preaching at you, Jim, I'm sure you know all this crap,
I'm just making conversation!)
> RAID 1 recovery is substantially quicker and drives
> are low cost enough to not need the N-1 space of RAID 5.
All depends on your storage needs. We have customers with 4 TB
arrays, 6 TB arrays, and one with an 8.1 TB array (which presents some
interesting challenges when you need to fsck the volume... why we used
ReiserFS on that array, I have no idea). Those are a little hard to do
in RAID 1 :)
For now, anyway.