[ale] Onboard RAID

Lightner, Jeff JLightner at water.com
Thu Nov 17 11:50:34 EST 2011


That might have been the old Adaptec OEM PERC.  I never used those PERC2 controllers, and the fact that they existed caused me much confusion some time back when I was trying to troubleshoot an LSI OEM PERC2.  I've not seen this with any of the LSI OEM controllers, which cover everything from some of the PERC2 models on up (they're up to at least PERC6 nowadays).

Judging computer technology by what it did 10 years ago hardly seems reasonable to me.   I’ve been using the PERC stuff for over 7 years now and haven’t seen any irrecoverable failures for supported equipment.

________________________________
From: ale-bounces at ale.org [mailto:ale-bounces at ale.org] On Behalf Of Jim Kinney
Sent: Thursday, November 17, 2011 11:37 AM
To: Atlanta Linux Enthusiasts
Subject: Re: [ale] Onboard RAID


-2 on perc
Multiple PERC failures on a wide array of systems, all with the same root cause: at the point in the battery exercise cycle where the battery was fully discharged, the controller would drop a drive from the array, and every disk was left dirty because the exercise cycle opened a block on each drive in the array to store a timestamp.
Granted, these were all old SCSI systems, but they were "the high-class item" at the time (2001-2002).
On Nov 17, 2011 9:17 AM, "Lightner, Jeff" <JLightner at water.com> wrote:
+1 on PERC

We've been using PERC controllers here for years in all of our PC-based systems (Windows, Linux, FreeBSD) and haven't had any extended downtime, thanks to using (real) hardware RAID.  Note that the PERC line is OEMed from LSI, so even if you're not using Dell PowerEdge you can get an equivalent card.  However, it sounds like you don't have any money, so you should probably go with software RAID.  We ran some systems without PERC on software RAID for several years with no major issues.

I agree that Fake RAID doesn't save you anything over doing software RAID and the downside of using it likely outweighs any touted benefits.
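
For what it's worth, on the Linux side the kernel exposes software RAID (md) state in /proc/mdstat, so spotting a degraded array is easy to script.  A minimal sketch (assumes Linux md RAID; hooking it into cron or your monitoring is left to you):

#!/usr/bin/env python3
"""Warn about degraded Linux software RAID (md) arrays by parsing /proc/mdstat."""
import re
import sys

# Status lines end like "... [4/4] [UUUU]"; an underscore marks a failed
# or missing member, and the first number pair is members-expected/active.
STATUS_RE = re.compile(r"\[(\d+)/(\d+)\]\s+\[([U_]+)\]")

def degraded_arrays(mdstat_path="/proc/mdstat"):
    """Return a list of (array_name, members_expected, members_active)."""
    problems = []
    current = None
    with open(mdstat_path) as f:
        for line in f:
            # Array stanzas start like: "md0 : active raid10 sdb1[1] sda1[0]"
            m = re.match(r"^(md\d+)\s*:", line)
            if m:
                current = m.group(1)
                continue
            s = STATUS_RE.search(line)
            if s and current:
                expected, active = int(s.group(1)), int(s.group(2))
                if active < expected or "_" in s.group(3):
                    problems.append((current, expected, active))
    return problems

if __name__ == "__main__":
    bad = degraded_arrays()
    for name, expected, active in bad:
        print(f"WARNING: {name} is degraded ({active}/{expected} members active)")
    sys.exit(1 if bad else 0)

Run periodically, that answers the "did a drive drop out?" question without any vendor tooling.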





-----Original Message-----
From: ale-bounces at ale.org [mailto:ale-bounces at ale.org] On Behalf Of Brian Mathis
Sent: Wednesday, November 16, 2011 4:12 PM
To: Atlanta Linux Enthusiasts
Subject: Re: [ale] Onboard RAID

There is a world of difference between "hardware" BIOS RAID and a real
RAID card like a PERC H700.  Please do not throw both of those things
in the same category.  Given the choice between software RAID and BIOS
RAID, software RAID is the only real choice.  However, a real RAID
card will almost always be the best option, if you have one available.

I haven't used Windows software RAID recently, but I think it will be
difficult to get a RAID10 working since the drivers required for
accessing the striped data are themselves striped across the disks,
rendering them unreadable to the system as it boots.  Windows may use
a separate boot partition that is not striped to get around this
issue, but you will have to research that (and I'm not sure a Linux
user group mailing list is the place to find the best answer).  I'm
sure you could test it out in a VM.
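
On the "test it out in a VM" suggestion: one quick way is to hand a throwaway VM several blank virtual disks and run the installer against them.  A rough sketch using QEMU from Python (disk count, sizes, machine type, and the win2008.iso path are all placeholders, not anything from this thread; adjust the disk bus if the installer can't see the disks):

#!/usr/bin/env python3
"""Throwaway QEMU VM with four blank disks for experimenting with Windows
software RAID 10.  All names and sizes here are illustrative."""
import subprocess

DISKS = [f"raidtest{i}.qcow2" for i in range(4)]

# Create four sparse 20 GB virtual disks.
for disk in DISKS:
    subprocess.run(["qemu-img", "create", "-f", "qcow2", disk, "20G"], check=True)

# Boot the installer ISO with all four disks attached (the q35 machine type
# provides an AHCI controller with enough ports for the disks plus the CD-ROM).
cmd = ["qemu-system-x86_64", "-machine", "q35", "-m", "4096", "-smp", "2",
       "-cdrom", "win2008.iso", "-boot", "d"]
for disk in DISKS:
    cmd += ["-drive", f"file={disk},format=qcow2,if=ide"]
subprocess.run(cmd, check=True)

Whether the installer will let you boot from whatever volume layout you build is exactly the thing the VM run would tell you.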

As for Windows being completely, horribly sucky sucky, please cut it
out.  A very large portion of the world uses Windows for rather large
file storage on a daily basis, and they don't all constantly crash and
burn.  It may not be your preference, so leave it at that.  Linux has
its own share of problems.

Finally, why do they include BIOS RAID on systems?  Mainly to have a
feature to list on the package.  Incidentally, I don't think I would
buy a board for enterprise usage that has such a feature.  Those are
typically aimed at the enthusiast market.


P.S. Please pay attention to whether the replies you receive are top
or bottom posted and use the same method to continue the conversation.
 I'm not one to care, as long as you are consistent within the same
thread.


❧ Brian Mathis



On Wed, Nov 16, 2011 at 2:53 PM, Greg Clifton <gccfof5 at gmail.com> wrote:
> Well, we have a contract for this system and got hit by the recent hard
> drive price increase, so we don't have any more $ to spend on an additional
> box (and it wasn't my sale, or I might have pitched a FreeNAS box). Plus we
> have mucho surplus redundant power in this box. Surely RAID support under
> Server 2008 is way better than running a [I think] non-RAIDed NT drive that
> has been running for years now?
> Now, as I understand it, all the BIOS options are "fake RAID," and I fully
> appreciate the potential for problems with a [bootable] hardware RAID. I
> always recommend that my customers have a separate mirrored boot drive and
> NOT boot from the storage array. I suppose the same sort of problem could
> result from booting from the fake RAID. The next question is: if it is so
> bad/unreliable, WHY do the BIOSes support fake RAID in the first place?
> Especially now that we have 3TB and soon will have 4TB hard drives--that
> pretty much does away with the need for RAID for capacity for most folks,
> though there is still a demand for striping for faster data access.
>
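
For the capacity point above, the usable-space arithmetic is worth keeping straight; a small sketch (the drive count and size below are just examples):

#!/usr/bin/env python3
"""Usable capacity for common RAID levels, given n identical drives of size s."""

def usable_tb(level, n, size_tb):
    """Return usable capacity in TB for n drives of size_tb each."""
    if level == "raid0":
        return n * size_tb                 # striping only, no redundancy
    if level == "raid1":
        return size_tb                     # n-way mirror of one drive
    if level == "raid10":
        if n < 4 or n % 2:
            raise ValueError("RAID 10 needs an even number of drives, >= 4")
        return (n // 2) * size_tb          # mirrored pairs, striped together
    if level == "raid5":
        if n < 3:
            raise ValueError("RAID 5 needs at least 3 drives")
        return (n - 1) * size_tb           # one drive's worth of parity
    if level == "raid6":
        if n < 4:
            raise ValueError("RAID 6 needs at least 4 drives")
        return (n - 2) * size_tb           # two drives' worth of parity
    raise ValueError(f"unknown level: {level}")

if __name__ == "__main__":
    # Example: four 3 TB drives.
    for level in ("raid0", "raid1", "raid10", "raid5", "raid6"):
        print(f"{level}: {usable_tb(level, 4, 3):g} TB usable")

With four 3TB drives, RAID 10 still only nets 6TB usable, which is why the remaining argument for striping tends to be speed rather than capacity.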
> On Wed, Nov 16, 2011 at 2:28 PM, Michael B. Trausch <mike at trausch.us> wrote:
>>
>> On 11/16/2011 02:12 PM, Greg Clifton wrote:
>> > More details: this is a new server (single-proc Xeon X3440) with only 10
>> > users, so it won't be heavily taxed. Moving the storage to a different
>> > Linux box really isn't an option either. We're replacing an OLD server
>> > running NT with the 2008 server.
>>
>> Depending on the reason why it "isn't an option", it might be worth
>> pushing back on.  The whole point of separating it out is because
>> Windows server sucks, even with only 10 users on it.  The way it
>> operates sucks, the way it treats things on the disks sucks, the overall
>> speed of data access sucks.  Keep a single disk in the Windows server
>> (maybe mirrored) that is the system disk, and put everything else
>> somewhere else.  If you don't want a Linux box, then get a RAID array box
>> that hooks up to the Windows box with a single eSATA connection and call
>> it a day.  That is better than having Windows sort it out.
>>
>> > What you are saying is that SOFTWARE is "more better" in all cases than
>> > the BIOS based RAID configuration. OK, but does Server 2008 support RAID
>> > 10? If not, we must rely on the BIOS RAID.
>>
>> And you do NOT want to rely on BIOS RAID.  At all, period, never.  Bad
>> idea, bad call.  I have seen *many* BIOS RAID setups fail for a wide
>> variety of reasons, but most of the time it seems to me that it is
>> because some component of the implementation of them is buggy.  It
>> happens frequently enough that I wouldn't trust hourly snapshotted data
>> on such a storage mechanism, I'll say that much.
>>
>> > If we must do that then the question falls back to which is the better
>> > RAID option [under Windows].
>> > I saw something on some RAID forum that said the Adaptec was for Linux
>> > OS and the Intel for MS OS. Since Adaptec drivers are built into Linux,
>> > that at least makes some sense.
>>
>> Adaptec has drivers for Windows as well.
>>
>> The thing is that with hardware RAID it doesn't matter: you cannot
>> upgrade, you are not portable.  It is a dangerous option.
>>
>> Consider this:  what happens if your disk controller fails?  If that
>> disk controller does RAID, and it has been discontinued, you may be
>> looking at a whole RAID rebuild instead of just a hardware swap-out.  In
>> other words, with hardware RAID, it's far more likely that an outage is
>> going to last forever because you'll have to start over and rebuild the
>> array, restoring data to it.
>>
>> If the thing that fails is a box running Linux with four disks in it,
>> you replace the box and move the disks over and you're done.  If you
>> have a spare box on hand, you can be up in ten minutes.
>>
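
For reference, the disk move Mike describes is, with Linux md, usually just an examine-and-assemble on the replacement box.  A minimal sketch (the device names are examples; run as root with mdadm installed):

#!/usr/bin/env python3
"""After moving md-member disks to a replacement box: inspect the RAID
superblocks, then let mdadm reassemble whatever arrays it finds."""
import subprocess

# Show the md superblock on each candidate member so the array UUIDs and
# member roles can be confirmed before assembling anything.
for dev in ("/dev/sda1", "/dev/sdb1", "/dev/sdc1", "/dev/sdd1"):
    subprocess.run(["mdadm", "--examine", dev], check=False)

# Assemble every array whose members are present.
subprocess.run(["mdadm", "--assemble", "--scan"], check=True)

# Confirm the result.
subprocess.run(["cat", "/proc/mdstat"], check=True)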
>> If you *are* going to go the hardware RAID route, make sure you have a
>> spare, identical controller in stock in case of failure.  I've seen cases
>> where RAID controllers were incompatible after seemingly minor changes to
>> the model number (device F00e vs. F00f might be two completely different
>> things, same for F00e and F00e+).
>>
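
One way to act on the device-ID caveat above: on Linux the exact PCI vendor:device IDs are visible in sysfs (or via lspci -nn), so a spare card can be compared against the production controller before it is ever needed.  A rough sketch (the expected ID below is only an example, not anything from this thread):

#!/usr/bin/env python3
"""List PCI vendor:device IDs for storage controllers from sysfs so a spare
RAID card can be compared against the one in production."""
import glob
import os

EXPECTED = "1000:0060"  # example LSI vendor:device pair -- substitute your own

def storage_controllers():
    """Yield (pci_address, vendor:device) for mass-storage-class PCI devices."""
    for dev in sorted(glob.glob("/sys/bus/pci/devices/*")):
        with open(os.path.join(dev, "class")) as f:
            pci_class = f.read().strip()
        if not pci_class.startswith("0x01"):   # class 0x01xxxx = mass storage
            continue
        with open(os.path.join(dev, "vendor")) as f:
            vendor = f.read().strip().replace("0x", "")
        with open(os.path.join(dev, "device")) as f:
            device = f.read().strip().replace("0x", "")
        yield os.path.basename(dev), f"{vendor}:{device}"

if __name__ == "__main__":
    for addr, ident in storage_controllers():
        flag = "" if ident == EXPECTED else "  <-- differs from expected"
        print(f"{addr}  {ident}{flag}")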
>> And just don't use fakeraid (that is, BIOS provided RAID).  It is simply
>> not a viable option if you like uptime and robustness.
>>
>>        --- Mike







_______________________________________________
Ale mailing list
Ale at ale.org
http://mail.ale.org/mailman/listinfo/ale
See JOBS, ANNOUNCE and SCHOOLS lists at
http://mail.ale.org/mailman/listinfo