[ale] NAS recommendations

Greg Clifton gccfof5 at gmail.com
Fri Jun 16 15:08:43 EDT 2017


Raj,

I doubt HBA-connected drives would have any higher throughput than
motherboard-connected ones. And if you're actually running any parity-based
RAID level, I'm SURE that the motherboard's embedded controller plus
software RAID would be faster than hardware RAID: a modern CPU has far more
surplus compute power than any RAID manufacturer's proprietary controller
chip.
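That parity math is trivial for a general-purpose CPU: RAID5 parity is just a byte-wise XOR across the data drives, so any one missing member can be rebuilt from the rest. A toy Python sketch of the idea (made-up data, three "drives" plus one parity chunk):

```python
from functools import reduce

# Three "data drives" holding one stripe chunk each (made-up bytes).
stripe = [b"alpha", b"bravo", b"charl"]

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

# RAID5-style parity is the byte-wise XOR of every data chunk.
parity = reduce(xor, stripe)

# Lose "drive" 1; rebuild its contents from the survivors plus parity.
survivors = [stripe[0], stripe[2], parity]
rebuilt = reduce(xor, survivors)
print(rebuilt)  # b'bravo'
```

Real md code adds Galois-field math for RAID6's second parity, but the per-byte work is still small change next to what a modern CPU core can do.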

Pardon the free advertisement for Supermicro (I don't work for them, but I
have used them for most of our server builds for years): many of their
motherboards support up to 10 SATA drives (even 6 or 8 on uATX), some also
support up to 8 SAS drives, and many of their boards have a 7-year life
cycle. With most "server class" single-processor motherboards under $400, a
less costly option than hardware controllers is to build the NAS around a
quality motherboard and buy a spare board at the same time. Should
something fail, it's then a relatively trivial matter to swap out the
motherboard, and everything should come back up with minimal tweaks (MAC
address and such). Barring a lightning strike or other major power surge,
you would probably get many years of service out of such a system without
ever replacing the motherboard.
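The "MAC address tweak" is usually just re-pointing a persistent-net rule at the new board's NIC. A hypothetical sketch, with made-up MACs and the file written to the current directory for illustration (the real file on many distros is /etc/udev/rules.d/70-persistent-net.rules):

```python
from pathlib import Path

# Hypothetical: the old rule pins eth0 to the dead board's onboard NIC MAC.
rules = Path("70-persistent-net.rules")  # stand-in for the real udev rules file
rules.write_text(
    'SUBSYSTEM=="net", ATTR{address}=="00:11:22:33:44:55", NAME="eth0"\n'
)

# Swap in the spare board's MAC so eth0 comes back under the same name.
old_mac, new_mac = "00:11:22:33:44:55", "66:77:88:99:aa:bb"  # both invented
rules.write_text(rules.read_text().replace(old_mac, new_mac))
print(rules.read_text().strip())
```

One edit and a reboot, and the box doesn't know the board changed.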

As a side note, we ran a 3ware IDE RAID system on a Tyan motherboard in our
own 2U rack chassis at CCSI for well over 10 years, through the second or
third round of drive failures (originally 80GB drives, then 250GB). We
decommissioned it when we couldn't get newer drives to work with the old
3ware controller, but otherwise the system was still fully functional. All
that to say: "make a little noise" so as to provide plenty of cooling in
the enclosure, and even an inexpensive home-brew NAS can provide many years
of service.

On Fri, Jun 16, 2017 at 9:17 AM, Raj Wurttemberg <rajaw at c64.us> wrote:

> That was my thought also... if the motherboard dies it will be a real pain
> in the backside. I don't have any benchmarks, but I would imagine that the
> throughput on the RAID card would be faster than the motherboard SATA. Of
> course there is cost, too. It seems that quite a few motherboards support
> six or more SATA connections. With FreeNAS and ZFS, that would make for a
> very inexpensive NAS.
>
> /Raj
>
> -----Original Message-----
> From: ale-bounces at ale.org [mailto:ale-bounces at ale.org] On Behalf Of Jeff
> Hubbs
> Sent: Friday, June 16, 2017 12:50 AM
> To: ale at ale.org
> Subject: Re: [ale] NAS recommendations
>
> I still have my TO materials for the server I built and I had to consult
> them - it's been seven years. :)
>
> I used two 3ware SAS cards for the 14 faster/smaller drives and a 3ware
> SATA card for eight slower/bigger drives. The production shares were on
> the SAS drives and the SATA drives were for offlining and processing
> backups. All the 3ware cards were in JBOD mode and mdraid handled
> everything.
> The Supermicro mobo had six SATA ports; two went to 80GB drives on the
> front panel that were used for swap and /var, and two went to SSDs inside
> the case that held /boot, /auxboot/, and root in a RAID1. The SAS drives
> were paired off, one per controller, made into RAID1s, then all seven of
> those pairs were made into a RAID0. The splitting was so that I could
> lose an entire SAS controller and the md volume would just keep on
> running.
>
> Using kernel RAID and controllers in JBOD mode meant that if the primary
> machine's mobo *did* fry, I could just slam the 22 disks from its front
> panel into the secondary, boot it up, change the IP address, and carry
> on. Pointy-hairs tend to want to buy only things with service
> contracts... well, I've seen many cases where the practical reality is
> some tech figuring out where your office is, rowing through traffic, and
> showing up with a motherboard that turns out to be the wrong one, so he
> has to go back to the shop, etc., etc. Once even server-grade hardware got
> commoditized and fairly well standardized, the DEC/HP/IBM/Sun/SGI/Dell
> way of dealing with breakage just didn't make sense. I know some people
> will go there anyway, but my public-sector work ethic of committing to
> get the most value for the least money (later adding "under the least
> restrictive conditions" once I went Open Source) never left me.
>
> Oh, and I also stuffed a high-end dual gig-E NIC in a slot (bonded) and
> used that for production. Much like with the disk controllers, you want
> to avoid depending on mobo functions where you can; I'd much rather
> replace a lightning-fried NIC than a motherboard.
>
> On 6/15/17 10:21 PM, Raj Wurttemberg wrote:
> > For those of you using custom NAS boxes, are you using a PCI RAID
> > controller or are you using the motherboard SATA?
> >
> > /Raj
> >
> > _______________________________________________
> > Ale mailing list
> > Ale at ale.org
> > http://mail.ale.org/mailman/listinfo/ale
> > See JOBS, ANNOUNCE and SCHOOLS lists at
> > http://mail.ale.org/mailman/listinfo
> >
>
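Jeff's layering above, RAID1 pairs split across two controllers and then striped into a RAID0, is worth modeling, because the failure property is easy to check. A toy Python sketch with invented disk names (the real thing is mdadm, not Python):

```python
# Toy model of the described layout: 14 SAS drives as seven RAID1 pairs,
# each pair split across two controllers (ctlA/ctlB), pairs striped into
# a RAID0. The striped array survives as long as every mirror pair keeps
# at least one live member.
pairs = [(f"ctlA-disk{i}", f"ctlB-disk{i}") for i in range(7)]

def array_alive(dead):
    """The RAID0 lives iff no RAID1 pair has lost both of its members."""
    return all(any(disk not in dead for disk in pair) for pair in pairs)

# Losing ALL of controller A kills one side of every mirror, but no pair:
print(array_alive({a for a, _ in pairs}))  # True -- a whole controller can die
# Losing both members of any single pair takes the whole array down:
print(array_alive(set(pairs[0])))          # False
```

That's the whole point of pairing one drive per controller: the expensive shared component (the controller) is never a single point of failure for the md volume.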

