[ale] NAS recommendations

Raj Wurttemberg rajaw at c64.us
Fri Jun 16 09:17:31 EDT 2017


That was my thought also... if the motherboard dies it will be a real pain
in the backside. I don't have any benchmarks but I would imagine that the
throughput on the RAID card would be faster than the motherboard SATA.  Of
course there is cost too. It seems that quite a few motherboards support six
or more SATA connections. With FreeNAS and ZFS, that would make for a very
inexpensive NAS.
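As a sketch of that inexpensive route, a single raidz2 pool over six disks on the motherboard SATA ports would look something like this (device names are hypothetical; FreeBSD-based FreeNAS typically sees SATA disks as /dev/ada*):

```shell
# Hypothetical: six disks on the motherboard SATA ports.
# A single raidz2 vdev survives any two disk failures.
zpool create tank raidz2 ada0 ada1 ada2 ada3 ada4 ada5

# Verify the pool layout and health.
zpool status tank
```

FreeNAS would normally do this through its web UI, but the resulting pool is the same.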

/Raj

-----Original Message-----
From: ale-bounces at ale.org [mailto:ale-bounces at ale.org] On Behalf Of Jeff
Hubbs
Sent: Friday, June 16, 2017 12:50 AM
To: ale at ale.org
Subject: Re: [ale] NAS recommendations

I still have my TO materials for the server I built and I had to consult
them - it's been seven years. :)

I used two 3ware SAS cards for the 14 faster/smaller drives and a 3ware SATA
card for eight slower/bigger drives. The production shares were on the SAS
drives and the SATA drives were for offlining and processing backups. All
the 3ware cards were JBOD and mdraid handled everything. 
The Supermicro mobo had six SATA ports; two went to 80GB drives on the front
panel that were used for swap and /var, and two went to SSDs inside the case
that held /boot, /auxboot, and root in a RAID1. The SAS drives were paired
off, one per controller, made into RAID1s, then all seven of those pairs were
made into a RAID0. The splitting was so that I could lose an entire SAS
controller and the md volume would just keep on running.
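That layering can be sketched with mdadm like so (device names are hypothetical; the point is that each mirror pair takes one disk from each controller):

```shell
# Hypothetical names: sdb..sdo are the 14 SAS drives, one disk of each
# pair on each 3ware controller (both running in JBOD mode).
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdi
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdc /dev/sdj
# ... five more mirror pairs: md2 through md6 ...

# Stripe all seven mirrors together. Losing one controller leaves every
# mirror degraded but intact, so the RAID0 on top keeps running.
mdadm --create /dev/md10 --level=0 --raid-devices=7 \
    /dev/md0 /dev/md1 /dev/md2 /dev/md3 /dev/md4 /dev/md5 /dev/md6
```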

Using kernel raid and controllers in JBOD mode meant that if the primary
machine's mobo *did* fry, I could just slam 22 disks from its front panel
into the secondary, boot it up, change the IP address, and carry on.
Pointy-hairs tend to want to only buy things with service contracts...well,
I've seen many cases where the practical reality is some tech having to
figure out where your office is, plow through traffic, and show up with a
motherboard that turns out to be the wrong one so he has to go back to the
shop, etc., etc. Once even server-grade hardware got commoditized and
fairly well standardized, the DEC/HP/IBM/Sun/SGI/Dell way of dealing with
breakage just didn't make sense. I know some people will go there anyway, but
my public-sector work ethic of committing to get the most value for the
least money (later adding "under the least restrictive conditions" once I
went Open Source) never left me.

Oh, and I also stuffed a high-end dual gig-E NIC in a slot (bonded) and used
that for production. Much like with the disk controllers, you want to avoid
depending on mobo functions where you can; I'd much rather replace a
lightning-fried NIC than a motherboard.
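A bonded pair like that might look like this in a Debian-style /etc/network/interfaces (a sketch only; interface names, bond mode, and addressing are all assumptions):

```shell
# /etc/network/interfaces fragment (needs the ifenslave package).
# active-backup needs no switch cooperation; 802.3ad (LACP) would
# require a matching aggregation config on the switch.
auto bond0
iface bond0 inet static
    address 192.168.1.10/24
    bond-slaves eno1 eno2
    bond-mode active-backup
    bond-miimon 100
```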

On 6/15/17 10:21 PM, Raj Wurttemberg wrote:
> For those of you using custom NAS boxes, are you using a PCI RAID 
> controller or are you using the motherboard SATA?
>
> /Raj
>
> _______________________________________________
> Ale mailing list
> Ale at ale.org
> http://mail.ale.org/mailman/listinfo/ale
> See JOBS, ANNOUNCE and SCHOOLS lists at 
> http://mail.ale.org/mailman/listinfo
>
