[ale] DIY NAS vs Boxed NAS?

Jim Kinney jim.kinney at gmail.com
Sun Dec 2 10:11:26 EST 2018


Bear in mind that every byte served passes through the CPU and the PCI bus twice: once between the NIC and memory, and once between memory and the disk controller. Often the bottleneck is the I/O on the drive, but once a sizable controller is in use, the CPU becomes critical. 
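
As a back-of-the-envelope illustration (a minimal sketch; the port count, NIC payload, and bus budget are assumptions, not measurements):

    # Every served byte crosses the shared bus twice:
    # once NIC <-> RAM, once RAM <-> disk controller.
    PORTS = 2          # assumed: two gigabit ports, as in the post below
    NIC_MB_S = 117     # assumed: real-world payload of one GbE port
    BUS_MB_S = 500     # assumed: bandwidth budget shared by NIC and HBA

    served = PORTS * NIC_MB_S     # aggregate MB/s in or out of the box
    bus_load = 2 * served         # each byte crosses the bus twice

    print(f"bus load {bus_load} MB/s vs budget {BUS_MB_S} MB/s")
    # If bus_load exceeds the budget, the bus (not the drives) is the ceiling.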

I tend to size the CPU to allow one core per client served, plus one core for overhead, as long as the budget will allow. But I deal with deployments larger than home use.
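
In code form the rule of thumb is just this (numbers purely illustrative):

    # One core per concurrently served client, plus one for overhead.
    def cores_needed(clients: int) -> int:
        return clients + 1

    print(cores_needed(4))  # 4 busy clients -> 5 cores, so a 6-core part fits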

For home use, with 3-5 users, network bandwidth will also be an issue. If the drives are used in parallel as stripes, be sure to size the joined stripe partitions so that different uses hit different drives. Size the total drive count per controller so the combined drive I/O bandwidth is 3-5 times the bandwidth of the bus; seek time and cache flushes keep the drives from all streaming at peak at once, so that oversubscription is safe. I would rather migrate a drive array to a new host with more bandwidth than migrate data to a new drive array.
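
A minimal sizing sketch of that 3-5x rule (the per-drive and bus figures are assumed examples, not measured):

    # Oversubscribing the controller is tolerable because seeks and cache
    # flushes keep spinning drives well below their peak streaming rate.
    DRIVE_PEAK_MB_S = 180   # assumed: one 7200 rpm SATA drive, sequential
    BUS_MB_S = 2000         # assumed: the controller's uplink to the host

    for factor in (3, 4, 5):
        max_drives = (BUS_MB_S * factor) // DRIVE_PEAK_MB_S
        print(f"{factor}x oversubscription -> {max_drives} drives per controller")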

On December 1, 2018 11:52:33 AM EST, Alex Carver via Ale <ale at ale.org> wrote:
>Lots of votes for DIY NAS.
>
>Assuming that I choose that route, I'd be aiming for relatively low
>cost
>(not including the cost of the drives, that's a sunk cost no matter the
>array).  To this end I don't need a machine that can transcode video,
>run fifteen application servers, VMs, or much of anything else.  I just
>need a box that can handle SMB/CIFS/NFS for file storage from remote
>machines (mix of *x, Windows, Mac), can run rdiff-backup over ssh (some
>of my smaller machines back up using rdiff-backup for simplicity), can
>send me an email if something is wrong, has two or more Gigabit ports
>so
>I can divide network streams (one coming from cameras on a VLAN, the
>other coming from the other machines), and the ability to support
>plenty
>of drives without much extra hardware (at least four plus an OS drive
>without needing a SATA card, though more SATA ports would be better).
>
>I wanted to avoid hyperexpensive motherboards.  I did some searching
>after all the input on this thread came in and most of the build guides
>for DIY NAS boxes max out the system so much that you can run Plex,
>Xen, an email server, an IoT server, cloud synchronization and like
>fifteen other things, none of which I want.  I just want a giant file
>bucket.  I want to send big files/backups to the machine and, in a
>reasonable amount of time, have those files stored to disk and done. 
>At
>the same time, that much horsepower is also using a lot of electricity
>so minimizing that load would be great if I don't actually need it.
>That simplifies cooling as well, since I'd be able to use passive
>cooling or slow fans.
>
>The build guides were using things like $600-$1000 motherboards from
>Supermicro and such that had 10 GbE ports, one had SFP slots for fiber,
>another used a Core i7 processor and 128 GB of RAM, one even had a
>Radeon graphics card in it.  Half of them drew over 100 watts at idle,
>with a significant chunk going to the motherboard rather than the drives.
>Surely a simple file server does not need nearly that much horsepower
>to
>take data from an Ethernet port and shove it through a SATA port to a
>disk.  The most taxing application for this thing would be continuously
>recording multiple camera streams using H.264 (around 100-200 kBps on
>average) or MJPEG (500-600 kBps) to disk over one of its ports.
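
A quick sanity check of those stream rates (a minimal sketch; the camera count is an assumption):

    # Aggregate camera write load from the per-stream rates quoted above.
    H264_KB_S = 200     # from above: ~100-200 kB/s per H.264 stream
    MJPEG_KB_S = 600    # from above: ~500-600 kB/s per MJPEG stream
    CAMERAS = 8         # assumed count, purely illustrative

    best_case_mb_s = CAMERAS * H264_KB_S / 1000    # all cameras on H.264
    worst_case_mb_s = CAMERAS * MJPEG_KB_S / 1000  # all cameras on MJPEG
    print(f"~{best_case_mb_s:.1f} to ~{worst_case_mb_s:.1f} MB/s")
    # ~1.6 to ~4.8 MB/s: a trickle next to one SATA drive or one GbE port.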
>
>So for those of you that did DIY, how much horsepower did you seek out
>for the system and how little can I get away with for the most basic
>file serving application without drastically harming performance?

-- 
Sent from my Android device with K-9 Mail. All typos are thumb related and reflect authenticity.