[ale] NAS recommendations

Chuck Payne terrorpup at gmail.com
Thu Jun 15 15:39:45 EDT 2017


I like iXsystems and bought two of their units. I love ZFS, since I can make
copies for the developers to play with and destroy once they're done.
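
For anyone curious, that copy-and-destroy workflow is just ZFS snapshots and
clones. A rough sketch (the pool and dataset names here are made up for
illustration, not from my actual boxes):

```shell
# Snapshot the dataset the devs want a copy of. Snapshots are read-only
# and nearly free to create. ("tank/projects" is a hypothetical name.)
zfs snapshot tank/projects@for-devs

# A clone is a writable copy backed by that snapshot; the devs can
# trash it without touching the original data.
zfs clone tank/projects@for-devs tank/devscratch

# When they're done, throw away the clone, then the snapshot.
zfs destroy tank/devscratch
zfs destroy tank/projects@for-devs
```
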

On Thu, Jun 15, 2017 at 2:09 PM, DJ-Pfulio <djpfulio at jdpfu.com> wrote:

> "Under load" - think that is the diff.
>
> Took my cheap-ass system 26 hrs to mirror 4TB to a new 4TB 7200rpm disk
> a few weeks ago. No RAID. Onboard SATA only. Zero load.
>
> Look for the SELF videos when they are posted to get past my summary.
>
> BTW, I'm loving all the different, thoughtful opinions being shared on
> this subject. Very nice community!
>
>
> On 06/15/2017 01:16 PM, Jim Kinney wrote:
> > Wow! A six-month recovery time! I've not had any of my RAID6 systems
> > take longer than 10 days with pretty heavy use. These are 4TB SAS
> > drives with 28 drives per array.
> >
> > On Jun 15, 2017 5:08 PM, "DJ-Pfulio" <DJPfulio at jdpfu.com
> > <mailto:DJPfulio at jdpfu.com>> wrote:
> >
> >     On 06/15/2017 09:29 AM, Ken Cochran wrote:
> >     > Any ALEr Words of Wisdom wrt desktop NAS?
> >     > Looking for something appropriate for, but not limited to,
> >     > photography.
> >     > Some years ago Drobo demoed at (I think) AUUG.  (Might've been ALE.)
> >     > Was kinda nifty for the time but I'm sure things have improved since.
> >     > Synology?  QNAP?
> >     > Build something myself?  JBOD?
> >     > Looks like they're all running Linux inside these days.
> >     > Rackmount ones look lots more expensive.
> >     > Ideas?  What to look for?  Stay away from?  Thanks, Ken
> >
> >     Every time I look at the pre-built NAS devices, I think: that's $400
> >     too much and not very flexible. These devices are certified with
> >     specific models of HDDs. Can you live with a specific list of
> >     supported HDDs and limited, specific software?
> >
> >     Typical trade off - time/convenience vs money.  At least initially.
> >     Nothing you don't already know.
> >
> >     My NAS is a $100 x86 box built from parts.  Bought a new $50 Intel
> >     G3258 CPU and a $50 motherboard. Reused stuff left over from prior
> >     systems for everything else, at least initially.
> >     Reused:
> >     * 8G of DDR3 RAM
> >     * Case
> >     * PSU
> >     * 4TB HDD
> >     * assorted cables to connect to a KVM and network.  That was 3 yrs ago.
> >
> >     Most of the RAM is used for disk buffering.
> >
> >     That box has 4 internal HDDs and 4 external in a cheap $99 array
> >     connected via USB3. Internal is primary, external is the rsync mirror
> >     for media files.
> >
> >     It runs Plex MS, Calibre, and 5 other services. The CPU is powerful
> >     enough to transcode 2 HiDef streams concurrently for players that
> >     need it.
> >     All the primary storage is LVM-managed. I don't span HDDs for LVs.
> >     Backups are not LVM'd, and a simple rsync is used for media files.
> >     OS, application, and non-media content gets backed up with 60
> >     versions using rdiff-backup to a different server over the network.
> >
> >     That original 4TB disk failed a few weeks ago. It was a minor
> >     inconvenience.  Just sayin'.
> >
> >     If I were starting over, the only thing I'd do differently would be
> >     to more strongly consider ZFS. I don't know that I'd use it, but it
> >     would be considered for more than 15 minutes for the non-OS storage.
> >     Bitrot is real, IMHO.
> >
> >     I use RAID elsewhere on the network, but not for this box. It is
> >     just a media server (mainly), so HA just isn't needed.
> >
> >     At SELF last weekend, there was a talk about using RAID5/6 on HDDs
> >     over 2TB in size by a guy in the storage biz. The short answer was:
> >     don't.
> >
> >     The rebuild time after a failure in their testing was measured in
> >     months. They were using quality servers, disks, and HBAs for the
> >     test. A 5x8TB RAID5 rebuild was predicted to finish in over 6 months
> >     under load.
> >
> >     There were also discussions about whether using RAID with SSDs was
> >     smart or not. RAID10 was considered fine; RAID0 if you needed
> >     performance, but not for long-term use. The failure rate on
> >     enterprise SSDs is so low that RAID is a huge waste of time except
> >     for the most critical applications. They also suggested avoiding
> >     SAS and SATA interfaces on those SSDs to avoid their performance
> >     limits.
> >
> >     Didn't mean to write a book. Sorry.
>
> _______________________________________________
> Ale mailing list
> Ale at ale.org
> http://mail.ale.org/mailman/listinfo/ale
> See JOBS, ANNOUNCE and SCHOOLS lists at
> http://mail.ale.org/mailman/listinfo
>
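
DJ's backup split (a plain rsync mirror for media, versioned rdiff-backup for
everything else) can be sketched with two cron-able command pairs. The paths
and the backup hostname below are hypothetical, just to show the shape of it:

```shell
# Media: a plain one-way mirror to the USB3 array (no versioning needed).
# --delete keeps the mirror exact; drop it if deletions should survive.
rsync -aH --delete /srv/media/ /mnt/usb-array/media/

# OS/app/non-media content: versioned backup to another box over the
# network, then prune so only the last 60 backup sessions are kept
# ("60B" is rdiff-backup's notation for 60 backup sessions).
rdiff-backup /srv/data backuphost::/backups/nas-data
rdiff-backup --remove-older-than 60B backuphost::/backups/nas-data
```

The nice property of this split is that media files are huge, rarely change,
and don't need history, while the small non-media data gets 60 recoverable
versions cheaply via rdiff-backup's reverse diffs.
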
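
As a sanity check on the 26-hour mirror time DJ quotes above: 4 TB copied in
26 hours works out to roughly 42 MB/s sustained, which is believable for an
unloaded consumer SATA disk pair doing a full-surface copy:

```shell
# 4 TB ~= 4*10^12 bytes copied in 26 hours; integer math is close enough.
echo $(( 4000000000000 / (26 * 3600) / 1000000 ))   # prints 42 (MB/s)
```
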



-- 
Terror PUP a.k.a
Chuck "PUP" Payne
-----------------------------------------
Discover it! Enjoy it! Share it! openSUSE Linux.
-----------------------------------------
openSUSE -- Terrorpup
openSUSE Ambassador/openSUSE Member
skype, twitter, identica, friendfeed -- terrorpup
freenode(irc) --terrorpup/lupinstein
Register Linux Userid: 155363

Have you tried SUSE Studio? Need to create a Live CD, package and distribute
an app, or create your own Linux distro? Give SUSE Studio a try.

