[ale] They say drives fail in pairs...

Jim Kinney jim.kinney at gmail.com
Thu Jan 5 07:50:11 EST 2012


I use chaos cats for random text entry. It's not as thorough as chaos
college students. My first major app survived chaos cats but only lasted 40
seconds in the hands of 24 disinterested college students. Apparently cats
don't play as much with the Ctrl-Alt-F keys...
On Jan 5, 2012 4:20 AM, "Richard Bronosky" <Richard at bronosky.com> wrote:

> This is a great writeup by Alfredo Deza, one of my coworkers.
> http://www.ibm.com/developerworks/library/os-zfsraidz/index.html
>
> He did a lunch and learn at my company where he started with a common
> VirtualBox OVA of OpenIndiana (that's the community fork of Open
> Solaris) and added a bunch of disks, set up zfs, generated some data,
> then started playing Chaos Monkey (worth Googling) with the drives.
> It was an hour of very convincing presentation.
>
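For anyone who wants to poke at the same idea without dedicating real
disks, here is a rough sketch of that demo flow -- not Alfredo's actual
script.  It builds a raidz pool from file-backed vdevs, writes some data,
then offlines one "drive" at random.  The pool name, paths, and sizes are
made up, and it assumes root plus a working ZFS install (OpenIndiana or
ZFS on Linux):

#!/usr/bin/env python3
"""Rough sketch only: raidz pool on file-backed vdevs, then a tiny
Chaos Monkey step that knocks one vdev offline."""

import os
import random
import subprocess

VDEV_DIR = "/var/tmp/chaos-vdevs"   # hypothetical scratch directory
VDEVS = [os.path.join(VDEV_DIR, "disk%d.img" % i) for i in range(4)]
POOL = "chaostank"                  # hypothetical pool name

def run(*cmd):
    print("#", " ".join(cmd))
    subprocess.run(cmd, check=True)

os.makedirs(VDEV_DIR, exist_ok=True)
for path in VDEVS:
    with open(path, "wb") as f:
        f.truncate(256 * 1024 * 1024)   # 256 MB sparse files stand in for disks

run("zpool", "create", POOL, "raidz", *VDEVS)

# Generate some data to protect (64 MB of junk in the pool's default mountpoint).
with open("/%s/testdata" % POOL, "wb") as f:
    f.write(os.urandom(64 * 1024 * 1024))

# The Chaos Monkey step: yank one "drive" and see what the pool does.
victim = random.choice(VDEVS)
run("zpool", "offline", POOL, victim)
run("zpool", "status", POOL)          # DEGRADED, but the data is still there
run("zpool", "online", POOL, victim)  # put the "drive" back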
> On Tue, Jan 3, 2012 at 6:29 PM, Michael Trausch <mike at trausch.us> wrote:
> > On 01/03/2012 04:52 PM, Lightner, Jeff wrote:
> >> That confuses me.  Does ZFS have built in redundancy of some sort
> >> that would obviate the need for the underlying storage to be hardware
> >> RAID?  Or are you saying you'd use ZFS rather than Software RAID?
> >
> > Both ZFS and btrfs have redundancy capabilities built in that
> > (allegedly!) play nicely with their built-in dynamic-resizing volume
> > management.  Neither is "just" a filesystem; each aims to be a whole
> > volume-management stack.  There's no more need for things like LVM when
> > all you have to do is create a filesystem on a single whole drive (no
> > partition table) and hot-add or hot-remove it from the pool of storage.
> >
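A minimal sketch of that whole-drive, no-partition-table workflow, using
btrfs and two hypothetical spare drives (/dev/sdb and /dev/sdc, both of
which get wiped); the grow side of it on ZFS would be zpool create/add,
and nothing here is specific to anyone's actual setup:

#!/usr/bin/env python3
"""Sketch: filesystem on a bare drive, hot-add a second drive to grow
the pool, then remove the first drive to shrink it again."""

import subprocess

def run(*cmd):
    print("#", " ".join(cmd))
    subprocess.run(cmd, check=True)

run("mkfs.btrfs", "-f", "/dev/sdb")            # filesystem straight on the raw drive
run("mkdir", "-p", "/mnt/pool")
run("mount", "/dev/sdb", "/mnt/pool")

run("btrfs", "device", "add", "/dev/sdc", "/mnt/pool")     # hot-add: pool grows online
run("btrfs", "filesystem", "show", "/mnt/pool")

run("btrfs", "device", "delete", "/dev/sdb", "/mnt/pool")  # hot-remove: data migrates to sdc
run("btrfs", "filesystem", "show", "/mnt/pool")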
> > The other nifty thing is that, as I understand it, they can do
> > redundant data storage even on a single device: the same data can be
> > stored in multiple locations on one drive, which helps if one area of
> > the drive goes bad.
> >
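On that single-device redundancy point: in ZFS it is the per-dataset
"copies" property, and in btrfs it is the "dup" profile chosen at mkfs
time.  A sketch with made-up pool, dataset, and device names, assuming
reasonably recent tools:

#!/usr/bin/env python3
"""Sketch of keeping two copies of the same data on one drive."""

import subprocess

def run(*cmd):
    print("#", " ".join(cmd))
    subprocess.run(cmd, check=True)

# ZFS: store two copies of every block in this dataset, even on a 1-disk pool.
run("zfs", "create", "tank/important")
run("zfs", "set", "copies=2", "tank/important")

# btrfs: duplicate data and metadata on a single device (newer btrfs-progs
# is needed to allow dup for data).
run("mkfs.btrfs", "-f", "-d", "dup", "-m", "dup", "/dev/sdd")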
> > I don't use hardware RAID for anything (and I'm not likely to ever do
> > so).  If I ever needed storage that went beyond what a few hard disks
> > could provide, or something that needed to be larger than what I would
> > trust something like ZFS or btrfs to do on their own, I would probably
> > build a dedicated rack-mount box that had tens of drives in it and use
> > something like RAID 10 with three stripes.
> >
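One way to read "RAID 10 with three stripes" under Linux md is six drives
arranged as three striped pairs of mirrors.  A sketch only: the device
names are hypothetical and mdadm will happily overwrite whatever you point
it at:

#!/usr/bin/env python3
"""Sketch: six drives mirrored in pairs and striped across the three
pairs (md raid10 with the default near=2 layout)."""

import subprocess

DRIVES = ["/dev/sdb", "/dev/sdc", "/dev/sdd",
          "/dev/sde", "/dev/sdf", "/dev/sdg"]

def run(*cmd):
    print("#", " ".join(cmd))
    subprocess.run(cmd, check=True)

run("mdadm", "--create", "/dev/md0", "--level=10",
    "--raid-devices=6", *DRIVES)
run("mkfs.ext4", "/dev/md0")        # or put ZFS/btrfs on top instead
run("mdadm", "--detail", "/dev/md0")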
> > There was a "DIY" guide to building such a box, along with lists of
> > the hardware and tools needed, claiming something like 100+ TB of
> > storage in a single box.  They're expensive in absolute dollars, but
> > relatively inexpensive compared to other solutions that scale that far
> > up in storage space, and they are powered by Linux software RAID
> > (AFAIK).  You would run them so that individual failed drives could be
> > replaced off-line, and a whole unit could be swapped out in (ideally)
> > only as long as it takes to power one down and install a new one.
> >
> > I'm not anywhere near that yet, though.  I can only really foresee
> > needing to grow to about 6 TB of reliable storage in the next two years,
> > but given the high rates of change in everything around me at the
> > moment, I can't really look much farther than that.
> >
> >        --- Mike
> >
> > --
> > A man who reasons deliberately, manages it better after studying Logic
> > than he could before, if he is sincere about it and has common sense.
> >                                   --- Carveth Read, “Logic”
> >
> >
>
>
>
> --
> .!# RichardBronosky #!.
>