[ale] Needing to cut up server disk space

Jeff Hubbs jhubbslist at att.net
Thu Sep 24 23:54:01 EDT 2015


I appreciate all the responses.

So I guess what I'm hearing is 1) get over my HW RAID hate, RAID5 the 
lot using the PERC, and slice and dice with LVM, or 2) forgo RAID 
altogether, use the PERC to present some kind of concatenated 
("appended") 2TB volume, and slice and dice with LVM. I'm willing to 
give up some effective space so that a failed drive doesn't mean a dead 
box; just because it's a lab machine doesn't mean people won't be 
counting on it. I'm fine with option 1 as long as I have a way to sense 
a drive failure flagged by the PERC.
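
For the failure-sensing part, something like the sketch below is what I 
have in mind. It assumes the LSI MegaCli utility (which manages PERC 
cards) is installed, and that drive health shows up in its "Firmware 
state:" lines, as it does on the MegaCli versions I've seen -- both 
assumptions worth verifying against the actual box:

#!/usr/bin/env python3
"""Poll a PERC (LSI MegaRAID) controller for unhealthy drives.

Sketch only: assumes MegaCli64 is installed and that per-drive health
appears as "Firmware state:" lines in its output.
"""
import subprocess

def unhealthy_drives():
    # List every physical drive on every adapter.
    out = subprocess.run(
        ["MegaCli64", "-PDList", "-aALL"],
        capture_output=True, text=True, check=True,
    ).stdout
    slot, bad = None, []
    for raw in out.splitlines():
        line = raw.strip()
        if line.startswith("Slot Number:"):
            slot = line.split(":", 1)[1].strip()
        elif line.startswith("Firmware state:"):
            state = line.split(":", 1)[1].strip()
            # Healthy drives report "Online, Spun Up"; anything else
            # (Failed, Rebuild, Unconfigured(bad), ...) gets flagged.
            if not state.startswith("Online"):
                bad.append((slot, state))
    return bad

if __name__ == "__main__":
    for slot, state in unhealthy_drives():
        print(f"PERC drive slot {slot}: {state}")

Cron that every few minutes and mail on any output, and that's 
rudimentary failure sensing -- good enough for a lab box.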



On 9/24/15 7:27 AM, Solomon Peachy wrote:
> On Wed, Sep 23, 2015 at 11:42:37PM -0400, Jeff Hubbs wrote:
>>   * I really dislike hardware RAID cards like Dell PERC. If there has to
>>     be one, I would much rather set it to JBOD mode and get my RAIDing
>>     done some other way.
> There's a big difference between "hardware" RAID (aka fakeRAID) and real
> hardware RAID boards.  The former are the worst of both worlds, but the
> latter are the real deal.
>
> In particular, the various Dell PERC RAID adapters are excellent, fast,
> and highly reliable, with full native Linux support for managing them.
>
> Strictly speaking, you'll end up with more flexibility going the JBOD
> route, but you're going to lose both performance and reliability versus
> the PERC.
>
> (for example, what happens if the "boot" drive fails?  Guess what, your
>   system is no longer bootable with the JBOD, but the PERC will work just
>   fine)
>
>>   * I foresee I will have gnashing of teeth if I set in stone at install
>>     time the sizes of the /var and /home volumes. There's no telling how
>>     much or how little space PostgreSQL might need in the future and you
>>     know how GRAs are - give them disk space and they'll take disk space. :)
> You're not talking about much space here; only 5*400GB == 2TB of raw
> space, going down to 1.6TB by the time the RAID5 overhead is factored
> in.  Just create a single filesystem across the lot and be done with it.
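
Checking that arithmetic for my own benefit (and noting that if the 
single filesystem lives on LVM, it can still be grown later with 
lvextend plus xfs_growfs, which takes care of my sizing worry):

# RAID5 usable space: one drive's worth of every stripe goes to
# parity, so usable == (n_drives - 1) * drive_size.
n_drives = 5
drive_gb = 400

raw_gb = n_drives * drive_gb           # 2000 GB raw
usable_gb = (n_drives - 1) * drive_gb  # 1600 GB after parity

print(f"raw: {raw_gb} GB, usable: {usable_gb} GB")
# -> raw: 2000 GB, usable: 1600 GB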
>
> FWIW, if you're after reliability I'd caution against btrfs, and instead
> recommend XFS -- and make sure the system is plugged into a UPS.  No
> matter what, be sure to align the partition and filesystem with the
> block/stripe sizes of the RAID setup.
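
To make the alignment step concrete for myself: a sketch of how the 
mkfs.xfs stripe options fall out of the RAID geometry. The 64 KiB 
per-disk chunk size and the device name are illustrative assumptions; 
the real stripe element size has to be read off the PERC's virtual 
disk config:

# mkfs.xfs stripe alignment, derived from RAID5 geometry.
# Assumed (not read from hardware): 5 drives, 64 KiB chunk per disk.
n_drives = 5
chunk_kib = 64

data_disks = n_drives - 1   # RAID5 keeps one disk's worth of parity
su = f"{chunk_kib}k"        # su = stripe unit (per-disk chunk size)
sw = data_disks             # sw = stripe width (number of data disks)

# /dev/sdb1 is a placeholder for the PERC virtual disk's partition.
print(f"mkfs.xfs -d su={su},sw={sw} /dev/sdb1")
# -> mkfs.xfs -d su=64k,sw=4 /dev/sdb1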
>
> (The system I'm typing this on has ~10TB of XFS RAID5 filesystems
>   hanging off a 3ware 9650 card, plus a 1TB RAID1 for the OS)
>
>   - Solomon
