[ale] [EXTERNAL] Re: Distributed filesystems?
Jim Kinney
jim.kinney at gmail.com
Fri Mar 26 19:36:54 EDT 2021
I'm pretty sure Ceph is still supported by RHEL. I saw the drop notice from SUSE this morning, and it looks like it's further development and installation support that is being dropped. So there's time.
Sadly, no network filesystem is "beginner supportable" unless stability is irrelevant.
On March 26, 2021 7:23:43 PM EDT, Allen Beddingfield via Ale <ale at ale.org> wrote:
>Yeah, I've been playing around with both. Neither is going to be
>something I can hand off administration of to a non-Linux geek.
>I'm really wanting something that matches what SUSE advertised their
>CEPH solution to be, which pretty much doesn't exist outside of
>products by the major SAN vendors.
>That would be - something that lets you just smack together a bunch of
>mismatched servers crammed full of disks into an easy-to-administer NAS
>type of setup, where it just automagically places data in the optimal
>spot, etc... but that pretty much describes the features of our
>Compellent SAN... so not at all realistic.
>
>It looks like Gluster will probably end up doing what we need. What I
>WANT doesn't exist, and that would amount to multi-node FreeNAS!
>--
>Allen Beddingfield
>Systems Engineer
>Office of Information Technology
>The University of Alabama
>Office 205-348-2251
>allen at ua.edu
>
>
>________________________________________
>From: Ale <ale-bounces at ale.org> on behalf of Jim Kinney via Ale
><ale at ale.org>
>Sent: Friday, March 26, 2021 5:28 PM
>To: Atlanta Linux Enthusiasts
>Cc: Jim Kinney
>Subject: [EXTERNAL] Re: [ale] Distributed filesystems?
>
>Moosefs and glusterfs are VERY different. Moose is an object store,
>and gluster is more like RAID over Ethernet.
>
>Like RAID, gluster is easy to set up but really hard to change if the
>needs change: think transitioning from a triple-redundant RAID 1 to a
>blazing-fast RAID 0. A common pattern is using hardware RAID for
>intra-node redundancy and then maxing out node bandwidth for
>performance. More nodes means faster performance. It works but has
>some crunchy spots. Dropping InfiniBand support is a killer.
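For the archives: the "easy setup" half really is just a few commands. A minimal sketch (hostnames and brick paths here are made up; it assumes glusterd is running on each node and the nodes are already peered):

```shell
# Create a 3-way replicated volume across three (hypothetical) nodes.
# Each "brick" is just a directory on a local filesystem -- e.g. sitting
# on top of a hardware-RAID device for the intra-node redundancy.
gluster volume create scratch replica 3 \
    node1:/bricks/scratch node2:/bricks/scratch node3:/bricks/scratch

gluster volume start scratch

# Clients mount it with the FUSE client (or over NFS/CIFS):
mount -t glusterfs node1:/scratch /mnt/scratch
```

The hard part is the other half: once created as replica 3, reshaping that volume into a distribute-only (RAID-0-like) layout is not a simple in-place change.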
>
>Moosefs is an object store. A (redundant) head node serves metadata
>and node/chunk locations for file blocks. Blocks are handled
>internally for the desired redundancy. Like gluster, more nodes makes
>it faster. The big caveat is that clients need a dedicated client
>tool, whereas gluster can act as an NFS/CIFS service provider.
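To illustrate the client-tool caveat (the master hostname is hypothetical; this assumes the MooseFS client package is installed):

```shell
# MooseFS clients talk to the master (metadata) server via mfsmount,
# a FUSE client -- there is no plain NFS/CIFS path out of the box.
mfsmount /mnt/mfs -H mfsmaster.example.com

# Redundancy ("goal" = number of chunk copies) is also managed from
# the client side, e.g. keep 2 copies of everything under /mnt/mfs:
mfssetgoal -r 2 /mnt/mfs
```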
>
>Neither one (in fact, no large-scale file server) likes lots of
>little files. Delivering 10,000 1k files is always horrible; chunking
>around a single 10M file is faster. Getting users to always zip/tar
>and move 10,000 files at once never seems to happen.
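The workaround users never adopt is exactly this: bundle the small files into one large object before moving them, so the storage layer streams one big file instead of paying per-file metadata overhead. A sketch (paths are made up, and 100 files stand in for the 10,000):

```shell
# 100 tiny files stand in for the 10,000 in the example above.
mkdir -p /tmp/manyfiles
for i in $(seq 1 100); do
    echo "data $i" > "/tmp/manyfiles/file$i.txt"
done

# One tarball moves as a single large object that the storage nodes
# can chunk and stream, instead of 100 separate metadata round-trips.
tar -czf /tmp/manyfiles.tar.gz -C /tmp manyfiles

# Sanity check: the archive really contains all 100 files.
tar -tzf /tmp/manyfiles.tar.gz | grep -c '\.txt$'   # prints 100
```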
>
>On March 26, 2021 4:28:09 PM EDT, Allen Beddingfield via Ale
><ale at ale.org> wrote:
>
>Wondering if any of you have experience with distributed filesystems,
>such as CEPH, GlusterFS, MooseFS, etc...?
>We've been using SUSE's "SUSE Enterprise Storage" package, which is
>CEPH, packaged with a Salt installer, and their UI. Anyway, it worked
>well, but was sort of like using a cement block to smash a fly for our
>purposes.
>SUSE notified us yesterday that they are getting out of that business,
>and will EOL the product in two years. I'm glad they let us know
>BEFORE we renewed the maintenance in May.
>That really wasn't that big of a deal for us, because we were about to
>do a clean slate re-install/hardware refresh, anyway.
>Sooo....
>I'm looking into MooseFS and GlusterFS at this point, as they are much
>simpler to deploy and manage (at least to me) compared with CEPH. Do
>any of you have experiences with these? Thoughts?
>The use case is to use CHEAP (think lots of servers full of 10TB SATA
>drives and 1.2TB SAS drives) hardware to share out big NFS (or native
>client) shares as temporary/scratch space, where performance isn't that
>important.
>
>Allen B.
>________________________________
>Ale mailing list
>Ale at ale.org
>https://mail.ale.org/mailman/listinfo/ale
>See JOBS, ANNOUNCE and SCHOOLS lists at
>http://mail.ale.org/mailman/listinfo
>
--
Computers amplify human error
Super computers are really cool