[ale] ISCSI array on virtual machine

Lightner, Jeff JLightner at dsservices.com
Thu Apr 28 08:46:09 EDT 2016


1)      I'm a RHEL guy and I don't use xfs for most purposes; I continue to use ext4 because it suits our needs. We do let the base VolGroup00 LVs we use for the base filesystems stay on xfs on RHEL7, simply because that is the default and we seldom need to change those filesystems. For apps and DBs, though, we use ext4.


2)      We use LVM for everything. The benefit of LVM is that you can grow or shrink LVs on the fly without having to worry about the other LVs in the same VG. With raw partitioning you have to adjust existing partitions to add new ones. For a single partition with root on it, that's a concern if you later decide to shrink it to add another partition; for multiple partitions it's even more of a concern, because you might have to adjust several of them. (A sketch of the grow/shrink commands follows after point 3.)


3)      We insulate filesystems by creating multiple LVs. Typically we have /var, /tmp, /usr, /home and /opt, at a minimum, as separate filesystems in VolGroup00. This helps ensure / itself doesn't get filled up by errant processes, and keeps the other filesystems dedicated to their purposes. If we add applications or databases that require significant space, we usually put them on their own LVs/filesystems as well (example layout in the second sketch below).
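
To make point 2 concrete, a minimal sketch (the LV name lv_app and mount point /app are made up for illustration; this assumes ext4):

    # grow an LV and its ext4 filesystem in one step, while mounted
    lvextend -r -L +50G /dev/VolGroup00/lv_app

    # shrink (ext4 only; the filesystem must be unmounted first)
    umount /app
    lvreduce -r -L 20G /dev/VolGroup00/lv_app
    mount /app

Note the asymmetry: growing works online, shrinking requires a dismount, and with xfs shrinking isn't possible at all.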
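
And for point 3, the layout looks roughly like this (sizes are arbitrary examples):

    lvcreate -L 8G -n lv_var VolGroup00
    mkfs.ext4 /dev/VolGroup00/lv_var
    # ...repeat for lv_tmp, lv_usr, lv_home and lv_opt...
    # then add the matching entries to /etc/fstab and mount

Leaving some space unallocated in the VG gives you room to grow whichever filesystem runs short later.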

We use mostly SAN storage for everything except VolGroup00.

I was confused by the original post, since it talked about iSCSI and then VMware. If the iSCSI attachment is to the VMware hypervisor, doesn't it just present storage to the virtual guests, where it is seen as if it were local disk? If so, you wouldn't need to do any RAID at the guest level: I'd expect the storage was presented to the hypervisor in a RAID configuration, so the space it allocates to guests is already protected by that underlying RAID. Here we mostly use MS Hyper-V, and that is how it works on those hypervisors, but I'd think it is the same for any kind of hypervisor.
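If you want to check which case you're in from inside the guest, two quick probes (the second assumes the open-iscsi tools are installed):

    # TRAN shows "iscsi" for a disk the guest attached itself; a
    # datastore-backed virtual disk shows up as a plain SCSI device
    lsblk -o NAME,SIZE,TYPE,TRAN,MODEL

    # lists active initiator sessions; no output means the guest is
    # not doing iSCSI itself and the hypervisor is handling it
    iscsiadm -m session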

From: ale-bounces at ale.org On Behalf Of Jim Kinney
Sent: Thursday, April 28, 2016 7:17 AM
To: Atlanta Linux Enthusiasts - Yes! We run Linux!
Subject: Re: [ale] ISCSI array on virtual machine


I have a large drive array for my department. I use LVM to carve it up, and I leave a huge chunk unallocated so I can extend logical volumes as required. That dodges the need to shrink existing volumes and makes XFS safe to use as the filesystem.
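For example (VG and LV names are made up):

    vgs vg_data                        # how much space is still unallocated
    lvextend -L +500G /dev/vg_data/lv_home
    xfs_growfs /home                   # XFS grows while mounted; it cannot shrink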
On Apr 28, 2016 3:27 AM, "Todor Fassl" <fassl.tod at gmail.com> wrote:
With respect to your question about using LVM ... I guess that was sort of my original question. If I just allocate the whole 8T to one big partition, I'd have no reason to use LVM. But I can see the need to use LVM if I continue with the scheme where I split the drive into partitions for faculty, grads, and staff.

On 04/27/2016 02:27 PM, Jim Kinney wrote:
If you need dedup, ZFS is the only choice, and be ready to throw a lot
of RAM into the server so it can do its job. I was looking at dedup
on 80TB and the RAM hit was 250GB.
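The rough arithmetic behind that, plus a way to estimate before committing (the pool name is an example):

    # each dedup-table (DDT) entry costs roughly 320 bytes of RAM, one
    # per unique block: 80TB / 128K records ~= 610M blocks, and
    # 610M * 320B ~= 195GB -- the same ballpark as the 250GB above
    zdb -S tank    # simulates dedup on an existing pool without enabling it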
XFS vs EXT4.
XFS is the better choice.
XFS does everything EXT4 does except shrink. It was designed for (then
very) large files (video) and works quite well with smaller files. It's
as fast as EXT4 but will handle larger files and many, many more of
them; the theoretical limit is in the exabytes, and petabyte-scale
filesystems are fine with XFS right now. I have no experience with a
filesystem of that size, but I'd expect some level of metadata
performance hit.
If there's the slightest chance you'll need to shrink a partition (you
_are_ using LVM, right?), XFS will bite you: you have to copy the data
off, tear the filesystem down, rebuild it smaller, and copy the data
back. Not a fun process.
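For the record, that process looks roughly like this (a hedged sketch; device names and sizes are illustrative, and you want a verified backup either way):

    xfsdump -l 0 -f /backup/home.dump /home   # copy the data off
    umount /home
    lvreduce -f -L 4T /dev/vg_data/lv_home    # old filesystem is destroyed
    mkfs.xfs -f /dev/vg_data/lv_home          # rebuild, smaller
    mount /home
    xfsrestore -f /backup/home.dump /home     # copy everything back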
A while back, an install onto a 24 TB RAID6 array refused to budge
using EXT4. While EXT4 is supposed to address that kind of size, the
tooling at the time had bugs and unimplemented expansion features that
were blockers (creating an ext4 filesystem past 16TB needs the 64-bit
feature, and e2fsprogs support for it lagged). I used XFS instead and
never looked back. XFS has a very complete toolset for
maintenance/repair needs.
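The usual suspects from that toolset:

    xfs_info /home                       # geometry and feature flags
    xfs_repair -n /dev/vg_data/lv_home   # dry-run check (device unmounted)
    xfs_fsr /home                        # online defragmentation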
On Wed, 2016-04-27 at 13:54 -0500, Todor Fassl wrote:
I need to set up a new file server on a virtual machine with an
attached iSCSI array. Two things I am obsessing over: 1. which file
system to use, and 2. the partitioning scheme.

The iSCSI array is attached to an Ubuntu 16.04 virtual machine. To tell
you the truth, I don't even know how that is done; I do not manage the
VMware cluster. In fact, I think the Dell technician actually did that
for us. It looks like a normal 8T hard drive on /dev/sdb to the virtual
machine. The iSCSI array is configured for RAID6, so from what I
understand, all I have to do is choose a file system appropriate for my
end users' needs. Even though the array looks like a single hard drive,
I don't have to worry about software RAID or anything like that.

Googling shows me no clear advantage to ext4, xfs, or zfs. I haven't
been able to find a page that says any one of them is an obvious choice
in my situation. I have about 150 end users with NFS-mounted home
directories. We also have a handful of people using Windows, so the
file server will have Samba installed. It's a pretty good mix of large
and small files, since different users are doing drastically different
things: there are users who never do anything but read email and browse
the web, and others doing fluid dynamics simulations on small
supercomputers.

The second thing I've been going back and forth on in my own mind is
whether to do away with separate partitions for faculty, staff, and
grad students. My co-worker says that's probably an artifact of the
days when partition sizes were limited. That was before my time here.
The last 2 times we rebuilt our file server, we just maintained the
partitioning scheme and made the sizes larger. But sometimes the
faculty partition got filled up while there was still plenty of space
left on the grad partition, or it might be the other way around. If we
munged them all together, that wouldn't happen. The only downside I see
to doing that is losing the isolation: today, if the faculty partition
gets hosed, the grad partition isn't affected. But that seems like a
pretty arbitrary boundary anyway; we could just as well assign users
randomly to one partition or another. When you're setting up a NAS for
use by a lot of users, is it considered best practice to split it up to
limit the damage from a messed-up file system? I mean, hopefully that
never happens anyway, right?

Right now, I've got it configured as one gigantic 8T ext4 partition.
But we won't be going live with it until the end of May, so I have
plenty of time to completely rebuild it.




--
Todd

