[ale] ISCSI array on virtual machine

Todor Fassl fassl.tod at gmail.com
Wed Apr 27 16:15:41 EDT 2016


I intend to use the old iSCSI array to back up the new one. Actually, 
both the old and new arrays are only about 50% user space; VMware has 
the other 50%. But we'll move all the VMware datastores to the new 
array and then use the entire old array as a backup. What I do is use 
rsync to back up one array to the other and then use amanda to make a 
backup of the backup on virtual tapes in another building. If someone 
wants me to undelete a file, I just copy it back from the old iSCSI 
array. If they want something put back the way it was last week, I have 
to go to the vtapes.
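
The rsync pass is nothing fancy; roughly something like this (the
mount points here are just placeholders, not our real paths):

  # mirror the new array onto the old one, preserving ownership,
  # permissions, ACLs, xattrs, and hard links; --delete keeps it an
  # exact mirror of the source
  rsync -aHAX --delete --one-file-system /srv/newarray/ /srv/oldarray/

amanda then dumps the old array to the vtapes on its own schedule.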

On 04/27/2016 02:19 PM, DJ-Pfulio wrote:
> a) I always use LVM on block storage.  Resizing up/down is trivial,
> provided you don't use xfs. Doesn't sound like you **need** xfs, so I'd
> go with ext4. If they are doing RAID6, that means the primary reason I'd
> use ZFS is removed.  There are other reasons for ZFS, but you really
> need to HAVE A REASON to use it, IMHO. The built-in CIFS server can be
> nice, but that wouldn't be enough reason for me. ;)
>
> b) LV sizes are set based on what is easy to backup.  4T is the backup
> size limit here, so no LV is over about 3.5T in size. You might have
> similar drivers or not.
>
> I wouldn't be afraid of ZFS on Ubuntu x64. The x32 stuff was always
> scary to me.
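
To illustrate point a) above, growing an ext4 LV on LVM later is
roughly this (the volume group and LV names are made up):

  # add 500G to the LV and grow the ext4 filesystem in one step
  lvextend -r -L +500G /dev/vg_home/lv_home

  # or grow just the filesystem; ext4 can do this while mounted
  resize2fs /dev/vg_home/lv_home

Shrinking an ext4 LV also works, but only unmounted; xfs can grow but
not shrink, which is the caveat above.
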
>
> On 04/27/16 14:54, Todor Fassl wrote:
>> I need to set up a new file server on a virtual machine with an attached
>> iSCSI array. Two things I am obsessing over -- 1. which file system to
>> use and 2. the partitioning scheme.
>>
>> The iSCSI array is attached to an Ubuntu 16.04 virtual machine. To tell
>> you the truth, I don't even know how that is done. I do not manage the
>> VMware cluster.  In fact, I think the Dell technician actually did that
>> for us. It looks like a normal 8T hard drive on /dev/sdb to the virtual
>> machine. The iSCSI array is configured for RAID6, so from what I
>> understand, all I have to do is choose a file system appropriate for my
>> end users' needs. Even though the array looks like a single hard drive,
>> I don't have to worry about software RAID or anything like that.
>>
>> Googling shows me no clear advantage for ext4, xfs, or zfs. I haven't
>> been able to find a page that says any one of those is an obvious choice
>> in my situation. I have about 150 end users with NFS-mounted home
>> directories. We also have a handful of people using Windows, so the file
>> server will have Samba installed. It's a pretty good mix of large files
>> and small files since different users are doing drastically different
>> things. There are users who never do anything but read email and browse
>> the web, and others doing fluid dynamics simulations on small
>> supercomputers.
>>
>> The second thing I've been going back and forth on in my own mind is
>> whether to do away with separate partitions for faculty, staff, and grad
>> students. My co-worker says that's probably an artifact of the days when
>> partition sizes were limited. That was before my time here. The last two
>> times we rebuilt our file server, we just kept the partitioning scheme
>> and made the sizes larger. But sometimes the faculty partition got
>> filled up while there was still plenty of space left on the grad
>> partition, or the other way around. If we munged them all together, that
>> wouldn't happen. The only downside I see to merging them is losing that
>> isolation: with separate partitions, if the faculty partition gets
>> hosed, the grad partition wouldn't be affected. But that seems like a
>> pretty arbitrary split. We could just as well assign users randomly to
>> one partition or another. When you're setting up a NAS for use by a lot
>> of users, is it considered best practice to split it up to limit the
>> damage from a messed-up file system? I mean, hopefully, that never
>> happens anyway, right?
>>
>> Right now, I've got it configured as one gigantic 8T ext4 partition. But
>> we won't be going live with it until the end of May, so I have plenty of
>> time to completely rebuild it.
>>
>
>
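
For reference, if the rebuild ends up as LVM plus ext4 as suggested
above, the initial layout on the 8T LUN would look roughly like this
(the volume group and LV names are made up, and the sizes are only
examples):

  # put the whole iSCSI LUN under LVM
  pvcreate /dev/sdb
  vgcreate vg_home /dev/sdb

  # leave free space in the volume group so the LV can grow later
  lvcreate -L 6T -n lv_home vg_home

  # one big ext4 filesystem; reserve less than the default 5% for root
  mkfs.ext4 -m 1 -L home /dev/vg_home/lv_home

Carving out separate faculty/staff/grad LVs instead would just mean
repeating the lvcreate/mkfs.ext4 pair with smaller sizes.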

-- 
Todd

