[ale] Getting rid of VMware

Jeremy T. Bouse jeremy.bouse at undergrid.net
Fri Mar 12 10:14:34 EST 2021


I ran an ESXi server for my virtual machines on my home network for years,
with external NAS devices providing the storage over iSCSI. While I don't
know the full details of the setup you're looking at, I believe my
experience is relevant and may help you understand what's going on.

In my experience, the ESXi host server handled all of the iSCSI
communication with the NAS. The datastore was backed by an iSCSI LUN that
ESXi mounted and formatted with its own VMFS filesystem. When you created a
VM guest on ESXi and assigned storage to that VM's disk device, ESXi
created VMDK files on top of the VMFS filesystem and presented them to the
VM as its SCSI drive. The VM itself was not performing any iSCSI
communication directly back to the NAS.
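
If you have shell access to the ESXi host, you can confirm that layering
from the command line. This is only a rough sketch; the output will of
course show your own adapter, datastore, and LUN names:

    esxcli iscsi adapter list           # the host's software iSCSI adapter(s)
    esxcli storage vmfs extent list     # which device/LUN backs each VMFS datastore
    esxcli storage core device list     # the EqualLogic LUN should appear in this list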

You can also see all of this by browsing the datastore with the ESXi
vSphere client; you'll find all the files that make up the VM guest's
configuration.

It's been my experience that if I wanted to move a VM off ESXi and still
have it use iSCSI for storage, I needed to create a new LUN on the NAS,
build the new server, configure it to mount the iSCSI LUN as a SCSI device,
and format it. Then I handled a standard backup/data-transfer process to
move the data from the ESXi VM guest's filesystem to the new LUN. I believe
it is also possible to create an iSCSI LUN on the NAS and mount it directly
inside the VM running under ESXi, if you install the necessary dependencies
(the open-iscsi initiator) in the guest; that makes the data transfer
easier, since you can then just unmount the LUN in the VM and mount it on
the host outside ESXi. It depends on whether you actually want to make
changes to the existing VM or not.
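
On the Linux side, the open-iscsi initiator steps are roughly as follows.
This is only a sketch; the portal IP, target IQN, device name, and mount
point below are placeholders, not your actual values:

    # as root on the new server
    apt-get install open-iscsi                  # or yum/dnf on RHEL-type distros
    iscsiadm -m discovery -t sendtargets -p 192.168.1.50
    iscsiadm -m node -T iqn.2001-05.com.equallogic:example-volume \
             -p 192.168.1.50 --login
    lsscsi                                      # the LUN should show up as a new /dev/sdX
    mkfs.ext4 /dev/sdX                          # ONLY for a brand-new, empty LUN
    mount /dev/sdX /mnt/newhome
    rsync -aHAX /home/ /mnt/newhome/            # example transfer from the old filesystem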

If, as I read it, you really only need the user home directories, I would
agree that building the bare-metal server with an SSD for the OS and
whatever software needs to be installed is the best course. I would then
mount the home directories from the NAS, but you may just want to use NFS
instead of iSCSI for that, which would be a much simpler solution.
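
A minimal sketch of the NFS approach, assuming the NAS exports a share
named /home-export (adjust the hostname and export path for your NAS):

    # one-time test mount:
    mount -t nfs nas.example.org:/home-export /home

    # or permanently, via a line in /etc/fstab:
    nas.example.org:/home-export  /home  nfs  defaults,_netdev  0  0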

On Fri, Mar 12, 2021 at 9:35 AM Tod Fassl via Ale <ale at ale.org> wrote:

> I mentioned that I was kind of skeptical about VMware. The original plan
> was to use the VMware cluster for research. But I really didn't think
> you could take four 24-core machines and make a 96-core machine out of
> them. There was nothing on Google about that. And, at the very least, I
> suggested, you'd need a high-speed network to do that, and these machines
> are connected via a regular 1G switch.
>
>
> We also have a beowulf cluster for research which supports OpenMPI.
> That's my real job. When I started questioning the wisdom of buying a
> VMWare cluster for research, my boss said it would be fine if I stuck to
> my real job. After it became clear that the original plan wasn't going
> to work, we repurposed the VMWare cluster for administrative tasks --
> file server, database server, etc.
>
>
> We have already pulled three of the four machines out of the cluster. I
> already rebuilt the database server and print server on bare metal. All
> that's left is the file server.
>
>
> PS: Before my former boss retired, I did hint around trying to see if
> he/she remembered me pretty much rebelling at the idea of doing research
> on a VMWare cluster. I didn't want to actually come out and say "I told
> you so." But I'm pretty sure that, no, I did not get credit for that.
>
>
> PPS: VMware makes you promise not to release benchmarks. I never paid
> any attention to the legalese; what do I care? But I think I can say
> that we were never successful at doing research on virtual machines even
> if they had fewer than 24 cores. We'd create a 16-core VM, but the
> researchers found it unsatisfactory.
>
>
> On 3/12/21 7:58 AM, Derek Atkins wrote:
> > HI,
> >
> > iSCSI is supposed to work just like a regular SCSI disk; your computer
> > "mounts" the disk just like it would a locally-connected disk.  The main
> > difference is that instead of the LUN being on a physical wire, the LUN
> > is semi-virtual.
> >
> > As for your VM issues...  If you have 4 24-core machines, you might want
> > to consider using something like oVirt to manage it.  It would allow you
> > to turn those machines into a single cluster of cores, so each VM could,
> > theoretically, run up to 24 vCores (although I think you'd be better off
> > with smaller VMs).  However, you will not be able to build a single,
> > 96-core VM out of the 4 boxes.  Sorry.
> >
> > You could also set up oVirt to use iSCSI directly, so no need to "go
> > through a fileserver".
> >
> > -derek
> >
> > On Fri, March 12, 2021 8:47 am, Tod Fassl via Ale wrote:
> >> Yes, I'm in academia. The iSCSI array has 8TB. It's got everybody's home
> >> directory on it. We did move a whole bunch of our stuff to the campus
> >> VMware cluster. But we have to keep our own file server. And, after all,
> >> we already have the hardware, four 24-core machines, that used to be in
> >> our VMware cluster. There's no way we can fail to come out ahead here.
> >> I can easily repurpose those 4 machines to do everything the virtual
> >> machines were doing with plenty of hardware left to spare. And then we
> >> won't have to pay the VMware licensing fee, upwards of $10K per year.
> >>
> >>
> >> For $10K a year, we can buy another big honkin' machine for the beowulf
> >> research cluster (maintenance of which is my real job).
> >>
> >>
> >> Anyway, the current problem is getting that iSCSI array attached
> >> directly to a Linux file server.
> >>
> >>
> >> On 3/11/21 7:30 PM, Jim Kinney via Ale wrote:
> >>> On March 11, 2021 7:09:06 PM EST, DJ-Pfulio via Ale <ale at ale.org> wrote:
> >>>> How much storage is involved?  If it is less than 500G, replace it
> >>>> with an SSD. ;)  For small storage amounts, I wouldn't worry about
> >>>> moving hardware that will be retired shortly.
> >>>>
> >>>> I'd say that bare metal in 2021 is a mistake about 99.99% of the
> >>>> time.
> >>> That 0.01% is my happy spot :-) At some point it must be hardware. As I
> >>> recall, Tod is in academia. So hardware is used until it breaks beyond
> >>> repair.
> >>>
> >>> Why can't I pay for virtual hardware with virtual money? I have a new
> >>> currency called "sarcasm".
> >>>> On 3/11/21 5:37 PM, Tod Fassl via Ale wrote:
> >>>>> Soonish, I am going to have to take an iSCSI array that is currently
> >>>>> talking to a VMware virtual machine running Linux and connect it to a
> >>>>> real Linux machine. The problem is that I don't know how the Linux
> >>>>> virtual machine talks to the array. It appears as /dev/sdb on the
> >>>>> Linux virtual machine and is mounted via /etc/fstab like it's just a
> >>>>> regular HD on the machine.
> >>>>>
> >>>>>
> >>>>> So I figure some explanation of how we got here is in order. My
> >>>>> previous boss bought VMware thinking we could take four 24-core machines
> >>>>> and make one big 96-core virtual machine out of them. He has since
> >>>>> retired. Since I was rather skeptical of VMware from the start, the
> >>>>> job of dealing with the cluster was given to a co-worker. He has
> >>>>> since moved on. I know just enough about VMware ESXi to keep the
> >>>>> thing working. My new boss wants to get rid of VMware and re-install
> >>>>> everything on the bare-metal machines.
> >>>>>
> >>>>>
> >>>>> The VMware host has 4 Ethernet cables running to the switch. But
> >>>>> there is only 1 virtual network port on the Linux virtual machine.
> >>>>> However, lspci shows 32 lines with "VMware PCI Express Root Port"
> >>>>> (whatever that is):
> >>>>>
> >>>>>
> >>>>> # lspci
> >>>>> 00:07.7 System peripheral: VMware Virtual Machine Communication Interface (rev 10)
> >>>>> 00:10.0 SCSI storage controller: LSI Logic / Symbios Logic 53c1030 PCI-X Fusion-MPT Dual Ultra320 SCSI (rev 01)
> >>>>> 00:11.0 PCI bridge: VMware PCI bridge (rev 02)
> >>>>> 00:15.0 PCI bridge: VMware PCI Express Root Port (rev 01)
> >>>>> [...]
> >>>>> 00:18.7 PCI bridge: VMware PCI Express Root Port (rev 01)
> >>>>> 02:00.0 Ethernet controller: Intel Corporation 82545EM Gigabit Ethernet Controller (Copper) (rev 01)
> >>>>>
> >>>>>
> >>>>> The open-iscsi package is not installed on the Linux virtual machine.
> >>>>> However, the iSCSI array shows up as /dev/sdb:
> >>>>>
> >>>>> # lsscsi
> >>>>> [2:0:0:0]    disk    VMware   Virtual disk     1.0   /dev/sda
> >>>>> [2:0:1:0]    disk    EQLOGIC  100E-00          8.1   /dev/sdb
> >>>>>
> >>>>>
> >>>>> I'd kinda like to get the iSCSI array connected to a new bare-metal
> >>>>> Linux server w/o losing everybody's files. Do you think I can just
> >>>>> follow the various howtos out there on connecting an iSCSI array w/o
> >>>>> too much trouble?
> >>>>>
> >>>>>
> >>>>>
> >
> _______________________________________________
> Ale mailing list
> Ale at ale.org
> https://mail.ale.org/mailman/listinfo/ale
> See JOBS, ANNOUNCE and SCHOOLS lists at
> http://mail.ale.org/mailman/listinfo
>

