[ale] Getting rid of VMware

Jim Kinney jim.kinney at gmail.com
Fri Mar 12 23:19:18 EST 2021


oVirt can use that iSCSI array directly, and you can use virt-v2v to convert the VMware guests to KVM storage.
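As a rough sketch of what that conversion looks like (the datastore path and guest name below are placeholders, and this assumes the ESXi datastore is mounted or copied somewhere the conversion host can read):

```shell
# Convert a VMware guest, given its .vmx file, into a local qcow2 image
# that KVM/oVirt can use. The path and guest name are placeholders.
virt-v2v -i vmx /mnt/esxi-datastore/myguest/myguest.vmx \
         -o local -os /var/lib/libvirt/images -of qcow2
```

virt-v2v also has output modes that upload straight into an oVirt data domain instead of a local directory, which saves a copy step if the cluster is already up.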

But that iSCSI array is only 8 TB. Unless it's got multiple 10 Gb Ethernet links, it would make more sense to get a new box with several 10 TB drives in RAID 10, use the iSCSI tools on the new box, mount the array, and virt-v2v onto the new drives.

I've not poked at it in use, but the iSCSI kernel parts are installed by default in CentOS. All that's left is the userspace tools.
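For a stock CentOS box, attaching the array would look something like this (the portal address and target IQN are placeholders; the real IQN comes from the discovery step against the EqualLogic's group IP):

```shell
# Install the userspace initiator tools (CentOS/RHEL package name).
yum install -y iscsi-initiator-utils

# Discover the targets offered by the array's portal (placeholder IP).
iscsiadm -m discovery -t sendtargets -p 192.168.1.50:3260

# Log in to the discovered target (placeholder IQN). After login the
# LUN shows up as an ordinary block device, e.g. /dev/sdb, ready to
# mount or add to /etc/fstab with the _netdev option.
iscsiadm -m node -T iqn.2001-05.com.equallogic:example-volume \
         -p 192.168.1.50:3260 --login
```

Since the filesystem already exists on the LUN, it's just a mount after login; no partitioning or formatting, or you lose everybody's files.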

On March 12, 2021 8:58:56 AM EST, Derek Atkins via Ale <ale at ale.org> wrote:
>HI,
>
>iSCSI is supposed to work just like a regular SCSI disk; your computer
>"mounts" the disk just like it would a locally-connected disk.  The main
>difference is that instead of the LUN being on a physical wire, the LUN
>is semi-virtual.
>
>As for your VM issues...  If you have 4 24-core machines, you might want
>to consider using something like oVirt to manage them.  It would allow you
>to turn those machines into a single cluster of cores, so each VM could,
>theoretically, run up to 24 vCores (although I think you'd be better off
>with smaller VMs).  However, you will not be able to build a single,
>96-core VM out of the 4 boxes.  Sorry.
>
>You could also set up oVirt to use iSCSI directly, so no need to "go
>through a fileserver".
>
>-derek
>
>On Fri, March 12, 2021 8:47 am, Tod Fassl via Ale wrote:
>> Yes, I'm in academia. The ISCSI array has 8TB. It's got everybody's home
>> directory on it. We did move a whole bunch of our stuff to the campus
>> VMWare cluster. But we have to keep our own file server. And, after all,
>> we already have the hardware, four 24-core machines, that used to be in
>> our VMWare cluster.  There's no way we can fail to come out ahead here.
>> I can easily repurpose those 4 machines to do everything the virtual
>> machines were doing with plenty of hardware left to spare. And then we
>> won't have to pay the VMWare licensing fee, upwards of $10K per year.
>>
>>
>> For $10K a year, we can buy another big honkin' machine for the beowulf
>> research cluster (maintenance of which is my real job).
>>
>>
>> Anyway, the current problem is getting that ISCSI array attached
>> directly to a Linux file server.
>>
>>
>> On 3/11/21 7:30 PM, Jim Kinney via Ale wrote:
>>>
>>>> On March 11, 2021 7:09:06 PM EST, DJ-Pfulio via Ale <ale at ale.org> wrote:
>>>> How much storage is involved?  If it is less than 500G, replace it
>>>> with an SSD. ;)  For small storage amounts, I wouldn't worry about
>>>> moving hardware that will be retired shortly.
>>>>
>>>> I'd say that bare metal in 2021 is a mistake about 99.99% of the
>>>> time.
>>> That 0.01% is my happy spot :-) At some point it must be hardware. As I
>>> recall, Tod is in academia. So hardware is used until it breaks beyond
>>> repair.
>>>
>>> Why can't I pay for virtual hardware with virtual money? I have a new
>>> currency called "sarcasm".
>>>> On 3/11/21 5:37 PM, Tod Fassl via Ale wrote:
>>>>> Soonish, I am going to have to take an ISCSI array that is currently
>>>>> talking to a VMWare virtual machine running Linux and connect it to a
>>>>> real Linux machine. The problem is that I don't know how the Linux
>>>>> virtual machine talks to the array. It appears as /dev/sdb on the
>>>>> Linux virtual machine and is mounted via /etc/fstab like it's just a
>>>>> regular HD on the machine.
>>>>>
>>>>>
>>>>> So I figure some explanation of how we got here is in order. My
>>>>> previous boss bought VMWare thinking we could take 4 24-core machines
>>>>> and make one big 96-core virtual machine out of them. He has since
>>>>> retired. Since I was rather skeptical of VMWare from the start, the
>>>>> job of dealing with the cluster was given to a co-worker. He has
>>>>> since moved on. I know just enough about VMWare ESXI to keep the
>>>>> thing working. My new boss wants to get rid of VMWare and re-install
>>>>> everything on the bare metal machines.
>>>>>
>>>>>
>>>>> The VMWare host has 4 ethernet cables running to the switch. But
>>>>> there is only 1 virtual network port on the Linux virtual machine.
>>>>> However, lspci shows 32 lines with "VMware PCI Express Root Port"
>>>>> (whatever that is):
>>>>>
>>>>>
>>>>> # lspci
>>>>> 00:07.7 System peripheral: VMware Virtual Machine Communication Interface (rev 10)
>>>>> 00:10.0 SCSI storage controller: LSI Logic / Symbios Logic 53c1030 PCI-X Fusion-MPT Dual Ultra320 SCSI (rev 01)
>>>>> 00:11.0 PCI bridge: VMware PCI bridge (rev 02)
>>>>> 00:15.0 PCI bridge: VMware PCI Express Root Port (rev 01)
>>>>> [...]
>>>>> 00:18.7 PCI bridge: VMware PCI Express Root Port (rev 01)
>>>>> 02:00.0 Ethernet controller: Intel Corporation 82545EM Gigabit Ethernet Controller (Copper) (rev 01)
>>>>>
>>>>>
>>>>> The open-iscsi package is not installed on the Linux virtual machine.
>>>>> However, the ISCSI array shows up as /dev/sdb:
>>>>>
>>>>> # lsscsi
>>>>> [2:0:0:0]    disk    VMware   Virtual disk     1.0   /dev/sda
>>>>> [2:0:1:0]    disk    EQLOGIC  100E-00          8.1   /dev/sdb
>>>>>
>>>>>
>>>>> I'd kinda like to get the ISCSI array connected to a new bare metal
>>>>> Linux server w/o losing everybody's files. Do you think I can just
>>>>> follow the various howtos out there on connecting an ISCSI array w/o
>>>>> too much trouble?
>>>>>
>>>>>
>>>>>
>>>>> _______________________________________________ Ale mailing list
>>>>> Ale at ale.org https://mail.ale.org/mailman/listinfo/ale See JOBS,
>>>>> ANNOUNCE and SCHOOLS lists at http://mail.ale.org/mailman/listinfo
>>
>
>
>-- 
>       Derek Atkins                 617-623-3745
>       derek at ihtfp.com             www.ihtfp.com
>       Computer and Internet Security Consultant
>

-- 
Computers amplify human error
Super computers are really cool