[ale] Getting rid of VMware
DJ-Pfulio
DJPfulio at jdpfu.com
Fri Mar 12 11:47:33 EST 2021
Posted too quickly. I'd agree that bonded NICs are the much more likely
explanation. The ARP scan should help confirm that. All the major Linux
distros support bonding Ethernet links, though it's easier in some than
in others.
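If the ports do turn out to be bonded, the bond is usually visible under
/proc/net/bonding/ or declared in the network config. As a minimal sketch
of what such a bond might look like under systemd-networkd (the names
bond0 and eno1 and the mode are assumptions, not anything from the box in
question):

```ini
# /etc/systemd/network/10-bond0.netdev  (hypothetical)
[NetDev]
Name=bond0
Kind=bond

[Bond]
# active-backup needs no switch cooperation; 802.3ad (LACP) requires
# matching configuration on the switch side
Mode=active-backup

# /etc/systemd/network/20-eno1.network  (hypothetical)
[Match]
Name=eno1

[Network]
Bond=bond0
```

One .network file like the second stanza per enslaved NIC; pulling a cable
on an active-backup bond should then fail over rather than drop storage.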
I deployed iSCSI for a VMware setup about a decade ago. It was easy. We
needed it because most of the VMs at the client were Windows and we
were replacing about 20 out-of-support servers with 2 new boxes
running VMware ESXi. It also drastically lowered their power, cooling,
and UPS needs for the building.
I never liked the backup solution we had to use because of VMware. A
straight Linux OS with KVM provides so much more flexibility without
being tied to paid licenses. But the client wanted to pay, so we
let them pay. Around 2011, I retired the last VMware box in my home
lab, retired Xen, and switched to KVM for everything. I've never, ever
regretted that choice.
With KVM, you can mix and match at the level you want for very little
risk. I run containers, VMs, and some direct-on-hardware stuff all on
the same systems. Which gets used depends on the level of isolation
needed. Flexibility.
On 3/12/21 11:35 AM, DJ-Pfulio via Ale wrote:
> On 3/12/21 9:08 AM, Tod Fassl via Ale wrote:
>>
>> The virtual machine that is acting as a file server is running on an
>> ESXi host that has 6 ethernet cables connected to it. But it looks
>> like most of the ports aren't even active. I would *assume* I can
>> safely remove those cables. But why the heck are they there in the
>> first place?
>
> They are there because the last 4 guys were afraid, like you. Schedule
> some maintenance downtime, label the cables carefully, and pull them
> one at a time while someone checks which storage is impacted. You may
> want to run an ARP scan against the subnet to see if you can easily
> trace the MAC-to-IP relationships.
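The ARP-scan step could look roughly like this sketch. The interface name
and the captured lines below are assumptions for illustration; arp-scan
prints IP, MAC, and vendor separated by tabs, and 00:0c:29 is a VMware
OUI, so VMware NICs stand out in the vendor column.

```shell
# On the real network you would run something like:
#   sudo arp-scan --interface=eth0 --localnet
# Here a hypothetical capture of that output is parsed into a
# MAC -> IP table (fields are tab-separated: IP, MAC, vendor).
scan='192.168.1.10	00:0c:29:aa:bb:cc	VMware, Inc.
192.168.1.11	00:0c:29:dd:ee:ff	VMware, Inc.'

# Print each MAC with the IP it answered from.
printf '%s\n' "$scan" | awk -F'\t' '{print $2 " -> " $1}'
```

Matching those MACs against the vmnics listed on the ESXi host tells you
which cables are actually carrying traffic before you pull anything.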