[ale] Getting rid of VMware

Tod Fassl fassl.tod at gmail.com
Fri Mar 12 11:47:23 EST 2021


Well, keep in mind that we're running our own VMware cluster (for now). 
We still have bare metal; it's just that it's running ESXi instead of 
Linux. I don't see how you can ever come out ahead running your own 
VMware cluster. First, you have the trouble and expense of VMware on top 
of everything else.


Then there's this other point. Now that our database server is running 
on bare metal, it has access to all 24 cores and all 48 GB of RAM. I can 
fine-tune it to use all 24 cores and all 48 GB of RAM. I'm not sure it's 
even possible to create a virtual machine with 24 cores and 48 GB of 
RAM on an ESXi host that only has 24 cores and 48 GB of RAM. But even if 
I could, it would have to share them with VMware ESXi itself. You can 
never come out ahead. Well, maybe you say, what if you want a database 
server with 16 cores and a print server with 4 cores? That's only 20, 
so now you're good. But I can create a single Linux host, install 
MariaDB and CUPS on it, and again have access to all 24 cores.
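To make the fine-tuning point concrete, here's a minimal sketch of what 
that looks like on the bare-metal box. The file path and the tuning 
numbers are hypothetical (the innodb_* options are real MariaDB server 
variables, but sensible values depend entirely on the workload); the 
point is just that with no hypervisor taking its cut, you can hand 
nearly all of the RAM to the database:

```shell
# See what the bare-metal host actually exposes.
nproc    # logical CPU count -- 24 on the machine described above
free -g  # total RAM in GB   -- 48 on the machine described above

# Hypothetical MariaDB tuning fragment; numbers are illustrative only.
cat <<'EOF' > /tmp/99-tuning.cnf
[mysqld]
# Rule of thumb when the box runs nothing else: give InnoDB the
# bulk of the RAM, since no hypervisor needs a reserved share.
innodb_buffer_pool_size = 36G
innodb_read_io_threads  = 12
innodb_write_io_threads = 12
EOF
cat /tmp/99-tuning.cnf
```

On a real system the fragment would go under /etc/my.cnf.d/ (or 
/etc/mysql/conf.d/, depending on the distro) rather than /tmp.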


I know there are some advantages -- snapshots, vMotion. But it's just 
not worth it.


Now, running your VMs on someone else's cluster -- that I can 
understand. But running your own? No, that's dumb.



On 3/12/21 9:50 AM, James Taylor via Ale wrote:
> I've been living in a virtual server world for a couple of decades, so I associate dedicated bare metal servers with a lot of pain and suffering, but everyone's needs are different.
>
> I would agree that you would need to just do a file transfer to move  your data to a new nfs lun. Or you could more reasonably get a couple of SSD's and put them in the server itself, since you won't need shared storage.
> I'm assuming you can mirror the drives locally for redundancy...
>
> iSCSI is a pain at best and, as mentioned, everything will be embedded in vmfs in any case, so a direct lun migration would be impractical if not impossible.
> I use SUSE Linux Enterprise Server (SLES) for my core systems, and network link aggregation is trivial. I assume it would be the same for any other linux distro.
> -jt
>
>   
>
> James Taylor
> 678-697-9420
> james.taylor at eastcobbgroup.com
>
>
>
>>>> "Jeremy T. Bouse via Ale" <ale at ale.org> 3/12/2021 10:14 AM >>>
> It's been my experience that if I wanted to move the VM off ESXi and have it still use iSCSI for storage that I needed to create a new LUN on the NAS, create my new server and configure it to mount the iSCSI LUN as a SCSI device and format it. Then have to handle standard backup/data transfer process of moving the data between the ESXi VM guest filesystem to the new LUN. I believe it is possible to actually create an iSCSI LUN on the NAS and mount it directly to the VM running under ESXi if you install the necessary dependencies to make the data transfer easier, then just unmount and mount to the host outside ESXi. Depends if you actually want to attempt making changes to the existing VM or not.
>
> If as I read you're really only needing the user home directories, I would agree that building the bare metal with an SSD for the OS and whatever software needs to be installed is best course and then I'd mount the home directories from the NAS but you may just want to use NFS instead of iSCSI for that which would be a much more simple solution.
>
>
> _______________________________________________
> Ale mailing list
> Ale at ale.org
> https://mail.ale.org/mailman/listinfo/ale
> See JOBS, ANNOUNCE and SCHOOLS lists at
> http://mail.ale.org/mailman/listinfo

