[ale] Small Clusters for VMs

Jim Barlow jim at jimbarlow.com
Fri Oct 28 20:01:49 EDT 2016


+1 on oVirt

I've been running this three-node setup with great results; the Ansible
playbooks make the GlusterFS setup trivial:

https://www.ovirt.org/blog/2016/08/up-and-running-with-ovirt-4-0-and-gluster-storage/
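
For reference, the run itself is just plain Ansible against the three nodes.
A rough sketch (the host names and playbook name below are placeholders; the
real playbook/gdeploy config comes from that blog post):

  # hypothetical inventory listing the three combined virt/gluster nodes
  $ cat hosts
  [gluster_nodes]
  node1.example.com
  node2.example.com
  node3.example.com

  # run the playbook from the post against all three nodes
  $ ansible-playbook -i hosts gluster-setup.yml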

On Fri, Oct 28, 2016 at 5:01 PM, Jim Kinney <jkinney at jimkinney.us> wrote:

> On Fri, 2016-10-28 at 16:43 -0400, DJ-Pfulio wrote:
>
> Jim, would you run oVirt at home for 2 boxes with dual-core CPUs and 8G of RAM
> each?  Make redundant storage and VMs.  THAT is the problem, and I think there is
> a relatively simple solution with minimal config or scripting to solve it.
>
>
> Maybe. They've done all the heavy lifting to make a system that makes
> managing multiple VMs pretty easy for when those VMs need to just run.
> Granted, I'm more inclined to use enterprise-sized tools at home for a much
> smaller scale because my gain-knowledge-time at work is funded but my time
> at home is not.
>
> Originally, this was for a 4 node cluster. Absolutely. For a 2 node,
> probably not. virt-manager is pretty awesome at that scale.
>
> The SPICE viewer is pretty fantastic. Not sure if it can work with
> virt-manager. Being able to get a remote console through 2 VPNs and have a
> YouTube video play with sound is a pretty good test of cool.
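>
> For the archives, a rough sketch of pulling a SPICE console from the command
> line (host and guest names are just examples):
>
>   # virt-viewer speaks SPICE and will tunnel the console over ssh
>   virt-viewer --connect qemu+ssh://admin@vmhost.example.com/system mydesktop
>
>   # or point remote-viewer straight at a SPICE port the guest exposes
>   remote-viewer spice://vmhost.example.com:5900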
>
> On 10/28/2016 12:28 PM, Jim Kinney wrote:
>
>
> On Fri, 2016-10-28 at 10:49 -0400, DJ-Pfulio wrote:
>
>
> Thanks for responding.
>
>
>
> Won't be using oVirt (really RHEL-only, and it seems to be 50+ different
> F/LOSS projects in 500 different languages [I exaggerate]) or XenServer
> (bad taste after running it for 4 yrs).  I've never regretted switching from
> ESX/ESXi and Xen to KVM, not once.
>
>
>
> oVirt is only 49 projects and 127 languages! Really!
>
>
>
> If someone wants to run VMs on 3 nodes, oVirt seems like overkill.  Different use
> case than a university, I suppose.
>
>
> It's not the 3 nodes, it's the 65 VMs :-)  The nodes have some horsepower.
>
> OK. I have 65 on 2 nodes. I've not yet lit up the new quad-node cluster in
> a box.
>
>
> A major issue for my use is the need to have certain VMs up and running at all
> times. oVirt provides a process to migrate a VM to an alternate host if it
> (host or VM) goes down. The only "gotcha" is that the migration hosts must
> provide the same CPU capabilities, so no mixing of AMD and Intel without
> setting the VMs to be i686.
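>
> For plain libvirt the usual workaround is to pin every guest to a CPU model
> all the hosts can provide; a sketch (the model name is just an example):
>
>   # see what CPU model and flags each host actually exposes
>   virsh capabilities | grep -A 10 '<cpu>'
>
>   # then, via "virsh edit <guest>", force a lowest-common-denominator model:
>   #   <cpu mode='custom' match='exact'><model>Westmere</model></cpu>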
>
>
>
> This similar-CPU-architecture requirement is a gotcha for all virtual machines
> that support migration, KVM/qemu included.  I haven't figured out which is my
> least capable CPU recently ... is a C2D less than a modern Pentium?  The
> Pentium is faster. I need to check the flags.
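>
> The quick way to settle that is to diff the flag sets from the two boxes; a
> sketch (file names are just examples):
>
>   # on each box, dump the CPU flags one per line, sorted
>   grep -m1 '^flags' /proc/cpuinfo | tr ' ' '\n' | sort > c2d-flags.txt
>   # repeat on the Pentium box, then compare
>   diff c2d-flags.txt pentium-flags.txt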
>
>
> I have a single Intel server on its own cluster since all the rest of the
> gear is Opteron. No migration possible.
>
>
> Just doing research today. Need to sleep on it. Probably won't try
> anything until Sunday night.
>
>
>
> Plus I have to figure out how much storage to allocate for my trial with the
> distributed storage - 20G seems just a little small.  I have many different
> sorts of storage for the trial: RAID10, a Blue desktop disk, a fast USB3
> external, and an eSATA Black disk. I really want to see which performs the
> worst - thinking it will be the RAID10 stuff, which is InfiniBand-connected
> (got an amazing deal!) but really slow otherwise.
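>
> A crude way to rank them before committing anything real is a direct-I/O dd
> run on each one (the mount point is just an example):
>
>   # sequential write with the page cache bypassed
>   dd if=/dev/zero of=/mnt/trial/testfile bs=1M count=1024 oflag=direct
>   # sequential read back
>   dd if=/mnt/trial/testfile of=/dev/null bs=1M iflag=direct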
>
>
> I've found that for VMs, storage space is not as much of an issue as RAM
> and clock cycles for what I use. My base VM has a 10G drive. If I need more,
> I can expand the base drive or add a new drive. I still use LVM on the VM
> OS just so I can expand as needed.
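>
> Growing that base drive later is a short exercise; a sketch, assuming a qcow2
> image and the usual CentOS vda/LVM layout (names are examples):
>
>   # on the host, with the guest shut down: add 10G to the image
>   qemu-img resize /var/lib/libvirt/images/myguest.qcow2 +10G
>
>   # inside the guest: grow the partition (growpart is in cloud-utils-growpart),
>   # then the PV, then the root LV plus its filesystem
>   growpart /dev/vda 2
>   pvresize /dev/vda2
>   lvextend -r -l +100%FREE /dev/mapper/centos-root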
>
> Things like memory ballooning are very useful, as is VM thin cloning with
> copy-on-write.
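>
> Both of those work with plain qemu/libvirt as well; a sketch (the image and
> guest names are examples):
>
>   # thin clone: a new image that copy-on-writes against a read-only base
>   qemu-img create -f qcow2 -b /var/lib/libvirt/images/base.qcow2 clone01.qcow2
>
>   # balloon a running guest down to 2G without a reboot
>   virsh setmem myguest 2G --live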
>
>
> Download CentOS 7.2. Install the VM host version. yum install epel-release.
> Then follow the directions here: https://www.ovirt.org/release/4.0.4/ starting
> with: yum install http://resources.ovirt.org/pub/yum-repo/ovirt-release40.rpm
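>
> Roughly, the whole bootstrap on a fresh CentOS 7.2 box is (going from the 4.0
> docs from memory, so double-check against the release page above):
>
>   yum install epel-release
>   yum install http://resources.ovirt.org/pub/yum-repo/ovirt-release40.rpm
>   yum install ovirt-engine
>   engine-setup    # interactive; defaults are sane for a small setup
>
> engine-setup only runs on the box (or VM) that hosts the management engine;
> the other nodes just get added as hosts from the web UI.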
>
>
>
> So that install does libvirt, kvm-qemu, sshd, nfs, bridge-utils, and all the
> distributed storage stuff automatically?  Nice!
>
>
>
> Be aware that when docs refer to NFS mounts, the server for that can be one
> of the nodes that has drive space. ISO space is where <duh> ISO images are
> kept for installations. I have one Win10 VM running now for a DBA with
> specialty tool needs.
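>
> If that NFS export lives on one of the nodes, the one non-obvious bit is that
> oVirt wants the export owned by vdsm:kvm (uid/gid 36); a sketch, with an
> example path:
>
>   # on the node providing the ISO domain
>   chown 36:36 /exports/iso
>   # /etc/exports
>   /exports/iso    *(rw,sync,no_subtree_check,anonuid=36,anongid=36,all_squash)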
>
>
>
> Have 1 Win7 VM running to record TV and run Quicken from time to time. It can
> be down when nothing is being recorded ... so basically any time other than
> prime time or football time. ;)  It will be one of the first VMs I migrate into
> the sheepdog storage.  So will my daily-use desktop.
>
> The big difference in this planned architecture is that distributed storage can
> run on the VM hosts. Performance ISN'T the reason to do this.  10 users won't
> notice.
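>
> For anyone following along, the sheepdog side of that looks roughly like the
> below - this is from memory of the sheepdog docs, so treat the exact options
> as approximate:
>
>   # on each VM host: start the sheep daemon on its local store, then format
>   # the cluster once from any node with 2 copies of every object
>   sheep /var/lib/sheepdog
>   dog cluster format --copies 2
>
>   # create a replicated virtual disk and hand it to qemu
>   qemu-img create sheepdog:win7disk 100G
>   qemu-system-x86_64 -drive file=sheepdog:win7disk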
>
> That's my plan right now, anyway.  Sleep can alter it.
>
> From the comments, it appears that
> a) nobody has used sheepdog in their environment (it isn't new).
> b) nobody is interested in clustering VMs on a small scale.
> c) nobody is interested in using small-scale systems as redundant Linux storage
> for qemu VMs - someone did make a way to mount it outside a VM.
>      or
> d) everyone is busy enjoying fall and has more important things on their plates
> today!  Which I can understand.
>
> It is interesting how different people come at a problem and get different
> answers. ;)
>
>
> _______________________________________________
> Ale mailing list
> Ale at ale.org
> http://mail.ale.org/mailman/listinfo/ale
> See JOBS, ANNOUNCE and SCHOOLS lists at
> http://mail.ale.org/mailman/listinfo
>
>