[ale] KVM Storage - Seeking advice
Phil Turmel
philip at turmel.org
Sun Apr 14 10:02:47 EDT 2019
On 4/12/19 11:04 AM, DJ-Pfulio via Ale wrote:
> Background
> ----------
> I'm setting up a new KVM VM system. It will run 10-30 VMs, but only 15
> will be core.
Sounds a lot like my office kit.
> I've been using KVM + libvirt for about a decade, so I'm fairly
> comfortable with it, but have always managed the storage allocations
> outside libvirt, as directory pools with raw img files. A few times,
> for non-data VMs, I've used QCOW2 files. I don't suspend VMs or
> snapshot for backups using libvirt. All backups are done from inside the
> VM, treating it like a physical machine. This has worked very well.
I provide each VM with one LV from the host. (Except the big DB -- it
gets a second host LV for the DB's tablespace.) My core VMs are backed
up from within, like yours. My non-core VMs are snapshotted on the host
as needed, if at all. Most of these are spun up by scripts and maintain
no critical state.
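For what it's worth, provisioning one of those LVs is a single lvcreate,
so it scripts trivially. A minimal sketch (the VG name "vg_vms", the
size, and the VM name below are placeholders, not my actual layout):

  # one-LV-per-VM provisioning sketch; assumes VG "vg_vms" already exists
  import subprocess

  def create_vm_lv(vm_name, size="40G", vg="vg_vms"):
      # lvcreate makes /dev/<vg>/<vm_name>, handed to the guest as a block device
      subprocess.run(["lvcreate", "-L", size, "-n", vm_name, vg], check=True)
      return f"/dev/{vg}/{vm_name}"

  print(create_vm_lv("web01"))

The resulting /dev/vg_vms/<name> goes straight into the guest's
<disk type='block'> definition.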
> But layering LVM outside the VM and inside the VM for backup snapshots
> always felt ... inefficient.
Nah. LVM on the host greatly simplifies the storage stack -- no
filesystem involvement on the host. That by itself is worth it, whether
your VMs use LVM inside or not.
LVM on the host also makes it possible to migrate core VMs across
storage volumes without downtime. That's been handy a few times.
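The piece doing the work there is pvmove: it relocates an LV's extents
onto another physical volume while the LV stays active, so the guest
never notices. A rough sketch, assuming the old and new devices are both
PVs in the same VG (every name below is a placeholder):

  # online storage migration sketch: move one VM's LV to a different PV
  import subprocess

  def migrate_lv(lv, src_pv, dest_pv):
      # -n restricts the move to this LV's extents; the LV's device node
      # is unchanged, so the running VM keeps its disk throughout
      subprocess.run(["pvmove", "-n", lv, src_pv, dest_pv], check=True)

  # e.g. migrate_lv("web01", "/dev/sdb1", "/dev/sdc1")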
> I want to retain the inside-the-VM backup methods. They are very
> efficient and have saved me a few times every year. Bringing up a new
> system based on those backups takes 30-45 minutes, which is reasonable.
No need to change the layers that work.
> Seeking Guidance
> ----------------
> I've been looking at using a separate LV for each VM, but don't see what
> that buys me besides a little more complexity; the VM gets block access
> to storage for slightly more performance because a filesystem isn't in
> the middle.
I understand that zero-copy operations have spread throughout much of
the block layer. Not so much in filesystems. I haven't used image
files in years, but when I switched to LVs back then the performance
difference was astonishing.
> The new storage for the VMs is SATA SSD.
> They have been on RAID1 spinning disks for about a decade. With
> backups, I'm comfortable without the redundancy on the EVO SSD.
My setup has raid1 SSDs, raid10 fast spinning rust, and raid6 bulk
spinning rust. I wouldn't be comfortable without the redundancy. More
power to you. (I guess.)
> My research this week hasn't shown much greatness in dealing with the
> extra LVs. I've read about 30 KVM+LVM articles and watched a few
> conference presentations about VM storage management.
Using LVM on the host does integrate well with virt-manager, as you can
expose your volume groups (individually) as storage pools. virt-manager
then becomes a handy GUI for LVM that cooperates with your CLI/scripted
LVM activities.
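Defining such a pool is one small XML blob per VG (virsh pool-define-as
does the same from the shell). A minimal sketch with the libvirt Python
bindings; the VG name "vg_vms" is a placeholder:

  # expose an existing volume group as a libvirt "logical" storage pool
  import libvirt

  POOL_XML = """
  <pool type='logical'>
    <name>vg_vms</name>
    <source>
      <name>vg_vms</name>
      <format type='lvm2'/>
    </source>
    <target>
      <path>/dev/vg_vms</path>
    </target>
  </pool>
  """

  conn = libvirt.open("qemu:///system")
  pool = conn.storagePoolDefineXML(POOL_XML, 0)  # persistent definition
  pool.setAutostart(1)                           # activate when libvirtd starts
  pool.create(0)                                 # activate it now
  pool.refresh(0)
  print(pool.listVolumes())                      # every LV in the VG is a volume
  conn.close()

Once the pool is active, each LV shows up as a volume in virt-manager's
storage tab, and new volumes created there are just new LVs.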
> I must be missing something. Are there other, single-node, options
> worth considering?
> Other options
> -------------
> For the last few years, I've been looking at sheepdog as a KVM storage
> backend. It is fairly high performance and does N+1 replication across
> the nodes, based on the config. I've decided NOT to replicate storage
> in the new setup. Backups have been sufficient for a long time. Only 1-2
> of the VMs are important enough that I keep a spare VM on a different
> machine ready to boot if they fail. They've never failed in all these
> years.
I know nothing about sheepdog. I'm shocked you've never suffered from
the downtime a failed disk incurs in a non-redundant setup.
Phil