[ale] Xen Server adding a virtual disk to a VM

Scott Plante splante at insightsys.com
Mon Oct 17 15:40:03 EDT 2016


Yikes! I have btrfs partitions in XenServer guests all over the place! Oh, but not for the /boot partition. 


I do remember seeing this 
http://support.citrix.com/servlet/KbServlet/download/38323-102-715588/XenServer-6.5.0_VM%20User's%20Guide.pdf (p7) 



"Note: Customers should note that the Btrfs filesystem, the default in SLES 12, is not supported by XenServer. Customers should instead select a supported filesystem such as EXT3 or EXT4 for the /boot partition." 


I took that to mean *just* the /boot partition shouldn't be btrfs. It seems a bit ambiguous because the first sentence would imply it's not supported at all, but then why would the second sentence specify just the "/boot" partition? Btrfs isn't mentioned anywhere else in the User's Guide. Were you going by the same two sentences? Do I need to avoid btrfs for any partition inside a XenServer guest VM? 
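
For what it's worth, a quick way to check what each guest actually has (this is just generic Linux tooling, nothing XenServer-specific): 

  # show every block device with its filesystem type and mountpoint 
  lsblk -f 
  # or list only the mounted btrfs filesystems 
  findmnt -t btrfs 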


Scott 

----- Original Message -----

From: "Allen Beddingfield" <allen at ua.edu> 
To: "Atlanta Linux Enthusiasts" <ale at ale.org> 
Sent: Monday, October 17, 2016 2:54:11 PM 
Subject: Re: [ale] Xen Server adding a virtual disk to a VM 

Oh, FYI - BTRFS is not supported in a XenServer guest...so ignore my second one there :D 

-- 
Allen Beddingfield 
Systems Engineer 
Office of Information Technology 
The University of Alabama 
Office 205-348-2251 
allen at ua.edu 

On 10/17/16, 1:52 PM, "ale-bounces at ale.org on behalf of Beddingfield, Allen" <ale-bounces at ale.org on behalf of allen at ua.edu> wrote: 

I usually do: 

/dev/sda1 - /boot (ext3) 
/dev/sda2 - swap 
/dev/sda3 - / (XFS) 

or, if btrfs: 

/dev/sda1 - swap 
/dev/sda2 - / (btrfs) 
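
A layout like the first one can be scripted with sfdisk if you do it often; the xvda device name and the sizes below are purely illustrative: 

  # /boot, swap, and / on an MBR-labelled disk 
  printf ',512M,L\n,2G,S\n,,L\n' | sfdisk /dev/xvda 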

-- 
Allen Beddingfield 
Systems Engineer 
Office of Information Technology 
The University of Alabama 
Office 205-348-2251 
allen at ua.edu 

On 10/17/16, 1:49 PM, "ale-bounces at ale.org on behalf of Scott Plante" <ale-bounces at ale.org on behalf of splante at insightsys.com> wrote: 


Thanks guys. This thread has been very informative. 


So you don't use LVM inside a VM, but do you partition? I've always partitioned because it's how I was taught (pre-VM), but suppose you have a Linux VM and you want to add a 200GB partition for some application. You go into your VM software, create the 
virtual disk, and attach it to the VM. Inside the VM it appears as a new device, say /dev/xvde. You could create a partition, /dev/xvde1 would appear, and you could mkfs /dev/xvde1 -- or you could skip the partitioning and just mkfs /dev/xvde. One reason you 
generally partition is for the sector alignment stuff, but (correct me if I'm wrong) that doesn't apply to a virtual disk: the alignment would be taken care of when you partition the drive inside XenServer, VMware, or whatever's running on the bare metal. 
Another reason you might normally partition a drive is to separate your OS from your data, to make sure runaway log files don't crash your database, etc., but that doesn't apply here either, because you've already created a separate virtual disk for that purpose. 
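
Concretely, the two options look something like this (sticking with /dev/xvde; ext4 is just an example filesystem choice): 

  # Option 1: partition first, then put the filesystem on the partition 
  parted -s /dev/xvde mklabel gpt mkpart data 1MiB 100% 
  mkfs.ext4 /dev/xvde1 

  # Option 2: skip partitioning and put the filesystem on the whole disk 
  mkfs.ext4 /dev/xvde 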


I asked a friend at the pub Friday night who works with lots of VMs, and he says he partitions just as a reminder to himself that he has or hasn't done something with the virtual disk. So he might go add a new disk to a half-dozen VMs, and when he goes 
into each one he can more easily tell whether he has taken care of it yet, or something like that. If I add or remove a disk once a month it's a lot, so that's not a big selling point for me. Still, I suppose it could be useful as some longer-term "documentation" 
kind of thing. 


So those of you on the list who deal with VMs: do you partition your virtual disks? 


Scott 


p.s. my recent VM experience is mostly with XenServer, so forgive me if my question and/or terminology doesn't make sense for ESXi, KVM, or other VM environments. 

________________________________________ 
From: "Phil Turmel" <philip at turmel.org> 
To: ale at ale.org 
Sent: Saturday, October 15, 2016 11:08:35 AM 
Subject: Re: [ale] Xen Server adding a virtual disk to a VM 

On 10/14/2016 05:13 PM, DJ-Pfulio wrote: 
> Ok, so fdisk was patched, but I'm still waiting for that patch to 
> actually make it into every distro I see. I keep seeing fdisk complain 
> about GPT disks - easier to just use parted, IMHO. Parted also aligns 
> partitions correctly, as does gparted; fdisk does not. If you use only 
> SSDs you might not think it matters, but on spinning disks there can be a 
> real, noticeable performance hit. 

Interesting. I've been using 'gdisk' for quite some time now. Same 
style of interface but supports GPT, plus conversions to/from MBR and 
BSD. I thought it was packaged with util-linux, but I just found out 
otherwise. 

It is part of the base install of Ubuntu Server at least since 14.04. 
It came in as a default dependency of udisks on my Gentoo systems, which 
is pulled in by a variety of things. So I assumed it was part of the 
system set. 

I like gdisk *way* more than parted. 
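
If you need it scriptable, the same gptfdisk package also ships sgdisk; something 
like this (reusing Scott's /dev/xvde example, purely for illustration) creates a 
single GPT partition covering the whole disk: 

  # partition 1, default start/end (whole disk), type 8300 = Linux filesystem 
  sgdisk -n 1:0:0 -t 1:8300 /dev/xvde 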

> GPT has many upgrades over MBR, like duplication at the front/end of the 
> storage, not only at the beginning. Plus not having to deal with 
> "logical/extended" partitions ever again is nice. Wikipedia has more. 
> 
> Inside a VM, I don't use LVM. Only outside, on the host OS. There 
> are multiple pros/cons to either method. I can understand why folks would 
> want LVM inside a VM and why they wouldn't. Do some research. 

I do the same. LVM on bare metal, not in VMs. All of my VM disks are 
LVs, not files. Virt-manager makes that easy, btw -- you can make any 
volume group in a host a "pool" for VM allocations. It was one of the 
final straws that got me off of virtualbox. 
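
For anyone doing the same with plain virsh instead of virt-manager, the idea is 
roughly this (the volume group and volume names here are made up): 

  # expose an existing volume group as a libvirt storage pool 
  virsh pool-define-as vg_guests logical --source-name vg_guests --target /dev/vg_guests 
  virsh pool-start vg_guests 
  virsh pool-autostart vg_guests 
  # carve out a 200G LV to hand to a guest as its virtual disk 
  virsh vol-create-as vg_guests guest1-data 200G 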

> Haven't touched btrfs. Seems there is always some "issue" that is 
> important to me with it. Whether that is true or not is completely 
> irrelevant. It is a hassle that I don't need. Understand many people 
> love btrfs, which is great. More users will eventually fix the issues I 
> have! Thanks! 

Yup. I played with it once. Haven't touched it since. 

> lsblk is nice. Plus, it doesn't need sudo to work (at least not on any 
> systems I manage). 

I wrote lsdrv[1] because I didn't like the way lsblk repeated trees when 
raid arrays were present, and I wanted something that would document 
controller ports, device SNs, and UUIDs for later recovery tasks. 
Basically lsblk + blkid + lspci + lsusb in one report. 
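
For comparison, plain lsblk can get part of the way there with something 
like this (those are all standard lsblk output columns): 

  lsblk -o NAME,MODEL,SERIAL,SIZE,FSTYPE,UUID,MOUNTPOINT 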

Phil 

[1] https://github.com/pturmel/lsdrv 

_______________________________________________ 
Ale mailing list 
Ale at ale.org 
http://mail.ale.org/mailman/listinfo/ale 
See JOBS, ANNOUNCE and SCHOOLS lists at 
http://mail.ale.org/mailman/listinfo 
