[ale] lvconvert adding mirror speed considerations
DJ-Pfulio
djpfulio at jdpfu.com
Thu Dec 1 17:41:37 EST 2016
Which is more important? Availability or speed of migration?
The speed improvement will depend on active disk utilization - both reads and
writes matter. If that load is removed and sequential writes are possible, disks
tend to be significantly faster.
You did say TB, not PB, correct?
Why not rsync while it is live, then take the source storage offline to users
(scheduled maintenance) and do a final rsync to mirror everything? Might seem
old-school, but 7TB really isn't that much data these days. I'm syncing about
9TB right now ... this storage is mostly 1-3GB files and was seeded a few days
ago ... rsync --stats is showing between 7 and 100MB/s transfers; most are
around 75MB/s.
There are 3 source disks/partitions and 3 targets (shouldn't be a surprise to
anyone who knows me).
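The two-pass approach above looks roughly like this (a sketch only; the mount
points are made-up placeholders, and the flags assume you want permissions,
hard links, ACLs, and xattrs preserved - adjust for your environment):

```shell
# Pass 1: seed the new storage while everything is still online.
# -aHAX = archive mode plus hard links, ACLs, and extended attributes;
# --delete keeps the target an exact mirror of the source.
rsync -aHAX --delete --stats /mnt/old-array/ /mnt/new-array/

# ... scheduled maintenance window: take the source offline to users ...

# Pass 2: a short final pass that only transfers what changed since the seed.
rsync -aHAX --delete --stats /mnt/old-array/ /mnt/new-array/
```

The second pass is fast because rsync skips files whose size and mtime match,
which is why the --stats "speedup" numbers above are so large.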
Rsync Summary A:
Number of files: 20,582 (reg: 18,127, dir: 2,447, link: 8)
Number of created files: 36 (reg: 36)
Number of deleted files: 10 (reg: 8, dir: 2)
Number of regular files transferred: 38
Total file size: 3,418,476,086,884 bytes
Total transferred file size: 12,632,280,987 bytes
Literal data: 12,632,280,987 bytes
Matched data: 0 bytes
File list size: 65,534
File list generation time: 0.001 seconds
File list transfer time: 0.000 seconds
Total bytes sent: 12,636,028,642
Total bytes received: 3,965
sent 12,636,028,642 bytes received 3,965 bytes 76,350,650.19 bytes/sec
total size is 3,418,476,086,884 speedup is 270.53
Rsync Summary B:
Number of files: 123,005 (reg: 119,937, dir: 3,068)
Number of created files: 27 (reg: 26, dir: 1)
Number of deleted files: 2 (reg: 2)
Number of regular files transferred: 26
Total file size: 1,793,320,930,697 bytes
Total transferred file size: 19,303,566,059 bytes
Literal data: 19,303,566,059 bytes
Matched data: 0 bytes
File list size: 2,227,866
File list generation time: 0.018 seconds
File list transfer time: 0.000 seconds
Total bytes sent: 19,312,541,451
Total bytes received: 4,237
sent 19,312,541,451 bytes received 4,237 bytes 115,298,780.23 bytes/sec
total size is 1,793,320,930,697 speedup is 92.86
Rsync Summary C:
Number of files: 6,407 (reg: 5,665, dir: 742)
Number of created files: 11 (reg: 10, dir: 1)
Number of deleted files: 0
Number of regular files transferred: 38
Total file size: 2,070,017,397,240 bytes
Total transferred file size: 6,133,675,279 bytes
Literal data: 6,133,675,279 bytes
Matched data: 0 bytes
File list size: 0
File list generation time: 0.034 seconds
File list transfer time: 0.000 seconds
Total bytes sent: 6,135,403,631
Total bytes received: 1,569
sent 6,135,403,631 bytes received 1,569 bytes 64,245,080.63 bytes/sec
total size is 2,070,017,397,240 speedup is 337.39
total time:
real 7m7.068s
user 2m36.256s
sys 0m39.482s
These are USB3, non-enterprise disks with LVM (no RAID). Your disk arrays will
certainly be faster. I didn't take down any VMs or stop any batch processing
to aid performance.
The first rsync (the seed) with everything up will take some time. The 2nd will
most likely look like the one above, around 10 minutes, but that depends on how
much data changed in between. You can probably do some simple math to estimate
the time for the 1st rsync based on the new array's write performance and the
historical disk performance data you've captured over the last few years.
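That simple math fits in a couple of lines of shell; the 7TB size and 100MB/s
sustained rate below are illustrative assumptions, not measurements:

```shell
# Rough first-pass estimate: bytes to move / sustained write rate.
TB=7        # data to migrate, in TB (assumed)
MBPS=100    # sustained sequential write rate, in MB/s (assumed)
SECONDS_NEEDED=$(( TB * 1000 * 1000 / MBPS ))   # 7,000,000 MB / 100 MB/s
echo "$(( SECONDS_NEEDED / 3600 )) hours"        # prints: 19 hours
```

Real throughput will sag whenever the live database competes for the spindles,
so treat the result as a floor, not a promise.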
I'm not sure it will be faster and if availability is more important than a fast
changeover, I'd stick with the LVM sync/split method ... or use a sheepdog
cluster. ;)
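The sync/split method I mean looks roughly like this (a sketch assuming a
raid1-type mirror on a reasonably recent LVM; the VG, LV, and PV names are
made up, and none of this runs without real devices):

```shell
# Attach a mirror leg on the new array's PV and let it sync.
lvconvert -m1 vg0/data /dev/mapper/new-array-pv

# Watch Cpy%Sync climb to 100 before doing anything else.
lvs -o lv_name,copy_percent vg0/data

# Split the synced leg off as its own LV; the original stays available
# the whole time, and the copy can be verified before any cutover.
lvconvert --splitmirrors 1 -n data_new vg0/data
```

The appeal is availability: the source LV never goes away, and the split copy
can be mounted and checked at leisure.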
On 12/01/2016 05:04 PM, Lightner, Jeffrey wrote:
> We’re in the process of migrating from one disk array to another.
>
> The method we’re using is to zone in the new array to the same fabric(s) in
> which we have the existing servers and old array then allocate storage from the
> new array to the servers.
>
> We then use vgextend to add the new storage device PVs to the existing VG and
> also add a small device to be used for LVM mirror logging. After that we use
> lvconvert to turn on mirroring for an LV specifying the new PVs (including the
> log device).
> Once the mirror is complete we do lvconvert to turn off mirroring specifying the
> old PVs to remove (and the log device).
>
> This worked fine for a couple of smaller LVs we did yesterday. We then started
> one on a 6.3 TB LV. That has now been running for over 24 hours, and it appears
> it will complete around 10 PM tonight.
>
> I suspect this is taking a long time because it has an underlying database on
> the LV and we are doing this with that database online.
>
> We intend to do a test with a separate instance where we shut down the database
> so it is quiesced. I’m just wondering if anyone has done this kind of thing
> before and, if so, whether you saw a significant improvement when running the
> mirror creation with everything quiesced as opposed to with everything running?
>
> Alternatively does anyone know of any tricks that would help increase the speed?
>
> The above is fine for test/dev but we wouldn’t want this much downtime on our
> main Production database and aren’t sure we would want to do it online anyway
> owing to other considerations related to backup windows.
>
> Also I’m curious if anyone knows if there is a way with lvconvert to force it to
> use a specific PV device as the mlog device? So far it appears to use the
> small one we created automatically. However, since I don’t see a way to
> specify the mlog device, it isn’t clear whether it is just automatically picking
> the smallest one or not. It doesn’t appear to matter where we put the mlog
> device in the lvconvert command relative to the other PVs intended for the data
> rather than the log.
>
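For reference, the workflow Jeffrey describes maps to roughly these commands
(a sketch with placeholder VG, LV, and PV names; the real invocation depends
on your device paths and LVM version):

```shell
# 1. Add the new array's PVs plus a small log device to the existing VG.
vgextend vg0 /dev/mapper/newpv1 /dev/mapper/newpv2 /dev/mapper/logpv

# 2. Turn on mirroring with a disk-backed log, listing the new PVs
#    (and log device) so the new legs land on the new array.
lvconvert -m1 --mirrorlog disk vg0/bigdata \
    /dev/mapper/newpv1 /dev/mapper/newpv2 /dev/mapper/logpv

# 3. Once lvs shows Cpy%Sync at 100, drop the mirror, naming the OLD
#    PVs (and log device) so those are the legs that get removed.
lvconvert -m0 vg0/bigdata \
    /dev/mapper/oldpv1 /dev/mapper/oldpv2 /dev/mapper/logpv
```

As for pinning the mlog to a particular PV: lvconvert allocates from the PVs
listed on the command line, but I don't know of a documented way to say "this
listed PV is the log" - the small-PV behavior Jeffrey sees may just be the
allocator preferring the smallest fit.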