lando_uk
Expert
Posts: 289
Liked: 21 times
Joined: Oct 17, 2013 10:02 am
Full Name: Mark
Location: UK
Contact:

Restore times question

Post by lando_uk » Apr 21, 2016 11:54 am

Hi

I have a question about restore times.

I'm doing a test restore of a Linux VM from last night's incremental. It has an 18 GB vmdk1 and a 1.4 TB vmdk2.

The 18 GB vmdk restored at about 70 MB/s.
The 1.4 TB vmdk restored at about 180 MB/s.

Is this speed difference because the 18 GB disk is spread fairly evenly across the VBK and 5 VIBs, so the restore has to read from all of those files at once (random reads across the disk), hence the slower speed?
Whereas the larger disk is stored mainly in the VBK, so the majority of the restore is a more sequential, faster read from a single file?
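To put rough numbers on that intuition, here is a hedged sketch. The throughput figures (200 MB/s sequential, 50 MB/s random) and the sequential fractions are illustrative assumptions, not measurements from this setup; the point is only that mixing in random reads drags the effective rate down harmonically, not linearly.

```python
# Hedged model: a disk's restore mixes sequential reads (blocks still in the
# full .vbk) with random reads (blocks scattered across incremental .vib
# files). Time per MB is the weighted sum of per-mode times, so the
# effective rate is a harmonic mix.

def effective_throughput(seq_fraction, seq_mbps=200.0, rand_mbps=50.0):
    """Effective MB/s given the fraction of data read sequentially."""
    time_per_mb = seq_fraction / seq_mbps + (1 - seq_fraction) / rand_mbps
    return 1.0 / time_per_mb

# A small, frequently-changed disk might be read mostly randomly...
small_disk = effective_throughput(seq_fraction=0.3)
# ...while a large, mostly-static disk reads almost sequentially.
large_disk = effective_throughput(seq_fraction=0.95)

print(f"mostly-random disk:  ~{small_disk:.0f} MB/s")   # ~65 MB/s
print(f"mostly-sequential:   ~{large_disk:.0f} MB/s")   # ~174 MB/s
```

With these assumed inputs the model lands in the same ballpark as the observed 70 MB/s vs 180 MB/s, which is consistent with the random-vs-sequential explanation.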

Basically, am I suffering from slow repository performance?

Thanks
Mark

Gostev
SVP, Product Management
Posts: 23846
Liked: 3207 times
Joined: Jan 01, 2006 1:01 am
Location: Baar, Switzerland
Contact:

Re: Restore times question

Post by Gostev » Apr 21, 2016 6:08 pm

Hi, Mark

I would not call the above numbers "suffering", but yes, restoring that 18 GB disk likely involved more random I/O relative to the 1.4 TB VMDK.

Thanks!

lando_uk
Expert
Posts: 289
Liked: 21 times
Joined: Oct 17, 2013 10:02 am
Full Name: Mark
Location: UK
Contact:

Re: Restore times question

Post by lando_uk » Apr 22, 2016 11:52 am

Hi Gostev

The 180 MB/s went down to 100 MB/s in the end - everything is on 10GbE now.

I tried the same type of restore from a RAID6 and a RAID10 repository, and they were similar speeds. I was hoping for maybe a 50% increase on RAID10, but nada... If my primary backups were on SSD, would I see 600 MB/s restores, or are there other mechanisms at play that throttle throughput when using network restores?

Cheers
Mark

Vitaliy S.
Product Manager
Posts: 22127
Liked: 1380 times
Joined: Mar 30, 2009 9:13 am
Full Name: Vitaliy Safarov
Contact:

Re: Restore times question

Post by Vitaliy S. » Apr 22, 2016 2:11 pm

lando_uk wrote:If my primary backups were on SSD, would I see 600MB/s restores or are there other mechanisms at play that throttle throughput when using network restores?
Data traffic is also throttled by the ESXi management interface. If you want to get higher performance, try doing restores via a hotadd backup proxy.

lando_uk
Expert
Posts: 289
Liked: 21 times
Joined: Oct 17, 2013 10:02 am
Full Name: Mark
Location: UK
Contact:

Re: Restore times question

Post by lando_uk » Apr 25, 2016 5:46 pm

Is there a link confirming that ESXi throttles its management network? I'd like to disable this if I can...

Gostev
SVP, Product Management
Posts: 23846
Liked: 3207 times
Joined: Jan 01, 2006 1:01 am
Location: Baar, Switzerland
Contact:

Re: Restore times question

Post by Gostev » Apr 26, 2016 12:22 am

Unfortunately, it is impossible to disable... this is just how ESXi is designed to handle management interface traffic.

lando_uk
Expert
Posts: 289
Liked: 21 times
Joined: Oct 17, 2013 10:02 am
Full Name: Mark
Location: UK
Contact:

Re: Restore times question

Post by lando_uk » Jun 16, 2016 4:23 pm

Vitaliy S. wrote: Data traffic is also throttled by ESXi network management interface. If you want to get higher performance, try to do restores via hotadd backup proxy.
It's taken a while, but I finally did some tests using the network and hotadd transport modes.

Outcome: mostly the same. A 250 GB (used space) VM took 31 minutes to restore using network transport and 37 minutes using hotadd.

Again, I don't understand why I'm limited to approx. 1 Gbps (~100 MB/s) throughput when all my connectivity is 10 Gbps. When I run IOMeter from the proxy VM to the destination storage I get 800 MB/s.
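A quick sanity check on those restore times (a sketch assuming GB here means 1024 MB; the figures come straight from the numbers reported above):

```python
# Effective restore throughput for the 250 GB (used space) VM,
# restored in 31 min over network transport and 37 min via hotadd.

def mb_per_s(gb, minutes):
    """Average throughput in MB/s, treating 1 GB as 1024 MB."""
    return gb * 1024 / (minutes * 60)

network = mb_per_s(250, 31)   # ~138 MB/s
hotadd  = mb_per_s(250, 37)   # ~115 MB/s
print(f"network: {network:.0f} MB/s, hotadd: {hotadd:.0f} MB/s")
```

So both transports averaged 115-140 MB/s, a long way below the 800 MB/s the storage itself can deliver, which does point at a bottleneck elsewhere in the restore path rather than the repository disks.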

Something fishy going on :?

