- Veteran
- Posts: 385
- Liked: 39 times
- Joined: Oct 17, 2013 10:02 am
- Full Name: Mark
- Location: UK
Restore times question
Hi
I have a question about restore times.
I'm doing a test restore of a Linux VM from last night's incremental. The VM has an 18GB vmdk1 and a 1.4TB vmdk2.
The 18GB vmdk restored at about 70 MB/s.
The 1.4TB vmdk restored at about 180 MB/s.
Is this speed difference because the 18GB disk is spread evenly over the VBK and 5 x VIBs, so the restore has to access all of those files at the same time with random reads across the disk, hence the slow speed?
But the larger disk is mainly stored in the VBK, so the majority of the restore comes from a single file as a more sequential, faster read?
Basically, am I suffering from slow repository performance?
Thanks
Mark
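For what it's worth, the pattern Mark describes is easy to demonstrate outside of Veeam. Below is a minimal Python sketch (the file path is hypothetical; point it at any large file on the repository) that times sequential reads against random-offset reads of the same file. On spinning disks the random pattern is typically several times slower, which is roughly what restoring a disk whose blocks are scattered across a VBK and several VIBs amounts to:

import os
import random
import time

PATH = "/backups/test.vbk"   # hypothetical: any large file on the repository
BLOCK = 1024 * 1024          # 1 MiB per read
COUNT = 512                  # read 512 MiB in total

def throughput(offsets):
    """Read one BLOCK at each offset and return MB/s."""
    start = time.time()
    with open(PATH, "rb", buffering=0) as f:
        for off in offsets:
            f.seek(off)
            f.read(BLOCK)
    return len(offsets) * BLOCK / (time.time() - start) / 1e6

size = os.path.getsize(PATH)
max_off = (size - BLOCK) // BLOCK

# Sequential: consecutive blocks from the start of the file.
seq = [i * BLOCK for i in range(COUNT)]
# Random: blocks scattered across the whole file, like a fragmented chain.
rnd = [random.randint(0, max_off) * BLOCK for _ in range(COUNT)]

print(f"sequential: {throughput(seq):.0f} MB/s")
print(f"random:     {throughput(rnd):.0f} MB/s")

Note that the OS page cache can mask the difference; test against a file larger than RAM, or drop caches between runs.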
- Chief Product Officer
- Posts: 31814
- Liked: 7302 times
- Joined: Jan 01, 2006 1:01 am
- Location: Baar, Switzerland
Re: Restore times question
Hi, Mark
I would not call the above numbers "suffering", but yes, restoring that 18GB disk likely involved more random I/O relative to the 1.4TB VMDK.
Thanks!
- Veteran
- Posts: 385
- Liked: 39 times
- Joined: Oct 17, 2013 10:02 am
- Full Name: Mark
- Location: UK
Re: Restore times question
Hi Gostev
The 180MB/s went down to 100MB/s in the end - everything is on 10GbE now.
I tried the same type of restore from a RAID6 and a RAID10 repository and they ran at similar speeds. I was hoping for maybe a 50% increase on RAID10, but nada... If my primary backups were on SSD, would I see 600MB/s restores, or are there other mechanisms at play that throttle throughput when using network restores?
Cheers
Mark
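To check whether the RAID sets themselves are the bottleneck, one option is to time a large sequential read of an existing backup file directly on the repository server, outside of Veeam. fio or dd would do the same job; here is a minimal Python sketch with a hypothetical path:

import time

PATH = "/backups/job/backup.vbk"  # hypothetical: any large file on the repository
CHUNK = 8 * 1024 * 1024           # 8 MiB reads

read_bytes = 0
start = time.time()
with open(PATH, "rb", buffering=0) as f:
    while True:
        data = f.read(CHUNK)
        if not data:
            break
        read_bytes += len(data)

elapsed = time.time() - start
print(f"read {read_bytes / 1e9:.1f} GB in {elapsed:.0f} s "
      f"({read_bytes / elapsed / 1e6:.0f} MB/s)")

If the raw read comes out well above the ~100MB/s seen during restores, the repository disks are not the limiting factor and the bottleneck sits further along the chain (network, transport mode, or the target datastore).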
- VP, Product Management
- Posts: 27377
- Liked: 2800 times
- Joined: Mar 30, 2009 9:13 am
- Full Name: Vitaliy Safarov
Re: Restore times question
lando_uk wrote: If my primary backups were on SSD, would I see 600MB/s restores, or are there other mechanisms at play that throttle throughput when using network restores?
Data traffic is also throttled by the ESXi management interface. If you want to get higher performance, try to do restores via a hotadd backup proxy.
- Veteran
- Posts: 385
- Liked: 39 times
- Joined: Oct 17, 2013 10:02 am
- Full Name: Mark
- Location: UK
Re: Restore times question
Is there a link confirming that ESXi throttles its management network? I'd like to disable this if I can...
- Chief Product Officer
- Posts: 31814
- Liked: 7302 times
- Joined: Jan 01, 2006 1:01 am
- Location: Baar, Switzerland
Re: Restore times question
Unfortunately, it is impossible to disable... this is just how ESXi is designed to handle management interface traffic.
- Veteran
- Posts: 385
- Liked: 39 times
- Joined: Oct 17, 2013 10:02 am
- Full Name: Mark
- Location: UK
Re: Restore times question
Vitaliy S. wrote: Data traffic is also throttled by the ESXi management interface. If you want to get higher performance, try to do restores via a hotadd backup proxy.
It's taken a while, but I finally did some tests using the network and hotadd transports.
Outcome: mostly the same. For a 250 GB (used space) VM, it took 31 minutes to restore using network mode and 37 minutes using hotadd.
Again, I don't understand why I'm limited to approx. 1 Gbps (100MB/s) max throughput when all my connectivity is 10Gbps. When I run IOMeter from the proxy VM to the destination storage I get 800 MB/s.
Something fishy is going on.
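One more variable worth isolating here is the raw network path between the proxy and the repository. iperf3 is the usual tool for this; as a rough equivalent, here is a minimal Python sketch (host and port are hypothetical) that pushes zeros over a TCP connection and reports throughput:

import socket
import time

HOST, PORT = "repo.example.local", 5201  # hypothetical repository host/port
TOTAL = 2 * 1024**3                      # send 2 GiB
CHUNK = b"\0" * (1024 * 1024)            # 1 MiB per send

# On the repository side, start a simple sink first, e.g. with netcat:
#   nc -l -p 5201 > /dev/null

sent = 0
start = time.time()
with socket.create_connection((HOST, PORT)) as s:
    while sent < TOTAL:
        s.sendall(CHUNK)
        sent += len(CHUNK)

elapsed = time.time() - start
print(f"{sent / elapsed / 1e6:.0f} MB/s over the wire")

If this reports close to line rate while restores sit at ~100MB/s, the network itself is fine and the limit is in the transport path, e.g. NBD traffic passing through the ESXi management interface.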