Comprehensive data protection for all workloads
texxx
Influencer
Posts: 12
Liked: 1 time
Joined: May 28, 2018 7:21 pm
Full Name: GM
Contact:

Restore speed using SAN connect - single disk performance

Post by texxx »

I have a physical Veeam server (Dell 730xd2) with 24 disks, ReFS, Windows Server 2019. I'm connecting this server to a Pure FA-X10R all-flash array using the Microsoft iSCSI initiator with MPIO over two 10Gb fiber links. When I perform snapshot backups the performance is spectacular - 1 GB/s processing speed is typical - but I'm running into a problem with restores.

When I restore a VM with multiple disks, the restore speed maxes out at the limit of the spinning disks (around 2,100 IOPS) and then drops to ~1,000 IOPS when it gets to the last disk. On a single-disk VM the restore speed never gets much above 1,000 IOPS, despite both the array and the Veeam server being capable of double that rate. As it happens, my largest VM (my file server) uses a single disk file, and the 50% restore speed reduction is adding nearly an hour to the restore time.

Is there any way to get Veeam to use all the available resources when it's restoring a single disk file?
ejenner
Veteran
Posts: 636
Liked: 100 times
Joined: Mar 23, 2018 4:43 pm
Full Name: EJ
Location: London
Contact:

Re: Restore speed using SAN connect - single disk performance

Post by ejenner »

Are you restoring a disk image or individual files? With individual files it will be slower.

Re: Restore speed using SAN connect - single disk performance

Post by texxx »

I'm performing a "Restore Entire VM" restore.

Re: Restore speed using SAN connect - single disk performance

Post by ejenner »

You could have a look at the mount server configuration and the vPower cache. Are you pulling one disk out of a larger backup, or have you got the system set to create individual files for each VM?

Re: Restore speed using SAN connect - single disk performance

Post by texxx »

Individual files for each VM.

Not sure what I should be looking for in the mount server and vPower settings. I can't see anything related to multi-threading or restore options when I look at the repository properties.

Re: Restore speed using SAN connect - single disk performance

Post by ejenner »

I'm outta ideas at this stage. I'm not sure if your configuration is going to use the mount server for a restore, but you'd imagine that if it does, the speed of mounting the file before restoring it is going to slow the process down.

Have a look in the History section at the bottom left of your B&R console window and select one of the problematic restores. Then review the steps being carried out to see if you can identify which part of the process is causing the slow performance. That might help.

Re: Restore speed using SAN connect - single disk performance

Post by texxx »

Here's the job log. When the job starts and both Hard disk 2 and Hard disk 1 are restoring concurrently, the throughput is actually over 300 MB/s, with around 2,100 IOPS write speed at the array. As soon as disk 1 finishes - long before disk 2 - the IOPS drop to ~1,000, and the restore speed of disk 2 never increases, despite another 1,000 IOPS now being available.

Code:

9/7/2019 9:30:24 AM          Starting restore job
9/7/2019 9:30:24 AM          Restoring from PCC Repository
9/7/2019 9:30:28 AM          Locking required backup files
9/7/2019 9:30:33 AM          Queued for processing at 9/7/2019 9:30:33 AM
9/7/2019 10:59:14 AM          Processing corp-fs1
9/7/2019 9:30:33 AM          Required backup infrastructure resources have been assigned
9/7/2019 9:30:44 AM          9 files to restore (1.1 TB)
9/7/2019 9:30:45 AM          Restoring [Restore-Test] corp-fs1_restored/corp-fs1.vmx
9/7/2019 9:30:45 AM          Restoring file corp-fs1.vmxf (3.4 KB)
9/7/2019 9:30:45 AM          Restoring file corp-fs1.nvram (264.5 KB)
9/7/2019 9:30:49 AM          Registering restored VM on host: corp-esx02, pool: Resources, folder: vm, storage: Restore-Test
9/7/2019 9:30:54 AM          No VM tags to restore
9/7/2019 9:31:10 AM          Preparing for virtual disks restore
9/7/2019 9:31:10 AM          Using proxy VMware Backup Proxy for restoring disk Hard disk 2
9/7/2019 9:31:10 AM          Using proxy VMware Backup Proxy for restoring disk Hard disk 1
9/7/2019 10:58:56 AM          Restoring Hard disk 2 (1024.0 GB) : 845.2 GB restored at 164 MB/s  [san]
9/7/2019 9:32:39 AM          Restoring Hard disk 1 (75.0 GB) : 13.6 GB restored at 160 MB/s  [san]
9/7/2019 10:59:14 AM          Restore completed successfully
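A back-of-the-envelope check on the figures above (assuming the array-side IOPS and the Veeam-reported MB/s describe the same write stream): the implied average I/O size is roughly 150 KB in both phases, so throughput here really is tracking IOPS rather than I/O size changing.

```python
# Implied average I/O size from the figures quoted in this thread.
# Assumption: array-side IOPS and Veeam-reported MB/s measure the same writes.

def avg_io_kb(mb_per_s: float, iops: float) -> float:
    """Average I/O size in KB implied by a throughput/IOPS pair."""
    return mb_per_s * 1024 / iops

both_disks = avg_io_kb(300, 2100)   # two disks restoring concurrently
single_disk = avg_io_kb(164, 1000)  # only Hard disk 2 left

print(f"concurrent: ~{both_disks:.0f} KB/IO, single disk: ~{single_disk:.0f} KB/IO")
```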

Re: Restore speed using SAN connect - single disk performance

Post by texxx »

Quick update... I found this post (vmware-vsphere-f24/ndb-faster-than-dire ... 61664.html) about direct SAN restores, with a recommendation to switch the disk type to thick eager zeroed during the restore. I tried that on my file server VM, which uses thick lazy zeroed, and it made a huge difference - three times the restore speed! The array is sitting at ~2,100 IOPS even after the first disk has fully restored, so the extra API calls made when restoring as thick lazy zeroed are clearly causing the slowdown.

Restore to thick eager zeroed disk type:

Code:

9/10/2019 11:03:11 AM          Required backup infrastructure resources have been assigned
9/10/2019 11:03:28 AM          9 files to restore (1.1 TB)
9/10/2019 11:03:28 AM          Restoring [Restore-Test] corp-fs1_restored/corp-fs1.vmx
9/10/2019 11:03:28 AM          Restoring file corp-fs1.vmxf (3.4 KB)
9/10/2019 11:03:28 AM          Restoring file corp-fs1.nvram (264.5 KB)
9/10/2019 11:03:33 AM          Registering restored VM on host: corp-esx02, pool: Resources, folder: vm, storage: Restore-Test
9/10/2019 11:03:37 AM          No VM tags to restore
9/10/2019 11:04:58 AM          Preparing for virtual disks restore
9/10/2019 11:04:59 AM          Using proxy VMware Backup Proxy for restoring disk Hard disk 2
9/10/2019 11:04:59 AM          Using proxy VMware Backup Proxy for restoring disk Hard disk 1
9/10/2019 11:05:39 AM          Restoring Hard disk 1 (75.0 GB) : 13.6 GB restored at 359 MB/s  [san]
9/10/2019 11:15:43 AM          Restoring Hard disk 2 (1024.0 GB) : 294.9 GB restored at 472 MB/s  [san]
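Assuming the rates shown hold across the whole 1024 GB Hard disk 2, the eager-zeroed restore saves over an hour on that disk alone, which lines up with the "nearly an hour" penalty mentioned earlier:

```python
# Estimated full restore time for the 1024 GB "Hard disk 2" at the two
# observed rates. Assumption: the reported MB/s is sustained end to end.

DISK_MB = 1024 * 1024  # 1024 GB expressed in MB

def restore_hours(mb_per_s: float) -> float:
    """Hours to restore the whole disk at a sustained MB/s rate."""
    return DISK_MB / mb_per_s / 3600

lazy = restore_hours(164)   # thick lazy zeroed, from the first job log
eager = restore_hours(472)  # thick eager zeroed, from the second job log

print(f"lazy: {lazy:.2f} h, eager: {eager:.2f} h, saved: {lazy - eager:.2f} h")
```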