BackupTest90
Influencer
Posts: 20
Liked: 4 times
Joined: Mar 20, 2019 12:09 pm
Full Name: Martin
Contact:

SAN Restore Performance

Post by BackupTest90 »

Hey guys,
I would like to ask about your SAN restore performance.
We have two different storage arrays (3PAR and Nimble AF) and get a restore performance of only approx. 50 MB/s to 80 MB/s for an entire VM restore. It makes no difference whether the VM has one or multiple VMDKs (restore mode: thick eager zeroed). Both arrays are connected with multiple 8 Gbit/s FC connections, and the hardware proxies have one 8 Gbit/s connection per fabric.

Do you have other values? What are your experiences? In my opinion this is very slow. Is there any possibility to speed this up?
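
For scale, here is a quick back-of-the-envelope sketch of how little of one FC link that is (the ~800 MB/s usable payload per 8 Gbit/s link is an assumption based on 8b/10b encoding):

```python
# Back-of-the-envelope: observed restore rate vs. one 8 Gbit/s FC link.
# Assumption: 8b/10b encoding, so one 8 Gbit/s link carries roughly
# 800 MB/s of payload.
LINK_PAYLOAD_MBPS = 8 * 100  # ~800 MB/s usable per 8 Gbit/s FC link

for observed in (50, 80):    # MB/s, the restore rates we are seeing
    print(f"{observed} MB/s is {observed / LINK_PAYLOAD_MBPS:.0%} of one link")

# Prints:
# 50 MB/s is 6% of one link
# 80 MB/s is 10% of one link
```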

Regards,

mdiver
Service Provider
Posts: 83
Liked: 15 times
Joined: Nov 04, 2009 2:08 pm
Location: Heidelberg, Germany
Contact:

Re: SAN Restore Performance

Post by mdiver »

Hi BackupTest90.

What's your speed with an active full from the same box? I would expect almost the same rates for a full-VM recovery through the same proxy, especially on the AF.

Are you sure you're getting a SAN-based restore? Does it say so in the protocol?

You can only do SAN-based restores with thick-provisioned disks.
You also need write access to the LUNs from your proxy.
In all other cases Veeam will silently fail over to network mode (if allowed to).
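
If you want to check a whole session at once, something like this minimal sketch works; the line format ("... restored at 70 MB/s [san]"), the regex, and the file name are illustrative guesses based on the protocol lines quoted in this thread, not an official log format:

```python
import re

# Minimal sketch: scan an exported session log for the transport-mode tag.
# The line format, regex, and file name are illustrative assumptions.
PATTERN = re.compile(r"Hard disk (\d+).*?at (\d+) MB/s \[(\w+)\]")

with open("restore_session.log", encoding="utf-8") as f:
    for line in f:
        m = PATTERN.search(line)
        if m:
            disk, rate, mode = m.groups()
            note = "" if mode == "san" else "  <-- fell back from SAN?"
            print(f"Hard disk {disk}: {rate} MB/s via [{mode}]{note}")
```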

Thanks,
Mike

BackupTest90
Influencer
Posts: 20
Liked: 4 times
Joined: Mar 20, 2019 12:09 pm
Full Name: Martin
Contact:

Re: SAN Restore Performance

Post by BackupTest90 »

Hi Mike,
yes, the protocol says e.g. "Restoring Hard disk 4 (90 GB) : 58,9 GB restored at 70 MB/s [san]", and "[san]" means all prerequisites are fulfilled (thick disk, proxy access to the VMDKs, ...).

I did an active full of two VMs; the results are below:

VM1
29.07.2020 19:32:17 :: Using backup proxy XXXX for retrieving Hard disk 1 data from storage snapshot
29.07.2020 19:32:36 :: Hard disk 1 (60 GB) 38,9 GB read at 614 MB/s [CBT]

VM2
29.07.2020 19:32:19 :: Using backup proxy XXXX for retrieving Hard disk 1 data from storage snapshot
29.07.2020 19:32:36 :: Hard disk 1 (60 GB) 44,5 GB read at 513 MB/s [CBT]

I had expected similar restore times...
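
Side by side, the gap is roughly 7-9x (a quick sketch using the figures above):

```python
# Rough ratio of backup read rate to restore write rate, using the
# session figures quoted above.
backup_rates = (614, 513)   # MB/s, active full reads
restore_rate = 70           # MB/s, the [san] restore quoted earlier

for r in backup_rates:
    print(f"read {r} MB/s vs write {restore_rate} MB/s -> {r / restore_rate:.1f}x gap")

# Prints:
# read 614 MB/s vs write 70 MB/s -> 8.8x gap
# read 513 MB/s vs write 70 MB/s -> 7.3x gap
```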

Regards,

Andreas Neufert
VP, Product Management
Posts: 4368
Liked: 816 times
Joined: May 04, 2011 8:36 am
Full Name: Andreas Neufert
Location: Germany
Contact:

Re: SAN Restore Performance

Post by Andreas Neufert »

A DirectSAN restore is much slower than the backup because every write has to be coordinated through vCenter, so the vCenter connection and its performance are critical here as well.

I suggest adding a HotAdd proxy for the restore, which should give you the full throughput the storage system supports.

To reduce the overhead on vCenter, you can try adding an ESXi host by IP address to Veeam as a managed server, then restore the VM to that host. This allows the ESXi host to control the reservations for writing blocks directly and is usually faster for DirectSAN restores.

Overall, the DirectSAN restore protocol is, by VMware's design, not the fastest one; even NBD (network) mode is potentially faster.
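
To put the difference into wall-clock terms, a rough sketch (the 300 MB/s HotAdd rate is an assumed example for illustration, not a measurement):

```python
# Rough wall-clock estimate for restoring a hypothetical 500 GB VM.
# 70 MB/s is the DirectSAN rate observed in this thread; the 300 MB/s
# HotAdd rate is an assumption for illustration only.
vm_gb = 500

for mode, rate_mbps in (("DirectSAN (observed above)", 70),
                        ("HotAdd (assumed)", 300)):
    hours = vm_gb * 1024 / rate_mbps / 3600
    print(f"{mode}: {hours:.1f} h for {vm_gb} GB")

# Prints:
# DirectSAN (observed above): 2.0 h for 500 GB
# HotAdd (assumed): 0.5 h for 500 GB
```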

PetrM
Veeam Software
Posts: 563
Liked: 75 times
Joined: Aug 28, 2013 8:23 am
Full Name: Petr Makarov
Location: Prague, Czech Republic
Contact:

Re: SAN Restore Performance

Post by PetrM »

Hello,

One more idea is to ask our support team to check the logs and look for the bottleneck: maybe the restore speed is reduced by slow data reads from the repository rather than by the transport mode.

However, as a first troubleshooting step it makes sense to perform the tests suggested by Andreas.

Thanks!

BackupTest90
Influencer
Posts: 20
Liked: 4 times
Joined: Mar 20, 2019 12:09 pm
Full Name: Martin
Contact:

Re: SAN Restore Performance

Post by BackupTest90 » 2 people like this post

Oh, we identified a problem with MPIO multipathing and solved it.

These are our SAN values now:

Restoring Hard disk 1 (25 GB) : 7 GB restored at 227 MB/s [san]
Restoring Hard disk 5 (200 GB) : 199 GB restored at 113 MB/s [san]
Restoring Hard disk 2 (100 GB) : 100 GB restored at 145 MB/s [san]
Restoring Hard disk 4 (100 GB) : 99,9 GB restored at 96 MB/s [san]
Restoring Hard disk 3 (100 GB) : 99,8 GB restored at 134 MB/s [san]


The HotAdd values are below (same VMs as above, thin restore):
Restoring Hard disk 4 (25 GB) : 7 GB restored at 204 MB/s [hotadd]
Restoring Hard disk 2 (100 GB) : 100 GB restored at 140 MB/s [hotadd]
Restoring Hard disk 3 (100 GB) : 99,8 GB restored at 123 MB/s [hotadd]
Restoring Hard disk 1 (100 GB) : 99,9 GB restored at 90 MB/s [hotadd]
Restoring Hard disk 5 (200 GB) : 199 GB restored at 98 MB/s [hotadd]
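
For comparison, a size-weighted average over the two runs (a sketch that assumes the disks restore one after another, which Veeam may not actually do):

```python
# Size-weighted overall throughput for the two runs above (GB restored,
# MB/s per disk, copied from the session lines). The model assumes the
# disks restore sequentially, which is a simplification.
san    = [(7, 227), (199, 113), (100, 145), (99.9, 96), (99.8, 134)]
hotadd = [(7, 204), (100, 140), (99.8, 123), (99.9, 90), (199, 98)]

def overall(results):
    total_gb = sum(gb for gb, _ in results)
    total_seconds = sum(gb * 1024 / rate for gb, rate in results)
    return total_gb * 1024 / total_seconds

print(f"SAN:    {overall(san):.0f} MB/s overall")    # ~119 MB/s
print(f"HotAdd: {overall(hotadd):.0f} MB/s overall") # ~108 MB/s
```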

Thanks for your ideas; I will test this with a standalone ESXi host.
