Host-based backup of VMware vSphere VMs.
BackupTest90
Influencer
Posts: 21
Liked: 4 times
Joined: Mar 20, 2019 12:09 pm
Full Name: Martin
Contact:

SAN Restore Performance

Post by BackupTest90 »

Hey guys,
I'd like to ask about your SAN restore performance.
We have two different storage arrays (3PAR and Nimble AF) and only get a restore performance of approx. 50 MB/s to 80 MB/s for an entire VM restore. It makes no difference whether the VM has one or multiple VMDKs (restore mode: thick eager zeroed). Both arrays are connected with multiple 8 Gbit/s FC links, and the HW proxies have one 8 Gbit/s connection per fabric.
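For reference, a rough back-of-the-envelope check of how far below the link limit we are (just my own arithmetic, assuming roughly 800 MB/s usable per 8 Gbit/s FC link after 8b/10b encoding):

Code: Select all
# Rough check: observed restore speed vs. one 8 Gbit/s FC link.
# Assumes ~800 MB/s usable per link (8b/10b: ~10 bits on the wire per byte).
usable_mb_s = 8 * 1000 / 10  # ~800 MB/s

for observed in (50, 80):
    print(f"{observed} MB/s is {observed / usable_mb_s:.0%} of a single link")
# -> 50 MB/s is 6% of a single link
# -> 80 MB/s is 10% of a single link

So even a single FC path is barely utilized.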

Do you see other values? What is your experience? In my opinion this is very slow. Is there any possibility to speed this up?

Regards,
mdiver
Veeam Legend
Posts: 229
Liked: 37 times
Joined: Nov 04, 2009 2:08 pm
Contact:

Re: SAN Restore Performance

Post by mdiver »

Hi BackupTest90.

What's your speed with an active full from the same box? I would expect almost the same rates for a full-VM restore through the same proxy, especially on the AF.

Are you sure you're getting a SAN-based restore? Does the session log say so?

You can only do SAN-based restores with thick-provisioned disks.
You also need write access to the LUNs via your proxy.
In all other cases Veeam will silently fail over to network mode (if allowed to).
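If you want to verify this across a whole session, a quick sketch like the following counts the transport tags in the per-disk statistics lines (the file name is a placeholder, and I'm assuming lines in the form "... restored at 70 MB/s [san]"):

Code: Select all
import re

# Count transport modes in per-disk lines such as:
# "Restoring Hard disk 4 (90 GB) : 58,9 GB restored at 70 MB/s [san]"
def transport_modes(log_text: str) -> dict:
    tags = re.findall(r"restored at [\d.,]+ .B/s \[(\w+)\]", log_text)
    counts = {}
    for tag in tags:
        counts[tag] = counts.get(tag, 0) + 1
    return counts

with open("restore_session.log") as f:  # placeholder path
    print(transport_modes(f.read()))    # e.g. {'san': 4, 'nbd': 1}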

Thanks,
Mike
BackupTest90
Influencer
Posts: 21
Liked: 4 times
Joined: Mar 20, 2019 12:09 pm
Full Name: Martin
Contact:

Re: SAN Restore Performance

Post by BackupTest90 »

Hi Mike,
yes, the session log says e.g. "Restoring Hard disk 4 (90 GB) : 58,9 GB restored at 70 MB/s [san]", and "[san]" means all prerequisites are fulfilled (thick disks, proxy access to the VMDKs, ...).

I did an active full of two VMs; the results are below:

VM1
29.07.2020 19:32:17 :: Using backup proxy XXXX for retrieving Hard disk 1 data from storage snapshot
29.07.2020 19:32:36 :: Hard disk 1 (60 GB) 38,9 GB read at 614 MB/s [CBT]

VM2
29.07.2020 19:32:19 :: Using backup proxy XXXX for retrieving Hard disk 1 data from storage snapshot
29.07.2020 19:32:36 :: Hard disk 1 (60 GB) 44,5 GB read at 513 MB/s [CBT]

I had expected similar restore speeds, but the restore writes at around 70 MB/s while the backup reads at 500-600 MB/s from the same array...

Regards,
Andreas Neufert
VP, Product Management
Posts: 7077
Liked: 1510 times
Joined: May 04, 2011 8:36 am
Full Name: Andreas Neufert
Location: Germany
Contact:

Re: SAN Restore Performance

Post by Andreas Neufert »

Direct SAN restore is much slower than backup because every write has to be coordinated through vCenter, so vCenter connectivity and performance are critical here as well.

I suggest adding a HotAdd proxy for restores, which should give you the full throughput the storage system supports.

To reduce the overhead on vCenter, you can try adding an ESXi host by IP address to Veeam as a managed server and then restoring the VM to that host. This allows the ESXi host to control the reservations for writing blocks directly and is usually faster for Direct SAN restores.

Overall, the Direct SAN restore protocol is by VMware's design not the fastest one; even NBD (network) mode is potentially faster.
PetrM
Veeam Software
Posts: 3624
Liked: 608 times
Joined: Aug 28, 2013 8:23 am
Full Name: Petr Makarov
Location: Prague, Czech Republic
Contact:

Re: SAN Restore Performance

Post by PetrM »

Hello,

One more idea is to ask our support team to check the logs and look for the bottleneck; perhaps the restore speed is reduced by slow data reads from the repository rather than by the transport mode.

However, as a first troubleshooting step it makes sense to perform the tests suggested by Andreas.

Thanks!
BackupTest90
Influencer
Posts: 21
Liked: 4 times
Joined: Mar 20, 2019 12:09 pm
Full Name: Martin
Contact:

Re: SAN Restore Performance

Post by BackupTest90 » 2 people like this post

Oh, we identified a problem with MPIO multipathing and solved it.

These are our SAN values now:

Restoring Hard disk 1 (25 GB) : 7 GB restored at 227 MB/s [san]
Restoring Hard disk 5 (200 GB) : 199 GB restored at 113 MB/s [san]
Restoring Hard disk 2 (100 GB) : 100 GB restored at 145 MB/s [san]
Restoring Hard disk 4 (100 GB) : 99,9 GB restored at 96 MB/s [san]
Restoring Hard disk 3 (100 GB) : 99,8 GB restored at 134 MB/s [san]


The HotAdd values are below (same VMs as above, thin restore):
Restoring Hard disk 4 (25 GB) : 7 GB restored at 204 MB/s [hotadd]
Restoring Hard disk 2 (100 GB) : 100 GB restored at 140 MB/s [hotadd]
Restoring Hard disk 3 (100 GB) : 99,8 GB restored at 123 MB/s [hotadd]
Restoring Hard disk 1 (100 GB) : 99,9 GB restored at 90 MB/s [hotadd]
Restoring Hard disk 5 (200 GB) : 199 GB restored at 98 MB/s [hotadd]
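
For an overall comparison, a quick script to turn the per-disk numbers above into one weighted average per mode (my own arithmetic; it assumes the disks restore one after another, which ignores any parallelism):

Code: Select all
# (GB restored, MB/s) pairs from the results quoted above.
san    = [(7, 227), (199, 113), (100, 145), (99.9, 96), (99.8, 134)]
hotadd = [(7, 204), (100, 140), (99.8, 123), (99.9, 90), (199, 98)]

def weighted_avg_mb_s(disks):
    total_mb = sum(gb * 1024 for gb, _ in disks)
    total_s = sum(gb * 1024 / mbps for gb, mbps in disks)  # serial assumption
    return total_mb / total_s

print(f"san:    {weighted_avg_mb_s(san):.0f} MB/s")     # ~119 MB/s
print(f"hotadd: {weighted_avg_mb_s(hotadd):.0f} MB/s")  # ~108 MB/s

So SAN mode now comes out slightly ahead of HotAdd overall.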

Thanks for your ideas; I will test this with a standalone ESXi host.
lvmusso
Lurker
Posts: 1
Liked: never
Joined: Jan 20, 2021 3:30 pm
Full Name: Leandro
Contact:

Re: SAN Restore Performance

Post by lvmusso »

Hi Martin, could you please clarify what you meant by "we identified a problem with MPIO multipathing and solved it"? We're facing the same speed problem, and MPIO could be the issue.

Thx!
Serger
Novice
Posts: 3
Liked: never
Joined: Jun 24, 2020 5:08 am
Full Name: Sergey
Contact:

Re: SAN Restore Performance

Post by Serger »

BackupTest90 wrote: Aug 03, 2020 8:53 am Oh, we identified a problem with MPIO multipathing and solved it.
Hello colleague!
Your restore speed is looking good now, even better than HotAdd :)
Can you please tell us whether you are using Nimble storage for the backup repository, and what kind of multipathing issue you had?

We are having a similar issue with restores in SAN mode to 3PAR from a StoreOnce Catalyst store. We have two identical infrastructures: in one of them we reach restore speeds of about 100-200 MB/s per VMDK, but in the other it is only about 50-80 MB/s per disk. All hardware and software are the same, but maybe we have missed some tuning options...

Thank you!
PetrM
Veeam Software
Posts: 3624
Liked: 608 times
Joined: Aug 28, 2013 8:23 am
Full Name: Petr Makarov
Location: Prague, Czech Republic
Contact:

Re: SAN Restore Performance

Post by PetrM » 1 person likes this post

Hi Sergey,

I'd suggest raising a support case as well so that our engineers can examine this performance issue a bit deeper. Perhaps the problem is not related to the transport mode in which the proxy writes data to the target datastore, but is caused by slow reads from the backup repository or slow data transmission from the repository to the target proxy. Please don't forget to share the support case ID with us.

Thanks!
foggy
Veeam Software
Posts: 21138
Liked: 2141 times
Joined: Jul 11, 2011 10:22 am
Full Name: Alexander Fogelson
Contact:

Re: SAN Restore Performance

Post by foggy »

It is also worth testing exactly the same restore operation, say a full VM restore of the same VM backed up to the two different targets. That way you'll get an apples-to-apples restore performance comparison, since the actual data workload does matter.
