VEEAM NetApp Source Bottleneck issues and moving forward


Re: VEEAM NetApp Source Bottleneck issues and moving forward

by Brandon0830 » Wed Jun 08, 2016 2:52 am

Yeah, that just doesn't make any sense. You're almost better off fully utilizing Snapshots/Vaulting/Mirroring and using Veeam as a restore tool (which it does really well with the integration). That's what I'm doing for my Dev/Test environment.
Brandon0830
Novice
 
Posts: 9
Liked: 1 time
Joined: Fri Feb 19, 2016 7:32 pm

Re: VEEAM NetApp Source Bottleneck issues and moving forward

by plandata_at » Wed Jun 08, 2016 2:31 pm

Hey!

We have the same issue here. We use NetApp as NFS storage for VMware. Performance is always good, but when using the NetApp as a Veeam backup source, speed drops extremely: between 75 MB/s and a maximum of 300 MB/s.
We use direct storage access from the Veeam backup proxy with NFS over 10G.
I would be glad for any tips on how to solve this issue.
plandata_at
Enthusiast
 
Posts: 66
Liked: 10 times
Joined: Tue Jan 26, 2016 2:48 pm
Full Name: Plandata Datenverarbeitungs GmbH

Re: VEEAM NetApp Source Bottleneck issues and moving forward

by orb » Wed Jun 08, 2016 3:52 pm

Brandon,

Your numbers don't add up, even if I don't entirely trust Veeam's processing numbers :)

Can you summarise how your different NetApps are organised? (RAID group size, disks per aggregate)
Did you run disk statistics on each NetApp while a backup is running? It should give you CPU usage and, more importantly, disk usage.
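
For anyone wondering which command gives those numbers: on 7-Mode that is sysstat, and clustered ONTAP has a similar view (a minimal sketch; the 5-second interval is just an example):

Code:
    # 7-Mode: one summary line every 5 seconds (CPU, net in/out, disk read/write, disk util)
    sysstat -x 5

    # clustered ONTAP equivalent, if that is what you run:
    statistics show-periodic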

Do you have the NetApp vCenter integration? Did you let the tool validate the iSCSI/NFS settings?
Stay away from CIFS repositories; use iSCSI so you can use as many paths as possible.

Use multiple repositories and group them into a Scale-Out Backup Repository to ensure all paths stay busy.
I bet your FAS2040 has GbE links (2 or 4). I manage to get 160 MB/s with 24 SATA disks in one big aggregate and multiple sessions; that is about 450 GB/h if I am right (in full active mode, no synthetic).
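
(For the unit conversion, since MB/s and GB/h are easy to mix up; the straight arithmetic for 160 MB/s comes out a bit higher than 450 GB/h:)

Code:
    160 MB/s x 3600 s/h = 576,000 MB/h ≈ 576 GB/h
    450 GB/h / 3600 s/h = 125 MB/s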

This PDF should be your bible, like any other paper written by Luca.
https://www.veeam.com/wp-veeam-backup-r ... mance.html

Oli
orb
Influencer
 
Posts: 17
Liked: 3 times
Joined: Fri Apr 01, 2016 5:36 pm
Full Name: Olivier Bonemme

Re: VEEAM NetApp Source Bottleneck issues and moving forward

by plandata_at » Tue Jun 14, 2016 1:48 pm

Hi! I have done some more investigation on the source bottleneck and found the following:
a) Our backup storage uses 8TB 10k SATA disks, so IOPS are of course limited by the disk size. I have run statistics with sysstat on the NetApp, and the disks are about 70-80% utilized while a backup is running. So this is of course a "natural" limitation.
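
(For a per-spindle view of that utilisation, to see whether a few disks are the hot spot rather than the whole aggregate, statit on 7-Mode reports a ut% column per disk; a minimal sketch:)

Code:
    priv set advanced    # statit lives in advanced privilege mode
    statit -b            # begin sampling
    # ... let the backup run for a minute or two ...
    statit -e            # end sampling and print per-disk statistics (see the ut% column)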

BUT:
b) I have talked to a friend who works a lot with NetApp, and he told me that NetApp throttles traffic for every thread accessing the storage, so that no single thread can completely block storage access.

So I tried running three jobs at the same time with the NetApp NFS volume as the backup source instead of one --> Veeam's processing numbers increased by about 2.5 - 3x!
So maybe try running several jobs at the same time, or maybe (FEATURE REQUEST) Veeam could talk to NetApp and exploit this by creating parallel threads for different VMs in one job?
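
Until something like that exists, the parallel start is easy to script with Veeam's PowerShell snap-in; a minimal sketch, assuming v9-era cmdlets, with the job names as placeholders:

Code:
    # Load the Veeam B&R PowerShell snap-in (v9 era)
    Add-PSSnapin VeeamPSSnapin

    # Kick off three jobs against the same NetApp NFS source without waiting for each to finish
    "Job-A", "Job-B", "Job-C" |
        ForEach-Object { Get-VBRJob -Name $_ | Start-VBRJob -RunAsync }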
plandata_at
Enthusiast
 
Posts: 66
Liked: 10 times
Joined: Tue Jan 26, 2016 2:48 pm
Full Name: Plandata Datenverarbeitungs GmbH

Re: VEEAM NetApp Source Bottleneck issues and moving forward

by plandata_at » Tue Jun 14, 2016 1:59 pm

Brandon0830 wrote: My jobs almost exclusively have the bottleneck listed as Source at 99%, and the processing rate is typically 20 MB/s – 100 MB/s. 100+ is pretty rare, but I've seen it before when only one job is running, for example.
Production:
-NetApp FAS 3250s with VMs on either 10K or 15K RPM SAS disks, depending on the aggregate
-NFS 3.0 datastores
-10GbE everywhere

Dev/Test:
-NetApp FAS 8020s with VMs on SATA aggregates with Flash Pool
-NFS 3.0 datastores
-10GbE everywhere


Hi Brandon!

I had overlooked your numbers until now. We have been working with NetApps for almost 8 years. I see bigger numbers than yours on small NetApps with only 12 x 10K disks in a RAID-DP aggregate, without any Flash Pool, also using Direct NFS.
If you really have 10G everywhere, there must be a misconfiguration somewhere, because your numbers are just awful! Review your configuration and check whether the backup really uses the 10G network interface (on the NetApp look with sysstat, and on your Veeam proxy with perfmon...). Also look at your RAID group sizes: how many RAID groups are in use?
And if you have a dual controller, check whether you access the NFS volume through the controller where the aggregate is active, or through the other one, in which case you are going over the internal interconnect. Take a deeper look at sysstat and the NetApp processors, and check whether compression is activated, full dedupe runs during the backup window, any QoS policies are accidentally defined, and so on. A few example checks follow below.
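
(For reference, a few of those checks on the CLI; 7-Mode syntax, and the interface name is only an example, so adjust for your system:)

Code:
    # Is the 10G interface actually carrying the backup traffic?
    sysstat -x 1                 # watch Net in/out and CPU while a backup runs
    ifstat e1a                   # per-port counters; replace e1a with your 10G port

    # Dedupe/compression active during the backup window?
    sis status                   # per-volume dedupe status (7-Mode)

    # On the Veeam proxy (Windows), sample NIC throughput:
    typeperf "\Network Interface(*)\Bytes Received/sec" -si 5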
plandata_at
Enthusiast
 
Posts: 66
Liked: 10 times
Joined: Tue Jan 26, 2016 2:48 pm
Full Name: Plandata Datenverarbeitungs GmbH
