Direct Access Fiber Performance

by ama » Tue Mar 29, 2016 9:45 am

Hello All,

We just bought Veeam and I'm testing it to get the best performance possible.

I set up Direct Storage Access.

My configuration is:
Storage: Dell Compellent SC8000 x2 + 4 enclosures (64 TB) (15K Tier 1 and 7K Tier 3)
FC switches: Brocade 6505 16Gb
Veeam server: physical R720, 2x Xeon E5-2640 2.5 GHz + 64 GB RAM + dual-port Emulex LPe16002B-M6-D + Intel X540 10Gb network adapter

I'm storing my backups on a Synology RS3614xs+ with a 10Gb network adapter, through a Netgear 10Gb switch.

Right now I'm getting this performance:

Code:
29-03-16 11:16:12 :: Queued for processing at 29-03-16 11:16:12
29-03-16 11:16:13 :: Required backup infrastructure resources have been assigned
29-03-16 11:16:18 :: VM processing started at 29-03-16 11:16:18
29-03-16 11:16:18 :: VM size: 590,0 GB
29-03-16 11:17:01 :: Getting VM info from vSphere
29-03-16 11:17:08 :: Creating VM snapshot
29-03-16 11:17:26 :: Saving [xxxxxxxxxxxxxxxx] xxxxxxxxxxxxx/xxxxxxxxxxx.vmx
29-03-16 11:17:27 :: Saving [xxxxxxxxxxxxxxxx] xxxxxxxxxxxxx/xxxxxxxxxxx.vmxf
29-03-16 11:17:27 :: Saving [xxxxxxxxxxxxxxxx] xxxxxxxxxxxxx/xxxxxxxxxxx.nvram
29-03-16 11:17:28 :: Using backup proxy VMware Backup Proxy for disk Hard disk 1 [san]
29-03-16 11:17:28 :: Using backup proxy VMware Backup Proxy for disk Hard disk 3 [san]
29-03-16 11:17:28 :: Using backup proxy VMware Backup Proxy for disk Hard disk 5 [san]
29-03-16 11:17:28 :: Using backup proxy VMware Backup Proxy for disk Hard disk 2 [san]
29-03-16 11:17:28 :: Using backup proxy VMware Backup Proxy for disk Hard disk 6 [san]
29-03-16 11:17:28 :: Using backup proxy VMware Backup Proxy for disk Hard disk 4 [san]
29-03-16 11:17:30 :: Hard disk 4 (200,0 GB) 131,1 GB read at 179 MB/s [CBT]
29-03-16 11:17:30 :: Hard disk 6 (20,0 GB) 13,7 GB read at 84 MB/s [CBT]
29-03-16 11:17:30 :: Hard disk 2 (100,0 GB) 51,5 GB read at 100 MB/s [CBT]
29-03-16 11:17:30 :: Hard disk 1 (60,0 GB) 42,0 GB read at 119 MB/s [CBT]
29-03-16 11:17:30 :: Hard disk 5 (50,0 GB) 249,0 MB read at 144 MB/s [CBT]
29-03-16 11:17:31 :: Hard disk 3 (150,0 GB) 8,8 GB read at 117 MB/s [CBT]
29-03-16 11:18:08 :: Using backup proxy VMware Backup Proxy for disk Hard disk 7 [san]
29-03-16 11:18:26 :: Hard disk 7 (10,0 GB) 117,0 MB read at 79 MB/s [CBT]
29-03-16 11:30:16 :: Removing VM snapshot
29-03-16 11:31:09 :: Finalizing
29-03-16 11:31:16 :: Swap file blocks skipped: 7,5 GB
29-03-16 11:31:17 :: Busy: Source 92% > Proxy 57% > Network 24% > Target 24%
29-03-16 11:31:17 :: Primary bottleneck: Source
29-03-16 11:31:17 :: Network traffic verification detected no corrupted blocks
29-03-16 11:31:17 :: Processing finished at 29-03-16 11:31:17

Processing rate: 338 MB/s
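The bottleneck line in the log can be read like this: the job reports how busy each stage of the data pipeline was, and the busiest stage is flagged as the primary bottleneck. A minimal sketch of that logic (illustrative only, not Veeam's actual implementation):

```python
# Busy percentages as reported in the job log above.
# Source = reading from the SAN, Proxy = processing on the backup proxy,
# Network = transfer to the repository, Target = writing to the repository.
busy = {"Source": 92, "Proxy": 57, "Network": 24, "Target": 24}

def primary_bottleneck(stats):
    """Return the pipeline stage with the highest busy percentage."""
    return max(stats, key=stats.get)

print(primary_bottleneck(busy))  # -> Source
```

Since Source sits at 92%, further gains would have to come from the storage/fabric side rather than the repository.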

Can you please tell me what processing rate you get with Direct Storage Access? (Feel free to comment.)

Thanks
ama
Novice
 
Posts: 5
Liked: never
Joined: Thu Feb 18, 2016 2:44 pm

Re: Direct Access Fiber Performance

by Vitaliy S. » Tue Mar 29, 2016 12:15 pm

Hello,

Processing rate depends on your SAN hardware, but your current performance looks good to me. Based on the stats from the job log, your bottleneck is reported as source, so if you want to further improve performance you should be looking at the source storage and fabric. On a side note, there is an existing topic that might be useful to read: Direct Storage Access FC - slow

Thank you!
Vitaliy S.
Veeam Software
 
Posts: 19539
Liked: 1097 times
Joined: Mon Mar 30, 2009 9:13 am
Full Name: Vitaliy Safarov

Re: Direct Access Fiber Performance

by Gostev » Tue Mar 29, 2016 5:04 pm

Looks quite decent to me as well for a single VM. The storage and fabric should be able to do more; most likely you are starting to hit VMware API limitations. Processing multiple VMs simultaneously should further increase overall backup throughput. Thanks!
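A rough back-of-the-envelope for why parallel processing helps (a hedged sketch: the 338 MB/s per-VM figure is taken from the job log above, and the 1600 MB/s ceiling is an assumption based on a single 16Gb FC port, which tops out around 1600 MB/s):

```python
def aggregate_throughput(n_vms, per_vm_mbps=338, fabric_limit_mbps=1600):
    """Estimate total backup throughput: per-VM reads are capped (e.g. by
    VMware API overhead), so running VMs in parallel scales throughput
    until the shared FC fabric saturates."""
    return min(n_vms * per_vm_mbps, fabric_limit_mbps)

for n in (1, 2, 4, 8):
    print(f"{n} VM(s): ~{aggregate_throughput(n)} MB/s")
```

In this model, throughput scales almost linearly up to 4-5 concurrent VMs, after which the fabric (or the proxy) becomes the new bottleneck.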
Gostev
Veeam Software
 
Posts: 21385
Liked: 2348 times
Joined: Sun Jan 01, 2006 1:01 am
Location: Baar, Switzerland

Re: Direct Access Fiber Performance

by ama » Wed Mar 30, 2016 6:40 am

Hi,

Thank you for your reply.

Are there any FC adapter parameters I can tweak to maximize performance?

Mine is an Emulex LightPulse LPe16002B-M6-D 2-port 16Gb Fibre Channel adapter.

There is a parameter called:

PerfMode = Performance mode: 0: Default = disabled: 1-16: enhanced performance modes

But I can't find any explanation in the Emulex documentation...

Anyway, I set it to 4 and got better performance, though not a big improvement. Before setting it to 16 I would like to find an explanation of what this parameter does...

Any FC experts around?

Thanks
ama

Re: Direct Access Fiber Performance

by Vitaliy S. » Wed Mar 30, 2016 12:55 pm

Maybe the Emulex community/engineers would be able to assist with this parameter. A quick search didn't return any valid results for me.
Vitaliy S.

Re: Direct Access Fiber Performance

by ama » Fri Apr 01, 2016 8:57 am

OK, thank you. Same here: I carefully searched the Emulex website for documentation but found nothing on this specific parameter...
ama

Re: Direct Access Fiber Performance

by Delo123 » Fri Apr 08, 2016 12:26 pm

Looking at your specs, I assume your backup server will hit physical CPU limits (with dedupe and compression enabled) if you run 2 or 3 jobs/VMs in parallel.
I think the speeds you are getting now are about all the vSphere API allows, as we also hit something around this limit on our all-flash arrays.
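One way to sanity-check the CPU side before scaling up parallel jobs is to measure how much compression throughput a single core can sustain. A rough sketch (zlib here is only a stand-in for Veeam's own compressor, so absolute numbers will differ):

```python
import os
import time
import zlib

# 32 MB of pseudo-random data as a worst case for the compressor.
data = os.urandom(32 * 1024 * 1024)

start = time.perf_counter()
compressed = zlib.compress(data, 1)  # level 1 = fastest setting
elapsed = time.perf_counter() - start

print(f"single-core compression: ~{len(data) / elapsed / 1e6:.0f} MB/s")
```

If the per-core figure multiplied by the number of cores available to the proxy falls below the source read rate, the bottleneck line in the job log would shift from Source to Proxy.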
Delo123
Expert
 
Posts: 348
Liked: 94 times
Joined: Fri Dec 28, 2012 5:20 pm
Full Name: Guido Meijers

