Host-based backup of VMware vSphere VMs.
ama
Novice
Posts: 5
Liked: never
Joined: Feb 18, 2016 2:44 pm

Direct Access Fiber Performance

Post by ama »

Hello All,

We just bought Veeam, and I'm testing it to get the best performance possible.

I set up the Direct Storage Access.

My configuration is:
Storage: Dell Compellent SC8000 x2 + 4 enclosures (64 TB) (15K Tier 1 and 7K Tier 3)
FC switches: Brocade 6505 (16 Gbps)
Veeam server: physical R720, 2x Xeon E5-2640 2.5 GHz, 64 GB RAM, dual-port Emulex LPe16002B-M6-D, Intel X540 10 GbE network adapter

I'm storing my backups on a Synology RS3614xs+ with a 10 GbE network adapter, through a Netgear 10 GbE switch.

Right now I'm getting this performance:

Code:

29-03-16 11:16:12 :: Queued for processing at 29-03-16 11:16:12 
29-03-16 11:16:13 :: Required backup infrastructure resources have been assigned 
29-03-16 11:16:18 :: VM processing started at 29-03-16 11:16:18 
29-03-16 11:16:18 :: VM size: 590,0 GB 
29-03-16 11:17:01 :: Getting VM info from vSphere 
29-03-16 11:17:08 :: Creating VM snapshot 
29-03-16 11:17:26 :: Saving [xxxxxxxxxxxxxxxx] xxxxxxxxxxxxx/xxxxxxxxxxx.vmx 
29-03-16 11:17:27 :: Saving [xxxxxxxxxxxxxxxx] xxxxxxxxxxxxx/xxxxxxxxxxx.vmxf 
29-03-16 11:17:27 :: Saving [xxxxxxxxxxxxxxxx] xxxxxxxxxxxxx/xxxxxxxxxxx.nvram 
29-03-16 11:17:28 :: Using backup proxy VMware Backup Proxy for disk Hard disk 1 [san] 
29-03-16 11:17:28 :: Using backup proxy VMware Backup Proxy for disk Hard disk 3 [san] 
29-03-16 11:17:28 :: Using backup proxy VMware Backup Proxy for disk Hard disk 5 [san] 
29-03-16 11:17:28 :: Using backup proxy VMware Backup Proxy for disk Hard disk 2 [san] 
29-03-16 11:17:28 :: Using backup proxy VMware Backup Proxy for disk Hard disk 6 [san] 
29-03-16 11:17:28 :: Using backup proxy VMware Backup Proxy for disk Hard disk 4 [san] 
29-03-16 11:17:30 :: Hard disk 4 (200,0 GB) 131,1 GB read at 179 MB/s [CBT]
29-03-16 11:17:30 :: Hard disk 6 (20,0 GB) 13,7 GB read at 84 MB/s [CBT]
29-03-16 11:17:30 :: Hard disk 2 (100,0 GB) 51,5 GB read at 100 MB/s [CBT]
29-03-16 11:17:30 :: Hard disk 1 (60,0 GB) 42,0 GB read at 119 MB/s [CBT]
29-03-16 11:17:30 :: Hard disk 5 (50,0 GB) 249,0 MB read at 144 MB/s [CBT]
29-03-16 11:17:31 :: Hard disk 3 (150,0 GB) 8,8 GB read at 117 MB/s [CBT]
29-03-16 11:18:08 :: Using backup proxy VMware Backup Proxy for disk Hard disk 7 [san] 
29-03-16 11:18:26 :: Hard disk 7 (10,0 GB) 117,0 MB read at 79 MB/s [CBT]
29-03-16 11:30:16 :: Removing VM snapshot 
29-03-16 11:31:09 :: Finalizing 
29-03-16 11:31:16 :: Swap file blocks skipped: 7,5 GB 
29-03-16 11:31:17 :: Busy: Source 92% > Proxy 57% > Network 24% > Target 24% 
29-03-16 11:31:17 :: Primary bottleneck: Source 
29-03-16 11:31:17 :: Network traffic verification detected no corrupted blocks 
29-03-16 11:31:17 :: Processing finished at 29-03-16 11:31:17 
Processing rate: 338 MB/s
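As a sanity check on the reported rate, the per-disk reads in the log can be summed and divided by the elapsed read window. A minimal sketch (the measurement window Veeam actually uses for its rate figure is an assumption here; I take first disk read at 11:17:30 through snapshot removal at 11:30:16):

```python
# Sanity-check the reported processing rate from the per-disk read figures.
# Reads taken from the job log above; GB values converted to MB (1 GB = 1024 MB).
reads_mb = [
    131.1 * 1024,  # Hard disk 4
    13.7 * 1024,   # Hard disk 6
    51.5 * 1024,   # Hard disk 2
    42.0 * 1024,   # Hard disk 1
    249.0,         # Hard disk 5 (already in MB)
    8.8 * 1024,    # Hard disk 3
    117.0,         # Hard disk 7 (already in MB)
]

# Assumed window: first disk read (11:17:30) to snapshot removal (11:30:16).
elapsed_s = (11 * 3600 + 30 * 60 + 16) - (11 * 3600 + 17 * 60 + 30)

rate = sum(reads_mb) / elapsed_s
print(f"{sum(reads_mb) / 1024:.1f} GB read in {elapsed_s} s -> {rate:.0f} MB/s")
```

This comes out at roughly 331 MB/s over 766 seconds, in the same ballpark as the 338 MB/s the job reports; the small gap is just the difference in measurement window.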

Can you please tell me what processing rate you get with Direct Storage Access? (Feel free to comment.)

Thanks
Vitaliy S.
VP, Product Management
Posts: 27055
Liked: 2710 times
Joined: Mar 30, 2009 9:13 am
Full Name: Vitaliy Safarov

Re: Direct Access Fiber Performance

Post by Vitaliy S. »

Hello,

Processing rate depends on your SAN hardware, but your current performance looks good to me. Based on the stats from the job log, your bottleneck is reported as the source, so if you want to further improve performance you should be looking at the source storage and fabric. On a side note, there is an existing topic that might be useful to read > Direct Storage Access FC - slow
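For reference, the "Busy" line in the job log works like a pipeline report: each stage (source, proxy, network, target) reports how often it was busy, and the highest percentage is flagged as the primary bottleneck. A tiny illustrative parser, using the line copied from the log above:

```python
# Identify the primary bottleneck from a Veeam "Busy" summary line:
# the stage with the highest busy percentage is the one the job waits on most.
busy_line = "Busy: Source 92% > Proxy 57% > Network 24% > Target 24%"

stages = {}
for part in busy_line.removeprefix("Busy: ").split(" > "):
    name, pct = part.split()
    stages[name] = int(pct.rstrip("%"))

bottleneck = max(stages, key=stages.get)
print(stages)      # {'Source': 92, 'Proxy': 57, 'Network': 24, 'Target': 24}
print(bottleneck)  # Source
```

Here "Source 92%" means the source read stage was busy 92% of the job's run time, which is why the log reports "Primary bottleneck: Source".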

Thank you!
Gostev
Chief Product Officer
Posts: 31460
Liked: 6648 times
Joined: Jan 01, 2006 1:01 am
Location: Baar, Switzerland
Contact:

Re: Direct Access Fiber Performance

Post by Gostev »

Looks quite decent to me as well for a single VM. The storage and fabric should be able to do more; most likely you are starting to hit VMware API limitations. Processing multiple VMs simultaneously should further increase overall backup throughput. Thanks!
ama
Novice
Posts: 5
Liked: never
Joined: Feb 18, 2016 2:44 pm

Re: Direct Access Fiber Performance

Post by ama »

Hi,

Thank you for your reply.

Are there any FC adapter parameters I can tweak to maximize performance?

My HBA is an Emulex LightPulse LPe16002B-M6-D 2-port 16 Gb Fibre Channel adapter.

There is a parameter called:

PerfMode = Performance mode: 0: Default = disabled; 1-16: enhanced performance modes

But I can't find any explanation of it in the Emulex documentation...

Anyway, I set it to 4 and got better performance, though not a big improvement. Before setting it to 16, I would like to find an explanation of what this parameter does...

Any FC expert around ?

Thanks
Vitaliy S.
VP, Product Management
Posts: 27055
Liked: 2710 times
Joined: Mar 30, 2009 9:13 am
Full Name: Vitaliy Safarov

Re: Direct Access Fiber Performance

Post by Vitaliy S. »

Maybe the Emulex community/engineers would be able to assist with this parameter. A quick search didn't return any valid results for me.
ama
Novice
Posts: 5
Liked: never
Joined: Feb 18, 2016 2:44 pm

Re: Direct Access Fiber Performance

Post by ama »

OK, thank you. Same for me: I carefully checked the Emulex website for documentation, but found nothing on this specific parameter...
Delo123
Veteran
Posts: 361
Liked: 109 times
Joined: Dec 28, 2012 5:20 pm
Full Name: Guido Meijers

Re: Direct Access Fiber Performance

Post by Delo123 »

Looking at your specs, I assume your backup server will hit physical CPU limits (with dedupe and compression enabled) if you run 2 or 3 jobs/VMs in parallel.
I think the speeds you are getting now are about all the vSphere API allows, as we also hit something around this limit on our all-flash arrays.
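The interplay of the two points in this thread (a per-stream API cap vs. shared source/CPU ceilings) can be sketched with a toy model. All the ceiling values below are illustrative assumptions, not measurements; only the ~340 MB/s per-stream figure comes from this thread's job log:

```python
# Illustrative model only: aggregate throughput of N parallel VM streams,
# each capped per-stream (e.g. by the vSphere backup API), up to shared
# source-storage and proxy-CPU ceilings. The two shared caps are hypothetical.
PER_STREAM_CAP_MBS = 340    # roughly what this thread observes per VM
SOURCE_CAP_MBS = 1200       # hypothetical array/fabric ceiling
PROXY_CPU_CAP_MBS = 900     # hypothetical compression/dedupe ceiling

def aggregate_rate(n_streams: int) -> float:
    """Throughput scales with stream count until a shared ceiling wins."""
    return min(n_streams * PER_STREAM_CAP_MBS, SOURCE_CAP_MBS, PROXY_CPU_CAP_MBS)

for n in (1, 2, 3, 4):
    print(f"{n} parallel VM(s): {aggregate_rate(n):.0f} MB/s")
```

Under these assumed numbers, going from 1 to 2 streams doubles throughput, but a third stream already runs into the proxy CPU ceiling, matching Delo123's point that the physical backup server becomes the limit with 2-3 parallel jobs.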
