ctchang
Expert
Posts: 115
Liked: 1 time
Joined: Sep 15, 2010 3:12 pm

SAN Backup Speed: Is this normal?

Post by ctchang »

Environment:
PowerEdge R610, 12 GB RAM, single E5620 (2.4 GHz) quad core
EqualLogic PS6000XV (15K RPM) with 2 paths, MPIO (HIT Kit installed)
SAN mode


1. Using IOMeter to test read/write with MPIO across both links, combined NIC throughput can reach 220 MB/s in both directions.
2. Ran a test backing up a 100 GB VM via SAN mode (first run, so a full backup, with dedupe and normal compression): combined NIC throughput was only 45-50 MB/s, and CPU was about 50% across all 4 cores.

I would like to know where the bottleneck is. How can I boost the speed? I know the links can reach 220 MB/s, but 45-50 MB/s is only about a quarter of that capacity. Do I need to set up several jobs to push the limit? And why can't a single job push to the limit on its own?

Thanks.
Gostev
Chief Product Officer
Posts: 31809
Liked: 7300 times
Joined: Jan 01, 2006 1:01 am
Location: Baar, Switzerland

Re: SAN Backup Speed: Is this normal?

Post by Gostev »

ctchang wrote: I would like to know where the bottleneck is.
According to the data above, it is likely that the target storage speed is the bottleneck. Even if its spec is great on paper, I have seen many times how connectivity, controller cache settings, or hardware/firmware/driver issues can kill performance.

And the presence of MPIO is the second most likely reason, so you could try removing the multipathing software. VCB never "liked" MPIO; there is a big old topic about VCB on this forum where multiple customers with different SAN storage shared how removing MPIO improved backup performance significantly for them. And the vStorage API is based on the same code as VCB...
ctchang
Expert
Posts: 115
Liked: 1 time
Joined: Sep 15, 2010 3:12 pm

Re: SAN Backup Speed: Is this normal?

Post by ctchang »

Gostev wrote: According to the data above, it is likely that the target storage speed is the bottleneck. Even if its spec is great on paper, I have seen many times how connectivity, controller cache settings, or hardware/firmware/driver issues can kill performance.

And the presence of MPIO is the second most likely reason, so you could try removing the multipathing software. VCB never "liked" MPIO; there is a big old topic about VCB on this forum where multiple customers with different SAN storage shared how removing MPIO improved backup performance significantly for them. And the vStorage API is based on the same code as VCB...
1. It shouldn't be the target (i.e., the R610), since the same R610 server, tested with IOMeter read/write using MPIO via both links to the EQL, can reach a combined NIC speed of 220 MB/s in both directions.

2. MPIO: I wasn't using MPIO until today, and the performance is the same: single link or two 1 Gb links (MPIO), it's all around 50 MB/s.

Anton, so are you saying this is actually abnormal, and that it should reach well over 50 MB/s?

Btw, this 50 MB/s figure IS NOT the VM size divided by the time to complete the backup; it's the average speed I observed on the NICs during the backup.
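
For anyone wanting to double-check this kind of number: a minimal sketch, assuming a Windows backup server with PowerShell, of how you could sample per-NIC throughput during a backup (the counter path is standard; instance names vary per machine):

# Sample NIC receive throughput every 5 seconds for 5 minutes.
# List your exact instance names first with: Get-Counter -ListSet 'Network Interface'
Get-Counter -Counter '\Network Interface(*)\Bytes Received/sec' `
    -SampleInterval 5 -MaxSamples 60 | ForEach-Object {
        $_.CounterSamples | ForEach-Object {
            '{0}: {1:N1} MB/s' -f $_.InstanceName, ($_.CookedValue / 1MB)
        }
    }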
ctchang
Expert
Posts: 115
Liked: 1 time
Joined: Sep 15, 2010 3:12 pm

Re: SAN Backup Speed: Is this normal?

Post by ctchang »

Gostev wrote: And the presence of MPIO is the second most likely reason, so you could try removing the multipathing software. [...]
I found that post (How to improve backup speed: VCB performance in SAN mode) :D but I am still scratching my head, as I just compared today's backup result (MPIO) with yesterday's (no MPIO), and... performance-wise they are about the same.
tsightler
VP, Product Management
Posts: 6035
Liked: 2860 times
Joined: Jun 05, 2009 12:57 pm
Full Name: Tom Sightler

Re: SAN Backup Speed: Is this normal?

Post by tsightler »

What settings did you use for the MPIO testing? Was it a simple, single, read-only thread? For the results to be valid, the IOmeter settings need to be similar to the I/O pattern you will see from Veeam.
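
If it helps, here is a rough sketch of a command-line baseline that approximates a single Veeam-style stream: one sequential reader, large blocks, no caching. This uses Microsoft's diskspd tool purely as a stand-in for an equivalent IOmeter access specification, and the drive number is hypothetical:

# One thread (-t1), one outstanding I/O (-o1), 512 KB blocks (-b512K),
# 100% reads (-w0), sequential by default, caching disabled (-Sh),
# 60-second run (-d60) against physical drive 2 (pick your actual SAN LUN).
.\diskspd.exe -b512K -t1 -o1 -w0 -Sh -d60 '#2'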
ctchang
Expert
Posts: 115
Liked: 1 time
Joined: Sep 15, 2010 3:12 pm

Re: SAN Backup Speed: Is this normal?

Post by ctchang »

tsightler wrote: What settings did you use for the MPIO testing? Was it a simple, single, read-only thread? For the results to be valid, the IOmeter settings need to be similar to the I/O pattern you will see from Veeam.
It was a 100% read, 100% sequential test.

Anyway, I was able to bring the MPIO (2 paths) up to a combined 80 MB/s by running 3 full backup jobs at the same time, and the bottleneck then shifted to the CPU: all 4 cores on the E5620 (2.4 GHz) saturated at around 95%. Adding one more CPU might help, but I don't think it's necessary, as I suspect the subsequent backups won't use that much CPU anyway, even with combined jobs.

So my guess is that a single backup job (whether a first-time full or a subsequent run) somehow will not fully utilize the host resources.
Gostev
Chief Product Officer
Posts: 31809
Liked: 7300 times
Joined: Jan 01, 2006 1:01 am
Location: Baar, Switzerland

Re: SAN Backup Speed: Is this normal?

Post by Gostev »

I guess this depends on how well the SAN and HBA work together, and with the vStorage API. In my testing with an FC8 SAN (single path, no MPIO) I could definitely see a single job fully saturating all available resources. The backup server was a few-years-old single-CPU quad core, so with the default job settings (Optimal compression) the single-job full backup speed was about 55 MB/s with CPU load at 100%. Changing compression to Low (to remove the CPU bottleneck) made the full backup speed on the same VM jump to 166 MB/s, again with CPU load at 100%.

Testing was performed on a VMDK with random content, which is the worst-case scenario: it results in effectively no compression or dedupe. We test this way to remove the positive performance effects of empty, well-compressible, or repeating blocks (no good for performance testing).
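
For reference, a minimal sketch (in PowerShell, with a hypothetical path and size) of how you could generate similarly incompressible test content yourself:

# Fill a 1 GB test file with cryptographically random (incompressible) data.
$path   = 'C:\test\random.bin'   # hypothetical location
$size   = 1GB
$chunk  = 4MB
$rng    = [System.Security.Cryptography.RandomNumberGenerator]::Create()
$buffer = New-Object byte[] $chunk
$stream = [System.IO.File]::OpenWrite($path)
try {
    for ($written = 0; $written -lt $size; $written += $chunk) {
        $rng.GetBytes($buffer)           # fresh random bytes per chunk
        $stream.Write($buffer, 0, $buffer.Length)
    }
}
finally {
    $stream.Dispose()
}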
ctchang
Expert
Posts: 115
Liked: 1 time
Joined: Sep 15, 2010 3:12 pm

Re: SAN Backup Speed: Is this normal?

Post by ctchang »

Update:

I just did a quick test after seeing your reply: running 3 jobs (a full backup of a single VM in each job) concurrently WITHOUT any compression, CPU is only around 35%, and the MPIO iSCSI paths (2 NICs) were almost saturated this time at 180 MB/s.

Any conclusion from the above as to where the bottleneck could be?
Gostev
Chief Product Officer
Posts: 31809
Liked: 7300 times
Joined: Jan 01, 2006 1:01 am
Location: Baar, Switzerland

Re: SAN Backup Speed: Is this normal?

Post by Gostev »

I did not realize you have an iSCSI SAN... have you had a chance to check out the sticky FAQ section on tuning iSCSI performance?
ctchang
Expert
Posts: 115
Liked: 1 time
Joined: Sep 15, 2010 3:12 pm

Re: SAN Backup Speed: Is this normal?

Post by ctchang »

Gostev wrote: I did not realize you have an iSCSI SAN... have you had a chance to check out the sticky FAQ section on tuning iSCSI performance?
Update:

I think I got it solved: a single-job full backup now runs at 100 MB/s, compared to 45 MB/s before.

1. Enabled almost everything (I know Joe suggested disabling the Auto-Tuning Level); the matching netsh commands are sketched after this list:

TCP Global Parameters
----------------------------------------------
Receive-Side Scaling State          : enabled
Chimney Offload State               : enabled
NetDMA State                        : enabled
Direct Cache Acess (DCA)            : enabled
Receive Window Auto-Tuning Level    : normal
Add-On Congestion Control Provider  : ctcp
ECN Capability                      : enabled
RFC 1323 Timestamps                 : disabled

2. On each iSCSI NIC, enabled all the offload features, as well as Interrupt Moderation, RSS, etc., so netstat -t now shows my iSCSI NIC connections all set to "Offloaded".
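
For reference, these are the kind of netsh commands that produce the global settings shown above on Windows Server 2008/R2; treat this as a sketch to adapt rather than a recommendation, since what helps seems to vary per environment. Run from an elevated prompt and verify afterwards with netsh int tcp show global:

netsh int tcp set global rss=enabled
netsh int tcp set global chimney=enabled
netsh int tcp set global netdma=enabled
netsh int tcp set global dca=enabled
netsh int tcp set global autotuninglevel=normal
netsh int tcp set global congestionprovider=ctcp
netsh int tcp set global ecncapability=enabled
netsh int tcp set global timestamps=disabled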

Strange; it seems everyone has their own way out of this.