-
- Expert
- Posts: 231
- Liked: 18 times
- Joined: Dec 07, 2009 5:09 pm
- Full Name: Chris
- Contact:
Direct SAN Access slower than HotAdd
Hello,
I'm experimenting with using Direct SAN Access and not only are my tests not showing better performance, it's actually worse. I'm hoping I can get some clues on what's going on.
Some background information on my environment and tests:
* I have an EqualLogic PS4000X
* I'm backing up three small VMs. They are all thin-provisioned totaling ~35GB used storage.
* The proxy and B&R are on the same Server 2012 VM. Proxy allows 3 concurrent tasks.
* The proxy has 4 vCPUs and 6 GB RAM. Neither of these resources seems to be heavily utilized during the job runs (hence the moderate proxy bottleneck values seen below).
* When the proxy is set to automatically choose the best backup method, it chooses SAN, so I've had to force it to use HotAdd on some job runs. I did two job runs per method.
* HotAdd jobs averaged 89 MB/s. Direct SAN jobs averaged just 65.5 MB/s.
With regard to bottlenecks, the job is very short (~15 minutes), so it's not unreasonable that I would see big variances in the bottleneck estimates. Still, here are the stats (Source, Proxy, Network, Target) in chronological order:
1. HotAdd: 24%, 35%, 34%, 70%
2. SAN: 93%, 39%, 10%, 59%
3. SAN: 97%, 37%, 2%, 1%
4. HotAdd: 73%, 52%, 28%, 7%
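To put those averages into absolute terms, here's some rough back-of-the-envelope arithmetic on what the throughput gap means for a ~35 GB job (using only the numbers above):

used_gb = 35
for mode, rate_mb_s in [("HotAdd", 89.0), ("Direct SAN", 65.5)]:
    minutes = used_gb * 1024 / rate_mb_s / 60
    print(f"{mode}: ~{minutes:.1f} min to move {used_gb} GB")
# HotAdd:     ~6.7 min
# Direct SAN: ~9.1 min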
Any ideas here?
-- Chris
-
- VP, Product Management
- Posts: 27368
- Liked: 2799 times
- Joined: Mar 30, 2009 9:13 am
- Full Name: Vitaliy Safarov
- Contact:
Re: Direct SAN Access slower than HotAdd
Hello Chris,
That's an iSCSI Direct SAN connection, right? Can you try applying these settings to see if they help or not? BTW, do you have any MPIO software installed on your proxy server?
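For example, one rough way to see from the proxy whether the Windows MPIO feature is installed and has claimed any disks is to call mpclaim (a sketch only, assuming a Windows proxy; mpclaim.exe is only present once the MPIO feature is installed):

import subprocess

# Rough check from the Windows proxy: mpclaim ships with the MPIO feature,
# so a missing binary usually means MPIO is simply not installed.
try:
    out = subprocess.run(["mpclaim.exe", "-s", "-d"],
                         capture_output=True, text=True, check=True)
    print("MPIO is installed; claimed disks:")
    print(out.stdout)
except FileNotFoundError:
    print("mpclaim.exe not found - the MPIO feature does not appear to be installed.")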
Thank you!
-
- Product Manager
- Posts: 20389
- Liked: 2298 times
- Joined: Oct 26, 2012 3:28 pm
- Full Name: Vladimir Eremin
- Contact:
Re: Direct SAN Access slower than HotAdd
Also, I’m wondering whether the provided statistics are for full or incremental runs. Thanks.
-
- Expert
- Posts: 231
- Liked: 18 times
- Joined: Dec 07, 2009 5:09 pm
- Full Name: Chris
- Contact:
Re: Direct SAN Access slower than HotAdd
Hi Vitaliy,
Yes, iSCSI Direct SAN. I do not have MPIO installed on the proxy server. They were full jobs each run.
I'm looking over that link you provided.
The small job I described was just a test job so that I wouldn't have to wait long per run while testing. Interestingly, this new setup has run for two normal cycles on my production job and both times it chose HotAdd. Why did it choose that over Direct SAN? Asked another way, what criteria did the job not meet that forced it to choose HotAdd over Direct SAN?
-- Chris
-
- Veeam Software
- Posts: 21138
- Liked: 2141 times
- Joined: Jul 11, 2011 10:22 am
- Full Name: Alexander Fogelson
- Contact:
Re: Direct SAN Access slower than HotAdd
Generally, the following requirements should be met for Direct SAN to be used (a rough way to check them from the proxy is sketched after the list):
- Software iSCSI initiator is configured correctly.
- SAN volume can be seen by the operating system in the Windows Disk Management snap-in on the Veeam Backup server.
- Read access is allowed for the Veeam Backup server computer on the corresponding LUN.
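A rough way to sanity-check the second and third points from the proxy itself (a sketch only, using the stock wmic and iscsicli tools that ship with Windows):

import subprocess

# List the disks Windows can see - the SAN LUN should show up here once the
# iSCSI session is up (EqualLogic volumes typically report an "EQLOGIC" model).
print(subprocess.run(["wmic", "diskdrive", "get", "Model,Size,Status"],
                     capture_output=True, text=True).stdout)

# List active iSCSI sessions to confirm the software initiator is connected.
print(subprocess.run(["iscsicli", "SessionList"],
                     capture_output=True, text=True).stdout)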
-
- Expert
- Posts: 231
- Liked: 18 times
- Joined: Dec 07, 2009 5:09 pm
- Full Name: Chris
- Contact:
Re: Direct SAN Access slower than HotAdd
Strange. My system meets those requirements, and the small test job defaults to SAN, but the regular, much larger job defaults to HotAdd. I'm also getting wildly different processing rates and bottleneck values, which don't make any sense to me.
I've got two potential storage targets. The fastest is a locally attached 4-disk RAID 5 array on the host where B&R resides, presented as a datastore through vSphere. When my normal job writes to it I get ~16 MB/s; when my small test job writes to it I get >115 MB/s. The bottleneck stats show Target 77% and Target 0% respectively. How is that possible? What could cause the target to be drastically slower during a normal job than during the test job?
I think I should open a ticket so someone can take a look at all the pieces to the puzzle.
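In the meantime, one thing I can try to rule out the repository disks themselves is a crude sequential write test directly against the target volume (rough sketch; the path below is just a placeholder for my repository drive):

import os, time

path = r"E:\veeam-write-test.tmp"      # placeholder - point at the repository volume
block = b"\0" * (4 * 1024 * 1024)      # 4 MB blocks
total_mb = 2048                        # write 2 GB in total

start = time.time()
with open(path, "wb") as f:
    for _ in range(total_mb // 4):
        f.write(block)
    f.flush()
    os.fsync(f.fileno())               # make sure the data actually hits the disks
elapsed = time.time() - start
print(f"~{total_mb / elapsed:.0f} MB/s sequential write")
os.remove(path)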
-- Chris
-
- VP, Product Management
- Posts: 27368
- Liked: 2799 times
- Joined: Mar 30, 2009 9:13 am
- Full Name: Vitaliy Safarov
- Contact:
Re: Direct SAN Access slower than HotAdd
Most likely, if you force your regular job to SAN mode, it will fail, as the job definitely should prefer Direct SAN access over HotAdd when it is available. As to the bottleneck stats, can you please clarify whether you see these bottlenecks for full or incremental job passes?
-
- Service Provider
- Posts: 182
- Liked: 48 times
- Joined: Sep 03, 2012 5:28 am
- Full Name: Yizhar Hurwitz
- Contact:
Re: Direct SAN Access slower than HotAdd
Hi.
Yes, you should open a ticket for closer investigation.
I suggest contacting both Veeam and Dell for support and allowing both of them to try to help.
Additional tip:
EqualLogic uses jumbo frames by default.
Have you enabled it end to end? Are you sure?
Are you really sure?
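A quick way to verify it end to end from the proxy is a don't-fragment ping at jumbo size (rough sketch; the address is a placeholder for your EqualLogic group IP):

import subprocess

# 8972 bytes of ICMP payload + 28 bytes of headers = a full 9000-byte frame.
# If any hop in the path is not set to an MTU of 9000, this ping fails with
# "Packet needs to be fragmented but DF set".
san_ip = "192.168.100.10"   # placeholder - use your EqualLogic group/member IP
result = subprocess.run(["ping", "-f", "-l", "8972", "-n", "2", san_ip],
                        capture_output=True, text=True)
print(result.stdout)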
Yizhar
-
- Influencer
- Posts: 11
- Liked: 1 time
- Joined: Nov 03, 2014 11:09 pm
- Full Name: Ross
- Contact:
Re: Direct SAN Access slower than HotAdd
Any luck finding an answer? I'm having the exact same "issue".
-
- VP, Product Management
- Posts: 27368
- Liked: 2799 times
- Joined: Mar 30, 2009 9:13 am
- Full Name: Vitaliy Safarov
- Contact:
Re: Direct SAN Access slower than HotAdd
Hi Ross, what are your bottleneck statistics for the backup job using Direct SAN mode?
-
- Influencer
- Posts: 11
- Liked: 1 time
- Joined: Nov 03, 2014 11:09 pm
- Full Name: Ross
- Contact:
Re: Direct SAN Access slower than HotAdd
Vitaliy S. wrote: Hi Ross, what are your bottleneck statistics for the backup job using Direct SAN mode?
I've set up 2 test jobs for the exact same server.
Direct San:
Source 99%
Proxy 18%
Network 3%
Target 0%
Virtual Appliance/HotAdd (the way I have this set up, data goes through the Veeam server to the proxy via the production network, and then to the iSCSI-attached drive that is mapped to the proxy server):
Source 99%
Proxy 31%
Network 12%
Target 0%
-
- VP, Product Management
- Posts: 27368
- Liked: 2799 times
- Joined: Mar 30, 2009 9:13 am
- Full Name: Vitaliy Safarov
- Contact:
Re: Direct SAN Access slower than HotAdd
Hmm... do you have MPIO enabled for the Direct SAN connection? What are the actual performance rates for both jobs? How much is the difference?
-
- Influencer
- Posts: 11
- Liked: 1 time
- Joined: Nov 03, 2014 11:09 pm
- Full Name: Ross
- Contact:
Re: Direct SAN Access slower than HotAdd
Vitaliy S. wrote: Hmm... do you have MPIO enabled for the Direct SAN connection? What are the actual performance rates for both jobs? How much is the difference?
It's only about a 20 MB/s difference. I'm following some of these (http://cscmblog.blogspot.com/2012/11/wi ... ti_13.html) directions now. MPIO is/was not enabled. I'll let you know if I get an improvement. I would assume I should see some once I get everything set up properly.
-
- Influencer
- Posts: 11
- Liked: 1 time
- Joined: Nov 03, 2014 11:09 pm
- Full Name: Ross
- Contact:
Re: Direct SAN Access slower than HotAdd
Does everybody usually configure two iSCSI NICs on a VM proxy even though you don't need them for redundancy (since VMware takes care of that)?
-
- VP, Product Management
- Posts: 27368
- Liked: 2799 times
- Joined: Mar 30, 2009 9:13 am
- Full Name: Vitaliy Safarov
- Contact:
Re: Direct SAN Access slower than HotAdd
Cannot comment on the NICs question, unfortunately, but have you tried these tips to improve the throughput since you're using an iSCSI connection?
-
- Influencer
- Posts: 11
- Liked: 1 time
- Joined: Nov 03, 2014 11:09 pm
- Full Name: Ross
- Contact:
Re: Direct SAN Access slower than HotAdd
Vitaliy S. wrote: Cannot comment on the NICs question, unfortunately, but have you tried these tips to improve the throughput since you're using an iSCSI connection?
Yes, I have. It made no difference. Neither did adding a second NIC for iSCSI. Direct SAN is still about 10 MB/s slower.
-
- VP, Product Management
- Posts: 27368
- Liked: 2799 times
- Joined: Mar 30, 2009 9:13 am
- Full Name: Vitaliy Safarov
- Contact:
Re: Direct SAN Access slower than HotAdd
If I were you, the next thing I would do is compare the actual job sessions. If there is no difference in the job preparation steps, then it seems that Direct SAN access is, indeed, slower than HotAdd mode in your environment. This could happen for various reasons... hardware, switches, etc.