-
- Enthusiast
- Posts: 58
- Liked: 9 times
- Joined: Mar 12, 2012 8:18 pm
- Contact:
Any way to get multiple NIC iSCSI SAN connections?
Do I love Veeam or what?!
So…with all of the great multi-processing of VMs in B&R v7, I'm able to fully saturate one iSCSI connection to our HP SAN, and I'm looking for any creative ideas (outside of moving to 10Gb) to increase our speed. I saw one thread talking about adding routes to the hosts file.
*Note* HP strictly warns against utilizing their DSM for MPIO on a Windows server running a backup utility accessing vmfs LUNs, so that's out.
Setup:
Physical backup server: 2 LAN 1Gb NICs / 2 SAN 1Gb NICs (Broadcom)
SAN switching: 2 Dell PowerConnect 6224s, stacked, but with no 10Gb expansion cards
SAN: 6 nodes of HP P4200 (2 G1 & 4 G2), each one has a 2Gb ALB connection from its two NICs
Thoughts? I didn't know if there might be some way to take advantage of the multiple processes and of different VMs being on different LUNs, and therefore accessed via different IP addresses (HP LeftHand assigns one node for all I/O for a LUN…we currently have 6 LUNs, so the HP CMC distributes them). Could a VM-based proxy access things faster than Direct SAN access?
Thanks!
-
- Veteran
- Posts: 1531
- Liked: 226 times
- Joined: Jul 21, 2010 9:47 am
- Full Name: Chris Dearden
- Contact:
Re: Any way to get multiple NIC iSCSI SAN connections?
Are you using LeftHand snapshots as well (if you have Ent+)?
Since we don't work directly against the live volume, I wonder if you could use MPIO in that case?
-
- Enthusiast
- Posts: 58
- Liked: 9 times
- Joined: Mar 12, 2012 8:18 pm
- Contact:
Re: Any way to get multiple NIC iSCSI SAN connections?
I am trying out the Storage Snapshot features…that part is pretty stinking cool b/c of the reduction in vSphere snap removal time!
I did see that every SAN snap has its own iSCSI connection, and they're getting distributed across multiple nodes (I'm doing a test backup job that has VMs on 3 LUNs).
It seems to me that MPIO config is all or nothing. The Windows 2008 R2 server that Veeam B&R is on only has read access on the SAN, but the concern is that when the Windows host connects to a volume, it can lock ESXi out of it.
That sounds scary!
-
- VP, Product Management
- Posts: 6035
- Liked: 2860 times
- Joined: Jun 05, 2009 12:57 pm
- Full Name: Tom Sightler
- Contact:
Re: Any way to get multiple NIC iSCSI SAN connections?
What if you just don't use their DSM and instead run the native MPIO straight up? It's simple round-robin load balancing, but in my lab setup it works great with the HP VSA, and I'm able to push >200MB/s over the dual 1Gb links.
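In case it helps, this is roughly what that looks like on 2008 R2 (assuming the MPIO feature is already installed; the hardware ID below is the standard one Windows uses for devices attached via the Microsoft iSCSI initiator, and -r lets it reboot to finish claiming):
Code: Select all
:: Tell native MPIO (MSDSM) to claim all Microsoft iSCSI initiator devices
mpclaim.exe -r -i -d "MSFT2005iSCSIBusType_0x9"
After the reboot, the iSCSI disks should show up as MPIO disks instead of one disk per path.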
-
- Enthusiast
- Posts: 58
- Liked: 9 times
- Joined: Mar 12, 2012 8:18 pm
- Contact:
Re: Any way to get multiple NIC iSCSI SAN connections?
I see where I can do that per volume...I wonder how it handles the snaps.
-
- Enthusiast
- Posts: 58
- Liked: 9 times
- Joined: Mar 12, 2012 8:18 pm
- Contact:
Re: Any way to get multiple NIC iSCSI SAN connections?
Yup, the storage snaps are connected with one NIC...and the target name changes each job.
Just to make sure I had the Windows MPIO setup correctly, I ran the job without storage snapshots enabled and it doubled my MB/sec since the source was my bottleneck and MPIO works on the main volumes.
-
- VP, Product Management
- Posts: 6035
- Liked: 2860 times
- Joined: Jun 05, 2009 12:57 pm
- Full Name: Tom Sightler
- Contact:
Re: Any way to get multiple NIC iSCSI SAN connections?
Can you clarify if it's only connected with one NIC, or it only uses one NIC because it's not defaulting to MPIO? I think we just rescan the existing SCSI target which should find all paths, and you can change the default path policy with mpclaim.exe to do RR.
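If the disks are already claimed, you can also check and flip an individual MPIO disk to round robin (the disk number here is just a placeholder; take it from the output of the first command):
Code: Select all
:: List MPIO disks and their current load-balance policies
mpclaim.exe -s -d
:: Set round robin (policy 2) on MPIO disk 0 only
mpclaim.exe -l -d 0 2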
-
- Enthusiast
- Posts: 58
- Liked: 9 times
- Joined: Mar 12, 2012 8:18 pm
- Contact:
Re: Any way to get multiple NIC iSCSI SAN connections?
Both NICs are connected to the SAN, but it doesn't default to MPIO…it seems I have to set it up per connected volume…if anyone has a better tutorial on iSCSI MPIO for Windows Server 2008 R2, I'm all ears!
Thanks, Jim.
-
- VP, Product Management
- Posts: 6035
- Liked: 2860 times
- Joined: Jun 05, 2009 12:57 pm
- Full Name: Tom Sightler
- Contact:
Re: Any way to get multiple NIC iSCSI SAN connections?
As stated in the post above, you can use mpclaim.exe to change both what's claimed by MPIO and the default MPIO load-balancing policy:
http://technet.microsoft.com/en-us/libr ... s.10).aspx
I'm not sitting in front of a system to test this right now, but I believe something like the following should do it:
Code: Select all
mpclaim.exe -L -M 2
mpclaim.exe -r -i -a ""
The first line sets the default MPIO global policy (2 = round robin), and the second tells MPIO to claim all discovered devices. What I can't remember is whether, even after these commands, new devices get claimed and enabled automatically, or whether the commands have to be run each time. Even if they do, you could probably create a task that runs them every minute or so via Task Scheduler. When I eventually get back to my lab (probably not until next week) I'll try to test this more completely.
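On the "run them every minute" idea, a scheduled task along these lines might do it (the task name is my own placeholder, and I've swapped -r for -n so mpclaim doesn't request a reboot on every run; note the escaped inner quotes):
Code: Select all
schtasks /Create /TN "MpioClaimAll" /TR "mpclaim.exe -n -i -a \"\"" /SC MINUTE /MO 1 /RU SYSTEM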
-
- Enthusiast
- Posts: 58
- Liked: 9 times
- Joined: Mar 12, 2012 8:18 pm
- Contact:
Re: Any way to get multiple NIC iSCSI SAN connections?
And just for reference on the HP DSM issue on a Windows box with access to vSphere volumes:
http://kb.vmware.com/selfservice/micros ... Id=1030129
The cool part about the HP DSM is that it actually allows multiple iSCSI connections for the same volume to different nodes so that you can actually exceed one node's bonded 2Gb speed. While splitting up VMs and balancing their workloads across multiple LUNs definitely helps (that's how I have it setup on the vSphere side…each host has 4 SAN NICs), in situations like this it's just not the same!
-
- Enthusiast
- Posts: 58
- Liked: 9 times
- Joined: Mar 12, 2012 8:18 pm
- Contact:
Re: Any way to get multiple NIC iSCSI SAN connections?
I must say I enjoy learning how all of this works together.
What I can confirm now:
My backup server currently has NO iSCSI sessions to the SAN for the production vmfs volumes because I set its permissions to NO ACCESS in the HP CMC. Veeam can still see all of the SAN infrastructure because it can talk to the HP management. When I ran the test backup job WITH Storage Snapshots enabled, the auto-connected iSCSI sessions for those storage snapshots that Veeam initiates have READ access and work!
In theory, it seems like I should be able to now use the HP DSM since I'm never touching the production LUNs. What say ye???
-
- Enthusiast
- Posts: 58
- Liked: 9 times
- Joined: Mar 12, 2012 8:18 pm
- Contact:
Re: Any way to get multiple NIC iSCSI SAN connections?
Well, I was able to get the HP DSM working on a test NTFS volume, and then, after some head-banging, got HP DSM round-robin working across both SAN NICs (for whatever reason, it auto-adds all the DSM connections on the first connection, and then on the second it puts the second NIC in as a standby).
But…however Veeam is initiating the iSCSI connections, it doesn't seem to pay any attention to the HP DSM, let alone round-robin across the 2nd NIC.
I did run the mpclaim commands and really tried to read through all that documentation.
I'm stumped!
-
- Service Provider
- Posts: 182
- Liked: 48 times
- Joined: Sep 03, 2012 5:28 am
- Full Name: Yizhar Hurwitz
- Contact:
Re: Any way to get multiple NIC iSCSI SAN connections?
Hi.
* If you really need more throughput, you can consider deploying one or more additional Veeam proxies.
That proxy will be able to use either hotadd or SAN mode, depending on whether you add a SAN connection to it.
This can add to your total bandwidth, as you will be able to distribute processing load across several servers in parallel (both the physical Veeam server and the additional virtual proxy).
However, there are some potential downsides, such as:
- Improving backup performance will put more load on the SAN storage. Even with a robust SAN, it might have an effect on other tasks.
Remember that even when you run backups at night, some tasks run in the background.
Remember that sometimes you run backup/replica jobs during working hours.
So ask yourself if the benefit of faster backups is worth more load on the SAN.
- A virtual proxy will generate load on ESXi resources, such as CPU, RAM, NIC to SAN, NIC to LAN, and the SAN itself,
while the physical server will burden only the SAN.
* You can also consider an additional PHYSICAL Windows proxy - another host with at least 4 CPU cores and access to the SAN.
This host doesn't need its own storage - it can write to the same repository as the "main" backup server.
* Regardless of the exact topic, please make sure that you have flow control enabled on all SAN switch ports.
Yizhar
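For the flow control point on those PowerConnect 62xx switches, as far as I recall it's a global config command; roughly something like the below, but double-check against Dell's CLI guide for your firmware:
Code: Select all
console> enable
console# configure
console(config)# flowcontrol
console(config)# exit
console# copy running-config startup-config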