Comprehensive data protection for all workloads
jraymond
Novice
Posts: 6
Liked: 3 times
Joined: Mar 08, 2019 1:31 pm
Full Name: Jay Raymond
Contact:

Direct SAN Access Networking help

Post by jraymond »

OK, this is going to be a little long.

I will try to lay this out as simply as I can. I am beyond frustrated, and I really need to get this to work.

Veeam 9.5 (U4)
Dell PowerEdge R730xd
2 x 480GB SSD drives in a RAID 1
10 x 10TB SAS drives in a RAID 5 (80TB raw; 62TB is the largest single disk supported in VMware 6.7)
VMware 6.7, latest update
All hardware/BIOS/firmware/drivers on the server updated to the latest versions

The R730xd was a VMware host we decommissioned and replaced with a new R640; we did this with all 3 hosts. The R730xd was the newest, so we re-upped the warranty for the final 3 years.
Set up the R730xd on the dual SD cards with the newest iteration of ESXi.
Ordered and added all the drives listed above.
The 480GB RAID 1 is for the single VM we are running (Server 2019) as a backup server.
The 62TB RAID 5 is for backup data storage (largest disk supported by VMware).
Server 2019 VM set up with Veeam 9.5 (U4) - intentionally not on the domain; single NIC to the LAN for management, running on VMXNET3.
VMware is all configured and working as intended; datastores from the SAN are seen by VMware with no issue. 6 NICs are configured for the iSCSI network (192.168.200.x), connected redundantly to dual iSCSI switches.

I am attempting to use the Direct SAN Access transport from Veeam.
I have followed 10+ articles about setting it up with the MS iSCSI initiator.
NONE of them describe the networking setup when the SAN is on a different subnet. The LAN is 10.10.1.x; iSCSI is 192.168.200.x.
I have configured a NIC in the server for LAN access, as one would usually do.
I have configured a second NIC for the iSCSI network with a 192.168.200.76 address; all iSCSI NICs on the host are in the .70 range. (No gateway on this NIC.)
Jumbo Packet is set on the NIC to "Jumbo 9000".
I cannot pass traffic to the iSCSI network from the server at all.
I have added a ROUTE statement for all 192.168.200.x traffic to use the 192.168.200.76 interface.
I have added (per one article) a NIC team containing just the iSCSI interface in order to configure the VLAN.
I have tested with and without VLAN 11 (the iSCSI VLAN on the Dell switches) tagged, and there is no ping across in either case.
I have adjusted the metric from 1 up to 100 with no change.
I have run a tracert and found its first hop is the WAN interface, and I cannot understand why.
I am just confused as to why every article simply says to set up the MS iSCSI initiator, search for the target, and "BAM!" you're there and can do your backups now.
The point of doing Direct SAN Access is to keep the traffic off the LAN by using the iSCSI network only. I am plugged directly into the iSCSI switches and have given the server access to that network, but I cannot see anything. I'm quite sure there is something really stupid I am missing. I can see the iSCSI network and SAN with no issue from the host, and I have quadruple-verified that the networking on the host in VMware is set to best practices.
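For reference, the checks I ran looked roughly like this (the addresses are from my setup; adjust for your network):

```shell
:: Show which routes Windows has for the iSCSI subnet
route print -4 192.168.200*

:: Add a persistent route for the iSCSI subnet via the iSCSI NIC
:: (192.168.200.76 is my iSCSI NIC's own address)
route add -p 192.168.200.0 mask 255.255.255.0 192.168.200.76

:: Force the ping/trace out of the iSCSI NIC to see where traffic actually goes
ping -S 192.168.200.76 192.168.200.70
tracert -d 192.168.200.70
```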
I have opened a ticket with Veeam but I'm really not thinking this is their issue, I was more hoping they would know how the networking needs to be set.
It's not a VMWare issue as I can see everything on the iSCSI network from there.
I figured before I call Microsoft, I would ask all the smarties in here.

Can anyone tell me what I am doing wrong? I have never had this kind of issue, and I am pulling out the little bit of hair on my head entirely too quickly.

Thank you so much!!
Jay
HannesK
Product Manager
Posts: 14316
Liked: 2890 times
Joined: Sep 01, 2014 11:46 am
Full Name: Hannes Kasparick
Location: Austria
Contact:

Re: Direct SAN Access Networking help

Post by HannesK »

Hello,
having different subnets is not the problem. Since you cannot ping, it is a "general network" issue. Before configuring any iSCSI settings, make sure that ping works.

- if you have a second network card directly in the iSCSI network, then you don't need a "route" statement, as the subnet is "directly connected"
- if you use jumbo frames: this must match. The MTU must be the same on both ends
- you don't need NIC teaming. A single network card is fine - keep it simple
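A quick way to verify the MTU matches end to end is a "don't fragment" ping with a jumbo-sized payload (8972 bytes of payload + 28 bytes of IP/ICMP headers = 9000). The addresses below are from your setup:

```shell
:: From the Windows server: send a full 9000-byte frame, no fragmentation allowed
ping -f -l 8972 192.168.200.70

:: From the ESXi host, the equivalent test is:
:: vmkping -d -s 8972 192.168.200.76
```

If this ping fails while a plain ping works, some device in the path is not set for jumbo frames.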

Best regards,
Hannes
jraymond
Novice
Posts: 6
Liked: 3 times
Joined: Mar 08, 2019 1:31 pm
Full Name: Jay Raymond
Contact:

Re: Direct SAN Access Networking help

Post by jraymond »

Thank you for your reply. Sadly, I have done all of this and still can't figure out why I can't reach that network. I guess I will just have to keep digging; I can't find any answers anywhere. The networking is sound, so it makes no sense why I can't get on that network.

Thanks!
jraymond
Novice
Posts: 6
Liked: 3 times
Joined: Mar 08, 2019 1:31 pm
Full Name: Jay Raymond
Contact:

Re: Direct SAN Access Networking help

Post by jraymond » 1 person likes this post

OK, so I found the issue. I can say it was very hard finding info on how to make this happen, but it turns out it has nothing to do with Veeam. I figured that, but I couldn't find an answer. For those searching and dealing with the same problem, hopefully this can be your saving grace!

Configure your VMware as you normally would: all your iSCSI ports on a vSwitch, with your port groups and such. However, there is one little piece you need that I missed.

You must create an additional port group. I named mine "VM iSCSI Network". You MUST add this port group to the vSwitch that has your other iSCSI port groups, VMkernel ports, and VM Network; in my case it was vSwitch0. Then you add redundant uplinks using the iSCSI vmnics. In my case I have 6 vmnics on the server set for iSCSI addresses: I used three of them for the VM iSCSI Network port group and the other three for VMware to see the SAN, basically giving me even paths.
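If you'd rather script the port group step, here is a rough sketch from the ESXi shell; the names (vSwitch0, "VM iSCSI Network", VLAN 11) are from my environment, so adjust for yours:

```shell
# Add a VM port group for guest iSCSI traffic to the existing iSCSI vSwitch
esxcli network vswitch standard portgroup add \
  --portgroup-name="VM iSCSI Network" --vswitch-name=vSwitch0

# Tag the iSCSI VLAN if your switch ports are trunked (VLAN 11 on my Dell switches)
esxcli network vswitch standard portgroup set \
  --portgroup-name="VM iSCSI Network" --vlan-id=11
```

Uplink (active/standby vmnic) assignment can then be done in the vSphere client as described above.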

On the VM in VMware, add another VMXNET3 network card. Then in the guest, configure the new NIC with an IP on the iSCSI network, the subnet mask, and no gateway. Then configure MPIO as all the other guides on the web tell you to. Lastly, configure the MS iSCSI initiator. When I went through the connections, I made sure to default each one to the NIC that resides on the iSCSI network, just to be safe; when I let it choose on its own, it seemed to use both NICs for some reason. I was looking to keep all the backup traffic off my LAN, and forcing the connections to the iSCSI NIC keeps it all on that network.
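If you prefer PowerShell over the initiator GUI, the same pinning can be sketched like this. The portal address 192.168.200.10 is a placeholder for your SAN's iSCSI portal; 192.168.200.76 is my iSCSI NIC:

```powershell
# Register the SAN's discovery portal, sourcing only from the iSCSI NIC
New-IscsiTargetPortal -TargetPortalAddress 192.168.200.10 `
    -InitiatorPortalAddress 192.168.200.76

# Connect the discovered targets persistently, with MPIO, pinned to that NIC
Get-IscsiTarget | Connect-IscsiTarget -IsPersistent $true `
    -IsMultipathEnabled $true -InitiatorPortalAddress 192.168.200.76
```

Pinning `-InitiatorPortalAddress` is what keeps the sessions off the LAN NIC.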

This is a great setup. I love the Direct SAN Access transport. I am now running about 120MB/s on my backups in Veeam. I have the server set to accept 6 tasks from Veeam. Even my biggest server, at 3.4TB, took only 7 hours and 35 minutes to run a brand new full backup. I currently have 3 jobs running and am still at 120MB/s; it's just spread across the three jobs. My bottleneck is my 6-year-old SAN: Veeam shows the source at 99%, the network at 2%, the proxy at 34%, and the target at 0%. I am very happy with this setup.

If this is something you are looking to run, it's amazing.
Next step? Setting up iLAND! So excited!!!

Jay
