I cannot get the VBR proxy to choose SAN transport mode; it always falls back to NBD.
I have been chasing this issue all day for one of my clients and I am stumped!
We have not raised a support case yet. If it turns out I have done everything needed and it is still not working, I will ask the client to open a case tomorrow morning.
Configuration
VBR 9.5 version 9.5.0.711
vSphere 5.5: ESXi hosts at build 1892794, plus a Windows-based vCenter Server 5.5
1 VBR Backup Server + Database (Virtual Machine)
2 VBR Proxy/Repository servers - both are Physical HP Servers running Windows 2012 R2 (Fully Patched)
Fibre Channel Dual Port HBA in each VBR proxy server
2 Brocade FC Switches (separate fabrics) - Single Initiator Zoning implemented and working as expected
HP MSA 2040 SAN Dual Controller - Both VBR proxy servers have been mapped/presented to the 4 VMFS LUNS (all as READ-ONLY)
MPIO feature installed on both VBR proxy servers with the correct MPIO claim string for the MSA 2040 SAN (mpclaim -n -i -d "HP MSA 2040 SAN"). Both servers were rebooted after the mpclaim string was added.
DISKPART SAN policy is Offline Shared
DISKPART Automount Disabled
DISKPART Automount Scrub
4 VMFS LUNs are seen in Windows Disk Management as Basic / Offline / Healthy / Primary Partitions
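For reference, these are the exact commands I ran on each proxy for the MPIO and DISKPART settings listed above (the claim string is the one from HP's documentation for the MSA 2040; adjust if yours differs):

```
:: Claim the MSA 2040 LUNs for Microsoft MPIO (elevated prompt, reboot afterwards)
mpclaim -n -i -d "HP MSA 2040 SAN"

:: DISKPART settings applied on both proxy servers
diskpart
DISKPART> san policy=offlineshared
DISKPART> automount disable
DISKPART> automount scrub
DISKPART> exit
```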
I created a support log bundle and found this in one of the log files:
Code:
[16.02.2017 16:40:16] <01> Info [ProxyDetector] Trying to detect source proxy modes for VM [TESTVM], forceNbd:False, has snapshots:False, disk types:[Scsi: True, Sata: False, Ide: False]
[16.02.2017 16:40:16] <01> Info [ProxyDetector] Host storage 'VMware ESXi 5.5.0 build-1892794' has 6 luns
[16.02.2017 16:40:16] <01> Info [ProxyDetector] Searching for VMFS LUN and volumes of the following datastores: ['SAS_R50_3 on MSA2040']
[16.02.2017 16:40:16] <01> Info [ProxyDetector] Datastore 'SAS_R50_3 on MSA2040' lun was found, it uses vmfs
[16.02.2017 16:40:16] <01> Info [ProxyDetector] VMFS LUNs: ['HP Fibre Channel Disk (naa.600c0ff0001b************01000000)'], NAS volumes: <no>
[16.02.2017 16:40:16] <01> Info [ProxyDetector] Detecting storage access level for proxy [vbrproxy-01.fqdn]
[16.02.2017 16:40:16] <01> Info Proxy [vbrproxy-01.fqdn] - is in the same subnet as host [VMware ESXi 5.5.0 build-1892794]
[16.02.2017 16:40:16] <01> Info [ProxyDetector] Proxy [vbrproxy-01.fqdn]: Detecting san access level
[16.02.2017 16:40:16] <01> Info [ProxyDetector] Proxy [vbrproxy-01.fqdn]: disks ['600508B1001C****************************4341','600508B1001C****************************4341']
[16.02.2017 16:40:16] <01> Info [ProxyDetector] Proxy [vbrproxy-01.fqdn]: No disks are are accessible through san but can failover to network
[16.02.2017 16:40:16] <01> Info [ProxyDetector] Detected san storage access level for proxy [vbrproxy-01.fqdn] - [SameSubnetwork]
[16.02.2017 16:40:16] <01> Info [ProxyDetector] Detected mode [nbd] for proxy [vbrproxy-01.fqdn]
What have I missed?
Is there a specific MPIO load balancing policy I need to use?
Could the spaces in the VMFS datastore names cause any issues?
The FC SAN/MPIO/presentation etc. was configured AFTER the VBR proxy role was installed; do I need to reconfigure anything? (The proxies have been running in NBD transport mode for a while.)
Can I use the vcbSanDbg.exe diagnostic with this setup (all VMFS volumes are v5), or is there an updated SAN diagnostic tool I should use instead?
Thanks
M