I have to agree with Eric. Using ports from both VC modules in the same shared uplink set will result in an Active/Passive configuration. We have two shared uplink sets, one for each VC module.
We are also running 3PAR OS 3.2.1 (MU3), along with Patches 17, 18 and 21.
Again, I agree with Eric that the HP Virtual Connect Cookbook is a great source of information. I used it extensively when designing and building our implementation to ensure an active/active configuration with maximum throughput and redundancy.
Looking at your fibre diagram, a few things stand out:
1. You have two links from the 3PAR to each fibre switch, so at a minimum your host should see 4 paths to the 3PAR, yet your earlier screenshot only shows 2. How are the zones configured on the fibre switches?
2. If you have 4 Nodes (0, 1, 2 and 3) you should have one uplink from each node to each of the fibre switches. So you'd have 4 uplinks to Fibre Switch 1, and 4 to Fibre Switch 2.
3. Best practice for 3PAR fabric zoning is not to create a single alias for the 3PAR and add all controller WWNs to it, but to create a separate alias for each controller WWN, then create a zone that includes your host WWN alias plus the four 3PAR controller aliases. So you end up with:
Fibre/Fabric Switch 1
Aliases:
Blade Server Host - Host HBA Port 1 WWN
3PAR Node 0 - Controller Node 0 HBA 0 WWN
3PAR Node 1 - Controller Node 1 HBA 0 WWN
3PAR Node 2 - Controller Node 2 HBA 0 WWN
3PAR Node 3 - Controller Node 3 HBA 0 WWN
Zone:
Host alias plus the four 3PAR node aliases
Repeat this on the second fibre switch, but with the second HBA port on the host server and the second HBA on each 3PAR controller node. This is how you end up with 4 paths to the 3PAR via each HBA port on your blade server, so your blade should see 8 paths in total.
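If your fabric switches happen to be Brocade-based, the zoning above can be sketched with Fabric OS commands roughly as follows. This is only an illustration: the alias names, zone name, config name and WWNs are all placeholders you'd substitute with your own, and the commands differ on other switch vendors.

```
# Fabric Switch 1 - one alias per WWN (placeholder names and WWNs)
alicreate "Blade1_HBA1", "10:00:00:00:c9:00:00:01"
alicreate "3PAR_N0_P1", "50:01:43:80:00:00:00:01"
alicreate "3PAR_N1_P1", "50:01:43:80:00:00:00:11"
alicreate "3PAR_N2_P1", "50:01:43:80:00:00:00:21"
alicreate "3PAR_N3_P1", "50:01:43:80:00:00:00:31"

# One zone: host alias plus all four controller aliases
zonecreate "Blade1_3PAR", "Blade1_HBA1; 3PAR_N0_P1; 3PAR_N1_P1; 3PAR_N2_P1; 3PAR_N3_P1"

# Add the zone to the config, save and enable it
cfgadd "Fabric1_Cfg", "Blade1_3PAR"
cfgsave
cfgenable "Fabric1_Cfg"
```

Repeat the equivalent on Fabric Switch 2 with the second HBA port WWNs.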
As I think I mentioned before, a 4-node 3PAR has 2 controller shelves in the storage array: one shelf contains Nodes 0 and 1, the other Nodes 2 and 3. Each controller shelf manages half of the disks in the system. If your server is only zoned to half of the 3PAR controller nodes, then you are not directly accessing the nodes that can reach all of the disks. When you request data from a disk that is not controlled by the controller pair you are zoned to, those controllers have to request the data from the other pair over the 3PAR backplane, which is then passed back to the controller your host can see, and then back to your host. I didn't expect this to impact performance much, but my testing shows it has a massive impact.
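To make the backplane-hop point concrete, here's a toy Python model. This is my own illustration, not 3PAR internals: it just assumes each controller shelf owns half the disks, and that an I/O needs an extra hop whenever no zoned node sits in the shelf that owns the disk.

```python
# Toy model: shelf 0 holds nodes 0 and 1, shelf 1 holds nodes 2 and 3.
NODE_PAIRS = {0: (0, 1), 1: (2, 3)}

def owning_shelf(disk_id: int, n_shelves: int = 2) -> int:
    """Each controller shelf manages half the disks (illustrative split)."""
    return disk_id % n_shelves

def needs_backplane_hop(zoned_nodes, disk_id):
    """True if no zoned node is in the shelf that owns this disk."""
    shelf = owning_shelf(disk_id)
    return not any(n in NODE_PAIRS[shelf] for n in zoned_nodes)

# Zoned to all four nodes: every disk is reachable via an owning node.
assert not any(needs_backplane_hop({0, 1, 2, 3}, d) for d in range(100))

# Zoned only to nodes 0 and 1: half the illustrative disks need the hop.
hops = sum(needs_backplane_hop({0, 1}, d) for d in range(100))
print(hops)  # -> 50
```

The asserts show the difference: zone all four nodes and every disk has a direct owner in the path; zone one pair and half your I/O takes the detour.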
Overall I'd be doing the following:
1. Create separate shared uplink sets for each VC module
2. Ensure all shared uplink sets are active/active
3. Ensure each 3PAR controller node is cabled to each fibre switch
4. Zone each host HBA port to all 4 3PAR controller nodes
5. Ensure you can see 8 paths to the 3PAR
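The path arithmetic behind steps 4 and 5 is simple enough to sanity-check. A minimal sketch, assuming one host HBA port per fabric (adjust the numbers if your setup differs):

```python
# Expected paths = fabrics x host HBA ports per fabric x 3PAR nodes zoned per fabric.
def expected_paths(fabrics: int, host_ports_per_fabric: int,
                   nodes_zoned_per_fabric: int) -> int:
    return fabrics * host_ports_per_fabric * nodes_zoned_per_fabric

# Zoned to all 4 controller nodes on each of 2 fabrics:
print(expected_paths(2, 1, 4))  # -> 8

# Zoned to only one controller pair per fabric:
print(expected_paths(2, 1, 2))  # -> 4
```

If the host reports fewer paths than this predicts, work backwards: cabling first, then zoning, then host presentation.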