-
- Expert
- Posts: 116
- Liked: never
- Joined: Jan 01, 2006 1:01 am
SCSI Bus Sharing strategies
So we're working on a project where we're migrating an existing production app server from single node to multi-node (W2k8 R2). The vendor supports multi-node at the application level; the only stated requirement is a data store on a shared filesystem. Uh oh. Obviously there are many VMW restrictions and limitations when using SCSI Bus Sharing, perhaps the most critical being the inability to take snapshots, and therefore the inability to use Veeam Backup (AFAIK). While most of our environment has been virtualized for years, this will be our first virtualized cluster, and I'm wary of losing the flexible security blanket of VBR.
Folks who manage VM clusters with shared disk: what's your strategy? Have you found creative ways to still use Veeam for backup & restore? Do you revert to traditional agent-based backups for these systems or some kind of hybrid?
Thanks for any advice!
-
- VP, Product Management
- Posts: 6035
- Liked: 2863 times
- Joined: Jun 05, 2009 12:57 pm
- Full Name: Tom Sightler
Re: SCSI Bus Sharing strategies
I don't really know anything about W2K8 R2 clustering, but I've done some Oracle RAC clustering under Linux and tested some generic Red Hat clustering running in VMs. What we do is access the shared storage via the software iSCSI initiator from within the guest VMs; that way we can still use VMware snapshots for the VM itself, and then a combination of SAN-based snapshots or other native tools (like Oracle RMAN) to back up the data on the shared storage.
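To make that concrete, here's a rough sketch of the in-guest login step on a Linux guest with open-iscsi installed, wrapped in Python purely for illustration. The portal address and target IQN are placeholders for your environment:

    # Sketch: log a Linux guest into a shared iSCSI LUN with open-iscsi.
    # The portal IP and target IQN below are hypothetical placeholders.
    import subprocess

    PORTAL = "192.168.10.50"                     # hypothetical SAN portal
    TARGET = "iqn.2001-05.com.example:shared01"  # hypothetical target IQN

    def run(cmd):
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    # Discover the targets the portal offers, then log in to the shared LUN.
    run(["iscsiadm", "-m", "discovery", "-t", "sendtargets", "-p", PORTAL])
    run(["iscsiadm", "-m", "node", "-T", TARGET, "-p", PORTAL, "--login"])

    # Make the session persistent so the cluster node rejoins after a reboot.
    run(["iscsiadm", "-m", "node", "-T", TARGET, "-p", PORTAL,
         "--op", "update", "-n", "node.startup", "-v", "automatic"])

Once the LUN shows up in the guest it's just another block device, and the VM itself stays snapshot-friendly.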
-
- Expert
- Posts: 116
- Liked: never
- Joined: Jan 01, 2006 1:01 am
Re: SCSI Bus Sharing strategies
Thanks Tom, that's exactly what I've started to play with in the lab; I'm getting resigned to the fact that shared disk just isn't worth it from a VMW management standpoint.
Our Clariion is using FC, but it also has iSCSI capability which we could put to use. Right now I'm configuring a W2k8 iSCSI target VM (MS iSCSI Target 3.3 or StarWind); I'm interested in seeing the performance of an iSCSI target backed by VMDK, for ultimate flexibility. Of course iSCSI within the guest won't have ESX(i)-native MPIO, but hopefully the native initiators can get the job done without additional third-party cost. Also, I'm testing Melio FS instead of Microsoft Failover Cluster, so that should free us of some limitations too.
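For the performance comparison I'll probably start with something crude like this throwaway Python sketch, run once against the iSCSI-presented volume and once against a plain VMDK disk (the test path and size are placeholders, and OS caching means it's only a rough number):

    # Crude sequential-write throughput test; run it against each volume
    # under test and compare the MB/sec figures.
    import os, time

    TEST_FILE = r"E:\iscsi_test.bin"   # hypothetical mount of the volume under test
    SIZE_MB = 1024
    CHUNK = b"\0" * (1024 * 1024)      # 1 MB writes

    start = time.time()
    with open(TEST_FILE, "wb", buffering=0) as f:
        for _ in range(SIZE_MB):
            f.write(CHUNK)
        os.fsync(f.fileno())           # flush to disk so the number is honest
    elapsed = time.time() - start

    print("%.1f MB/sec sequential write" % (SIZE_MB / elapsed))
    os.remove(TEST_FILE)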
Thanks
-
- VP, Product Management
- Posts: 6035
- Liked: 2863 times
- Joined: Jun 05, 2009 12:57 pm
- Full Name: Tom Sightler
Re: SCSI Bus Sharing strategies
Our setup is Equallogic, so "it's the same, but different". We found iSCSI to generally be good enough and to provide the most flexibility for this type of setup. We configure our VMs with two virtual NICs dedicated to the software iSCSI initiators, and then use the Equallogic HIT kit to provide native multipath within the guest OS. That gives us around 200MB/sec throughput, and quite excellent latency (actually less than going through the VMware I/O stack, at least with ESX 3.5 and 4.0 -- haven't tested with 4.1).
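On the Linux side, the same two-NIC layout can be sketched with open-iscsi interface bindings, so each session rides a dedicated NIC and the multipath layer (dm-multipath, or a vendor kit like HIT) merges them into one device. NIC names, portal, and IQN below are placeholders:

    # Sketch: bind one open-iscsi session to each of two dedicated guest
    # NICs, so the multipath layer can merge them into a single device.
    import subprocess

    PORTAL = "192.168.10.50"                     # hypothetical SAN portal
    TARGET = "iqn.2001-05.com.example:shared01"  # hypothetical target IQN

    def run(cmd):
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    for iface, nic in [("iface0", "eth1"), ("iface1", "eth2")]:
        # Create an iface record and tie it to a physical NIC.
        run(["iscsiadm", "-m", "iface", "-I", iface, "--op", "new"])
        run(["iscsiadm", "-m", "iface", "-I", iface, "--op", "update",
             "-n", "iface.net_ifacename", "-v", nic])
        # Discover and log in through that specific interface.
        run(["iscsiadm", "-m", "discovery", "-t", "sendtargets",
             "-p", PORTAL, "-I", iface])
        run(["iscsiadm", "-m", "node", "-T", TARGET, "-p", PORTAL,
             "-I", iface, "--login"])

    # "multipath -ll" should now show two active paths to the same LUN.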
I really wish VMware would quit messing around with useless features and start providing some nice features exactly for things like this. How about some virtual shared storage and interconnects that are certified with the major clustering players (think: a dedicated VMFS volume with a virtual FC interconnect between VMs)? If you really want to claim your product is some sort of "cloud infrastructure", you should be providing some cloud-level features.
-
- Expert
- Posts: 116
- Liked: never
- Joined: Jan 01, 2006 1:01 am
Re: SCSI Bus Sharing strategies
Does EqualLogic include HIT licenses with their SANs? Because EMC PowerPath is exorbitantly expensive (like most things EMC) and NOT included. I'm hoping to be able to get away with just the built-in MPIO of the software iSCSI initiator. Have you tried MPIO with just the software iSCSI initiators?
EDIT: And I completely agree, VMW has got to come up with better native features for this; we shouldn't have to cobble together complex workarounds for such a common high-availability scenario. Cloud indeed.
-
- VP, Product Management
- Posts: 6035
- Liked: 2863 times
- Joined: Jun 05, 2009 12:57 pm
- Full Name: Tom Sightler
Re: SCSI Bus Sharing strategies
Equallogic HIT is included (EQL includes all software features with their product). Unfortunately my EMC knowledge is pretty aged at this point (my last EMC equipment was a CX700), but I thought they had released a very basic PowerPath product that was fairly inexpensive. I don't know whether the built-in MPIO will work, but I think it supports pretty much any system that is either Active/Active or ALUA compliant. I think ALUA is equivalent to "Failover Mode 4" in Clariion parlance.
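If you want to test the native route on W2k8 R2, the rough shape of it is: install the MPIO feature, let the Microsoft DSM claim the iSCSI devices, log in a second session, and check the paths. A hedged sketch, wrapped in Python purely for illustration (these are stock mpclaim calls):

    # Sketch: claim iSCSI devices for the native Microsoft DSM on W2k8 R2,
    # then list MPIO disks and paths. Requires the MPIO feature installed;
    # run from an elevated prompt, and expect a reboot after the claim.
    import subprocess

    def run(cmd):
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    # Claim all iSCSI-attached devices for the Microsoft DSM
    # (-n defers the reboot; the string is the iSCSI bus type identifier).
    run(["mpclaim", "-n", "-i", "-d", "MSFT2005iSCSIBusType_0x9"])

    # After rebooting and logging in a second iSCSI session, verify that
    # each LUN shows multiple paths and a sane load-balance policy.
    run(["mpclaim", "-s", "-d"])

Whether the DSM behaves against a Clariion in ALUA mode is exactly the part you'd want to lab-test.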