Host-based backup of VMware vSphere VMs.
ChrisGundry
Veteran
Posts: 259
Liked: 40 times
Joined: Aug 26, 2015 2:56 pm
Full Name: Chris Gundry
Contact:

Feature request/question - vRDM feature parity

Post by ChrisGundry »

Please don't merge this topic!

I would like to request that vRDM disks get 1:1 feature parity with VMDK disks. In particular:
1. Restore from storage snapshot
2. Explorer integration, such as SQL Explorer
3. Instant Recovery
4. Restore TO vRDM where the backed-up disk was a vRDM; currently the restore target is always a VMDK

I fully appreciate that vRDM is a lesser-used disk type, but there are scenarios where it is the recommended type, and customers are having to choose between Veeam functionality/support and software/storage vendor best-practice recommendations.

Obviously I can't tell what level of work would be required on Veeam's side to make this possible, or why it has not been done so far. To me it doesn't seem like too much work, as you already have the code to clone the snapshots etc., which I would assume is the more complex part. For #1-3, the only piece you seem to be missing is identifying that the disk is a vRDM and actually attaching it from the cloned snapshot.

If there is no chance of this happening, it would be nice to know why that is the case. I know from looking through the forums that others have asked for it as well, although perhaps not enough.

Thanks

Chris
Gostev
Chief Product Officer
Posts: 31807
Liked: 7300 times
Joined: Jan 01, 2006 1:01 am
Location: Baar, Switzerland
Contact:

Re: Feature request/question - vRDM feature parity

Post by Gostev »

Thanks for your feedback!

I won't say "no chance" in this case, but this is certainly not on the list of priorities for the next couple of years. And with fewer and fewer people using vRDMs every year, it's hard to see this becoming a high-priority item even then. Plus, I expect those legacy storage vendors pitching the use of vRDMs to quickly lose market share to modern vendors who don't impose ugly design restrictions on VMware storage infrastructure. Full storage virtualization is just too big of a deal for most customers.

Re: Feature request/question - vRDM feature parity

Post by ChrisGundry »

Thanks for the quick reply, Anton.

We would always use VMFS/VMDK whenever possible, and in the future, hopefully vVols. However, not only did MS and Nimble both recommend vRDM for our use case, but in testing we also saw much greater IOPS using 4x vRDM than 4x VMDK on four separate VMFS datastores (or four on a single VMFS datastore). When we raised this with VMware, they couldn't get anywhere with it and blamed the storage, which, as we say, we know can provide higher IOPS when using vRDM... which was very frustrating. Given that MS and Nimble both recommended vRDM for this use case (a SQL Always On non-shared-storage cluster), and we saw much higher performance with vRDM, we felt we didn't have much choice but to go that way. It's just frustrating that it feels like Veeam is 'so close' to having it all there, but not quite.

Re: Feature request/question - vRDM feature parity

Post by Gostev »

Ah, no - definitely not close... vRDM is a different beast, so it'll require quite a significant investment to support it at the same functionality level as VMDK.
soncscy
Veteran
Posts: 643
Liked: 312 times
Joined: Aug 04, 2019 2:57 pm
Full Name: Harvey
Contact:

Re: Feature request/question - vRDM feature parity

Post by soncscy »

ChrisGundry wrote: Jan 11, 2021 5:19 pm Given that MS and Nimble both recommended vRDM for this use case
Does either vendor have documentation on this, by chance? A quick Google search for 'nimble vrdm' returns nothing valuable on the first few pages.

I don't deny the benefits of RDM disks (p or v), but it seems like a cop-out to me when we're discussing virtualization. Of course RDM disks will always be more performant than virtualized disks, but at that point, why not just handle the disk physically? It feels like the vendors are basically saying "We can never match performance between virtual and physical, so just do physical".

v vs. p is irrelevant if you can't present the performant storage to VMware in full, IMO. It seems like the storage vendors are trying to work around the VMware storage stack without admitting they don't have a "real" solution for this.

Re: Feature request/question - vRDM feature parity

Post by ChrisGundry »

Gostev wrote: Jan 11, 2021 8:35 pm Ah, no - definitely not close... vRDM is a different beast, so it'll require quite a significant investment to support it at the same functionality level as VMDK.
Ah OK, it seemed like you would be close. Is part of the issue that you change everything to a VMDK as part of the backup process, so it isn't currently able to handle a vRDM without making it a VMDK?

Can you elaborate on your other comment?
Gostev wrote: Jan 11, 2021 4:47 pm Plus, I expect those legacy storage vendors pitching the use of vRDMs to quickly lose market share to modern vendors who don't impose ugly design restrictions on VMware storage infrastructure. Full storage virtualization is just too big of a deal for most customers.
Nimble are not dictating that we have to use vRDM, but it was their recommendation. In fact their primary recommendation is in-guest iSCSI or a physical server with iSCSI. They recommend in-guest iSCSI or vRDM because, they say, it is the most performant solution. When I queried for more information (because everything I read said VMFS is 99% as good as vRDM these days), they said it was because VMFS/VMDK introduces more overhead. This is of course true, but as I said, everything I read says it should be 99% as good, which we would have been OK with... However, in testing we found that we couldn't get anywhere near as many IOPS, or as consistent latency at high IOPS, through VMDK/VMFS as we could through vRDM. And when we raised it with VMware they couldn't resolve it; they blamed the underlying storage. Given that the storage is able to deliver far more I/O through vRDM, we know the storage is capable of much more, so it's not 'the storage' that is at fault here, more VMware software or configuration. But if VMware are unable to provide a resolution for poor (in comparison) VMDK/VMFS performance, then what are we to do... Perhaps we should have kept pushing VMware, but after several weeks we were not getting anywhere...
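Side note on the testing methodology: a gap like the "VMDK at a fraction of vRDM" figure discussed here is easiest to defend (to VMware support or anyone else) when the identical synthetic workload is run against both disk types and the results are compared programmatically rather than eyeballed. A minimal sketch in Python, assuming fio was used with --output-format=json against each disk type (the file names and usage are illustrative, not something from this thread):

```python
import json


def total_iops(fio_result: dict) -> float:
    """Sum read and write IOPS across all jobs in a fio JSON result."""
    return sum(job["read"]["iops"] + job["write"]["iops"]
               for job in fio_result["jobs"])


def vmdk_vs_vrdm(vmdk_result: dict, vrdm_result: dict) -> float:
    """Return VMDK throughput as a fraction of vRDM throughput."""
    return total_iops(vmdk_result) / total_iops(vrdm_result)


# Usage (hypothetical file names): run the identical fio job against each
# disk type with --output-format=json, then compare the parsed results:
#   with open("vmdk.json") as a, open("vrdm.json") as b:
#       ratio = vmdk_vs_vrdm(json.load(a), json.load(b))
#       print(f"VMDK delivers {ratio:.0%} of vRDM IOPS")
```

The same fio JSON also carries completion-latency percentiles per job, which would let you quantify the "consistent latency at high IOPS" observation the same way.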

In our case we do not run physical servers, and we do not allow guest VMs access to the iSCSI fabric. Whilst we could have looked to make an exception, it would then also have required us to maintain and run the Veeam Agent for guest-level backups, as well as the Nimble connection software within the guest, neither of which was something we were particularly keen on doing. Even if we had done that, I don't know that we would have been in any better place for recovering SQL or doing IR from storage-level snapshots (can in-guest disks, snapshotted with Nimble outside the guest, be restored using Veeam with SQL Explorer, IR, etc.?). By going with vRDM we were able to stick with using B&R to back up the machine and keep our policy of not allowing VM guests access to iSCSI, but we have lost SQL Explorer, IR etc.

I think I am going to look into using Veeam SQL Explorer standalone, with storage snapshots being added to the guest as additional vRDMs. If I can add DBs to SQL Explorer from snapshots and use it to recover tables etc., then that would be 80% of what I wanted to achieve with Veeam. This would give our dev team the ability to do table restores etc. from our storage snapshots, and other things with the staging server.

Re: Feature request/question - vRDM feature parity

Post by ChrisGundry »

soncscy wrote: Jan 11, 2021 9:10 pm Does either vendor have documentation on this, by chance? A quick Google search for 'nimble vrdm' returns nothing valuable on the first few pages.
Most of the Nimble documentation is within their InfoSight portal, which is only available to customers.
soncscy wrote: Jan 11, 2021 9:10 pm I don't deny the benefits of RDM disks (p or v), but it seems like a cop-out to me when we're discussing virtualization. Of course RDM disks will always be more performant than virtualized disks, but at that point, why not just handle the disk physically? It feels like the vendors are basically saying "We can never match performance between virtual and physical, so just do physical".

v vs. p is irrelevant if you can't present the performant storage to VMware in full, IMO. It seems like the storage vendors are trying to work around the VMware storage stack without admitting they don't have a "real" solution for this.
I 100% think that the storage vendor is trying to work around VMware's limitation. It's not the storage vendor's job to provide a 'solution' for VMware's VMDK stack not being fast enough. We/they only have so many means of delivering the storage to the server, right: p/vRDM, VMDK, vVol, in-guest iSCSI, DAS. It is up to us as the customer to choose which one works for us (we chose performance over recovery functionality, as we have SQL AO as our primary recovery). But we are directed by factors such as performance, functionality and best-practice recommendations, as well as critical functionality requirements (if we wanted to do FOC, say, we would need to use shareable storage, such as pRDM).

If VMDK/VMFS had shown 90-100% of the performance of vRDM/in-guest iSCSI, then we would have gone that route; it would have been 100x simpler for us! If VMware had been able to identify why VMDK performance was so much lower than vRDM, we would have resolved that (if it was something within our realm) and gone VMDK. But neither of those was possible... To me, all of the documentation saying that VMDK is 99% as performant as vRDM is either nonsense (surely not possible), or there is something wrong in our environment (be it VMware or Nimble); but if VMware, the vendor, can't resolve the issue, then...
Andreas Neufert
VP, Product Management
Posts: 7079
Liked: 1511 times
Joined: May 04, 2011 8:36 am
Full Name: Andreas Neufert
Location: Germany
Contact:

Re: Feature request/question - vRDM feature parity

Post by Andreas Neufert »

This is a very old discussion point that was addressed in this forum at length.

The best official resources for this are:
https://blogs.vmware.com/vsphere/2013/0 ... s-rdm.html

You will find links to studies that VMware did in 2007 and 2008 which clearly showed that there is near-zero overhead within the VMware stack.
I also remember the VMware CTO presenting this on stage back then with a multi-TB Oracle server.

Re: Feature request/question - vRDM feature parity

Post by ChrisGundry »

I am well aware that it is supposed to be 99% of the same performance; I have mentioned that several times already in this thread. My issue is the fact that in our testing it was NOT 99%, it was something like 70% - still very fast, but nowhere near as fast as with vRDM. Again, when we raised this with VMware, they were unable to identify the problem and said to take it up with our storage vendor, as it was clearly a storage problem. A pointless exercise, because we had already proven that vRDM from Nimble was able to far exceed VMDK from Nimble. As far as Nimble is concerned it is just iSCSI traffic; VMware is the part that matters here, but VMware were unable to resolve it and had their heads in the sand. We tried escalating it, but after several weeks we hit a stalemate and gave up, resorting to sticking with vRDM. As I said, if we had been able to get 1:1 performance, or close to it, then we obviously would have gone with VMDK!

Re: Feature request/question - vRDM feature parity

Post by Gostev »

Then it sounds like VMware and HPE just need to keep working on your support case. "Unable to identify the problem" is recognizable as a typical answer from T1 support engineers who hesitate to escalate the case to higher tiers; actually, this also happens at Veeam more often than I would like... In such cases, you always need to pressure them to escalate the case, to make sure it gets to R&D for investigation. Because there's no performance problem developers cannot identify: they can just do a performance trace and see exactly how much time each function takes in each storage mode (VMDK vs. vRDM).

As per the link Andreas posted, this whole "RDM is faster" topic was officially closed by VMware almost 10 years ago after they polished their storage virtualization stack. As a paying customer of both vendors, you have the right to demand to see the stated performance levels in your environment.

If you suspect the issue is on the VMware side, you should be able to easily demonstrate this by using fast local ESXi host storage for your test. If you suspect the issue is on the HPE side, get another storage vendor to ship you POC storage to confirm this is the case, and maybe these facts will enable you to return your Nimble storage to HPE (if they are unable to fix the issue).

Re: Feature request/question - vRDM feature parity

Post by Gostev »

ChrisGundry wrote: Jan 12, 2021 9:27 am Is part of the issue the way you change everything to be a VMDK as part of the backup process
Actually, we don't. This is a popular misconception, but in fact our VBK backup format stores raw disk images regardless of the data source (VMware/Hyper-V/AHV/physical/cloud). It is at restore time when we convert them to the target system format.

But, for example, VMware VDDK supports restores to VMDK but not to vRDM, so we would need to create some custom way of performing a restore even for this most basic need.

Or take our own Instant Recovery: the whole engine is based on our vPower NFS server publishing VMDKs! Publishing a vRDM would require a totally different approach, like creating some vPower iSCSI target. As I've said, it would be a huge undertaking no matter where you look.

Re: Feature request/question - vRDM feature parity

Post by Andreas Neufert »

As Anton shared, the VDDK is the software development kit that VMware bases certifications on. Doing something outside of it and restoring directly is not a certifiable approach with VMware. Could our agent-based backup approaches potentially help you?

Re: Feature request/question - vRDM feature parity

Post by ChrisGundry »

Gostev wrote: Jan 12, 2021 1:05 pm Actually, we don't. This is a popular misconception, but in fact our VBK backup format stores raw disk images regardless of the data source (VMware/Hyper-V/AHV/physical/cloud). It is at restore time when we convert them to the target system format.
Hmm, that is interesting.
Gostev wrote: Jan 12, 2021 1:05 pm But, for example, VMware VDDK supports restores to VMDK but not to vRDM, so we would need to create some custom way of performing a restore even for this most basic need.

Or take our own Instant Recovery: the whole engine is based on our vPower NFS server publishing VMDKs! Publishing a vRDM would require a totally different approach, like creating some vPower iSCSI target. As I've said, it would be a huge undertaking no matter where you look.
OK, at least I have a bit more info now as to why it's not been done yet, which I didn't have before.

Re: Feature request/question - vRDM feature parity

Post by ChrisGundry »

Gostev wrote: Jan 12, 2021 12:17 pm Then it sounds like VMware and HPE just need to keep working on your support case. "Unable to identify the problem" is recognizable as a typical answer from T1 support engineers who hesitate to escalate the case to higher tiers; actually, this also happens at Veeam more often than I would like... In such cases, you always need to pressure them to escalate the case, to make sure it gets to R&D for investigation. Because there's no performance problem developers cannot identify: they can just do a performance trace and see exactly how much time each function takes in each storage mode (VMDK vs. vRDM).

As per the link Andreas posted, this whole "RDM is faster" topic was officially closed by VMware almost 10 years ago after they polished their storage virtualization stack. As a paying customer of both vendors, you have the right to demand to see the stated performance levels in your environment.

If you suspect the issue is on the VMware side, you should be able to easily demonstrate this by using fast local ESXi host storage for your test. If you suspect the issue is on the HPE side, get another storage vendor to ship you POC storage to confirm this is the case, and maybe these facts will enable you to return your Nimble storage to HPE (if they are unable to fix the issue).
I don't disagree. Unfortunately I don't operate in a world where I am able to spend unlimited amounts of time fighting with VMware support. I also don't have fast local ESXi storage that is able to outstrip what we can see via VMDK mode; the only way I can do that is with the Nimble via iSCSI. We only have hosts with SD cards in them, as all storage is iSCSI. I had some SSDs around, which I did test, but they were nowhere near capable either. In my world, I have to obtain a solution that will achieve the most possible in the time available and then deploy it, which is what we have done, albeit not to my 100% satisfaction. When I get time I will try to revisit this with VMware and battle some more to see if we can progress to a 99%-performant VMDK environment.

Re: Feature request/question - vRDM feature parity

Post by ChrisGundry »

PS: I appreciate the updates on why it is not straightforward!
