UBX_Cloud_Steve
Service Provider
Posts: 32
Liked: 5 times
Joined: Nov 22, 2015 5:15 am
Full Name: UBX_Cloud_Steve

DR failback still a problem with v12

Post by UBX_Cloud_Steve »

Hello R&D,

I was hoping we would see improvements to disaster recovery failback operations in the new v12 release, but that doesn't seem to be the case for us.

I just ran some tests. With VMware replication and CDP proxies set up at both source and destination, the “Calculating Original Signature Hard Disk” phase takes a tremendous amount of time.

The best we could achieve in this test was 3.5 hours per 500 GB of data.

Source and destination are Pure Storage NVMe arrays: low latency, high mixed-IOPS capability, with read and write speeds above 2 GB/s (16 Gbps).

Between the source and destination sites we have a single 100 Mbps layer 2 connection. During the failback the link is saturated at 95%.
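For what it's worth, a quick back-of-the-envelope check of those two figures (a sketch, assuming decimal units) shows the failback processed data roughly three times faster than the saturated link could possibly carry it, which suggests much of the time goes to local reads during digest calculation rather than WAN transfer:

```python
# Sanity-check the reported numbers (decimal units assumed throughout).

data_bytes = 500e9                       # 500 GB test workload
elapsed_s = 3.5 * 3600                   # 3.5 hours observed

effective_rate = data_bytes / elapsed_s  # ~39.7 MB/s processed end to end
link_rate = 100e6 / 8 * 0.95             # ~11.9 MB/s: 100 Mbps at 95% utilization

print(f"effective rate: {effective_rate / 1e6:.1f} MB/s")
print(f"link rate:      {link_rate / 1e6:.1f} MB/s")
```

At ~12 MB/s, pushing the full 500 GB over the wire would take well over 11 hours, so the 3.5-hour result implies only a fraction of the data actually crosses the link.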

We tried straight replication with and without CDP, and with and without Cloud Connect. All had similar results, so either something is missing or misconfigured, or the failback mechanism has a design flaw.

Questions:

Would having a WAN accelerator in place and using traditional (non-CDP) replication jobs help with digest calculations?

Does adding more cache to the replication source and destination proxies help in this case? If so, how much should be added?

As I understand it, the "quick rollover" option no longer works with newer versions of VMware vSphere and ESXi, so it is no longer an option for us. Is it ever going to come back? From what I remember it worked very well, and it's a shame it's gone.

Here is a bit of solid gold advice from someone who has spent the majority of a 20-year professional career using, advocating for, and loving your software. Implement the same installable CBT driver that ships in the Veeam Agent for Windows (licensed edition) as an option in your VMware replication jobs. This would let you build a reliable bitmap of all replica disks without any dependency on VMware CBT upstream at the hypervisor level. Since it sits at the OS layer rather than the hypervisor layer, you wouldn't have to rely on VMware playing a shell game with API changes. Install the CBT driver, track the changes yourself, and use that knowledge to make the failback process work like it should in 2023. If not, let me know, because I'm about to crack open a 12-pack of Diet Mt. Dew and a 2003 copy of .NET for Beginners and start writing this myself.
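The OS-level change tracking idea above can be sketched as a simple dirty-block bitmap (illustrative only, not Veeam's or VMware's actual driver; the 1 MiB block size and the interfaces are my own assumptions):

```python
# Sketch of OS-level changed-block tracking: a bitmap over fixed-size blocks,
# updated on every write and consumed at failback time so only dirty blocks
# need resync. Block size and API are illustrative assumptions.

BLOCK_SIZE = 1 << 20  # 1 MiB tracking granularity (assumed)

class ChangeTracker:
    def __init__(self, disk_size: int):
        self.num_blocks = (disk_size + BLOCK_SIZE - 1) // BLOCK_SIZE
        self.bitmap = bytearray((self.num_blocks + 7) // 8)

    def record_write(self, offset: int, length: int) -> None:
        """Mark every block touched by a write as dirty."""
        first = offset // BLOCK_SIZE
        last = (offset + length - 1) // BLOCK_SIZE
        for b in range(first, last + 1):
            self.bitmap[b // 8] |= 1 << (b % 8)

    def dirty_blocks(self):
        """Yield indices of changed blocks; only these cross the WAN."""
        for b in range(self.num_blocks):
            if self.bitmap[b // 8] & (1 << (b % 8)):
                yield b

# Example: on a 10 GiB disk, two writes dirty only four of 10240 blocks.
t = ChangeTracker(10 << 30)
t.record_write(offset=0, length=4096)
t.record_write(offset=5 * BLOCK_SIZE + 100, length=2 * BLOCK_SIZE)
print(sorted(t.dirty_blocks()))  # → [0, 5, 6, 7]
```

The point of the sketch is the payoff: with a bitmap like this maintained in-guest, a failback sync never has to rehash the whole disk to discover what changed.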

________
Steven Panovski
UBX Cloud
Mildur
Product Manager
Posts: 8735
Liked: 2294 times
Joined: May 13, 2017 4:51 pm
Full Name: Fabian K.
Location: Switzerland

Re: DR failback still a problem with v12

Post by Mildur »

Hi Steven

I recommend opening a support case and letting them check the logs. The logs will show whether there is something unexpected in your deployment.
Between the source and destination sites we have a single 100 Mbps layer 2 connection. During the failback the link is saturated at 95%.
Do you see this consumption while the digest calculation is running, or when the data is being transferred?
Would having a WAN accelerator in place and using traditional (non-CDP) replication jobs help with digest calculations?
If the issue is the digest calculation, I don't expect a WAN accelerator to optimize the performance of the calculation.
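To illustrate why (a sketch of the general technique, not Veeam's internal digest format; block size and hashing are my assumptions): a digest pass hashes every block of both disks locally and only the mismatched blocks need to cross the WAN, so the pass itself is bound by local read and hash speed, not by link bandwidth.

```python
# Generic block-digest comparison: hash each fixed-size block of the source
# and replica images, then transfer only blocks whose digests differ.
# Block size and use of SHA-256 are illustrative assumptions.

import hashlib

BLOCK_SIZE = 1 << 20  # 1 MiB (assumed)

def compute_digests(image: bytes) -> list:
    """One hash per block; note this reads the entire image locally."""
    return [hashlib.sha256(image[i:i + BLOCK_SIZE]).digest()
            for i in range(0, len(image), BLOCK_SIZE)]

def blocks_to_transfer(src: bytes, dst: bytes) -> list:
    """Indices of blocks whose digests differ; everything else stays put."""
    s, d = compute_digests(src), compute_digests(dst)
    return [i for i, (a, b) in enumerate(zip(s, d)) if a != b]

# Two 4 MiB images differing only in the third block.
src = bytearray(4 * BLOCK_SIZE)
dst = bytearray(src)
dst[2 * BLOCK_SIZE] = 0xFF
print(blocks_to_transfer(src, dst))  # → [2]
```

A WAN accelerator sits on the transfer path, so it can shrink what `blocks_to_transfer` sends, but it cannot speed up `compute_digests`, which never touches the network.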
As I understand it, the "quick rollover" option no longer works with newer versions of VMware vSphere and ESXi, so it is no longer an option for us. Is it ever going to come back? From what I remember it worked very well, and it's a shame it's gone.
May I ask what "quick rollover" is? I know Quick Rollback, which is a Veeam feature, and it is still possible in all vSphere versions we support.

Best,
Fabian
Product Management Analyst @ Veeam Software