- Veeam Software
- Posts: 5650
- Liked: 1587 times
- Joined: Jul 26, 2009 3:39 pm
- Full Name: Luca Dell'Oca
- Location: Varese, Italy
Can you take a look at the failback job, check the different steps it performs, and note how long each one takes?
Principal EMEA Cloud Architect @ Veeam Software
vExpert 2011 -> 2019
Veeam VMCE #1
- Posts: 206
- Liked: 35 times
- Joined: Feb 20, 2012 4:13 pm
- Full Name: Nick Mahlitz
All is good: the production VM is now powered on and looks healthy. After 30 minutes of testing I can confirm the server is fine and the data is intact and up to date. I have committed the failback.
Now I am running a full backup. As ALL my backups of this VM were corrupt, it was the replica that saved it. This Veeam success story will feature in our staff newsletter!
We have a 10 Mbit/s link to our DR site and it is a 500 GB VM, hence the times involved. Glad to say all went well! But as you say, why did the last part of the replication take so long?
I had throttling enabled between 9am and 6pm on weekdays, but at weekends Veeam can have the whole pipe. It looked like throttling was still being applied, though; I'll look into that.
Thanks to all who replied on here! Not sure if Veeam would be interested in a copy of our staff newsletter if it gets a mention?
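The times involved can be sanity-checked with a back-of-the-envelope calculation. This is a rough sketch using the figures from the post above (500 GB VM, 10 Mbit/s link), assuming full link utilisation and no compression or deduplication:

```python
# Rough transfer-time estimate for a full 500 GB VM over a 10 Mbit/s link.
# Assumes the link is fully utilised and ignores compression/dedup savings.
vm_size_gb = 500
link_mbps = 10

size_megabits = vm_size_gb * 8 * 1000   # GB -> gigabits -> megabits
seconds = size_megabits / link_mbps
hours = seconds / 3600

print(f"{hours:.0f} hours ({hours / 24:.1f} days)")  # 111 hours (4.6 days)
```

That is roughly 111 hours for a full transfer, which is why replication-based failback (shipping only changed blocks) matters so much on a link this size.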
- SVP, Product Management
- Posts: 24092
- Liked: 3280 times
- Joined: Jan 01, 2006 1:01 am
- Location: Baar, Switzerland
homerjnick wrote: Thanks to all who replied on here! Not sure if Veeam would be interested in a copy of our staff newsletter if it gets a mention?
Absolutely! This would certainly make a great internal case study for us!
Please forward it to me once out (email is my forum nick at veeam.com).
Thanks and congratulations on building a DR strategy that worked in need!
- VP, Product Management
- Posts: 5310
- Liked: 2162 times
- Joined: Jun 05, 2009 12:57 pm
- Full Name: Tom Sightler
I think there's some room for continued enhancement here. Ideally it would work the way I initially described: effectively, create a "failback restore point", replicate all changes, then finally have a point where you fail back with only the most recent changes, perhaps even continuing this cycle until the amount of changed data is below a threshold, to keep the failback time to a minimum. Of course, you can always do this with a manual replication job, using replica mapping, in the other direction, rather than performing a failback.
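The iterative approach described above can be sketched in a few lines. This is purely illustrative pseudologic, not the Veeam API: `measure_delta_gb` and `replicate_delta` are hypothetical placeholders for "how much changed data remains" and "ship one round of changes while the replica keeps running":

```python
# Sketch of the iterative failback idea: replicate accumulated deltas in
# rounds while the VM stays up, and only perform the final cutover once
# the remaining change set is below a threshold. The two callables are
# hypothetical placeholders, not real Veeam functions.
def iterative_failback(measure_delta_gb, replicate_delta,
                       threshold_gb=1.0, max_rounds=10):
    """Return the remaining delta (GB) once it is small enough for a
    short final sync, or after max_rounds attempts to converge."""
    for _ in range(max_rounds):
        delta = measure_delta_gb()
        if delta <= threshold_gb:
            return delta            # small enough: do the final cutover now
        replicate_delta()           # ship this round's changes, VM still up
    return measure_delta_gb()       # failed to converge; cut over anyway
```

Each round shrinks the outstanding delta (assuming the change rate is lower than the link throughput), so the final sync, during which the VM is actually down, only has to move the last small increment.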
Still, good to know that everything worked for you.