by unsichtbarre » Wed Jun 05, 2013 10:01 pm
I have an inbound replication job with about 25 VMs (8 TB). The datastore (VMFS 5) selected for the job is on a SAN that needs maintenance. I have created a new datastore (VMFS 5) of appropriate size on a different SAN, and now I would like to move the VMname_replica VMs to the new datastore. What's going to be involved?
by pac001 » Mon Aug 15, 2016 4:47 am
I've looked for the answer through this forum and can't find it.
I have an ESX 5 DR host holding a number of guest replicas, located on the end of a slow WAN link. I've added a larger datastore to the host and now want to move those replicas to the larger datastore.
What is the correct process to move the replicas to the new datastore so that the replication jobs continue to recognise the moved replicas, and continue replication normally?
by Vitaliy S. » Mon Aug 15, 2016 9:21 am
Hi Peter, if the VM moref ID changes during this operation, you will need to re-map your replication job to the new target. Once you do this, your job will continue with an incremental pass. Please note that the re-mapping operation removes all existing restore points (snapshots) from the target replica, as it is not possible to properly map a VM to a snapshot.
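As a rough illustration of why the re-mapping is needed (a minimal Python sketch of the general idea, not Veeam's actual implementation; the job/inventory structures and names are hypothetical): the job tracks its target replica by moref ID rather than by name, so if a migration issues a new moref, the job no longer finds its replica and seeds a fresh one instead of running incrementally.

```python
# Hypothetical sketch: a replication job records its target replica's
# vSphere moref ID. If a migration changes that moref, the lookup fails
# and the job would seed a new replica ("..._replica (2)") until it is
# re-mapped to the existing replica's new moref.

def find_target(job, inventory):
    """Return the replica VM name the job recognises, or None if the
    job's recorded moref ID is no longer present in the inventory."""
    return inventory.get(job["target_moref"])

# DR-host inventory keyed by moref ID (values are VM names).
inventory = {"vm-101": "SQL01_replica"}
job = {"name": "SQL01 replication", "target_moref": "vm-101"}

assert find_target(job, inventory) == "SQL01_replica"  # incremental pass works

# A migration that assigns a new moref ID breaks the link:
inventory = {"vm-202": "SQL01_replica"}     # same VM, new moref
assert find_target(job, inventory) is None  # job would seed a new replica

# Re-mapping the job to the replica's current moref restores incrementals:
job["target_moref"] = "vm-202"
assert find_target(job, inventory) == "SQL01_replica"
```

This also suggests why a vMotion that keeps the moref stable (as pac001 later found when the host is managed through vCenter) causes no problem, while an operation that re-registers the VM does.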
by pac001 » Tue Aug 16, 2016 1:29 am
Not the answer I'm looking for.
I've tried migrating a (powered-off) replica to a different datastore and then changing the target datastore in the Veeam replication job, but that didn't work: the moved replica wasn't updated with a new snapshot.
Instead, the replication job created a new replica, "..._replica (2)".
by pac001 » Wed Aug 17, 2016 4:08 am
I think I've figured out what the problem is. The replication jobs were created a while ago, when the DR host was standalone and not managed by vCenter. So the replication jobs connect directly to the DR host as the destination instead of going through the cluster. Migrating these replicas around seems to throw Veeam off.
When I set up a new test replication job, navigating to the source and destination hosts through the cluster, I can vMotion the replica around and then just update the destination in the job with no problems.
Using the replica mapping option seems to work OK. I haven't noticed any snapshots being removed while using it.