-
- Novice
- Posts: 5
- Liked: never
- Joined: Apr 06, 2009 11:12 am
- Location: uk
- Contact:
V6 Replication disk finalizing
We have been running replication from multiple sites to a single backup site for a number of weeks without any real issues.
We currently have 3 snapshots stored within the replication job to an ESX host.
However, I have noticed over the last few days that the disk finalizing time is increasing, in some cases lasting as long as the actual replication job itself. I have checked the logs and nothing has jumped out.
Can anyone advise if they have seen similar issues, or point me in the right direction to troubleshoot?
The job is still running successfully.
-
- VP, Product Management
- Posts: 27291
- Liked: 2773 times
- Joined: Mar 30, 2009 9:13 am
- Full Name: Vitaliy Safarov
- Contact:
Re: V6 Replication disk finalizing
Hello Lee,
I'm not quite sure what you're referring to by saying "disk finalizing is increasing", could you please clarify?
Thanks.
-
- Lurker
- Posts: 2
- Liked: never
- Joined: Jul 14, 2011 6:17 am
- Full Name: Bruno fleury
- Contact:
Re: V6 Replication disk finalizing
Same here. One Veeam v6 server and a replication process between 2 SANs.
2/11/2012 5:02:05 AM :: Queued for processing at 2/11/2012 5:02:05 AM
2/11/2012 5:02:05 AM :: Required resources have been assigned
2/11/2012 5:02:36 AM :: VM processing started at 2/11/2012 5:02:07 AM
2/11/2012 5:02:36 AM :: VM size: 740.0 GB (643.9 GB used)
2/11/2012 5:03:37 AM :: Using source proxy VMware Backup Proxy [san]
2/11/2012 5:04:12 AM :: Using target proxy VMware Backup Proxy [nbd]
2/11/2012 5:04:13 AM :: Discovering replica VM
2/11/2012 5:04:13 AM :: Preparing replica VM
2/11/2012 5:05:11 AM :: Creating snapshot
2/11/2012 5:05:48 AM :: Processing configuration
2/11/2012 5:07:43 AM :: Creating helper snapshot
2/11/2012 5:07:58 AM :: Hard Disk 1 (60.0 GB)
2/11/2012 5:13:24 AM :: Hard Disk 2 (480.0 GB)
2/11/2012 6:32:42 AM :: Hard Disk 3 (200.0 GB)
2/11/2012 6:42:05 AM :: Deleting helper snapshot
2/11/2012 6:42:50 AM :: Removing snapshot
2/11/2012 6:44:18 AM :: Swap file blocks skipped: 5.3 GB
2/11/2012 6:44:18 AM :: Finalizing
2/11/2012 6:44:19 AM :: 1 restore point removed by retention policy
2/13/2012 5:05:09 AM :: Busy: Source 83% > Proxy 62% > Network 32% > Target 63%
2/13/2012 5:05:09 AM :: Primary bottleneck: Source
2/13/2012 5:05:09 AM :: Processing finished at 2/13/2012 5:05:09 AM
Finalizing took 46:20:50, whereas the replication itself took a total of about 01:30:00 for the 3 disks.
In case it helps, this is the only VM experiencing the issue.
The VMDK disks of this VM are each split over 2 VMFS datastores, on both source and destination.
Is there anything I can tune?
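For reference, here is a rough sketch (plain Python, purely for illustration, not a Veeam tool) that derives the per-phase durations from the timestamps in the job log above; the phase labels simply group the log lines:

from datetime import datetime

# Timestamps copied from the job log above; each entry marks the start of a phase.
FMT = "%m/%d/%Y %I:%M:%S %p"
events = [
    ("2/11/2012 5:07:58 AM", "Hard Disk 1 (60.0 GB)"),
    ("2/11/2012 5:13:24 AM", "Hard Disk 2 (480.0 GB)"),
    ("2/11/2012 6:32:42 AM", "Hard Disk 3 (200.0 GB)"),
    ("2/11/2012 6:42:05 AM", "Snapshot cleanup"),
    ("2/11/2012 6:44:18 AM", "Finalizing"),
    ("2/13/2012 5:05:09 AM", "Processing finished"),
]

# Each phase runs from its own timestamp to the next one in the list.
parsed = [(datetime.strptime(ts, FMT), label) for ts, label in events]
for (start, label), (end, _) in zip(parsed, parsed[1:]):
    print(f"{label:<25} {end - start}")

# The three disk transfers add up to roughly 1.5 hours, while Finalizing
# alone spans about 46 hours (1 day, 22:20:51).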
-
- VP, Product Management
- Posts: 27291
- Liked: 2773 times
- Joined: Mar 30, 2009 9:13 am
- Full Name: Vitaliy Safarov
- Contact:
Re: V6 Replication disk finalizing
Guys, it would help a lot if you could both open support cases, as we cannot reproduce this behavior. Once you do, please send me your support ticket numbers.
-
- Lurker
- Posts: 2
- Liked: never
- Joined: Jul 14, 2011 6:17 am
- Full Name: Bruno fleury
- Contact:
Re: V6 Replication disk finalizing
Got it under #5172420
-
- VP, Product Management
- Posts: 6024
- Liked: 2853 times
- Joined: Jun 05, 2009 12:57 pm
- Full Name: Tom Sightler
- Contact:
Re: V6 Replication disk finalizing
Finalizing is the step where the retention policy is applied: the oldest restore point snapshot is consolidated into the base disk, so this step is expected to take longer once you hit your retention period, especially for VMs with larger incrementals. For example, if you are replicating a 700GB Exchange server, each incremental run is 40GB, and you are keeping 7 restore points, the first 7 runs will pass the "finalizing" point quickly. However, once you reach restore point 8, Veeam has to delete the oldest restore point snapshot. Because this involves committing the snapshot into the base disk, a 40GB snapshot means you're moving roughly 80GB of data, which can take quite some time.
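To make that arithmetic concrete, here is a minimal sketch of the rule of thumb described above (an estimate only; the function name and the factor of two are my own shorthand for "read the snapshot delta, then write it into the base disk", and actual I/O depends on block layout and datastore behavior):

def finalizing_io_estimate_gb(oldest_restore_point_gb: float) -> float:
    # Rough rule of thumb: committing the oldest restore point reads the
    # snapshot delta and writes it back into the base disk, so roughly
    # twice the snapshot size moves across the target storage.
    return 2 * oldest_restore_point_gb

# Example from the post: a 40 GB oldest restore point implies ~80 GB of
# I/O on the target datastore during the Finalizing step.
print(finalizing_io_estimate_gb(40.0))  # -> 80.0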
So, since this is a fairly large VM, what is the size of your restore point/snapshot files, especially in comparison to the other VMs that you say do not display the issue?