-
- Enthusiast
- Posts: 36
- Liked: 9 times
- Joined: May 28, 2009 7:52 pm
- Full Name: Steve Philp
- Contact:
Restore points removed by retention policy
We have a Replication job which is scheduled to run each hour. Using v7, the job is able to fully complete in approximately 20-25 minutes. When that same Replication job is run using v8, it now takes 60-70 minutes to complete, and the extra time appears to come from the change in how old restore points are removed.
From the release notes, v8 now removes unneeded restore points at the end of the completed job, rather than as each guest completes as was done in v7. Sadly, those removals happen only one at a time, whereas in v7 the deletions ran concurrently. Our job logs show each restore point removal taking at least 20-30 seconds, some significantly longer.
Is there a way to force v8 to do the restore point removal as it was done in v7? Or is there something I should look at to get them to happen concurrently at the end of the job? Is my only option to split this replication job into two or three jobs to meet my every-hour requirement?
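The scheduling difference described above is easy to see with a toy sketch in plain Python (this is not Veeam code; the worker count and per-removal delay are made-up stand-ins): serial removal time grows linearly with the VM count, while a concurrent pool divides it by the number of workers.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def remove_restore_point(vm, delay=0.05):
    """Toy stand-in for a snapshot delete that takes 20-30 s per VM in the real job."""
    time.sleep(delay)
    return vm

vms = [f"vm{i}" for i in range(8)]

# v8 behavior: removals run one at a time at the end of the job.
start = time.monotonic()
for vm in vms:
    remove_restore_point(vm)
serial = time.monotonic() - start

# v7-style behavior: removals run concurrently (the worker count is an assumption).
start = time.monotonic()
with ThreadPoolExecutor(max_workers=4) as pool:
    list(pool.map(remove_restore_point, vms))
concurrent = time.monotonic() - start

print(f"serial: {serial:.2f}s  concurrent: {concurrent:.2f}s")
```

With 60+ replicas at 20-30 seconds each, the one-at-a-time path alone adds 20-30 minutes of wall clock, which matches the slowdown described.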
-
- Chief Product Officer
- Posts: 31806
- Liked: 7300 times
- Joined: Jan 01, 2006 1:01 am
- Location: Baar, Switzerland
- Contact:
Re: Restore points removed by retention policy
There is no way to change this behavior, but we will look at delivering parallel commit in the first patch, as I agree this is an omission on our part. Thanks!
-
- Enthusiast
- Posts: 36
- Liked: 9 times
- Joined: May 28, 2009 7:52 pm
- Full Name: Steve Philp
- Contact:
Re: Restore points removed by retention policy
Good stuff, thanks Anton!
Could you clarify why the restore point removals are now done at the end of the job, rather than as each guest is processed?
-
- Chief Product Officer
- Posts: 31806
- Liked: 7300 times
- Joined: Jan 01, 2006 1:01 am
- Location: Baar, Switzerland
- Contact:
Re: Restore points removed by retention policy
Please refer to the What's New document for details on this and all other changes. Too much to type from the phone.
-
- Service Provider
- Posts: 77
- Liked: 15 times
- Joined: Jun 03, 2009 7:45 am
- Full Name: Lars O Helle
- Contact:
[MERGED] replication - retention at end of job
I see that replication jobs now (in v8) apply retention on the end of the job, and not after each VM is completed.
Is it possible to change this behavior to the old method?
-
- Enthusiast
- Posts: 88
- Liked: 2 times
- Joined: Jul 31, 2013 12:05 pm
- Full Name: Si
- Contact:
[MERGED] Since v8 retention policy for all VMs is at end of
Hi,
Recently updated to v8 from v7 and have a few issues/questions.
1. The retention policy is now applied for all VMs at the end of the replication job, instead of as it goes through the VMs like before. This means we need more datastore space at the DR site, and since it's removing snapshots from 60+ VMs all at the same time, in total it takes even longer than it did before. Is there a way to switch this back to how it was in v7?
2. Related to the above issue, because my DR store is slow and I have 60+ VMs in my job, it timed out removing snapshots. At least, Veeam did; vCenter continued and removed them OK, eventually! Can I increase the timeout? Or only do a few at once? Error:
Code: Select all
Failed to apply retention policy for VM: SERVER1_R Error: [DeletingSnapshotsLimiter] Timed out waiting for semaphore DELETING_VI_SNAPSHOT_domain-c7: 10800 sec
3. Occasionally, our DR link goes down briefly and the job fails. Is there a way of increasing the timeout?
4. Occasionally, due to load on our backup storage, the synthetic full fails with the following error. Is there a way of increasing the timeout?
Code: Select all
Synthetic full backup creation failed Error: Insufficient system resources exist to complete the requested service. Failed to read data from the file [G:\Backup\Backup Job - All\Backup Job - All2015-01-09T193610.vbk].
Thanks
-
- Veeam Software
- Posts: 21139
- Liked: 2141 times
- Joined: Jul 11, 2011 10:22 am
- Full Name: Alexander Fogelson
- Contact:
Re: [MERGED] Since v8 retention policy for all VMs is at end
CaptainFred wrote:1. The retention policy is now applied for all VMs at the end of the replication job, instead of as it goes through the VMs like before. This means we need more datastore space at the DR site, and it's removing snapshots from 60+ VMs all at the same time, which in total takes even longer than it did before. Is there a way to switch this back to how it was in v7?
No, there is no way to change this behavior.

CaptainFred wrote:2. Related to the above issue, because my DR store is slow and I have 60+ VMs in my job, it timed out removing snapshots. At least, Veeam did; vCenter continued and removed them OK, eventually! Can I increase the timeout? Or only do a few at once? Error:Code: Select all
Failed to apply retention policy for VM: SERVER1_R Error: [DeletingSnapshotsLimiter] Timed out waiting for semaphore DELETING_VI_SNAPSHOT_domain-c7: 10800 sec
Try increasing the SnapshotDeleteSemaphoreTimeoutSec registry key value (default is 10800 seconds).

CaptainFred wrote:3. Occasionally, our DR link goes down briefly and the job fails. Is there a way of increasing the timeout?
This thread can help.
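For anyone curious about the error text itself, here is a hypothetical sketch in plain Python (not Veeam internals; the concurrency cap and names are assumed) of the pattern it suggests: deletions are gated by a bounded semaphore, and a waiter that cannot get a slot within the timeout fails with a message like the one in the job log. That is why raising the timeout value helps on slow datastores.

```python
import threading

# Hypothetical sketch, not Veeam internals: a bounded semaphore caps how many
# snapshot deletions run at once; a deletion that cannot get a slot within the
# timeout fails, much like the "Timed out waiting for semaphore" job-log error.
MAX_CONCURRENT_DELETES = 4        # assumed cap, for illustration only
TIMEOUT_SEC = 0.1                 # stands in for the 10800-second default

delete_gate = threading.Semaphore(MAX_CONCURRENT_DELETES)

def delete_snapshot(vm_name, do_delete):
    if not delete_gate.acquire(timeout=TIMEOUT_SEC):
        raise TimeoutError(
            f"Timed out waiting for semaphore DELETING_VI_SNAPSHOT: {TIMEOUT_SEC} sec")
    try:
        do_delete(vm_name)        # the actual (slow) vCenter call would go here
    finally:
        delete_gate.release()     # free the slot even if the deletion fails

# Quick demonstration: once all slots are held, the next caller times out.
held = [delete_gate.acquire(timeout=TIMEOUT_SEC) for _ in range(MAX_CONCURRENT_DELETES)]
try:
    delete_snapshot("SERVER1_R", lambda vm: None)
    timed_out = False
except TimeoutError:
    timed_out = True
for _ in held:
    delete_gate.release()
```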
-
- Expert
- Posts: 184
- Liked: 18 times
- Joined: Feb 15, 2013 9:31 pm
- Full Name: Jonathan Barrow
- Contact:
Re: Restore points removed by retention policy
I also see "Failed to apply retention policy for VM: VM-JBARROW-XP-R Error: The underlying connection was closed: A connection that was expected to be kept alive was closed by the server." on occasion in replication and backup jobs. Our replication jobs run every 2 hours and it just seems random. Going to try and modify the "SnapshotDeleteSemaphoreTimeoutSec" key as suggested but I can't locate it in my registry on the Veeam server. Is this a key I need to create, if so, can you provide details as to where it should go?
-
- Expert
- Posts: 184
- Liked: 18 times
- Joined: Feb 15, 2013 9:31 pm
- Full Name: Jonathan Barrow
- Contact:
Re: Restore points removed by retention policy
Called support. This wasn't my issue. Seems there is a bug in the latest version 8.0.0.917 causing this if parallel processing is enabled. Ticket 00845822. No workaround (except to disable parallel processing) or hotfix. We just have to live with the error until the patch.