-
- Novice
- Posts: 9
- Liked: never
- Joined: Jan 09, 2023 8:58 am
- Contact:
Overlapping restore point selection windows on periodic backup copy
Hello community,
Because of our main working days (Monday to Friday) we take weekly active full backups on Saturday at 01:00 AM and incremental backups Tuesday to Friday, also at 01:00 AM. The initial backup goes to a repository with only short retention. We use an immediate backup copy job to a ReFS repository with GFS retention to benefit from block cloning of the synthetic fulls.
We also want a backup copy to a dedup storage appliance. Since there is already extra load on the source backup repository due to the immediate copies, and we want to avoid synthetic GFS fulls for this repository (and thus read the entire restore point from source), we chose to use periodic backup copies for this and schedule them outside of the backup window. We configured the job to start daily at 03:00 PM and to disable data transfer from Tuesday to Saturday from 12:00 AM to 02:59 PM. The idea is to always just copy the restore points created earlier the same day.
This process runs fine except for the periodic backup copy interval from Monday to Tuesday, which ends with a failed status. If I look at the log files, I can see that at the end of the interval there is a check whether the source backup job has started within the last 48 hours (the interval started Monday at 03:00 PM and ended Tuesday at 03:00 PM, where it is checked whether the source job was started since Sunday 03:00 PM). I think the reason for the job failing is that in this interval data transfer is only allowed until 12:00 AM, when no restore points exist yet which could be copied. New restore points are created afterwards but cannot be copied anymore, and this is what is checked at the end of the interval.
I expected a restore point selection from source jobs started between the copy interval start time and the copy interval start time minus 24 hours. With the check described above, there are overlapping restore point selection windows: in our example the interval from Monday to Tuesday looks for source jobs between Sunday 03:00 PM and Tuesday 03:00 PM, and the interval from Tuesday to Wednesday looks for source jobs between Monday 03:00 PM and Wednesday 03:00 PM, which overlaps from Monday 03:00 PM till Tuesday 03:00 PM.
Is this correct? Maybe I just don't see the reason for it. For daily copy intervals I would prefer a restore point selection either between the interval start and the interval start minus 24 hours, or maybe just the restore points created during the interval.
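To make the overlap concrete, here is a small Python sketch (not Veeam code, just my reading of the behavior described above) that prints the selection window each daily interval seems to use, i.e. from 24 hours before the interval start until the interval end:
Code:
from datetime import datetime, timedelta

LOOKBACK = timedelta(hours=24)   # source sessions appear to be searched from interval start - 24h
INTERVAL = timedelta(hours=24)   # daily periodic copy interval

def selection_window(interval_start):
    # Window in which source job sessions seem to be considered for the
    # interval starting at interval_start (my assumption based on the behavior above).
    return interval_start - LOOKBACK, interval_start + INTERVAL

monday = datetime(2023, 1, 9, 15, 0)      # Monday 03:00 PM
tuesday = monday + timedelta(days=1)      # Tuesday 03:00 PM

for name, start in (("Mon -> Tue", monday), ("Tue -> Wed", tuesday)):
    low, high = selection_window(start)
    print(f"{name}: source jobs started between {low} and {high}")

# Both windows contain Monday 03:00 PM - Tuesday 03:00 PM, which is the overlap.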
Sorry if this is a little bit confusing, but I'm trying to get my head around this.
-
- Product Manager
- Posts: 14808
- Liked: 3068 times
- Joined: Sep 01, 2014 11:46 am
- Full Name: Hannes Kasparick
- Location: Austria
- Contact:
Re: Overlapping restore point selection windows on periodic backup copy
Hello,
I'm not sure I could follow, but I'll try to answer. If the post had sections describing the settings for each job, followed by the problem / question, it would have been easier to understand.
> we want to avoid synthetic GFS fulls for this repository
That seems to be the job to the dedupe appliance... at least for small scale, these boxes should be "fast enough" with synthetic fulls (which appliance do you have and which protocol do you use?). That would reduce load on the source repository.
> The idea is to always just copy the restore points created earlier the same day.
Periodic copy only copies the latest restore point (if multiple restore points have been created, it takes only the data needed for the last one).
> I think the reason for the job failing is that in this interval data transfer is only allowed until 12:00 AM, when no restore points exist yet which could be copied.
"Failed" sounds wrong to me. I remember a warning if no new restore points have been created (Sunday and Monday). Do you maybe have the error message and a support case number where it was investigated? Please post the case number for reference.
Best regards,
Hannes
-
- Novice
- Posts: 9
- Liked: never
- Joined: Jan 09, 2023 8:58 am
- Contact:
Re: Overlapping restore point selection windows on periodic backup copy
Hello,
thank you for your effort
> If the post had sections describing the settings for each job, followed by the problem / question, it would have been easier to understand.
Mainly I talk about two jobs:
Initial backup job to fast storage
- Running Tuesday to Saturday at 01:00 AM
- Create active fulls on Saturday
Backup copy job to dedupe appliance
- Running daily at 03:00 PM
- Blackout windows from Tuesday to Saturday from 12:00 AM to 03:00 PM
> which appliance do you have and which protocol do you use?
We use an HPE StoreOnce over CoFC in low bandwidth mode, which should be capable of creating synthetic fulls. The reason for copying active fulls is kind of historical since we are new to Veeam and haven't used synthetic fulls yet.
We plan to keep certain backups for 5+ years on this device. Since we haven't implemented automatic backup validation (e.g. SureBackup) yet, we just feel a little safer with active fulls.
> Please post the case number for reference.
We haven't opened a case yet because I think it's my own lack of understanding of periodic copies.
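To make the schedule easier to follow, here are the two jobs and the blackout window as a small Python sketch (just a restatement of the settings above, not Veeam code):
Code:
from datetime import time

# Initial backup job to fast storage (settings from above)
backup_days = ["Tue", "Wed", "Thu", "Fri", "Sat"]   # active full on Saturday
backup_start = time(1, 0)                           # 01:00 AM

# Backup copy job to dedupe appliance
copy_interval_start = time(15, 0)                   # starts daily at 03:00 PM
blackout_days = ["Tue", "Wed", "Thu", "Fri", "Sat"]
blackout = (time(0, 0), time(15, 0))                # no data transfer 12:00 AM - 03:00 PM

def transfer_allowed(day, t):
    # True if the copy job may transfer data at the given weekday and time.
    return not (day in blackout_days and blackout[0] <= t < blackout[1])

print(transfer_allowed("Tue", time(1, 30)))    # False: inside the blackout window
print(transfer_allowed("Tue", time(15, 30)))   # True: after 03:00 PM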
Here are two log file snippets. The first is from a copy interval Sunday (01/08/2023) to Monday (01/09/2023):
Code:
[09.01.2023 15:00:29] <01> Info [TasksFinalizer] Checking if unprocessed VM '***' task session should be failed
[09.01.2023 15:00:30] <01> Info [TasksFinalizer] Searching source job sessions since '07.01.2023 15:00:00'
[09.01.2023 15:00:30] <01> Info [TasksFinalizer] Last time source job was started at '07.01.2023 03:30:12'
[09.01.2023 15:00:30] <01> Info [TasksFinalizer] No appropriate source job session was found - session should not be failed
The second is from a copy interval Monday (01/09/2023) to Tuesday (01/10/2023):
Code:
[10.01.2023 15:00:03] <01> Info [TasksFinalizer] Checking if unprocessed VM '***' task session should be failed
[10.01.2023 15:00:03] <01> Info [TasksFinalizer] Searching source job sessions since '08.01.2023 15:00:00'
[10.01.2023 15:00:03] <01> Info [TasksFinalizer] Last time source job was started at '10.01.2023 01:00:22'
[10.01.2023 15:00:03] <01> Info [TasksFinalizer] Checking if any target Oibs for entry '***' since '10.01.2023 01:00:22'
[10.01.2023 15:00:03] <01> Info [TasksFinalizer] No Oibs were found
[10.01.2023 15:00:03] <01> Info [TasksFinalizer] No target Oibs since '10.01.2023 01:00:22'. Source job started but nothing was copied to target - session should be failed
The job looks 24 hours back from the interval starting point (01/08/2023 03:00 PM - 01/09/2023 03:00 PM), which I expected, but it also checks the interval itself (01/09/2023 03:00 PM - 01/10/2023 03:00 PM). I guess the latter leads to the failed status, since there are new backups (created 01/10/2023 01:00 AM) which cannot be copied because of the intentional blackout window.
Now the overlapping restore point selection windows come into play: the backups from 01/10/2023 01:00 AM should only be copied in the (next) Tuesday to Wednesday interval. The mentioned new restore points are actually copied in this next interval, but we want to avoid the failed state of the previous job.
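My reading of that check, written out as a small Python sketch (this is only how I interpret the log lines for a VM that was not processed during the interval, not how Veeam actually implements it):
Code:
from datetime import datetime, timedelta

def should_fail(interval_end, last_source_start, last_copied_oib):
    # TasksFinalizer decision as I read it from the logs: for an unprocessed VM,
    # search source sessions since interval_end - 48h; if one started and nothing
    # newer was copied to the target, the session should be failed.
    search_since = interval_end - timedelta(hours=48)
    if last_source_start is None or last_source_start < search_since:
        return False   # "No appropriate source job session was found"
    # "Checking if any target Oibs ... since" the last source job start
    return last_copied_oib is None or last_copied_oib < last_source_start

# Interval Sunday 3 PM -> Monday 3 PM: source last ran Saturday 03:30 -> not failed
print(should_fail(datetime(2023, 1, 9, 15, 0), datetime(2023, 1, 7, 3, 30), None))

# Interval Monday 3 PM -> Tuesday 3 PM: source ran Tuesday 01:00, nothing copied -> failed
print(should_fail(datetime(2023, 1, 10, 15, 0), datetime(2023, 1, 10, 1, 0), None))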
Regards,
Gabor
-
- Product Manager
- Posts: 14808
- Liked: 3068 times
- Joined: Sep 01, 2014 11:46 am
- Full Name: Hannes Kasparick
- Location: Austria
- Contact:
Re: Overlapping restore point selection windows on periodic backup copy
Hello,
> Mainly I talk about two jobs:
I have the feeling that it is about three jobs. But I don't understand why three jobs are needed with only two devices.
StoreOnce with Catalyst usually works with synthetic fulls. IP / Ethernet is generally faster than Fibre Channel for StoreOnce. If you do an "active full", then this "active full" is read from the fast backup target. It's about the "Read the entire restore point from source instead of synthesizing it from increments" option in the backup copy job settings. It can be faster than synthetic fulls, but it creates more load on the primary backup storage.
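As a rough illustration of the trade-off, with toy numbers in Python (only to show which storage gets read for the GFS full):
Code:
# Toy comparison, made-up sizes: the point is only where the read load lands.
full_size_gb = 2000            # size of one full restore point

# "Read the entire restore point from source": the whole full is read from
# the primary backup repository during the copy.
active_style = {"read from primary": full_size_gb, "built on target from": 0}

# Synthetic full: the full is synthesized from restore points that are already
# on the copy target, so the primary repository is not read for the full itself.
synthetic = {"read from primary": 0, "built on target from": full_size_gb}

print("read entire point from source:", active_style)
print("synthetic full on target:     ", synthetic)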
> We haven't opened a case yet because I think it's my own lack of understanding of periodic copies.
The configuration of this job is missing, but the error looks logical to me. I just had in mind that it should be a warning if nothing was found.
Overall, I have the feeling that the whole thing could be simplified. One backup copy job should be enough. Could you maybe tell us what you are trying to achieve?
Best regards,
Hannes
-
- Novice
- Posts: 9
- Liked: never
- Joined: Jan 09, 2023 8:58 am
- Contact:
Re: Overlapping restore point selection windows on periodic backup copy
Hello Hannes,
> I have the feeling that it is about three jobs.
I tried to focus this topic on the behavior of the periodic copy job. We have 4 devices and 3 locations:
- The primary backup target is fast but has less space (short retention). This is to keep the backup window (the time of load on the source) low.
- At the same location there is a slower ReFS storage with more space (longer retention with GFS). We copy immediately to this repository using synthetic fulls.
- At the second location is the StoreOnce appliance (longer retention with GFS). This is the periodic copy, which we want to run outside of the backup window to keep the load on the primary backup target low during the initial backup.
- There is also a tape copy job which runs after the primary backup. The tapes are kept "offline" at the third location.
> It's about the "Read the entire restore point from source instead of synthesizing it from increments" option in the backup copy job settings.
This is what we want to use: fulls for GFS backups on the StoreOnce. We can afford the load on the primary backup device since it is (usually) idle at this time.
> the configuration of this job is missing
I described the configuration in my last post ("Backup copy job to dedupe appliance"). I forgot to mention that we chose the initial backup job as the source and selected "Read the entire restore point from source instead of synthesizing it from increments" for GFS.
> I just had in mind that it should be a warning if nothing was found.
The (daily) periodic copy job succeeds if there is no new source restore point (i.e. nothing to copy) and fails if there are new restore points which can't be copied. This behavior would be fine for us, but only if the restore point selection were based on a 24-hour time window. Looking at the logs, there is a 48-hour time window (job start minus 24 hours until job end).
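To show the difference between the two selection rules with our times (again just an illustrative Python sketch, not product behavior):
Code:
from datetime import datetime, timedelta

interval_start = datetime(2023, 1, 9, 15, 0)       # Monday 03:00 PM
interval_end = interval_start + timedelta(days=1)  # Tuesday 03:00 PM
new_restore_point = datetime(2023, 1, 10, 1, 0)    # created Tuesday 01:00 AM

# The rule I expected: only points from the 24 hours before the interval start.
expected_rule = interval_start - timedelta(hours=24) <= new_restore_point < interval_start

# The rule I see in the logs: anything from interval start - 24h up to the interval end.
observed_rule = interval_end - timedelta(hours=48) <= new_restore_point < interval_end

print("considered under the 24h-before-start rule:", expected_rule)    # False -> no failure
print("considered under the 48h window from the logs:", observed_rule) # True -> failure,
# because the blackout window blocks the transfer until the interval ends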
Regards,
Gabor
-
- Product Manager
- Posts: 14808
- Liked: 3068 times
- Joined: Sep 01, 2014 11:46 am
- Full Name: Hannes Kasparick
- Location: Austria
- Contact:
Re: Overlapping restore point selection windows on periodic backup copy
Hello,
okay, now I have a better understanding, but still no idea why it happens.
> fails if there are new restore points which can't be copied
At this point, I would recommend asking support why it can't be copied. I would send support the settings and see what they say. Please post the case number for reference.
Best regards,
Hannes
-
- Novice
- Posts: 9
- Liked: never
- Joined: Jan 09, 2023 8:58 am
- Contact:
Re: Overlapping restore point selection windows on periodic backup copy
Hello,
> I would recommend asking support why it can't be copied
They cannot be copied because at the time the source restore points are created there is a blackout window (the job cannot transfer data during this time) which lasts until the copy interval ends. This is intentional and would be no problem if only the source restore points created within 24 hours before the job start were considered.
I'll open a case and keep this topic updated.
Regards,
Gabor
-
- Product Manager
- Posts: 14808
- Liked: 3068 times
- Joined: Sep 01, 2014 11:46 am
- Full Name: Hannes Kasparick
- Location: Austria
- Contact:
Re: Overlapping restore point selection windows on periodic backup copy
> if only the source restore points created within 24 hours before the job start were considered
The periodic backup copy job should consider everything that has not been copied yet. We have customers running periodic jobs only weekly or monthly. Yes, support should check why it does not work.
-
- Novice
- Posts: 9
- Liked: never
- Joined: Jan 09, 2023 8:58 am
- Contact:
Re: Overlapping restore point selection windows on periodic backup copy
For reference, I'll try to simplify and illustrate the problem.
Blackout windows are the time windows in the copy job configuration during which data transfer is prohibited.
First the working copy interval:
Code:
| 12 am | 06 am | 12 pm | 06 pm | 12 am | 06 am | 12 pm | 06 pm |
=================================================================
|<--- Backup -->|               |<--- Backup -->|
-----------------------------------------------------------------
                    |<------- Copy interval ------->|<- Copy int. ...
|<-- Blackout ->|               |<-- Blackout ->|
- When the copy interval starts, the new restore points (from the backup at 12:00 AM) are detected and copied. The job immediately succeeds after that copy.
- After that the job waits for the next copy interval and ignores the new restore points created during the copy interval (independent of the blackout window).
Now the failing copy interval:
Code:
| 12 am | 06 am | 12 pm | 06 pm | 12 am | 06 am | 12 pm | 06 pm |
=================================================================
                                |<--- Backup -->|
-----------------------------------------------------------------
                    |<------- Copy interval ------->|<- Copy int. ...
|<-- Blackout ->|               |<-- Blackout ->|
- Here the copy job detects no restore points to copy (since there was no source backup job running) and waits for new restore points.
- When the source backup job runs during the copy interval, the copy job wants to copy these restore points.
- Since there is now a blackout window till the end of the copy interval, the restore points cannot be copied. The job fails.
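And the same two cases as a small Python sketch of my mental model (not how Veeam implements it):
Code:
from datetime import datetime, timedelta

def run_interval(start, source_points):
    # One daily copy interval as I picture it: points that already exist at the
    # interval start are copied right away; points created later are left for
    # the next interval; if nothing was copied but the source ran within 24h
    # before the start or during the interval (blocked here by the blackout
    # window), the session ends as failed.
    end = start + timedelta(days=1)
    copied = [rp for rp in source_points if rp < start]
    if copied:
        return "success, copied: " + ", ".join(str(rp) for rp in copied)
    recent = [rp for rp in source_points if start - timedelta(hours=24) <= rp < end]
    return "failed (new point could not be copied)" if recent else "success (nothing to copy)"

copy_start = datetime(2023, 1, 9, 15, 0)      # 03:00 PM on day 1 of the pictures
backup_day1 = datetime(2023, 1, 9, 1, 0)      # early-morning backup on day 1
backup_day2 = datetime(2023, 1, 10, 1, 0)     # early-morning backup on day 2

# Working interval (first picture): a backup already exists when the interval starts.
print(run_interval(copy_start, [backup_day1, backup_day2]))

# Failing interval (second picture): nothing at the start, the backup only runs
# during the interval while the blackout lasts until the interval ends.
print(run_interval(copy_start, [backup_day2]))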
Regards,
Gabor
-
- Product Manager
- Posts: 14686
- Liked: 1693 times
- Joined: Feb 04, 2013 2:07 pm
- Full Name: Dmitry Popov
- Location: Prague
- Contact:
Re: Overlapping restore point selection windows on periodic backup copy
Hello Gabor,
Mind me asking if you've tried the mirror mode instead of the periodic copy? It also allows you to set up the backup windows but instead of relying on intervals it will copy the restore point as soon as it's created.
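Roughly the difference in what triggers the copy, as a conceptual Python sketch (not product code):
Code:
from datetime import datetime

restore_point_created = datetime(2023, 1, 10, 1, 0)   # e.g. Tuesday 01:00 AM

# Periodic mode: the copy waits for the next interval start (03:00 PM here).
periodic_copy_starts = datetime(2023, 1, 10, 15, 0)

# Mirror (immediate) mode: the copy is triggered by the restore point itself;
# a configured backup window only delays the transfer, it is not tied to an interval.
mirror_copy_starts = restore_point_created

print("periodic copy would start:", periodic_copy_starts)
print("mirror copy would start:  ", mirror_copy_starts)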
-
- Novice
- Posts: 9
- Liked: never
- Joined: Jan 09, 2023 8:58 am
- Contact:
Re: Overlapping restore point selection windows on periodic backup copy
Hello Dima,
Thank you for your suggestion.
We are already doing immediate copies to a ReFS repository at another location, which works well.
The StoreOnce device is intended to keep GFS backups for 5+ years, and we would like to keep active fulls. With immediate copy there are only synthetic fulls.
Regards,
Gabor
-
- Product Manager
- Posts: 14808
- Liked: 3068 times
- Joined: Sep 01, 2014 11:46 am
- Full Name: Hannes Kasparick
- Location: Austria
- Contact:
Re: Overlapping restore point selection windows on periodic backup copy
> With immediate copy there are only synthetic fulls.
That is solved in V12 (soon to be released), if that helps.
-
- Novice
- Posts: 9
- Liked: never
- Joined: Jan 09, 2023 8:58 am
- Contact:
Re: Overlapping restore point selection windows on periodic backup copy
> That is solved in V12 (soon to be released), if that helps.
That is great news!
Using blackout windows in immediate copy as Dima suggested, we could achieve the desired full copies and also have the ability to copy multiple restore points (if at any time more than one restore point is created between the copy windows).
So we are looking forward to V12.