Discussions related to using object storage as a backup target.
george@itb
Influencer
Posts: 14
Liked: never
Joined: Apr 16, 2019 9:39 pm
Full Name: George Lavrov
Contact:

How to manually trigger GFS copy to Archive Tier?

Post by george@itb »

Hi Folks,
could you advise on the steps, or point me to documentation, for manually triggering a copy from the Capacity Tier to the Archive Tier for a job with GFS configured? We need to test the Azure Archive Tier process before committing to a production-ready configuration. So far the job has been copied from the Performance Tier to the Capacity Tier, but we don't see any moves to the Archive Tier. We have set the "Archive backup files older than N days" field to 0.
Thanks!
Geo
veremin
Product Manager
Posts: 20284
Liked: 2258 times
Joined: Oct 26, 2012 3:28 pm
Full Name: Vladimir Eremin
Contact:

Re: How to manually trigger GFS copy to Archive Tier?

Post by veremin »

I'm wondering whether it was actually a GFS restore point that got copied to the Capacity Tier, rather than a simple full or incremental backup. If you are sure about the type of restore point and it is still not transferred to the Archive Tier during the nearest archiving session, then open a ticket with our support team for further investigation.

Thanks!

Post by george@itb »

Only two active fulls were made for that job, for a single VM. No incrementals, and no schedules enabled. The Capacity Tier has checkmarks for both the Copy and Move options. The copy worked each time, and that is where it stops. OK, I'll open a support case and update the forum later. But could anyone explain the definition of the "nearest" archiving session? How is it triggered, when does it run, and can it be triggered manually? Thanks

Post by veremin »

You can open the backup properties and check whether any of the existing restore points is marked with the GFS flag. If none are, then the situation is expected, as only GFS restore points are offloaded to the Archive Tier.

Post by george@itb »

The retention column is missing from the backup properties view, so I assume the backups were not marked as GFS, even though the GFS setting is defined for the job. Changing the weekly GFS day from Sunday to Thursday did not mark the existing two active fulls with GFS flags. So the question remains: how do users "force" a test of the full Performance Tier > Capacity Tier > Archive Tier migration with a single backup job (or a limited number) in a short time window, say a single session? The goal is to present the customer or management with a validated, working configuration once it has been created. Thanks.

Post by veremin »

Currently there is no manual archiving process available. Even if there were, it would only work for GFS restore points and standalone full backups (VeeamZIP and exported backups).

You still need to configure the job properly, so that it creates GFS restore points correctly. After that, the point will be moved to the Archive Tier during the closest offload session.

Thanks!

Post by george@itb »

The GFS policy is set in only one place in the job, so what do you mean by "properly"? It is configured to the letter, following the Veeam documentation. I suspect, and hopefully support can confirm it, that Veeam does not mark the first full backups with GFS flags. Looking at https://helpcenter.veeam.com/docs/backu ... ml?ver=110, it seems a cycle of jobs needs to run before the first weekly flag is assigned. If that is the case, it does not leave much room for validating the Archive Tier during deployment, and it raises unnecessary costs for the customer.

Is there any documentation or explanation defining the "closest offload session"? Basically, once the GFS flag is set, when does the offload session start?

Finally, if a "cycle" of jobs must complete before data is GFS-flagged, this raises the question of how to archive VMs that are not part of the scheduled backup flow: machines that need to be removed from production but that the business requires to be archived.
Thanks.

Post by veremin »

The easiest way to confirm this is to:

- Create a SOBR
- Configure a Capacity Tier Copy policy for it
- Configure an Archive Tier with archive period equal to 0
- Create a VeeamZIP backup on the SOBR

As soon as the backup is created, it should be copied to Capacity Tier and then offloaded to Archive Tier.

Thanks!

Post by george@itb »

That's exactly how it was configured on our end. The only problem is that the first two active fulls did not receive GFS flags; another run today finally got the weekly flag.

The archive offloading process seems to execute every 4 hours, judging by the last-24-hours job records. Restarting the Veeam B&R services does not re-trigger the offloading job. Still no answer from the Veeam support team, even though the logs were uploaded yesterday.

Folks, it would be very helpful if:
a) When the GFS policy is set on a job, the first active full automatically received at least a weekly flag, regardless of the weekday it runs on, especially when "Archive backup files older than 0 days" is set. Just think about it: most configuration work happens on weekdays, while most full backup jobs run on weekends.
b) The Archive Tier documentation explained how and when offloading is triggered. If it is every 4 hours, how do you reset the counter?
c) The VeeamZIP option were explicitly documented as a way to validate a configuration, if the flag change requested in a) is not feasible in the near future.

It is important to understand that not all customers manage their own cloud resources or Veeam deployments. Bringing in another consulting resource to prepare the archiving setup and then waiting an unknown amount of time to validate it drives up unnecessary costs. Thanks.

Post by veremin »

Thank you for the feedback; we will consider your proposals going forward. For now, the VeeamZIP option still seems to be the easiest way to confirm the behaviour.

Also, you can shorten the offload schedule using this registry value:

Code:

SOBRArchivingScanPeriod (the scan period in hours; 4 is the default)
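This value is set on the backup server itself. The hive path below is an assumption based on where Veeam B&R typically keeps its registry values, so verify it against your installation before applying; a sketch, run from an elevated prompt:

```shell
# Assumed path; confirm against your Veeam B&R deployment before running.
# SOBRArchivingScanPeriod is read as a number of hours (default 4); here it is set to 1.
reg add "HKLM\SOFTWARE\Veeam\Veeam Backup and Replication" /v SOBRArchivingScanPeriod /t REG_DWORD /d 1 /f
```

A restart of the Veeam Backup Service is typically needed before such a change takes effect.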
Gostev
Chief Product Officer
Posts: 31561
Liked: 6725 times
Joined: Jan 01, 2006 1:01 am
Location: Baar, Switzerland
Contact:


Post by Gostev » 1 person likes this post

george@itb wrote (Mar 18, 2022 8:27 pm): "how to trigger the reset of the counter?"

Ctrl+right-click the scale-out backup repository and select "Run Tiering Job Now".

Post by george@itb »

Thanks, gents, both tips are really helpful to know. Those should be part of the documentation related to object storage and tiering, or just drop the "Ctrl" from the right mouse click :)

One last question on this subject: is there a way (via a registry hack or any other means, besides creating separate jobs) to filter incremental backups out of the staging process to object storage under a GFS policy? Here is why: the customer uses an Azure Cool bucket as the Capacity Tier for cost-saving reasons. Data in the Cool tier needs to reside there for 30 days to avoid the early-deletion fee. For such a configuration (Cool tier and a flexible RPO), it makes much more business sense to store only weekly fulls in the Capacity Tier than to accumulate 30 days' worth of incremental backup copies.
It becomes counter-productive: on one hand we are trying to save on Azure Cool versus Hot storage costs, but then we are forced to bump up capacity usage significantly to avoid the early-deletion penalty.
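To make the early-deletion cost concrete (a sketch, assuming Azure's usual proration, where a Cool blob deleted before day 30 is billed as if it had been stored for the remaining days; the 10-day figure below is purely illustrative):

```shell
# Hypothetical: an incremental copy deleted after 10 days in the Cool tier.
min_retention_days=30   # Cool tier early-deletion threshold mentioned above
days_stored=10
# The fee is prorated over the days remaining until the threshold.
penalty_days=$((min_retention_days - days_stored))
echo "billed for $penalty_days additional days of storage"
```

So a short-lived incremental still pays for most of its 30 days either way, which is the crux of the trade-off described above.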

If there is no such option, it would be really nice to have this feature as part of the "Copy" option for the scale-out repo, similar to the "Override" option next to "Move": in the "Copy" section, an "Override" could specify, for example, "do not copy incrementals past X blocks in the chain".

Thanks a lot in advance!

Post by george@itb »

Gostev, just FYI: selecting "Run Tiering Job Now" did not trigger a copy to the Archive Tier in our tests; it ran only against the Capacity Tier. The Capacity Tier had a new full backup with the "W" retention flag, but the invoked process reported a move only against the Capacity Tier.

Post by veremin »

If you open the History view and locate the Storage Management node there, do you see two sessions (offload and archiving), or just one? If not both, then proceed with opening a support ticket, as the current behaviour does not seem to be expected. Thanks!

Post by Gostev »

george@itb wrote (Mar 25, 2022 8:05 pm): "Its makes much more business sense to be able to store only Weekly Fulls in the Capacity Tier for such configuration (cool tier and flexible RPO) than accumulating 30 days worth of incremental backup copies."
Remember that a scale-out backup repository is still just a backup repository, even if a smarter one that can create multiple copies of restore points for redundancy, tier them depending on their age, and so on. But in the end it is just storage that holds whatever data a backup job sends to it, without ANY modification (like pruning). It is always up to the backup job to control what exactly is sent, how often, and how long it should be stored, while the SOBR has no say but to accept and store all incoming restore points according to its tiering and redundancy policies. Anything different would result in a huge data-management mess that is exceptionally hard to make robust, because multiple entities would be managing the lifecycle of the same data.

A good way to think of a SOBR is as good old RAID1. Imagine asking a RAID vendor for an option to "not copy Veeam incrementals past X blocks in the chain" to the second drive. Conceptually, it makes about as much sense as asking a SOBR for the same! If you want to store only weekly fulls in your RAID1, or in your SOBR, you should create a job that only produces weekly fulls in the first place.

Having said that, the desire to store a copy of certain specific backups (like weekly GFS) somewhere separate, often even with a different retention, is of course perfectly valid. Most of our customers have been doing this all along using Backup Copy jobs (for disk and cloud targets) and Backup to Tape jobs (for tape targets). These jobs allow you to pick and choose which restore points are copied and which are not, how long they are kept, and so on. The only problem is that Backup Copy jobs cannot be pointed at object storage yet, but V12 adds this capability.
george@itb wrote (Mar 25, 2022 8:05 pm): "Those should be part of the documentation related to object storage and tiering or just remove "CTRL" from the right mouse click :)"
To be honest, it is only there for QC/demo purposes, thus hidden and not documented. Again, in normal life, with typical offload window values of weeks or even months, there should be no need to trigger the offload ASAP. Taking a 30-day window as an example, we are talking about the difference between 720 hours and 724 hours (worst-case scenario) after the backup file has been created before the offload starts automatically, and 722 hours on average. So why would anyone monitor this closely for one particular backup file, just to open the UI at the 721st hour since its creation and trigger the offload manually, minutes before it would start automatically anyway?
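These figures can be double-checked with a few lines of shell arithmetic, using the 30-day window and the default 4-hour scan period from the thread:

```shell
# 30-day offload window expressed in hours, with the default 4-hour archiving scan.
window_hours=$((30 * 24))                   # 720 hours until the file is eligible
scan_period=4
worst_case=$((window_hours + scan_period))  # file just misses a scan: 724 hours
average=$((window_hours + scan_period / 2)) # waits half a scan period on average: 722
echo "eligible=$window_hours worst=$worst_case average=$average"
```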