-
- Service Provider
- Posts: 47
- Liked: 2 times
- Joined: Oct 26, 2017 11:22 am
- Full Name: Victor
- Contact:
9.5 Update 4 and Amazon S3
Hi,
I have been given early access to Update 4 and have been testing out S3 from a Scality on-prem.
First of all, I don't like the restriction that you must have a Scale-Out Backup Repository and cannot copy to it directly.
The design, in this case, is GFS and spreads the first selection between two sites.
There are two big backup proxies that act as repositories as well.
They handle all backups in the initial step, but archiving isn't a part of their design.
The archive repository is the Scality.
This is where the scale-out is a big problem: now I have to use the backup proxies as staging for all archive backups, which is a lot of data, instead of sending it directly.
With that design, having a performance tier as staging for archiving gives double backups on sites whenever the archive job is a different selection than the first job.
So you understand, the first selection is based on storage/LUNs.
So, one job per LUN.
The second selection, for archiving, is based on the customer's choice of whether they want longer retention.
That is at the VM level.
Are there any plans in the future to include an S3 repository as a regular repository that you can Backup Copy to?
If not, can you create a feature request for that?
Secondly, is there any reg key for how often the offloading process runs?
From what I can see in my test, it is running every 4 hours.
I want it to check more often so that only a small portion of the data sits on the performance tier.
One more question regarding the override option on the scale-out.
How often does it check disk space and the number of days a backup has been on disk?
Best regards!
Victor
-
- Veeam Software
- Posts: 742
- Liked: 209 times
- Joined: Jan 14, 2016 6:48 am
- Full Name: Anthony Spiteri
- Location: Perth, Australia
- Contact:
Re: Been testing out Update 4 and S3, some questions
Hey there Victor.
The Cloud Tier as it appears in Update 4 is a way to move data from more expensive storage to relatively cheaper storage, based on policies set within the properties of a Scale-Out Backup Repository. As the name suggests, it is a tiering feature and not an archival one, which is where your questions are coming from.
That said, I'll address the easier part of your post first: the offloading process.
There is a manual way to force the job from the UI. If you control-click on the SOBR name, there is an option to "Run Tiering Job Now". This will run the job on demand.
There is also a PowerShell command that you can run to achieve the same result:
Code:
Start-VBRCapacityTierSync -Repository SOBRNAME
You could obviously run this as a scheduled task if desired (see the sketch below).
In terms of the question around how often we check for the override option, we are tracking that in the database and it's worked out based on the known percentage of remaining space on the extents, but I am seeking further clarification on the mechanisms used. Stand by for that.
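As a minimal sketch of the scheduled-task idea: the snippet below registers a Windows scheduled task that runs the sync command hourly. The repository name ("MySOBR"), the script path, and the 1-hour interval are assumptions for illustration; only the Start-VBRCapacityTierSync call itself is taken from the post above.
Code:
# Sketch only. Assumes a helper script at C:\Scripts\Sync-CapacityTier.ps1 containing:
#   Add-PSSnapin VeeamPSSnapin
#   Start-VBRCapacityTierSync -Repository MySOBR
# Register a task that runs the script every hour (older Windows versions may also
# require -RepetitionDuration on the trigger).
$action  = New-ScheduledTaskAction -Execute 'powershell.exe' `
           -Argument '-NoProfile -ExecutionPolicy Bypass -File C:\Scripts\Sync-CapacityTier.ps1'
$trigger = New-ScheduledTaskTrigger -Once -At (Get-Date) `
           -RepetitionInterval (New-TimeSpan -Hours 1)
Register-ScheduledTask -TaskName 'Veeam-CapacityTierSync' -Action $action -Trigger $trigger -RunLevel Highest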
Anthony Spiteri
Regional CTO APJ & Lead Cloud and Service Provider Technologist
Email: anthony.spiteri@veeam.com
Twitter: @anthonyspiteri
-
- Service Provider
- Posts: 47
- Liked: 2 times
- Joined: Oct 26, 2017 11:22 am
- Full Name: Victor
- Contact:
Re: Been testing out Update 4 and S3, some questions
Hi Anthony,
Okay, so regarding using S3 directly for Backup Copy jobs, there are no plans for that?
Because, as you probably understand, S3 storage in our case is the archival storage that we want to Backup Copy directly to.
Thanks for the PowerShell command.
Looking forward to hearing from you regarding the options.
Best Regards!
Victor
-
- Chief Product Officer
- Posts: 31836
- Liked: 7328 times
- Joined: Jan 01, 2006 1:01 am
- Location: Baar, Switzerland
- Contact:
Re: Been testing out Update 4 and S3, some questions
Hi, Victor - something very similar to this is in the plans, just much simplified (no dealing with Backup Copy jobs).
Basically, in Update 4 there is only the "move" mode available for the Capacity Tier, where the oldest backups are moved from the Performance Tier to the Capacity Tier. However, in the next update we're also planning to add a "copy" mode, where ALL backup files created in the Performance Tier will be copied to the Capacity Tier as soon as they appear.
This is actually the reason behind the current UI design, where there's a "move" check box that you cannot uncheck. This does not make much sense today, but it's done in preparation for the final look of the corresponding wizard step, when both move and copy options will be available, with any combination supported.
Thanks!
-
- Service Provider
- Posts: 47
- Liked: 2 times
- Joined: Oct 26, 2017 11:22 am
- Full Name: Victor
- Contact:
Re: Been testing out Update 4 and S3, some questions
Hi Gostev,
Very nice to hear.
Do you have any ETA for that update?
Thanks a lot for the quick responses!
Best regards!
Victor
-
- Chief Product Officer
- Posts: 31836
- Liked: 7328 times
- Joined: Jan 01, 2006 1:01 am
- Location: Baar, Switzerland
- Contact:
Re: Been testing out Update 4 and S3, some questions
Sometime in H2 2019, perhaps... we're just too early in the release cycle to estimate.
-
- Novice
- Posts: 8
- Liked: 2 times
- Joined: Mar 20, 2018 7:51 pm
- Full Name: Ryan Walker
- Contact:
Re: Been testing out Update 4 and S3, some questions
Gostev wrote: ↑Jan 08, 2019 3:45 pm
Hi, Victor - something very similar to this is in the plans, just much simplified (no dealing with Backup Copy jobs).
Basically, in Update 4 there is only the "move" mode available for the Capacity Tier, where the oldest backups are moved from the Performance Tier to the Capacity Tier. However, in the next update we're also planning to add a "copy" mode, where ALL backup files created in the Performance Tier will be copied to the Capacity Tier as soon as they appear.
Good to know! This is actually exactly what we'd be looking for ourselves (the way it is), as we will have two off-site copies: one in a private data center/cloud repository, and then once it ages to X it should be moved fully up to a public cloud (S3 is our current choice) for long-term retention.
-
- Chief Product Officer
- Posts: 31836
- Liked: 7328 times
- Joined: Jan 01, 2006 1:01 am
- Location: Baar, Switzerland
- Contact:
Re: Been testing out Update 4 and S3, some questions
Yes, that's the plan - and is exactly how it will work if you have both "copy" and "move" check boxes selected at the same time.
-
- Service Provider
- Posts: 42
- Liked: 32 times
- Joined: Aug 07, 2017 11:51 am
- Full Name: William
- Location: Zurich, Switzerland
- Contact:
Re: Been testing out Update 4 and S3, some questions
Hi Anton,
Gostev wrote: ↑Jan 08, 2019 3:45 pm
Hi, Victor - something very similar to this is in the plans, just much simplified (no dealing with Backup Copy jobs).
Basically, in Update 4 there is only the "move" mode available for the Capacity Tier, where the oldest backups are moved from the Performance Tier to the Capacity Tier. However, in the next update we're also planning to add a "copy" mode, where ALL backup files created in the Performance Tier will be copied to the Capacity Tier as soon as they appear.
This is actually the reason behind the current UI design, where there's a "move" check box that you cannot uncheck. This does not make much sense today, but it's done in preparation for the final look of the corresponding wizard step, when both move and copy options will be available, with any combination supported.
Thanks!
Good to hear that this is on the roadmap. I have a lot of customers that are looking for a way to have a copy on an object store (Ceph, Cloudian). Their issue is not addressed with U4.
It would make customers' lives much simpler if they could use S3 the way they use tape today to have an offsite copy (and, of course, in an automated way).
Looking forward to the feature.
Doron
-
- Enthusiast
- Posts: 52
- Liked: never
- Joined: Oct 28, 2015 9:36 pm
- Full Name: Joe Brancaleone
- Contact:
Re: Been testing out Update 4 and S3, some questions
Question related to this: I installed Update 4 and set up an S3 bucket to test/implement cloud tiering. When configuring the cloud extent to point to the bucket (using keys from a new IAM user set up just for this), it looks like it requires a specific folder in the bucket. However, the bucket folder we created is not browsable from the extent setup. This is peculiar because an ls command from the AWS CLI shows the folder. Is there some additional IAM action needed for the user to make the folder read-writable?
-
- Veeam Software
- Posts: 742
- Liked: 209 times
- Joined: Jan 14, 2016 6:48 am
- Full Name: Anthony Spiteri
- Location: Perth, Australia
- Contact:
Re: Been testing out Update 4 and S3, some questions
Hey there Joe.
You should create the folder as part of the Object Storage Repository setup through the wizard. There are also PowerShell commands to do the same. In terms of what you have seen, that might be related to the way in which we create the folder from the Veeam Backup & Replication console.
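For anyone looking for the PowerShell route mentioned above, here is a rough sketch of adding an Amazon S3 object storage repository. The cmdlet names follow the Veeam PowerShell reference as I understand it, but the access keys, bucket, folder and repository names are placeholders, and exact parameters may differ between versions, so verify against Get-Help on your backup server before relying on it.
Code:
# Sketch only: add an S3 object storage repository via Veeam PowerShell.
# Keys, bucket, folder and repository names below are placeholders.
Add-PSSnapin VeeamPSSnapin

$account = Add-VBRAmazonAccount -AccessKey 'AKIA...' -SecretKey '...'
$connect = Connect-VBRAmazonS3Service -Account $account -RegionType Global -ServiceType CapacityTier
$bucket  = Get-VBRAmazonS3Bucket -Connection $connect -Name 'my-veeam-bucket'
$folder  = New-VBRAmazonS3Folder -Connection $connect -Bucket $bucket -Name 'veeam-offload'
Add-VBRAmazonS3Repository -Connection $connect -AmazonS3Folder $folder -Name 'S3 Capacity Tier'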
Anthony Spiteri
Regional CTO APJ & Lead Cloud and Service Provider Technologist
Email: anthony.spiteri@veeam.com
Twitter: @anthonyspiteri
-
- Enthusiast
- Posts: 52
- Liked: never
- Joined: Oct 28, 2015 9:36 pm
- Full Name: Joe Brancaleone
- Contact:
Re: Been testing out Update 4 and S3, some questions
Ah ok, got it. I was going through the wrong wizard. What is the intended difference between setting up an External Repository for S3 and creating a new repository and selecting Object Storage -> S3 repository?
Also, does it make sense for the IAM user to have a Delete Object permission for the data in the bucket? Does Veeam have the functionality to go in and delete backup data?
-
- Product Manager
- Posts: 20439
- Liked: 2310 times
- Joined: Oct 26, 2012 3:28 pm
- Full Name: Vladimir Eremin
- Contact:
Re: Been testing out Update 4 and S3, some questions
What is the intended difference between setting up an External Repository for S3 and creating a new repository and selecting Object Storage -> S3 repository?
An External Repository is an S3 repository created by Cloud Protection Manager to store long-term backups. Once created and filled with CPM backup data, it can then be added to a backup server for further backup discovery, data recovery, or data offload (to an on-prem repository via a Backup Copy job).
An S3 Object Storage Repository is a capacity extent of a Scale-Out Backup Repository to which backup files get offloaded once they age out of the operational restore window.
Does Veeam have the functionality to go in and delete backup data?
Correct - those files will be deleted once they fall out of the backup job retention period.
Thanks!
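To illustrate the permissions question: since backups are removed once they age out of retention, the IAM user does need delete rights on the bucket contents. The sketch below attaches an inline policy with a commonly used minimal action set via the AWS Tools for PowerShell. The exact actions Veeam requires are documented in the Veeam user guide, so treat this action list, and the user/bucket names, as assumptions to verify.
Code:
# Sketch only: requires the AWS Tools for PowerShell (AWSPowerShell module) and
# AWS credentials with IAM rights already configured for the session.
# User name, bucket name and action list are illustrative placeholders.
$policy = @'
{
  "Version": "2012-10-17",
  "Statement": [
    { "Effect": "Allow",
      "Action": ["s3:ListBucket", "s3:GetBucketLocation"],
      "Resource": "arn:aws:s3:::my-veeam-bucket" },
    { "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
      "Resource": "arn:aws:s3:::my-veeam-bucket/*" }
  ]
}
'@
Write-IAMUserPolicy -UserName 'veeam-offload' -PolicyName 'veeam-s3-offload' -PolicyDocument $policy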
-
- Veteran
- Posts: 291
- Liked: 25 times
- Joined: Mar 23, 2015 8:30 am
- Contact:
Re: Been testing out Update 4 and S3, some questions
If I have configured a SOBR with S3 for older files and I lose my complete on-premises SOBR due to a disaster, am I able to restore any data from the S3 bucket without having a functional SOBR, or are those files useless in that case?
Thx,
Sandsturm
-
- Chief Product Officer
- Posts: 31836
- Liked: 7328 times
- Joined: Jan 01, 2006 1:01 am
- Location: Baar, Switzerland
- Contact:
Re: Been testing out Update 4 and S3, some questions
Yes, you will be able to restore all data from the S3 bucket. You still need a functional SOBR to have backup file shells automatically re-created there before performing the restore, but it can be just a single extent - and you don't need much space since those VBK shells contain metadata only. Basically, in case of a complete disaster, all your needs are covered by installing B&R on your laptop - and you can start performing restores from there.
-
- Veteran
- Posts: 291
- Liked: 25 times
- Joined: Mar 23, 2015 8:30 am
- Contact:
Re: Been testing out Update 4 and S3, some questions
Hi Gostev,
Thanks for your answer, that sounds great. Just for my understanding: are these VBK shells containing the metadata stored on the SOBR AND in the S3 bucket as well? Or how does this process of recreating the VBK shells work - where is this information stored?
thx,
sandsturm
-
- Veeam Legend
- Posts: 945
- Liked: 222 times
- Joined: Jul 19, 2016 8:39 am
- Full Name: Michael
- Location: Rheintal, Austria
- Contact:
Re: Been testing out Update 4 and S3, some questions
sandsturm,
AFAIK, metadata is stored on the SOBR and on the cloud storage, as you already assumed. The shells only contain the metadata, and the metadata points to the location of the data (in this case, the cloud storage). So if your SOBR is running fine, Veeam reads the metadata from the SOBR, sees that the data is in the cloud, and fetches it from there. If your SOBR had a disaster, don't worry, because the metadata has also been transferred to the cloud storage, so you will still be able to do the restore.
Please correct me if I'm wrong, but that's what I've kept in mind from the last VeeamON event.
-
- Chief Product Officer
- Posts: 31836
- Liked: 7328 times
- Joined: Jan 01, 2006 1:01 am
- Location: Baar, Switzerland
- Contact:
Re: Been testing out Update 4 and S3, some questions
Michael, you are spot on.
-
- Veteran
- Posts: 291
- Liked: 25 times
- Joined: Mar 23, 2015 8:30 am
- Contact:
Re: Been testing out Update 4 and S3, some questions
Very well!
Anything else would have surprised me very much with Veeam.
thx,
sandsturm
-
- Veeam Legend
- Posts: 945
- Liked: 222 times
- Joined: Jul 19, 2016 8:39 am
- Full Name: Michael
- Location: Rheintal, Austria
- Contact:
Re: Been testing out Update 4 and S3, some questions
anthonyspiteri79 wrote: ↑Jan 07, 2019 1:28 pm
There is a manual way to force the job from the UI. If you control-click on the SOBR name, there is an option to "Run Tiering Job Now". This will run the job on demand.
There is also a PowerShell command that you can run to achieve the same result:
Code:
Start-VBRCapacityTierSync -Repository SOBRNAME
Hi Anthony,
I'm using the GA build of Update 4 and I've set up a SOBR with a capacity tier (Azure Blob storage). While the PowerShell command works, I'm not able to select the option in the context menu:
Am I missing something?
-
- Product Manager
- Posts: 20439
- Liked: 2310 times
- Joined: Oct 26, 2012 3:28 pm
- Full Name: Vladimir Eremin
- Contact:
Re: Been testing out Update 4 and S3, some questions
Have you pressed ctrl? It should be ctrl+right click.
-
- Veeam Legend
- Posts: 945
- Liked: 222 times
- Joined: Jul 19, 2016 8:39 am
- Full Name: Michael
- Location: Rheintal, Austria
- Contact:
Re: Been testing out Update 4 and S3, some questions
Oh, it seems I misunderstood the word "control-click". Of course it works if I press Ctrl, so thanks for the clarification, Vladimir. BTW: why is this option "hidden"? Wouldn't it make sense to always have it accessible in the context menu?
-
- Product Manager
- Posts: 20439
- Liked: 2310 times
- Joined: Oct 26, 2012 3:28 pm
- Full Name: Vladimir Eremin
- Contact:
Re: Been testing out Update 4 and S3, some questions
We don't believe there is a need to use this option other than on special occasions, since backups are automatically scanned every 4 hours anyway to determine candidates and start the offload to the Capacity Tier. It's hard to imagine someone wanting to continuously trigger this rescan before the next automatic run, other than perhaps for the purpose of doing a demo or POC. Thanks!
-
- Service Provider
- Posts: 47
- Liked: 2 times
- Joined: Oct 26, 2017 11:22 am
- Full Name: Victor
- Contact:
Re: Been testing out Update 4 and S3, some questions
Hi again,
Just got some news that I didn't know about, and it pretty much renders all my designs that back up everything in the first step worthless for adding S3:
"Second, please note that for Backup Copy jobs only fulls with the assigned GFS flag are moved to the capacity tier (for reference, please check the following User Guide page: https://helpcenter.veeam.com/docs/backu ... l?ver=95u4, 'Inactive Backup Chain for Backup Copy Job' section)."
This means that, for a design where you back up everything with storage snapshots and spread all backups across two sites,
and you then want to spread some backups/VMs (think customers that pay more) out to a third site with S3 storage,
you can only send weekly backups to that third site.
Not daily, which means that you will have 6 days with no backup on S3.
How do you consider that a good design, and what led to that decision?
In my opinion, S3 is the new storage that is going to explode in on-prem and cloud implementations in the coming years.
So having a good design for it is very important.
Best Regards!
-
- Chief Product Officer
- Posts: 31836
- Liked: 7328 times
- Joined: Jan 01, 2006 1:01 am
- Location: Baar, Switzerland
- Contact:
Re: Been testing out Update 4 and S3, some questions
I actually do agree with you that S3 storage has great potential to explode in the coming years; however, reliability today is paramount - and we're not comfortable going "all in" on its support right away for all possible use cases. Yes, we do learn from our mistakes - remember the ReFS story?
This is exactly why we started leveraging S3 storage with your least important data (oldest backups). And in hindsight, it was a great decision, as the number of storage-specific quirks we encountered when testing real-world S3 storage implementations was totally unforeseen by us.
-
- Service Provider
- Posts: 47
- Liked: 2 times
- Joined: Oct 26, 2017 11:22 am
- Full Name: Victor
- Contact:
Re: Been testing out Update 4 and S3, some questions
Hi Gostev,
Thanks for the quick reply.
Yes, I do understand that you need to be able to support your product on your side.
But can you guys put in a feature request for not requiring the active chain on the performance tier,
so that you can send the incremental data to S3 with Backup Copy jobs?
Best regards!
-
- Product Manager
- Posts: 20439
- Liked: 2310 times
- Joined: Oct 26, 2012 3:28 pm
- Full Name: Vladimir Eremin
- Contact:
Re: Been testing out Update 4 and S3, some questions
In fact, support for Capacity Tier copy mode (copying backups to object storage as soon as they are created) is already scheduled for the next product release. Thanks!
-
- Chief Product Officer
- Posts: 31836
- Liked: 7328 times
- Joined: Jan 01, 2006 1:01 am
- Location: Baar, Switzerland
- Contact:
Re: Been testing out Update 4 and S3, some questions
Yep, I can confirm devs are working on it as we speak!
-
- Expert
- Posts: 239
- Liked: 13 times
- Joined: Feb 14, 2012 8:56 pm
- Full Name: Collin P
- Contact:
Re: Been testing out Update 4 and S3, some questions
I'm having a hard time understanding this. We don't use Backup Copy jobs; I would like to back up directly to S3, so that if we lose our data center and our local Scale-Out Backup Repository, last night's backups are in S3. We do a monthly active full and daily incrementals. I don't mind also storing data locally for quick restores, but the most important feature is that backups go offsite as soon as possible, either while the backup is running or shortly after the backup finishes.
Is this possible with the current version, either natively or by scheduling the PowerShell sync command frequently?
-
- Veeam Software
- Posts: 742
- Liked: 209 times
- Joined: Jan 14, 2016 6:48 am
- Full Name: Anthony Spiteri
- Location: Perth, Australia
- Contact:
Re: Been testing out Update 4 and S3, some questions
Hey there Collin.
For your particular use case, you should still be considering Cloud Connect Backup to a Veeam Cloud and Service Provider partner. That is the only solution that will suit your requirements.
There is nothing stopping you from using Backup Copy jobs with Cloud Tier, but you have to understand the correlation between when a backup chain is sealed and when it falls outside of the operational restore window. If you configure GFS backups, those are certain candidates for offloading to the Capacity Tier. If you lose your datacenter and just use Cloud Tier, anything that has not yet fallen outside of the operational restore window (and therefore has not been offloaded) will be lost.
With monthly active fulls, it will take until the next active full completes for the chain to be sealed, and only then is the data offloaded to the Capacity Tier.
For data to go offsite ASAP, you need to consider Cloud Connect Backup.
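To make the timing concrete with purely illustrative numbers (not from this thread): assume a 14-day operational restore window and monthly active fulls. An incremental taken on 20 January belongs to a chain that is only sealed once the 1 February active full completes; it then becomes an offload candidate once it is older than 14 days, so it would reach the Capacity Tier in early February at the earliest. Until then it exists only on the Performance Tier, which is why Cloud Connect Backup is the better fit when data must be offsite within hours.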
Anthony Spiteri
Regional CTO APJ & Lead Cloud and Service Provider Technologist
Email: anthony.spiteri@veeam.com
Twitter: @anthonyspiteri