-
- Lurker
- Posts: 1
- Liked: never
- Joined: Aug 19, 2020 1:48 pm
- Contact:
Backup copy to S3
Hello,
We're using Veeam Backup & Replication version 10 and would like to copy our backups to a public cloud bucket (S3 or Azure).
1) If we configure a "backup copy" job, can we choose a public cloud object storage repository as the target backup repository?
2) Let's say we went with a Scale-Out Backup Repository composed of both on-premises storage (Performance Tier) and public cloud object storage (Capacity Tier).
Is there a way to keep recent backups (last 2 days) in both the Performance Tier (on-premises storage) AND the Capacity Tier (cloud repositories)?
3) Veeam recommends not configuring lifecycle management rules on the S3 bucket.
Does Veeam configure its own lifecycle rules for the destination bucket to reduce cost, for example by moving cold data to AWS Glacier?
Or will the data stay in the same storage class (S3 Standard or S3 IA, for example) for its entire lifetime?
Thank you,
Best regards
NR
-
- Veeam Software
- Posts: 2010
- Liked: 669 times
- Joined: Sep 25, 2019 10:32 am
- Full Name: Oleg Feoktistov
- Contact:
Re: Backup copy to S3
Hi and Welcome to the Community Forums!
1) No, only object storage configured as the capacity tier within a SOBR is currently supported. There is no option to set object storage as a standalone target repository.
2) Yes, with the copy policy you can mirror backups to the capacity tier as soon as they are created.
3) Backup file blocks in the destination bucket fall under the retention you configured for the job.
The ability to archive data to colder storage classes like AWS Glacier has been announced for v11 (called the Archive Tier). It will support transferring GFS backups only, and only from the capacity tier.
Hope I answered your questions,
Oleg
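For reference, the object storage registered as the capacity tier is just a regular S3 bucket that has to exist before you add it in the console, and it should be dedicated to Veeam. Below is a minimal boto3 sketch of creating such a bucket; the bucket name and region are made-up placeholders, nothing Veeam-specific.
Code:
# Sketch: create a dedicated S3 bucket to be registered later as a
# Veeam object storage repository (capacity tier). Bucket name and
# region are made-up placeholders.
import boto3

s3 = boto3.client("s3", region_name="eu-west-1")

s3.create_bucket(
    Bucket="veeam-capacity-tier-example",
    CreateBucketConfiguration={"LocationConstraint": "eu-west-1"},
)

# Block public access -- the bucket should only ever be touched by Veeam.
s3.put_public_access_block(
    Bucket="veeam-capacity-tier-example",
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)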
-
- Expert
- Posts: 116
- Liked: 3 times
- Joined: Jun 26, 2009 3:11 am
- Full Name: Steven Foo
- Contact:
Re: Backup copy to S3
Hi,
We have mounted an S3 bucket as a local drive, e.g. Y:
Could a backup copy job back up to that Y: drive?
Will the copy job use multiple threads during the copy?
How could we apply short-term and GFS retention policies to the job?
-
- Product Manager
- Posts: 20400
- Liked: 2298 times
- Joined: Oct 26, 2012 3:28 pm
- Full Name: Vladimir Eremin
- Contact:
Re: Backup copy to S3
We have mounted an S3 bucket as a local drive, e.g. Y:
How did you do that? Did you use some third-party utility to present the object storage as a local drive?
Could a backup copy job back up to that Y: drive?
If it's configured as a local drive, you can add the machine as a managed server, assign a repository role to it and use it as a target for the backup copy job.
Will the copy job use multiple threads during the copy?
Does the utility through which the object storage is configured support multi-threading?
How could we apply short-term and GFS retention policies to the job?
In the job settings - a backup copy job has both short- and long-term retention.
Having said that, it still seems that using the Capacity and Archive Tiers (native v11 features) will be a better and easier option than the described scenario.
Thanks!
-
- Chief Product Officer
- Posts: 31803
- Liked: 7298 times
- Joined: Jan 01, 2006 1:01 am
- Location: Baar, Switzerland
- Contact:
Re: Backup copy to S3
Generally speaking, 3rd party cloud object storage gateways are not supported by Veeam.
-
- Expert
- Posts: 116
- Liked: 3 times
- Joined: Jun 26, 2009 3:11 am
- Full Name: Steven Foo
- Contact:
Re: Backup copy to S3
Questions to clarify for VBR v11.
1) Will the SOBR offload job be able to move Veeam backups directly into AWS GAD?
2) Does the SOBR offload job use multithreading?
3) How do we configure the multithreading from item (2)?
4) What is the maximum file size that can be moved or copied to AWS GAD? 5 TB or something else?
5) When the SOBR offload job or backup copy runs, will it use multipart uploads/downloads to transfer data to/from AWS GAD?
6) Does the SOBR offload job or backup copy use the AWS API, and does the customer have to pay additional costs on the AWS side?
-
- Chief Product Officer
- Posts: 31803
- Liked: 7298 times
- Joined: Jan 01, 2006 1:01 am
- Location: Baar, Switzerland
- Contact:
Re: Backup copy to S3
1. What is GAD? Google is unaware of such a term.
2. Yes.
3. You don't.
4. Doesn't matter, we don't upload backup files but rather individual blocks.
5. Yes.
6. Yes & Yes.
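For context on the multipart question: on the S3 side a large object is transferred as several parts in parallel, which is the standard mechanism boto3 exposes through its transfer manager. The sketch below only illustrates that S3-side mechanism with assumed thresholds; it is not Veeam's internal code, and the bucket and file names are placeholders.
Code:
# Standard S3 multipart upload via boto3's transfer manager. Illustration
# of the S3-side mechanism only, not Veeam code; names are placeholders.
import boto3
from boto3.s3.transfer import TransferConfig

s3 = boto3.client("s3")

config = TransferConfig(
    multipart_threshold=64 * 1024 * 1024,  # switch to multipart above 64 MB
    multipart_chunksize=16 * 1024 * 1024,  # upload in 16 MB parts
    max_concurrency=8,                     # parts uploaded in parallel
)

s3.upload_file(
    Filename="backup-block.blob",          # placeholder local file
    Bucket="example-capacity-bucket",      # placeholder bucket
    Key="blocks/backup-block.blob",
    Config=config,
)
Each part is a separate API request, which is also part of why the answer to question 6 is yes on both counts.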
-
- Expert
- Posts: 116
- Liked: 3 times
- Joined: Jun 26, 2009 3:11 am
- Full Name: Steven Foo
- Contact:
Re: Backup copy to S3
1. What is GAD? Google is unaware of such a term. ==> GAD: Glacier Deep Archive.
2. Yes. ==> OK, understood.
3. You don't. ==> OK, understood.
4. Doesn't matter, we don't upload backup files but rather individual blocks.
5. Yes. ==> OK, understood.
6. Yes & Yes. ==> We need to take note of that cost.
-
- Chief Product Officer
- Posts: 31803
- Liked: 7298 times
- Joined: Jan 01, 2006 1:01 am
- Location: Baar, Switzerland
- Contact:
Re: Backup copy to S3
1. Upload to Glacier is not direct.
Data goes to regular S3 first (Capacity Tier), then only GFS restore points beyond a certain age are archived to DA (Archive Tier). This is because you only want to archive backups you definitely won't need to restore from other than in exceptional situations (like a legal requirement), otherwise your Glacier costs will go through the roof after a single restore, erasing all the benefits of the reduced storage cost.
The two tiers use different formats optimized for costs with the given tier, so data is transformed when it goes from Capacity to Archive tier. S3 format is optimized to reduce storage consumption (since for S3, storage costs are high) - while Glacier format is optimized to reduce API costs (since for Glacier, API costs are high).
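A back-of-the-envelope illustration of that trade-off; all prices below are assumed round numbers for the example, not actual AWS quotes.
Code:
# Rough illustration of the storage-cost vs API-cost trade-off described
# above. All prices are assumed round numbers, not AWS quotes.
s3_storage_per_gb_month = 0.023       # assumed S3 Standard price
glacier_storage_per_gb_month = 0.001  # assumed Deep Archive price
price_per_1000_put_requests = 0.05    # assumed request price

data_gb = 1024  # 1 TB of offloaded backup data

# Smaller blocks give better space savings but mean many more objects,
# hence many more API requests.
small_block_mb, large_block_mb = 1, 512
small_block_requests = data_gb * 1024 / small_block_mb
large_block_requests = data_gb * 1024 / large_block_mb

print("S3 storage per month:      $%.2f" % (data_gb * s3_storage_per_gb_month))
print("Glacier storage per month: $%.2f" % (data_gb * glacier_storage_per_gb_month))
print("Upload requests, small blocks: $%.2f" % (small_block_requests / 1000 * price_per_1000_put_requests))
print("Upload requests, large blocks: $%.2f" % (large_block_requests / 1000 * price_per_1000_put_requests))
With these assumed numbers, storage dominates the S3 bill (so small, space-efficient blocks pay off there), while request count dominates the Glacier bill (so larger objects and fewer API calls pay off there).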
-
- Expert
- Posts: 116
- Liked: 3 times
- Joined: Jun 26, 2009 3:11 am
- Full Name: Steven Foo
- Contact:
Re: Backup copy to S3
Gostev wrote: ↑Jul 15, 2021 9:12 am 1. Upload to Glacier is not direct.
Data goes to regular S3 first (Capacity Tier), then only GFS restore points beyond a certain age are archived to DA (Archive Tier). This is because you only want to archive backups you definitely won't need to restore from other than in exceptional situations (like a legal requirement), otherwise your Glacier costs will go through the roof after a single restore, erasing all the benefits of the reduced storage cost.
The two tiers use different formats optimized for costs with the given tier, so data is transformed when it goes from Capacity to Archive tier. S3 format is optimized to reduce storage consumption (since for S3, storage costs are high) - while Glacier format is optimized to reduce API costs (since for Glacier, API costs are high).
Could we go from the Performance Tier to the Archive Tier, skipping the Capacity Tier?
If we cannot go from the Performance Tier to the Archive Tier directly, could we use our cheap NAS as the Capacity Tier?
We don't want to put anything on S3 first, as that incurs an additional charge from AWS if I'm not mistaken. AWS charges per GB; if you put 2 TB there for 2 days, it will charge fees for it.
Since we don't restore often, maybe accessing it 1 or 2 times a year, we are looking at the Archive Tier.
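To put a rough number on the 2 TB for 2 days concern: S3 bills storage prorated over the month, and the per-GB price below is an assumed example figure, not an AWS quote.
Code:
# Rough estimate of staging 2 TB in S3 Standard for 2 days before it moves on.
price_per_gb_month = 0.023   # assumed S3 Standard storage price, not a quote
data_gb = 2 * 1024           # 2 TB
days_kept = 2

monthly_cost = data_gb * price_per_gb_month
prorated_cost = monthly_cost * days_kept / 30  # storage is billed pro rata

print("Full month: $%.2f" % monthly_cost)    # roughly $47 with these numbers
print("For 2 days: $%.2f" % prorated_cost)   # roughly $3 with these numbers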
-
- Product Manager
- Posts: 9846
- Liked: 2604 times
- Joined: May 13, 2017 4:51 pm
- Full Name: Fabian K.
- Location: Switzerland
- Contact:
Re: Backup copy to S3
Could we go from the Performance Tier to the Archive Tier, skipping the Capacity Tier?
No.
If we cannot go from the Performance Tier to the Archive Tier directly, could we use our cheap NAS as the Capacity Tier?
And no.
- For Azure Archive Tier, Azure capacity tier must be used.
- For AWS Archive Tier, AWS capacity tier must be used.
- Capacity tier can not be left out.
It's documented on the limitations page:
https://helpcenter.veeam.com/docs/backu ... ml?ver=110
Product Management Analyst @ Veeam Software
-
- Expert
- Posts: 116
- Liked: 3 times
- Joined: Jun 26, 2009 3:11 am
- Full Name: Steven Foo
- Contact:
Re: Backup copy to S3
Could we then create an S3 bucket in AWS, default it to Deep Archive, and point the Capacity Tier at this bucket when we create it?
There are tools out there on the market that can automatically apply a storage class during upload.
An example lifecycle configuration is below.
Code:
<LifecycleConfiguration>
  <Rule>
    <ID>id1</ID>
    <Filter>
      <Prefix>documents/</Prefix>
    </Filter>
    <Status>Enabled</Status>
    <Transition>
      <Days>0</Days>
      <StorageClass>DEEP_ARCHIVE</StorageClass>
    </Transition>
  </Rule>
  <Rule>
    <ID>id2</ID>
    <Filter>
      <Prefix>logs/</Prefix>
    </Filter>
    <Status>Enabled</Status>
    <Expiration>
      <Days>365</Days>
    </Expiration>
  </Rule>
</LifecycleConfiguration>
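For what it's worth, the same rules can also be set directly through the S3 API; the boto3 sketch below mirrors the XML above and adds the upload-time storage-class variant such tools use (bucket and file names are placeholders). As the replies below point out, none of this should be applied to a bucket that Veeam writes to.
Code:
# boto3 equivalent of the XML lifecycle configuration above, plus the
# "set storage class at upload time" variant. Shown for illustration only --
# do not apply either to a bucket used by Veeam. Names are placeholders.
import boto3

s3 = boto3.client("s3")
BUCKET = "example-bucket"

# 1) Lifecycle rules: documents/ -> DEEP_ARCHIVE at day 0, logs/ expire after 365 days.
s3.put_bucket_lifecycle_configuration(
    Bucket=BUCKET,
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "id1",
                "Filter": {"Prefix": "documents/"},
                "Status": "Enabled",
                "Transitions": [{"Days": 0, "StorageClass": "DEEP_ARCHIVE"}],
            },
            {
                "ID": "id2",
                "Filter": {"Prefix": "logs/"},
                "Status": "Enabled",
                "Expiration": {"Days": 365},
            },
        ]
    },
)

# 2) Applying the storage class per object at upload time instead.
s3.upload_file(
    Filename="archive.dat",
    Bucket=BUCKET,
    Key="documents/archive.dat",
    ExtraArgs={"StorageClass": "DEEP_ARCHIVE"},
)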
-
- Product Manager
- Posts: 9846
- Liked: 2604 times
- Joined: May 13, 2017 4:51 pm
- Full Name: Fabian K.
- Location: Switzerland
- Contact:
Re: Backup copy to S3
As far as I know, lifecycle management on the object storage side is not supported by Veeam.
If lifecycle management in AWS moves blocks away from the capacity tier, Veeam cannot find its data.
Your backups in the capacity tier will become corrupted and Veeam will not be able to restore any data.
https://bp.veeam.com/vbr/VBP/3_Build_st ... bject.html
https://helpcenter.veeam.com/docs/backu ... ml?ver=110
Object Storage Limitations: Lifecycle Rules & Tiering
Do not configure any tiering or lifecycle rules on object storage buckets used for Veeam Object Storage Repositories. This is currently not supported.
And here is why:
Tiering and lifecycle rules in object storages are based on object age. However, with Veeam's implementation even a very old block could still be relevant for the latest offloaded backup file when the block was not changed between the restore points. An object storage vendor cannot know which blocks are still relevant and which are not, and thus cannot make proper tiering decisions.
The vendor APIs for the different storage products are not transparent. E.g. accessing Amazon S3 or Amazon Glacier requires the use of different APIs. When tiering/lifecycle management is done on the cloud provider side, Veeam is not aware of what happened and cannot know how to access which blocks.
https://helpcenter.veeam.com/docs/backu ... ml?ver=110
Data in object storage bucket/container must be managed solely by Veeam Backup & Replication, including retention and data management. Enabling lifecycle rules is not supported, and may result in backup and restore failures.
Product Management Analyst @ Veeam Software
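A toy sketch of the age-vs-relevance point above: an old object can still be referenced by the newest restore point, so an age-based lifecycle rule would move away data the latest backup still depends on. The structure below is a simplification for illustration only, not Veeam's actual metadata format.
Code:
# Simplified model: restore points reference deduplicated blocks, and an
# unchanged block keeps being referenced by new restore points, so its
# object age says nothing about whether it is still needed.
restore_points = {
    "2021-07-01": ["block-A", "block-B"],
    "2021-07-08": ["block-A", "block-C"],   # block-A unchanged, reused
    "2021-07-15": ["block-A", "block-D"],   # still reused
}
block_age_days = {"block-A": 90, "block-B": 90, "block-C": 7, "block-D": 0}

latest = restore_points["2021-07-15"]
for block, age in block_age_days.items():
    if age > 30 and block in latest:
        print(f"{block} is {age} days old but the latest backup still needs it")

# An age-based rule such as "transition after 30 days" would move block-A
# away and break restores from the newest point.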
-
- Expert
- Posts: 116
- Liked: 3 times
- Joined: Jun 26, 2009 3:11 am
- Full Name: Steven Foo
- Contact:
Re: Backup copy to S3
Thank you. Then we have to explore other solutions on the market.
-
- Product Manager
- Posts: 20400
- Liked: 2298 times
- Joined: Oct 26, 2012 3:28 pm
- Full Name: Vladimir Eremin
- Contact:
Re: Backup copy to S3
Fabian is correct here: lifecycle rules are not supported, and currently you cannot skip the Capacity Tier layer.
So in your case the valid approach will be to create a Scale-Out Backup Repository with both the Capacity and Archive Tiers configured. Enable the move policy for the Capacity Tier and set a short operational restore window (so sealed backup chains are moved to object storage faster), and do the same for the archival window (so GFS restore points are placed in Glacier or Glacier Deep Archive sooner, if you prefer).
Thanks!