Discussions related to using object storage as a backup target.
andy2322
Lurker
Posts: 1
Liked: never
Joined: Jun 17, 2021 8:31 am
Full Name: Andreas Fey

Does deduplication also work on the archive tier?

Post by andy2322 »

Hi there,

I have a scale-out backup repository with AWS S3 as the capacity tier and AWS Glacier Deep Archive as the archive tier.
My question is: if data is moved from the capacity tier to the archive tier, does deduplication still work?
For example, if a large data block is moved to the archive tier, does it need to be uploaded to the capacity tier again next time?
I hope not. :-)

And another question: archive tiering only works on inactive backup chains. So if I do a full backup each month, I have to wait until the next month before the data is moved to the cheaper archive tier. But I want the data to move off the expensive S3 storage as soon as possible.
The solution would be to do full backups more often (every week, for example), but that would overwhelm my performance tier (NAS), because each full backup is about 9 TB.

Thank you for any tips.

Greetings, Andreas.
Mildur
Product Manager
Posts: 9848
Liked: 2607 times
Joined: May 13, 2017 4:51 pm
Full Name: Fabian K.
Location: Switzerland

Re: Does deduplication also work on the archive tier?

Post by Mildur » 1 person likes this post

Hi Andreas
My question is: if data is moved from the capacity tier to the archive tier, does deduplication still work?
For example, if a large data block is moved to the archive tier, does it need to be uploaded to the capacity tier again next time?
Your decision :-)
You can configure whether data blocks are reused or whether Veeam must store unique blocks for each offloaded full.
https://helpcenter.veeam.com/docs/backu ... ml?ver=110
“Select the Store archived backups as standalone fulls check box to forbid reuse of the data blocks.”
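
To picture the trade-off, here is a minimal Python sketch (purely illustrative; the block counts and names are made up and do not reflect Veeam's internal format) of how the two modes affect archive tier consumption. With block reuse, only the blocks not yet in the archive are added for each new full; with the standalone fulls option, every block is copied again so that each archived full is self-contained.

# Toy model of archive tier offload: block reuse vs. standalone fulls.
# Illustrative only; not Veeam's actual data format.

def archive_blocks(fulls, reuse_blocks):
    """Total number of blocks kept in the archive tier.

    fulls        -- list of full backups, each a set of block hashes
    reuse_blocks -- True  = default, blocks are shared between archived fulls
                    False = 'Store archived backups as standalone fulls'
    """
    if reuse_blocks:
        # Each unique block is stored once and referenced by later fulls.
        return len(set().union(*fulls))
    # Each archived full is self-contained: all of its blocks are copied again.
    return sum(len(full) for full in fulls)

# Two monthly fulls that share 90% of their blocks (low change rate).
january = {f"block-{i}" for i in range(1000)}
february = {f"block-{i}" for i in range(100, 1100)}  # 900 blocks unchanged

print(archive_blocks([january, february], reuse_blocks=True))   # 1100
print(archive_blocks([january, february], reuse_blocks=False))  # 2000

Reusing blocks keeps the archive smaller, while standalone fulls cost more storage but make each archived full independent of the others.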
And another question: archive tiering only works on inactive backup chains. So if I do a full backup each month, I have to wait until the next month before the data is moved to the cheaper archive tier. But I want the data to move off the expensive S3 storage as soon as possible.
The solution would be to do full backups more often (every week, for example), but that would overwhelm my performance tier (NAS), because each full backup is about 9 TB.
Only GFS full backups can be offloaded to the archive tier, as soon as the chain is inactive. You will always need a full in the capacity tier. You can configure the archive tier to offload a full as soon as the chain becomes inactive.

But a full in the capacity tier is never the entire size; it contains only changed unique blocks, like an incremental backup.
So even if you offload the inactive full backup to the archive tier right away, you won't save the entire size of a full in the capacity tier. Most of the objects in the capacity tier are referenced by other fulls and incremental restore points, and Veeam cannot delete those when offloading to the archive tier.
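
A rough sketch of that effect, again with made-up block counts: only the blocks of the offloaded full that no remaining restore point still references can actually be removed from the capacity tier.

# Toy illustration: offloading one full to the archive tier frees only the
# blocks that no other restore point in the capacity tier still references.
# Block size and counts are assumptions for readability, not real figures.

BLOCK_SIZE_MB = 1

def freed_space_mb(offloaded_full, remaining_points):
    """Capacity tier space freed after the full is offloaded to the archive."""
    still_needed = set().union(*remaining_points) if remaining_points else set()
    removable = offloaded_full - still_needed
    return len(removable) * BLOCK_SIZE_MB

old_full   = {f"block-{i}" for i in range(1000)}       # GFS full being archived
newer_full = {f"block-{i}" for i in range(100, 1100)}  # shares 900 blocks with it
increment  = {f"block-{i}" for i in range(1100, 1150)}

print(freed_space_mb(old_full, [newer_full, increment]))  # 100 MB, not 1000 MB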

https://helpcenter.veeam.com/docs/backu ... ml?ver=110

The following types of backup files are suitable for archive storage:
- Backup files with GFS flags assigned
- VeeamZIP backup files
- Exported backup files
- Orphaned backups
Product Management Analyst @ Veeam Software
