- Enthusiast
- Posts: 26
- Liked: 1 time
- Joined: Mar 17, 2021 8:54 am
- Full Name: Julien
Veeam V12 SOBR Object Storage dedup
Hello,
With Veeam v12 and the new ability to store backups directly on object storage (with object storage in the SOBR performance tier), it seems that there's no dedup.
In our case we use an AWS S3 bucket.
When using an S3 bucket as the capacity tier there's dedup (also on the archive tier).
But it doesn't seem that there's such a mechanism on the S3 bucket performance tier.
Is this normal?
If yes, can I use an S3 bucket as the performance tier and an S3 bucket as the capacity tier (and if yes, should it be a different S3 bucket) on the SOBR in order to have the dedup (at least on the capacity tier)?
Thanks
- Product Manager
- Posts: 9535
- Liked: 2528 times
- Joined: May 13, 2017 4:51 pm
- Full Name: Fabian K.
- Location: Switzerland
Re: Veeam V12 SOBR Object Storage dedup
Hello
We don't have "dedup" on object storage. When you back up directly to object storage, or use it as a capacity tier, we only copy the unique blocks of a backup chain to the object storage. It's forever incremental. But there is one thing to be aware of when doing active full backups.
For direct backup to object storage (also when used in the performance tier):
- an active full will store the entire size of a full backup
For the capacity tier:
- an active full will only offload changed blocks
Are you doing active full backups? Or what is your expectation of "dedup"?
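To make that difference concrete, here is a rough back-of-the-envelope sketch in plain Python. All the numbers (source size, change rate, retention) are made-up assumptions for illustration, not Veeam figures:

# Illustrative storage estimate for a direct-to-object-storage repository.
# All inputs are assumptions, not values reported by Veeam.
FULL_SIZE_GB = 1000        # compressed size of one full backup (assumed)
DAILY_CHANGE_RATE = 0.05   # fraction of blocks changing per day (assumed)
RETENTION_DAYS = 30

daily_incremental_gb = FULL_SIZE_GB * DAILY_CHANGE_RATE

# Weekly active fulls: each full is uploaded at its entire size, so the
# bucket holds several complete fulls plus their incrementals at once.
weeks_retained = RETENTION_DAYS // 7
active_full_gb = weeks_retained * (FULL_SIZE_GB + 6 * daily_incremental_gb)

# Forever incremental: one full, then only changed/new blocks per day.
forever_incremental_gb = FULL_SIZE_GB + RETENTION_DAYS * daily_incremental_gb

print(f"weekly active fulls : ~{active_full_gb:,.0f} GB")         # ~5,200 GB
print(f"forever incremental : ~{forever_incremental_gb:,.0f} GB")  # ~2,500 GB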
"If yes, can I use an S3 bucket as the performance tier and an S3 bucket as the capacity tier (and if yes, should it be a different S3 bucket) on the SOBR in order to have the dedup (at least on the capacity tier)?"
Yes, you can do that, and I recommend using different buckets. But there won't be any "dedup" between the performance tier and the capacity tier; we don't provide such a feature.
Best,
Fabian
Product Management Analyst @ Veeam Software
- Enthusiast
- Posts: 26
- Liked: 1 time
- Joined: Mar 17, 2021 8:54 am
- Full Name: Julien
Re: Veeam V12 SOBR Object Storage dedup
Hello
Thanks for your answer.
Yes, we are doing weekly active full backups.
For now we are using a SOBR with local disk for the performance tier, S3 for the capacity tier, and S3 Glacier for the archive tier.
More exactly, our Veeam backup server is hosted on AWS and we use EBS disks for the performance tier.
We are planning to replace the local disk performance tier with an S3 bucket, and I wonder whether the storage usage on that S3 performance tier will be the same as what we currently get on the capacity tier.
But if I understand correctly, this will not be the case.
- Enthusiast
- Posts: 26
- Liked: 1 time
- Joined: Mar 17, 2021 8:54 am
- Full Name: Julien
Re: Veeam V12 SOBR Object Storage dedup
Also, you recommend using different buckets for the performance and capacity tiers.
But won't the offload task then take more time and incur more cost (API calls, and maybe network costs for moving objects)?
- Product Manager
- Posts: 9535
- Liked: 2528 times
- Joined: May 13, 2017 4:51 pm
- Full Name: Fabian K.
- Location: Switzerland
Re: Veeam V12 SOBR Object Storage dedup
Hi Julien
"Yes, we are doing weekly active full backups."
In that case, disable the weekly active fulls. Amazon S3 provides good stability and consistency for uploaded objects, so active fulls are not required. Over time you will see similar space usage to what you get in the capacity tier today.
"But won't the offload task then take more time and incur more cost (API calls, and maybe network costs for moving objects)?"
The objects still have to be read from the performance tier bucket by a gateway server and then written to the capacity tier bucket; it doesn't matter whether it's the same bucket or two buckets. There will be costs for API calls and most likely for egress traffic.
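If it helps with the cost question, here is a minimal Python sketch of the offload cost drivers. The object size, prices, and data volume are assumptions loosely based on typical AWS S3 list prices; check current pricing for your region:

# Rough cost sketch for one offload session from a performance tier
# bucket to a capacity tier bucket. All numbers are assumptions.
OFFLOAD_GB = 500                # data offloaded per session (assumed)
OBJECT_SIZE_MB = 1              # backups stored as many small objects (assumed)
GET_PRICE_PER_1K = 0.0004       # USD per 1,000 GET requests (assumed)
PUT_PRICE_PER_1K = 0.005        # USD per 1,000 PUT requests (assumed)
EGRESS_PRICE_PER_GB = 0.09      # USD per GB leaving the region (assumed)

num_objects = OFFLOAD_GB * 1024 // OBJECT_SIZE_MB
get_cost = num_objects / 1000 * GET_PRICE_PER_1K   # reads from the source bucket
put_cost = num_objects / 1000 * PUT_PRICE_PER_1K   # writes to the target bucket

# Egress only applies when the gateway server sits outside the bucket's
# region; a gateway on EC2 in the same region avoids this part.
egress_cost = OFFLOAD_GB * EGRESS_PRICE_PER_GB

print(f"objects moved     : {num_objects:,}")
print(f"GET + PUT cost    : ${get_cost + put_cost:,.2f}")
print(f"worst-case egress : ${egress_cost:,.2f}")

The takeaway is that the request costs are the same whether source and target are one bucket or two; only the gateway placement changes the egress part.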
Best,
Fabian
Product Management Analyst @ Veeam Software
- Enthusiast
- Posts: 26
- Liked: 1 time
- Joined: Mar 17, 2021 8:54 am
- Full Name: Julien
Re: Veeam V12 SOBR Object Storage dedup
Hello
Understood, thanks a lot.
One last question.
You say that we can disable the weekly full backups.
But in this scenario, we don't need to have a capacity tier on the SOBR, because I guess that if there's no weekly full, the offload task is unable to upload the backup chain to the capacity tier (as a backup chain is a full backup with its corresponding incrementals)?
Also, for now we configure "keep certain full backups for longer archival purposes" to keep weeklies for 5 weeks. If there's no weekly full, there's nothing to keep, I presume (but we can change the retention policy to match this period).
We also keep monthly backups for 10 years, and they are stored in the archive tier.
For this part I think we still need to make monthly full backups?
Regards
- Product Manager
- Posts: 9535
- Liked: 2528 times
- Joined: May 13, 2017 4:51 pm
- Full Name: Fabian K.
- Location: Switzerland
Re: Veeam V12 SOBR Object Storage dedup
Hi Julien
"But in this scenario, we don't need to have a capacity tier on the SOBR, because I guess that if there's no weekly full, the offload task is unable to upload the backup chain to the capacity tier (as a backup chain is a full backup with its corresponding incrementals)?"
The copy policy always works, even when you don't enable regular fulls. If you enable weekly fulls, the move policy will also work. For object storage repositories, we don't require a full backup to have a weekly restore point. We just take a new incremental backup, upload the changed and new blocks as objects, and tag that restore point as a new weekly backup.
"Also, for now we configure 'keep certain full backups for longer archival purposes' to keep weeklies for 5 weeks. If there's no weekly full, there's nothing to keep, I presume."
Keep the 5 weekly GFS retention. Just don't do weekly active full backups, to save storage costs.
"We also keep monthly backups for 10 years, and they are stored in the archive tier. For this part I think we still need to make monthly full backups?"
The capacity tier is optional if you have object storage in the performance tier and want to go to the archive tier. If you leave out the capacity tier, you can save a lot of money. You must still create those monthly full backups, but each one is only a tagged restore point. It's not a real full backup file as on block storage repositories, just objects from the current incremental backup combined with previously offloaded objects.
SOBR configuration:
- AWS S3 in the performance tier
--> keep backups for your short-term retention and those 5 weeklies
- AWS S3 Glacier in the archive tier
--> move monthly backups there and keep them for 10 years
--> archive GFS backups older than 4-5 weeks
Backup job configuration:
- Target: the SOBR
- Short-term retention: 30 days
- GFS retention: 5 weeklies, 120 monthlies
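If it helps to picture the "tagged restore point" idea, here is a small Python toy model. The tagging rules below (Sunday weeklies, first Sunday of the month as monthly) are simplified assumptions, not Veeam's exact GFS scheduling: every day yields one incremental restore point, and some points simply carry extra GFS flags instead of being separate full files.

# Toy model of GFS tagging on a forever-incremental chain. The rules
# here are simplified assumptions, not Veeam's scheduling logic.
from datetime import date, timedelta

points = [date(2023, 1, 1) + timedelta(days=i) for i in range(60)]

for p in points:
    tags = []
    if p.weekday() == 6:              # Sundays become weekly GFS points
        tags.append("weekly")
        if p.day <= 7:                # first Sunday also becomes monthly
            tags.append("monthly")
    if tags:
        print(p.isoformat(), "incremental restore point, tagged:", "+".join(tags))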
Best,
Fabian
Product Management Analyst @ Veeam Software
- Enthusiast
- Posts: 26
- Liked: 1 time
- Joined: Mar 17, 2021 8:54 am
- Full Name: Julien
Re: Veeam V12 SOBR Object Storage dedup
Hello
OK, noted. Thanks a lot for all this information.
So if we don't do any active full backups anymore, is it recommended (for working reliably with object storage) to enable "perform backup files health check" on the backup job?
I ask because with Veeam v11a we already had a case of inconsistent data on the capacity tier, which only became visible when we were unable to perform a restore or move data to the archive tier:
Task failed. Error: REST API error: 'S3 error: The specified key does not exist.
Code: NoSuchKey', error code: 404
Even though this was a known issue and it's supposed to be corrected, if we don't do any active fulls and we encounter this error again, all restore points will be corrupted.
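For what it's worth, the 404/NoSuchKey in that error is plain S3 telling the gateway that an expected object is gone. Here is a minimal boto3 sketch of how such a gap looks outside of Veeam; the bucket and key names are placeholders:

# Minimal boto3 probe for a missing object. Bucket and key are
# placeholders; this is an external sanity check, not a Veeam feature.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

def object_exists(bucket: str, key: str) -> bool:
    try:
        s3.head_object(Bucket=bucket, Key=key)
        return True
    except ClientError as err:
        # HEAD on a missing key surfaces as a 404, the same condition
        # the Veeam log above reports as NoSuchKey.
        if err.response["Error"]["Code"] in ("404", "NotFound", "NoSuchKey"):
            return False
        raise

print(object_exists("my-capacity-tier-bucket", "Veeam/some/offloaded/object.blk"))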
- Product Manager
- Posts: 9535
- Liked: 2528 times
- Joined: May 13, 2017 4:51 pm
- Full Name: Fabian K.
- Location: Switzerland
Re: Veeam V12 SOBR Object Storage dedup
You're welcome.
"So if we don't do any active full backups anymore, is it recommended (for working reliably with object storage) to enable 'perform backup files health check' on the backup job?"
If you can, enable health checks; they are supported for object storage repositories. But please note that a health check will use a lot of API calls (LIST, GET) to read the most recent restore point in the S3 bucket for testing. Maybe run it once per month to keep the costs low.
"Even though this was a known issue and it's supposed to be corrected, if we don't do any active fulls and we encounter this error again, all restore points will be corrupted."
Instead of monthly active fulls, consider doing a backup copy to another repository. Even with object storage, I would like to have a second copy in case something happens to my primary backup.
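To put a rough number on "a lot of API calls", here is a quick Python estimate; the object count per restore point and the request prices are assumptions:

# Rough estimate of the API cost of one health check run. The object
# count and prices are assumptions; check your bucket and region.
OBJECTS_PER_RESTORE_POINT = 2_000_000   # e.g. ~1 MB objects for a ~2 TB point (assumed)
GET_PRICE_PER_1K = 0.0004               # USD per 1,000 GET requests (assumed)
LIST_PRICE_PER_1K = 0.005               # USD per 1,000 LIST requests (assumed)

lists_needed = OBJECTS_PER_RESTORE_POINT // 1000    # LIST returns up to 1,000 keys
per_run = (OBJECTS_PER_RESTORE_POINT / 1000 * GET_PRICE_PER_1K
           + lists_needed / 1000 * LIST_PRICE_PER_1K)

print(f"one run : ${per_run:,.2f}")
print(f"monthly : ${12 * per_run:,.2f}/year, weekly: ${52 * per_run:,.2f}/year")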
Best,
Fabian
Product Management Analyst @ Veeam Software