Discussions related to using object storage as a backup target.
inferno66
Enthusiast
Posts: 26
Liked: 1 time
Joined: Mar 17, 2021 8:54 am
Full Name: Julien
Contact:

Veeam V12 SOBR Object Storage dedup

Post by inferno66 »

Hello,

With Veeam V12 and the new ability to store backups directly on object storage (with object storage in the SOBR performance tier), it seems that there's no dedup.
In our case we use an AWS S3 bucket.

When using an S3 bucket as the capacity tier there is dedup (also on the archive tier).
But there doesn't seem to be such a mechanism on the S3 performance tier bucket.
Is this normal?

If so, can I use an S3 bucket as the performance tier and an S3 bucket as the capacity tier on the SOBR (and if so, should they be different buckets) in order to have dedup, at least on the capacity tier?

Thanks :)
Mildur
Product Manager
Posts: 7620
Liked: 1985 times
Joined: May 13, 2017 4:51 pm
Full Name: Fabian K.
Location: Switzerland
Contact:

Re: Veeam V12 SOBR Object Storage dedup

Post by Mildur »

Hello

We don't have "dedup" on object storage. Whether you back up directly to object storage or use it as a capacity tier, we only copy blocks that are unique within the backup chain to the object storage. It's forever incremental. But there is one thing to be aware of when doing active full backups.
For direct backup to object storage (also when used in the performance tier):
- an active full will store the entire size of the full backup

For the capacity tier:
- an active full will only offload changed blocks

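The practical difference between the two behaviors can be sketched with some back-of-the-envelope arithmetic. All the sizes below are made-up example values, not measured Veeam numbers:

```python
# Illustrative storage arithmetic for four weekly active fulls.
# Every number here is an assumed example value, not a Veeam internal.

full_size_gb = 1000      # size of one full backup (assumed)
weekly_change_gb = 50    # blocks that actually change per week (assumed)
weeks = 4

# Direct backup to object storage (performance tier):
# every active full re-uploads and stores the entire full backup.
direct_gb = weeks * full_size_gb

# Capacity tier offload: only the first full is stored at full size;
# later active fulls offload just the changed blocks.
capacity_gb = full_size_gb + (weeks - 1) * weekly_change_gb

print(direct_gb)    # 4000
print(capacity_gb)  # 1150
```

With these example values the direct-to-object-storage chain ends up more than three times larger, which is why the active-full question matters so much here.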
Are you doing active full backups? Or what is your expectation of "dedup"?
inferno66 wrote: If so, can I use an S3 bucket as the performance tier and an S3 bucket as the capacity tier on the SOBR (and if so, should they be different buckets) in order to have dedup, at least on the capacity tier?
Yes, you can do that. I recommend using different buckets.
There won't be any "dedup" between the performance tier and the capacity tier. We don't provide such a feature.

Best,
Fabian
Product Management Analyst @ Veeam Software
inferno66
Enthusiast
Posts: 26
Liked: 1 time
Joined: Mar 17, 2021 8:54 am
Full Name: Julien
Contact:

Re: Veeam V12 SOBR Object Storage dedup

Post by inferno66 »

Hello

Thanks for your answer.
Yes, we are doing weekly active full backups.

For now we are using a SOBR with local disk as the performance tier / S3 as the capacity tier / S3 Glacier as the archive tier.
More exactly, our Veeam backup server is hosted on AWS and we use EBS disks for the performance tier.

We are planning to replace the local disks in the performance tier with an S3 bucket, and I wonder if the storage usage on that S3 performance tier will be the same as what we currently get on the capacity tier.
But if I understand correctly, this will not be the case.
inferno66
Enthusiast
Posts: 26
Liked: 1 time
Joined: Mar 17, 2021 8:54 am
Full Name: Julien
Contact:

Re: Veeam V12 SOBR Object Storage dedup

Post by inferno66 »

Also, you recommend using different buckets for the performance and capacity tiers.
But won't the offload task then take more time and cost more (API calls and maybe network costs for moving the objects)?
Mildur
Product Manager
Posts: 7620
Liked: 1985 times
Joined: May 13, 2017 4:51 pm
Full Name: Fabian K.
Location: Switzerland
Contact:

Re: Veeam V12 SOBR Object Storage dedup

Post by Mildur »

Hi Julien
inferno66 wrote: Yes, we are doing weekly active full backups.
In that case, disable the weekly active fulls. Amazon S3 provides good stability and consistency for uploaded objects, so active fulls are not required. Over time you will see space usage similar to what you currently get in the capacity tier.
inferno66 wrote: But won't the offload task then take more time and cost more (API calls and maybe network costs for moving the objects)?
The objects still have to be read from the performance tier bucket by a gateway server and then written to the capacity tier bucket. It doesn't matter whether it's the same bucket or two buckets. There will be costs for API calls and most likely for egress traffic.
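As a rough sketch of what that tier-to-tier copy can cost: every price and size below is an assumed placeholder for illustration, not a current AWS list price, so check the S3 pricing page for your region before relying on it.

```python
# Back-of-the-envelope offload cost estimate.
# All prices and sizes are assumptions for illustration only.

offload_gb = 500           # data copied per offload run (assumed)
object_size_mb = 1         # average object size in the bucket (assumed)
get_per_1000 = 0.0004      # $ per 1000 GET requests (assumed rate)
put_per_1000 = 0.005       # $ per 1000 PUT requests (assumed rate)
egress_per_gb = 0.09       # $ per GB, only if traffic leaves AWS (assumed rate)

objects = offload_gb * 1024 / object_size_mb       # objects read, then re-written
api_cost = objects / 1000 * (get_per_1000 + put_per_1000)
egress_cost = offload_gb * egress_per_gb           # zero if the gateway stays inside AWS

print(round(api_cost, 2))     # 2.76
print(round(egress_cost, 2))  # 45.0
```

The takeaway from the toy numbers: per-request charges are usually small next to egress, so keeping the gateway server inside AWS (as in Julien's setup) is what keeps the offload cheap.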



Best,
Fabian
Product Management Analyst @ Veeam Software
inferno66
Enthusiast
Posts: 26
Liked: 1 time
Joined: Mar 17, 2021 8:54 am
Full Name: Julien
Contact:

Re: Veeam V12 SOBR Object Storage dedup

Post by inferno66 »

Hello

Understand, thanks a lot :)

One last question.
You say that we can disable the weekly full backups.
But in this scenario, don't we no longer need a capacity tier on the SOBR? I guess that if there's no weekly full, the offload task is unable to upload a backup chain to the capacity tier (as a backup chain is a full backup with its corresponding incrementals)?

Also, for now we configure "keep certain full backups longer for archival purposes" to keep weeklies for 5 weeks. If there's no weekly full, there's nothing to keep, I presume (but we can change the retention policy to match this period).

We also keep monthly backups for 10 years, and they are stored in the archive tier.
For this part I think we still need to make monthly full backups?

Regards
Mildur
Product Manager
Posts: 7620
Liked: 1985 times
Joined: May 13, 2017 4:51 pm
Full Name: Fabian K.
Location: Switzerland
Contact:

Re: Veeam V12 SOBR Object Storage dedup

Post by Mildur »

Hi Julien
inferno66 wrote: Also, for now we configure "keep certain full backups longer for archival purposes" to keep weeklies for 5 weeks. If there's no weekly full, there's nothing to keep, I presume (but we can change the retention policy to match this period).
Keep the 5 weekly GFS retention. Just don't do weekly active full backups, to save storage costs.
For object storage repositories, we don't require a full backup to have a weekly restore point. We just take a new incremental backup, upload the changed and new blocks as objects, and tag that restore point as the new weekly backup.
inferno66 wrote: But in this scenario, don't we no longer need a capacity tier on the SOBR? I guess that if there's no weekly full, the offload task is unable to upload a backup chain to the capacity tier (as a backup chain is a full backup with its corresponding incrementals)?
The copy policy always works, even when you don't enable regular fulls. If you enable weekly fulls, the move policy will also work.
inferno66 wrote: We also keep monthly backups for 10 years, and they are stored in the archive tier. For this part I think we still need to make monthly full backups?
The capacity tier is optional if you want to go to the archive tier and you have object storage in the performance tier. If you leave out the capacity tier, you can save a lot of money.
You must still create those monthly full backups. But each one is only a tagged restore point, not a real full backup file as on block storage repositories: just objects from the current incremental backup combined with previously offloaded objects.
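Conceptually, a tagged "full" on object storage is just a set of references to block objects, most of which were uploaded by earlier runs. The following is a toy model to illustrate the idea; the object names and structure are invented and are not Veeam's actual metadata format:

```python
# Toy model of a tagged restore point on object storage.
# Object names and the dict layout are invented for illustration.

bucket = {"blk-001", "blk-002", "blk-003", "blk-004"}  # objects from earlier runs
changed = {"blk-003b", "blk-005"}                      # new/changed blocks this run

# Only the new objects are actually uploaded...
bucket |= changed

# ...and the "full" restore point is just a tag plus references that mix
# previously offloaded objects with the blocks from this incremental run.
monthly_full = {
    "tag": "monthly",
    "objects": sorted(bucket - {"blk-003"}),  # blk-003 superseded by blk-003b
}

print(monthly_full["objects"])
# ['blk-001', 'blk-002', 'blk-003b', 'blk-004', 'blk-005']
```

So the "monthly full" costs only the upload of the changed blocks, while still referencing everything needed for a full restore.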

SOBR configuration:
- AWS S3 in the performance tier
--> keep backups for your short-term retention and those 5 weeklies
- AWS S3 Glacier in the archive tier
--> move monthly backups there and keep them for 10 years
--> archive GFS backups older than 4-5 weeks

Backup job configuration:
- Target: the SOBR
- Short-term retention: 30 days
- GFS retention: 5 weeklies, 120 monthlies

Best,
Fabian
Product Management Analyst @ Veeam Software
inferno66
Enthusiast
Posts: 26
Liked: 1 time
Joined: Mar 17, 2021 8:54 am
Full Name: Julien
Contact:

Re: Veeam V12 SOBR Object Storage dedup

Post by inferno66 »

Hello

OK, noted. Thanks a lot for all this information.

So if we don't do any active full backups anymore, is it recommended (to keep working fine with object storage) to enable "perform backup files health check" on the backup job?

Because with Veeam V11a we already had a case of inconsistent data on the capacity tier, which was only visible because Veeam was unable to perform a restore or move data to the archive tier:

Task failed. Error: REST API error: 'S3 error: The specified key does not exist.
Code: NoSuchKey', error code: 404

Even though this was a known issue and is supposed to be fixed, if we don't do any active fulls and encounter this error again, all restore points will be corrupted.
Mildur
Product Manager
Posts: 7620
Liked: 1985 times
Joined: May 13, 2017 4:51 pm
Full Name: Fabian K.
Location: Switzerland
Contact:

Re: Veeam V12 SOBR Object Storage dedup

Post by Mildur »

You're welcome.
inferno66 wrote: So if we don't do any active full backups anymore, is it recommended (to keep working fine with object storage) to enable "perform backup files health check" on the backup job?
If you can, enable health checks. They are supported for object storage repositories. But please note that a health check will use a lot of API calls (List, Get) to read the most recent restore point in the S3 bucket for testing.
Maybe run it once per month to keep the costs low.
inferno66 wrote: Even though this was a known issue and is supposed to be fixed, if we don't do any active fulls and encounter this error again, all restore points will be corrupted.
Instead of monthly active fulls, consider doing a backup copy to another repository. Even with object storage, I would like to have a second copy in case something happens to my primary backup.

Best,
Fabian
Product Management Analyst @ Veeam Software