-
- Enthusiast
- Posts: 43
- Liked: 9 times
- Joined: Aug 09, 2018 3:22 pm
- Full Name: Stuart
- Location: UK
- Contact:
Recommended Compression Level and Storage Optimization for Backup jobs to onsite S3
Hi, we recently moved from ReFS and XFS repositories to S3-Compatible repos. We have 7x250TB Extents in a single SOBR and 1x 250TB standalone repo.
In the Storage > Advanced > Storage settings of the backup job, most if not all of our jobs are set to "Compression level = Optimal (recommended)" and "Storage optimisation = 1MB (recommended)" (4MB for jobs with very large disks).
We are getting near-zero deduplication on the S3 target, meaning our 2PB of S3 storage is struggling to store what our 1PB of ReFS/XFS storage did (XFS especially had amazing dedupe, storing 450TB of multiple full backups of the same 75TB .vbk while using only around 150TB of capacity). On S3 we're seeing 2 full backups taken days apart using 150TB (change rate around 300GB per day).
I'd welcome the job settings (or storage-side dedupe stats) of anyone else using an onsite S3-compatible solution.
Thanks in advance
Stu
-
- Product Manager
- Posts: 10374
- Liked: 2784 times
- Joined: May 13, 2017 4:51 pm
- Full Name: Fabian K.
- Location: Switzerland
- Contact:
Re: Recommended Compression Level and Storage Optimization for Backup jobs to onsite S3
Hi Stuart,
There are a few things to consider when using object storage.
Backup Block Size
The default 1MB is the recommended value for the backup block size. With higher block sizes (e.g., 4MB), you will see 2-3 times larger incremental backup sizes.
However, instead of just changing from 4MB to 1MB, I strongly recommend contacting your object storage vendor to verify the recommended Veeam settings for their object storage appliance. Some vendors recommend a backup block size of 4MB for their appliance.
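As a rough, back-of-the-envelope illustration of why larger blocks inflate incrementals (the numbers below are assumptions for illustration, not taken from your jobs): each small scattered write dirties a whole backup block, so the same daily change pattern produces a bigger incremental when the block is larger.

```python
# Back-of-the-envelope sketch (assumed numbers): scattered small writes each
# dirty one whole backup block, so the same change pattern produces a larger
# incremental backup when the block size is larger.

def incremental_upper_bound_tb(scattered_writes: int, block_size_mb: int) -> float:
    """Worst case: every changed region lands in a different backup block."""
    return scattered_writes * block_size_mb / 1024**2  # MB -> TB

daily_scattered_writes = 300_000  # hypothetical: ~300k small random writes per day

for block_mb in (1, 4):
    tb = incremental_upper_bound_tb(daily_scattered_writes, block_mb)
    print(f"{block_mb} MB blocks -> incremental up to ~{tb:.2f} TB")
```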
Immutable Repositories
Immutability will affect storage usage on object storage. When you enable immutability, backup data blocks will be stored for a longer period than you might expect: Job Retention + Immutability Period + Block Generation Period
An example:
- 30 days job retention
- 30 days immutability period
- 10 days block generation (default setting for S3 compatible object storage)
Each backup object will be stored for up to 70 days. With that in mind, storage usage can be almost double that of a hardened repository.
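A minimal sketch of that arithmetic, using only the example values above:

```python
# Effective retention for the example above (values are from this example;
# your retention, immutability period and vendor defaults may differ).
job_retention_days = 30      # job retention
immutability_days = 30       # immutability period
block_generation_days = 10   # default block generation for S3-compatible storage

effective_days = job_retention_days + immutability_days + block_generation_days
print(f"Backup objects can be held for up to {effective_days} days")  # -> 70 days
```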
I recommend using our calculator to estimate your required object storage amount: Veeam Calculator
Active Full
An Active Full backup on "direct to object storage" jobs will transfer the entire full backup, just as it does on hardened repositories.
You may also contact our customer support team if you think something is not right, or if the solution is storing too much data. Our support team may be able to determine from the logs whether the backup data is being stored as it should be.
Best regards,
Fabian
Product Management Analyst @ Veeam Software
-
- Enthusiast
- Posts: 43
- Liked: 9 times
- Joined: Aug 09, 2018 3:22 pm
- Full Name: Stuart
- Location: UK
- Contact:
Re: Recommended Compression Level and Storage Optimization for Backup jobs to onsite S3
Thanks Fabian. Can you comment on the compression level? With an option called "Dedupe-friendly" available, it's tempting to move away from "Optimal (recommended)", but I can't find much information on it. Thanks, Stu.
-
- Product Manager
- Posts: 10374
- Liked: 2784 times
- Joined: May 13, 2017 4:51 pm
- Full Name: Fabian K.
- Location: Switzerland
- Contact:
Re: Recommended Compression Level and Storage Optimization for Backup jobs to onsite S3
Hi Stuart,
Dedupe-friendly compresses backups less than Optimal. Deduplication appliances have their own deduplication algorithms. By sending backups "less compressed" to such an appliance, the deduplication appliance can more easily apply global deduplication across all backup chains. However, this is not something an object storage appliance typically does for you. Most available object storage appliances do not provide global deduplication for offloaded objects.
Therefore, if you want to reduce the size of your backups, you can consider High or Extreme compression, but that comes at the cost of much higher CPU usage.
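If it helps to see the general size-versus-CPU trade-off, here is a small sketch. It uses zlib purely as a stand-in and made-up sample data; it is not Veeam's actual compression, just an illustration of how higher levels buy smaller output for more CPU time.

```python
# General illustration of the compression trade-off: higher levels shrink the
# data somewhat more but cost noticeably more CPU time. zlib is a stand-in;
# Veeam's own compression algorithms will behave differently.
import os
import time
import zlib

# Semi-compressible sample data (repetitive text mixed with random bytes).
sample = b"".join(b"backup block payload " + os.urandom(16) for _ in range(200_000))

for level in (1, 6, 9):  # low / default / maximum
    start = time.perf_counter()
    compressed = zlib.compress(sample, level)
    elapsed_ms = (time.perf_counter() - start) * 1000
    ratio = len(sample) / len(compressed)
    print(f"zlib level {level}: ratio {ratio:.2f}x, {elapsed_ms:.0f} ms")
```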
Best regards,
Fabian
Product Management Analyst @ Veeam Software
-
- Enthusiast
- Posts: 43
- Liked: 9 times
- Joined: Aug 09, 2018 3:22 pm
- Full Name: Stuart
- Location: UK
- Contact:
Re: Recommended Compression Level and Storage Optimization for Backup jobs to onsite S3
That's great Fabian, thank you.