I'm seeing discrepancies with the storage used on our SOBRs and was wondering if anyone had any insights.
Keeping things simple: 6-month backup retention with GFS points, going to a SOBR with a local performance tier consisting of a ReFS 64k volume (for block cloning) and a dedicated immutable hot blob capacity tier in Azure. The SOBR is set to copy immediately. The expectation, and the result, is that you end up with 6 months locally in the performance tier and 6 months in the capacity tier. We aren't using copy jobs, just a standard two-tier SOBR with backup jobs rolling to it.
This works, and they stay in sync; reviewing backups in the VBR console shows a matching number of backups for all VMs in both the performance and capacity tiers. We aren't using any setting to move backups from performance to capacity after X days, so that's not a factor, and the volume isn't hitting 90% full, so there are no emergency moves either.
However, when calculating space used, the hot blob capacity tier almost always shows around 35-40% more storage used than the local ReFS performance tier. I always understood, perhaps incorrectly, that savings from block cloning or compression/dedup would be similar across all tiers. But since this discrepancy shows up consistently at around 35-40% on all of our VBR servers, I must be missing something.
Example:
Performance tier, local ReFS 64k: capacity 36.3TB, free 18.8TB, reported used space 63.5TB (presumably the logical size before ReFS block-clone savings). Subtracting 18.8TB free from 36.3TB capacity gives actual physical usage on the performance tier of 17.5TB.
Capacity tier used: 24.1TB, roughly 38% more than the 17.5TB actually used on the local ReFS volume it's mirroring.
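To make the math explicit, here is the arithmetic as a quick Python sketch; the only interpretation I'm adding is that the 63.5TB "Used Space" figure is the logical size before block-clone savings:

```python
# Sanity check of the numbers above (all values in TB).
perf_capacity = 36.3   # ReFS volume capacity
perf_free = 18.8       # free space on the volume
perf_logical = 63.5    # "Used Space" as reported, presumably pre-block-clone

perf_physical = perf_capacity - perf_free      # 17.5 TB actually consumed
clone_savings = perf_logical - perf_physical   # ~46 TB saved by block cloning

cap_used = 24.1                                # hot blob capacity tier used
overhead = cap_used / perf_physical - 1        # ~0.38, i.e. the ~40% gap

print(f"physical used: {perf_physical:.1f} TB")
print(f"block-clone savings: {clone_savings:.1f} TB")
print(f"capacity tier overhead: {overhead:.0%}")
```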
Is this discrepancy expected? Does anyone know the cause? I'm guessing there are some aspects of block cloning that don't translate to blob storage? Thanks!
Rob Miller (Service Provider)
Re: SOBR Used Space Discrepancy: Performance vs Capacity
Here is another example. This SOBR is part of a larger deployment: we copy backups to it once per day and keep them for 90 days. It has two capacity extents for scaling, so backups are split between them. Every job copied to it has 90-day retention, yet the difference in total space used between performance and capacity is significant: 70TB used in performance versus 121TB in capacity.
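Back-of-the-envelope, if both tiers were holding the same 90-day data stream, the used-space ratio would imply how many extra days' worth the capacity tier is effectively keeping. A rough sketch, assuming space scales roughly linearly with days retained (a simplification; GFS points and dedup will skew this somewhat):

```python
# Rough estimate: treat used space as proportional to days retained
# (a simplification, but good enough for a ballpark figure).
perf_used_tb = 70
cap_used_tb = 121
retention_days = 90

ratio = cap_used_tb / perf_used_tb                 # ~1.73x
implied_extra_days = retention_days * (ratio - 1)  # ~66 extra days' worth

print(f"capacity tier holds ~{ratio:.2f}x the data,")
print(f"roughly {implied_extra_days:.0f} extra days of backups")
```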


Mikael Norman (Novice)
Re: SOBR Used Space Discrepancy: Performance vs Capacity
Hi Rob,
Are you using immutability on Blob/S3?
Mikael Norman (Novice)
Re: SOBR Used Space Discrepancy: Performance vs Capacity
I'm obviously lacking reading skills today; you did say "a dedicated hot blob immutable capacity tier in Azure".
Then you are probably running into the same issue we hit going direct to object.
Last year (March/April) we started moving from hardened repositories to direct-to-object storage. As time went on, the amount of stored data, compared to what was stored on the hardened repositories, grew much more than anticipated.
We knew about the 10-day block generation period adding to storage requirements, but it grew much more than that.
The cause was eventually found on a page added with/for v12.1: https://helpcenter.veeam.com/docs/backu ... ml?ver=120
Object Storage Actual Retention
Important
Consider the following:
Although the immutability period does not depend on the retention of the backup chain, it will preserve more restore points in addition to the restore points stored according to a retention policy. This behavior guarantees consistency of earlier states of the backup chain and the ability to roll back to it.
Actual retention = job retention policy + immutability period + Block Generation period
This information was something we had not seen or read anywhere before, so we were unaware of the added days for the immutability period.
Since we want to keep all data, we naturally set immutability equal to retention, so the amount of extra data we have to store is large.
Right now, a single example for one job:
The repository is storing 7.5 TB.
The UI says 4.92 TB (right-click the job under Backups).
Looking back at the job history, the average seems to be about 60 GB/day change rate, so 41 extra days (immutability plus the 10-day block generation) × 60 GB = 2460 GB, which matches up with what we actually see stored.
In this case (with a fairly low change rate) we have a not-so-insignificant storage increase of roughly 50%.
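To spell out where the numbers come from, here is the help-page formula applied to this job. The 31-day retention and matching immutability are my assumption; they are what make the totals line up. Only the change rate and the stored sizes come from the job itself:

```python
# Sketch of "Actual retention = job retention policy + immutability period
# + Block Generation period", applied to the job above.
retention_days = 31          # assumed job retention
immutability_days = 31       # set equal to retention, as described above
block_generation_days = 10   # the documented block generation period

actual_retention = retention_days + immutability_days + block_generation_days  # 72

change_rate_gb_per_day = 60                              # average from job history
extra_days = immutability_days + block_generation_days  # data kept beyond retention
extra_gb = extra_days * change_rate_gb_per_day           # 41 * 60 = 2460 GB

ui_tb, stored_tb = 4.92, 7.5
print(f"actual retention: {actual_retention} days")
print(f"expected extra data: ~{extra_gb} GB")
print(f"observed extra: ~{(stored_tb - ui_tb) * 1000:.0f} GB")  # ~2580 GB
```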
To me this seems like a workaround for cases where the immutability period is shorter than the retention and there is a risk that the full backup, and therefore the whole chain, could be affected; a rollback to a previous consistent point can then be performed.
But this does not seem to be needed when immutability matches retention, because then there is never a need to roll back.
Thus the formula "Actual retention = job retention policy + immutability period + Block Generation period" should be more dynamic, or even simpler: immutability should never be allowed to be lower than the retention time.
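To illustrate what I mean by "more dynamic", something like the sketch below; this is purely the argument expressed as code, not how VBR actually computes anything:

```python
# Documented behavior vs. the "dynamic" variant argued for above.
# Illustration of the argument only, not Veeam's implementation.
BLOCK_GENERATION_DAYS = 10

def actual_retention_current(retention_days, immutability_days):
    # Per the v12.1 help page: always pad by the full immutability period.
    return retention_days + immutability_days + BLOCK_GENERATION_DAYS

def actual_retention_proposed(retention_days, immutability_days):
    if immutability_days >= retention_days:
        # Nothing inside the policy window can be tampered with, so no
        # rollback scenario exists; only block generation padding remains.
        return retention_days + BLOCK_GENERATION_DAYS
    # Immutability shorter than retention: keep the documented padding so a
    # rollback to an earlier consistent chain state stays possible.
    return retention_days + immutability_days + BLOCK_GENERATION_DAYS

# With 31-day retention and matching immutability:
print(actual_retention_current(31, 31))   # 72 days of data kept
print(actual_retention_proposed(31, 31))  # 41 days of data kept
```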
Looking at this post regarding the visibility of immutability, there are changes coming:
post513356.html#p513356
As Egor noted earlier, the complexity of the current immutability model makes showing the immutability date too difficult.
But he and Anton both mentioned that in the future our "time-based" immutability model will allow for this capability:
Egor Yakovlev wrote: ↑Jan 04, 2024 12:08 pm
However, once we switch from "restore points" logic to "time-based" retention in the future, that problem will go away and we will be able to add true immutability dates to the UI for backups kept on object storage.
Thanks
Steve
@Veeam
Is this the way it is going to be, or is it something that will be fixed in the future with time-based immutability (or even right now with a registry key)?