So, I have a question: is there a report that shows the allocated space for a single backed-up system (backup chain) that resides on a hardened repository?
Judging by this thread, this still does not seem to be possible.
Am I correct, or am I misunderstanding something?
For clarification: we're in the process of standing up a new v13 infrastructure in a service provider environment and plan(ned) to bill backup storage per system based on the space it actually allocates.
And if it's not possible, what could a "workaround" look like?
Would this be possible with an object storage-based repository?
Thanks in advance.
chaos (Service Provider)
RomanK (Veeam Software)
Re: [V13] - How to attribute used space on hardened repository to individual system?
Hello chaos,
Veeam ONE collects data from VBR. Therefore, all reported numbers should match what you see in the VBR console.
In the thread you mentioned, the repository uses XFS, which reduces the on-disk size of backups internally through block cloning (reflinks). Those savings are not reported back to VBR, so neither Veeam ONE nor VBR displays them.
If this was your question, then yes, you are correct. This is known behavior for the ReFS, XFS, and deduplication space-saving features.
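As a rough workaround (not something VBR reports for you), you could measure per-chain allocation directly on the repository file system. Below is a minimal Python sketch; it assumes you have shell access to the hardened repository and know which directory holds the chain, and the example path is a made-up placeholder:

import os
import sys

def chain_sizes(path):
    """Walk a backup chain directory and compare the apparent
    (logical) file sizes against the blocks actually allocated."""
    apparent = allocated = 0
    for root, _dirs, files in os.walk(path):
        for name in files:
            st = os.stat(os.path.join(root, name))
            apparent += st.st_size            # logical file size
            allocated += st.st_blocks * 512   # st_blocks is in 512-byte units
    return apparent, allocated

if __name__ == "__main__":
    # e.g. python3 chain_sizes.py /backups/JobName/VMName  (placeholder path)
    apparent, allocated = chain_sizes(sys.argv[1])
    print(f"apparent:  {apparent / 2**30:.2f} GiB")
    print(f"allocated: {allocated / 2**30:.2f} GiB")

One caveat: st_blocks counts reflink-shared extents in full for every file, so fast-cloned synthetic fulls will still show their whole size; attributing shared extents correctly would need file-system-level tools such as xfs_io or filefrag.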
I'm not entirely sure about object storage yet. Are you asking about something specific, or do you mean just S3-compatible storage?
There could be block-level efficiencies, but they might not be exposed. I’ll need to check with the teams to confirm.
Thanks
chaos (Service Provider)
Re: [V13] - How to attribute used space on hardened repository to individual system?
Thanks Roman,
Yes, that was the question.
If you could find an answer regarding object/S3-compatible storage, that would be perfect.
RomanK (Veeam Software)
Re: [V13] - How to attribute used space on hardened repository to individual system?
Hello chaos,
I have checked with QA in the labs, and we found the following. Synthetic GFS backups work for object storage-based backups as well: if a GFS backup is scheduled at the end of an incremental cycle, the system creates a synthetic GFS full by reusing existing blocks rather than duplicating data. In practice, this means the GFS backup itself doesn't consume additional storage space; it's virtual.
For example, in our case, the GFS backup only contains metadata, while the actual data footprint remains unchanged. On S3, the entire backup occupies 159.81 MB. The main repository view reflects the accurate total size, even though the interface may show both the regular full and the GFS full as having the same size.
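If you need a per-system number on object storage today, one workaround is to total the stored object sizes under the backup's prefix via the S3 API; that total is what capacity billing is based on. A minimal boto3 sketch, where the bucket and prefix names are made-up placeholders and the mapping of prefixes to individual systems depends on your repository layout:

import boto3  # pip install boto3

def prefix_size_bytes(bucket: str, prefix: str) -> int:
    """Sum the stored size of every object under a prefix."""
    s3 = boto3.client("s3")
    paginator = s3.get_paginator("list_objects_v2")
    total = 0
    for page in paginator.paginate(Bucket=bucket, Prefix=prefix):
        for obj in page.get("Contents", []):
            total += obj["Size"]
    return total

size = prefix_size_bytes("my-backup-bucket", "Veeam/Backup/JobName/")
print(f"{size / 2**20:.2f} MiB stored under prefix")

Because blocks reused by a synthetic GFS full exist only once in the bucket, this prefix total reflects the real footprint (the 159.81 MB in the example above) rather than the per-restore-point sizes the interface shows.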
We’re actively working on improving how sizes are represented in the interface, but for now, this is expected.
Thanks