Hi All,
I have done a cursory search of the forum, and while my question has no doubt been answered somewhere, I couldn't find the answer, so forgive me if it's been asked before.
I have a customer who was doing backups to their local repo with 15 restore points in forever forward incremental mode. They wanted to add 5 days of immutability and offload to S3. We added some S3-compatible storage, converted their repo into a SOBR (Scale-Out Backup Repository), and enabled copy offload since they wanted immediate immutability.
Their 15 points, spread across ~7 jobs, totaled about 2.3 TB in the local repository. After a week or so, however, the S3 storage was consuming about 3.7 TB. Some extra consumption was expected, but this is more than the customer and I anticipated.
I read and re-read the documentation I could find on S3 immutability with Veeam; however, I'm not finding any explanation of how it handles Forever Forward in copy mode. I suspect it's not really designed to handle it, and indeed the notes for the other immutable option, the Linux Hardened Repository, specifically advise that Forever Forward is not supported.
I've had a case open for this (05105187); however, support was unable to really explain it either. We settled on a rule of thumb of doubling the local storage. The case was closed, and while this was a satisfactory answer for the customer, it wasn't satisfactory for me, and I know it'll come up again soon enough.
Any insight that someone could provide would be invaluable!
Thanks!
- Michael Anderson (Service Provider)
Hannes Kasparick (Product Manager)
Re: Calculating Storage Usage / Understand Consumption - S3 Copy with Forever Forward
Hello,
with only 15 restore points, using significantly more space sounds expected to me. For immutability in object storage, we always extend immutability by 10 days to strike a good balance between disk space usage and API costs. Reducing that value might save them some disk space, but it will increase API costs.
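To make that trade-off concrete, here is a toy model; the mechanics and numbers are purely illustrative, not how the product actually schedules lock extensions:

```python
# Toy model of the trade-off described above: locks are re-extended in fixed
# "generation" periods. A longer period means fewer lock-extension API calls,
# but blocks can stay locked (and consume space) for up to one extra period
# after they are no longer needed. All numbers here are hypothetical.

def lock_tradeoff(blocks: int, chain_days: int, generation_days: int) -> tuple[int, int]:
    # Each block needs its lock refreshed roughly once per generation period
    # while the chain still references it.
    api_calls = blocks * (chain_days // generation_days + 1)
    # Worst case, a block stays locked for up to one extra generation period.
    extra_locked_days = generation_days
    return api_calls, extra_locked_days

for gen in (1, 10, 30):
    calls, extra = lock_tradeoff(blocks=1_000_000, chain_days=30, generation_days=gen)
    print(f"generation={gen:2d}d  lock refreshes~{calls:>10,}  space held up to {extra}d longer")
```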
Hardened Repository on Linux works completely differently. It has backup files like any other repository (object storage has tons of small objects instead of a few backup files), and it makes each backup file immutable. Forever Forward changes the full backup with every run, so that merge is incompatible with an immutable backup file; that is why Hardened Repository requires synthetic or active fulls.
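For illustration, a minimal sketch of why the merge step breaks on an immutable file; this is a simplification, not Veeam's actual on-disk format:

```python
# Toy illustration: the forever-forward merge rewrites the full backup file
# in place, which an immutability flag (e.g. chattr +i on Linux) forbids.

class ImmutableFile:
    def __init__(self, data: bytes):
        self._data = data

    def write(self, data: bytes):
        # Mimics a filesystem-level immutability flag rejecting all writes.
        raise PermissionError("file is immutable")

full = ImmutableFile(b"full-backup")
oldest_increment = b"day-1-changes"
try:
    full.write(oldest_increment)      # the merge of the oldest increment
except PermissionError as e:
    print("merge fails:", e)          # hence synthetic/active fulls are required
```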
Support is available to fix broken things, not to explain how the software was designed. I see they verified that everything works correctly, so that sounds good so far.
Best regards,
Hannes
Michael Anderson (Service Provider)
Re: Calculating Storage Usage / Understand Consumption - S3 Copy with Forever Forward
Hi Hannes,
So if I understand correctly, it just stretches the 15-day chain to a 30-day chain in this case? Because 5 days set + 10 from Veeam + the existing 15 points?
I still think double is very high, as the increments aren't particularly large and the full usually makes up the bulk of most forever forward chains.
I assume that with object storage Forever Forward chains, the full is technically recreated every day because of the merges, but since object storage essentially dedupes, those new fulls don't take up as much space?
Could this explain some of the doubling?
Hannes Kasparick (Product Manager)
Re: Calculating Storage Usage / Understand Consumption - S3 Copy with Forever Forward
Hello,
more like from 15 to "up to 20" (the background mechanics are a bit complex). I agree, 60% more sounds like a lot, as it should be more like 33% by my rule-of-thumb calculation. But since support confirmed that immutability is applied correctly, I can only guess that there were larger incrementals somewhere in between.
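The back-of-the-envelope version of that rule of thumb (my simplification, not an official formula): points aged out of retention can still be locked for up to the immutability window, so the bucket can hold up to retention-plus-immutability days' worth of points at once.

```python
# Rule-of-thumb sketch: how many daily restore points can the bucket hold?
retention_points = 15    # forever forward chain length
immutability_days = 5    # configured immutability window

# A point aged out by retention may still be locked for up to
# `immutability_days` more days before its blocks become deletable.
worst_case_points = retention_points + immutability_days
overhead = (worst_case_points - retention_points) / retention_points

print(worst_case_points)     # 20 points ("up to 20")
print(f"{overhead:.0%}")     # 33% -- versus the ~60% actually observed
```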
The backup mode is irrelevant. Veeam always writes to object storage in incremental-forever fashion. Even if you schedule an active or synthetic full every day, we only upload the incremental data to object storage; full backups in object storage are metadata operations in the end.
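As a minimal sketch of why a daily "full" costs almost nothing, assume a content-addressed layout where each restore point is just a manifest of block hashes, so only blocks the bucket has never seen are uploaded (a simplification of the real format):

```python
# Minimal dedup model: object storage keeps blocks keyed by hash; a "full"
# restore point is pure metadata listing the blocks it needs.

import hashlib

bucket: dict[str, bytes] = {}          # object storage: hash -> block data
restore_points: list[list[str]] = []   # each point: a manifest of block hashes

def offload(point_blocks: list[bytes]) -> int:
    """Store one restore point; return the bytes actually uploaded."""
    uploaded = 0
    manifest = []
    for block in point_blocks:
        key = hashlib.sha256(block).hexdigest()
        if key not in bucket:          # only new/changed blocks cost space
            bucket[key] = block
            uploaded += len(block)
        manifest.append(key)
    restore_points.append(manifest)    # the "full" itself is just metadata
    return uploaded

day1 = [b"A" * 1024, b"B" * 1024, b"C" * 1024]
day2 = [b"A" * 1024, b"B2" * 512, b"C" * 1024]   # one changed block

print(offload(day1))   # 3072 bytes: the initial upload
print(offload(day2))   # 1024 bytes: only the changed block, despite a new "full"
```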
Best regards,
Hannes