-
- Enthusiast
- Posts: 55
- Liked: 4 times
- Joined: Jul 29, 2013 2:13 am
- Full Name: Scott
- Contact:
Capacity tier help
Hi. I have read the docs and searched the forum, but I am still stuck with offloading to S3.
Current situation:
vSphere 6.7, Veeam 9.5 U4, mostly Windows Server 2016 VMs.
For the sake of simplicity, I'll focus on one backup job. I back up a 5 TB file server to a SOBR using incremental backups with a retention of 300 days. I only started 50 days ago, so I have one huge VBK and 49 VIB files. It works well so far.
I'd like to store 250 days in an AWS S3 capacity tier and keep 50 days on-prem.
How do I do this, and more importantly, which files will end up where?
Will I ever have two full VBK files on-prem (which would mean I need a lot more repository storage, at least temporarily)?
Will a full VBK ever be copied to S3 (which would mean I need a lot more bandwidth)?
Many thanks,
Scott
-
- Veeam Software
- Posts: 742
- Liked: 209 times
- Joined: Jan 14, 2016 6:48 am
- Full Name: Anthony Spiteri
- Location: Perth, Australia
- Contact:
Re: Capacity tier help
Hey there Scott.
From the above it looks like you have configured forever forward incremental for that job. For data to be offloaded to a capacity tier extent, two conditions need to be met: first, the policy that dictates the operational restore window, and second, the backup chain needs to be sealed. If you are doing forever forward incremental, your chain will never seal. What you need to do is configure an active or synthetic full to seal that chain. Once that's done, the data can be offloaded.
The following links should help you further:
https://www.veeam.com/kb1932
https://helpcenter.veeam.com/docs/backu ... l?ver=95u4
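To make the two conditions concrete, here is a minimal sketch of the offload decision as a predicate. This is an illustrative model only, not Veeam's actual implementation; the function name and parameters are hypothetical.

```python
def eligible_for_offload(age_days, window_days, chain_sealed):
    """Illustrative model of the two offload conditions: the restore point
    must fall outside the operational restore window AND belong to a
    sealed backup chain."""
    return age_days > window_days and chain_sealed

# Forever forward incremental: the chain never seals, so nothing offloads,
# no matter how old the restore point is.
assert not eligible_for_offload(age_days=100, window_days=50, chain_sealed=False)

# With an active/synthetic full sealing the chain, old points become eligible.
assert eligible_for_offload(age_days=100, window_days=50, chain_sealed=True)
```

This is why changing the job to produce periodic fulls is the key step: it flips `chain_sealed` for the older chains.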
Anthony Spiteri
Regional CTO APJ & Lead Cloud and Service Provider Technologist
Email: anthony.spiteri@veeam.com
Twitter: @anthonyspiteri
-
- Enthusiast
- Posts: 55
- Liked: 4 times
- Joined: Jul 29, 2013 2:13 am
- Full Name: Scott
- Contact:
Re: Capacity tier help
Thanks. It was sealing the chain that had me stuck. I can set up the offloading correctly, but the job needs to be changed to seal the chain.
So, given I am using ReFS, I can do weekly synthetic fulls without a major impact on my available storage space (i.e. just the one 5 TB file plus incrementals) but still seal the chain. Is this right?
-
- Veeam Software
- Posts: 742
- Liked: 209 times
- Joined: Jan 14, 2016 6:48 am
- Full Name: Anthony Spiteri
- Location: Perth, Australia
- Contact:
Re: Capacity tier help
That's correct. For a better idea of what it might look like, you can plug your numbers into the Restore Point Simulator: http://rps.dewin.me/
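As a rough back-of-envelope illustration of why ReFS makes this workable: with block cloning, each weekly synthetic full references blocks that already exist on disk instead of writing another full copy. The figures below are assumed for illustration, not measured.

```python
# Assumed example figures: 5 TB full, 100 GB daily incrementals, 4 weeks.
full_tb, daily_inc_tb, weeks = 5.0, 0.1, 4

# Without block cloning, every weekly synthetic full would store 5 TB again.
naive_tb = weeks * full_tb + weeks * 7 * daily_inc_tb

# With ReFS block cloning, the synthetic fulls share existing blocks, so
# roughly one full's worth of unique data plus the incrementals is consumed.
refs_tb = full_tb + weeks * 7 * daily_inc_tb

assert refs_tb < naive_tb  # 7.8 TB vs 22.8 TB in this sketch
```

The real numbers depend on change rate and retention, which is exactly what the Restore Point Simulator models.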
Anthony Spiteri
Regional CTO APJ & Lead Cloud and Service Provider Technologist
Email: anthony.spiteri@veeam.com
Twitter: @anthonyspiteri
-
- Enthusiast
- Posts: 55
- Liked: 4 times
- Joined: Jul 29, 2013 2:13 am
- Full Name: Scott
- Contact:
Re: Capacity tier help
Thanks again. One last thing you may be able to assist with is network traffic. Is it only the incrementals that get sent (once) to S3, or does the full get sent as well? And does anything ever get read back or downloaded?
-
- Veeam Software
- Posts: 742
- Liked: 209 times
- Joined: Jan 14, 2016 6:48 am
- Full Name: Anthony Spiteri
- Location: Perth, Australia
- Contact:
Re: Capacity tier help
We offload the data from each backup file (full or incremental); however, we have effective source-side dedupe, which won't send the same block twice. This results in space savings on the capacity tier and more efficient network utilization.
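The "won't send the same block twice" behavior can be sketched with a simple hash-based filter. This is an illustrative model only (Veeam's actual block format, hashing, and indexing are internal); the function and structures are hypothetical.

```python
import hashlib

def blocks_to_send(blocks, offloaded_hashes):
    """Illustrative source-side dedupe: only transmit a block if its
    fingerprint is not already present in the capacity tier's index."""
    to_send = []
    for block in blocks:
        digest = hashlib.sha256(block).hexdigest()
        if digest not in offloaded_hashes:
            offloaded_hashes.add(digest)
            to_send.append(block)
    return to_send

seen = set()
full_backup = [b"block-1", b"block-2", b"block-3"]

# First offload: every unique block is transmitted.
assert len(blocks_to_send(full_backup, seen)) == 3

# A later file containing identical data: nothing is re-sent.
assert len(blocks_to_send(full_backup, seen)) == 0
```

So even when a full backup file is offloaded, only the blocks not already in the capacity tier actually cross the wire.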
Anthony Spiteri
Regional CTO APJ & Lead Cloud and Service Provider Technologist
Email: anthony.spiteri@veeam.com
Twitter: @anthonyspiteri
-
- Enthusiast
- Posts: 55
- Liked: 4 times
- Joined: Jul 29, 2013 2:13 am
- Full Name: Scott
- Contact:
Re: Capacity tier help
Thanks; perhaps an example?
The initial backup VBK is 1 TB. Daily incrementals are 100 GB. Synthetic full every Sunday. 90-day retention and 60 days to the capacity tier.
At day 31, what gets uploaded?
What about day 38 (after the synthetic full)?
-
- Veeam Software
- Posts: 742
- Liked: 209 times
- Joined: Jan 14, 2016 6:48 am
- Full Name: Anthony Spiteri
- Location: Perth, Australia
- Contact:
Re: Capacity tier help
It's a bit of a moving target, because we don't care what is contained inside the backup files; it's all based on the conditions for offload. That is to say, your retention policy dictates when things get aged out.
The capacity tier will ebb and flow in terms of size.
Effectively, outside of that operational restore window, all data will be offloaded and you will be left with metadata only in the dehydrated backup files.
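The "dehydrated" state described above can be pictured as follows. This is a conceptual sketch only; the dictionary fields are hypothetical stand-ins, not Veeam's actual file format.

```python
def dehydrate(backup_file):
    """Conceptual sketch: after offload, the on-prem backup file keeps its
    metadata (so the console still sees the restore points) while the
    actual data blocks now live in the capacity tier."""
    return {
        "name": backup_file["name"],
        "metadata": backup_file["metadata"],
        "data_blocks": [],  # blocks offloaded; only metadata remains on-prem
    }

vbk = {
    "name": "fileserver.vbk",
    "metadata": {"restore_points": 30},
    "data_blocks": ["b1", "b2", "b3"],
}
shell = dehydrate(vbk)
assert shell["data_blocks"] == []
assert shell["metadata"] == {"restore_points": 30}
```

The practical consequence is that the on-prem footprint of aged-out points shrinks to almost nothing, while restores from them pull blocks from the capacity tier.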
Anthony Spiteri
Regional CTO APJ & Lead Cloud and Service Provider Technologist
Email: anthony.spiteri@veeam.com
Twitter: @anthonyspiteri
-
- Influencer
- Posts: 21
- Liked: 8 times
- Joined: Mar 14, 2014 11:16 am
- Full Name: Thomas Johnson
- Contact:
Re: Capacity tier help
anthonyspiteri79 wrote: ↑ Mar 05, 2019 5:15 am: "we have effective source side dedupe which won't send the same block twice."
Could you clarify that a little bit more, specifically in relation to multiple tiering job runs?
As a hypothetical scenario, say I take four full backups of a powered-off VM. The final backup size is 1 TB, containing incompressible random data. The VM is powered off, so there are zero changes to upload and no churn to consider. Label the backups A-D, with A being the oldest full and D being the newest. My capacity tier is initially empty, and my operational restore window is zero days, for immediate upload of any sealed chains. Backup retention is set to 999 days.
I start by pushing Backup A to the capacity tier. I have transmitted 1TB to the Capacity Tier, which now holds 1TB of data.
Once the initial upload is done, I run a Tiering Job. Backups B and C should now be pushed to the capacity tier. Backup D should remain in the Performance Tier, as it is the latest restore point and is not sealed.
How much data will this Tiering Job transfer? 1TB, 2TB, or just API Calls + Metadata updates?
How much data will reside in the Capacity Tier? 1TB for a perfectly-deduplicated backup, or 3TB of three identical fulls?
-
- Chief Product Officer
- Posts: 31804
- Liked: 7298 times
- Joined: Jan 01, 2006 1:01 am
- Location: Baar, Switzerland
- Contact:
Re: Capacity tier help
Transferred is only the metadata describing those two additional restore points, B and C.
Stored is the 1 TB that was brought in by the offload of A, plus metadata for the two additional restore points.
So, in your own words, what's stored is a "perfectly-deduplicated backup".
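Putting numbers on that answer for the scenario above (the metadata size is an assumed illustrative figure, not a documented value):

```python
FULL_TB = 1.0     # each identical full backup, per the scenario
META_TB = 0.001   # hypothetical per-restore-point metadata size

# Initial push of A transfers the full 1 TB of unique blocks.
initial_transfer = FULL_TB

# The subsequent tiering run for B and C transfers metadata only:
# every data block already exists in the capacity tier.
tiering_transfer = 2 * META_TB

# Stored: one deduplicated copy of the data plus per-point metadata.
stored = FULL_TB + 2 * META_TB

assert tiering_transfer < 0.01          # metadata, not another 2 TB
assert abs(stored - 1.002) < 1e-9       # ~1 TB, not 3 TB of identical fulls
```

In other words, the tiering job's cost for B and C is API calls plus metadata, and the capacity tier holds roughly one copy of the data.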
-
- Influencer
- Posts: 21
- Liked: 8 times
- Joined: Mar 14, 2014 11:16 am
- Full Name: Thomas Johnson
- Contact:
Re: Capacity tier help
Thanks, that clarifies things. I've increased my retention settings, and initial results are as you've described.