Discussions specific to object storage support as backup target
scottf99
Enthusiast
Posts: 38
Liked: never
Joined: Jul 29, 2013 2:13 am
Full Name: Scott
Contact:

Capacity tier help

Post by scottf99 » Mar 04, 2019 11:13 pm

Hi. I have read the docs and searched the forum, and I am still stuck with offloading to S3.

Current situation:
vSphere 6.7, Veeam 9.5 U4, mostly Windows Server 2016 VMs.
For the sake of simplicity I'll focus on one backup job I am working with. I back up a 5TB file server to a SOBR using incremental backups and a retention of 300 days. I only started 50 days ago, so I have one huge VBK and 49 VIB files. It works well so far.
I'd like to store 250 days in an AWS S3 Capacity Tier and leave 50 days on-prem.

How do I do this and more importantly what files will end up where?
Will I ever have two full VBK files on-prem (as that would mean I temporarily need a lot more repository storage)?
Will a full VBK ever be copied to S3 (as that would mean I need a lot more bandwidth)?

Many thanks
Scott

anthonyspiteri79
Veeam Software
Posts: 703
Liked: 175 times
Joined: Jan 14, 2016 6:48 am
Full Name: Anthony Spiteri
Location: Perth, Australia
Contact:

Re: Capacity tier help

Post by anthonyspiteri79 » Mar 05, 2019 2:47 am

Hey there Scott.

From the above it looks like you have configured forever forward incremental for that job. For data to be offloaded to a Capacity Tier extent, two conditions need to be met: first, the restore point must fall outside the operational restore window set by policy; second, the backup chain it belongs to must be sealed. If you are doing forever forward incrementals, your chain will never seal. What you need to do is configure an active or synthetic full to seal that chain. Once that's done, the data can be offloaded.
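Anthony's two conditions can be sketched as a simple eligibility check. This is a hypothetical illustration only, not Veeam's actual code; the function and parameter names are made up:

```python
from datetime import date, timedelta

def eligible_for_offload(point_date, chain_sealed, today, window_days):
    # Hypothetical sketch of the two offload conditions described above:
    # 1) the restore point falls outside the operational restore window,
    # 2) the backup chain it belongs to is sealed.
    outside_window = point_date < today - timedelta(days=window_days)
    return chain_sealed and outside_window

today = date(2019, 3, 5)
old_point = date(2019, 1, 1)

# Forever forward incremental: the chain never seals, so nothing offloads.
print(eligible_for_offload(old_point, chain_sealed=False,
                           today=today, window_days=50))  # False

# Once a synthetic/active full seals the older chain, it becomes eligible.
print(eligible_for_offload(old_point, chain_sealed=True,
                           today=today, window_days=50))  # True
```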

The following links should help you further:

https://www.veeam.com/kb1932
https://helpcenter.veeam.com/docs/backu ... l?ver=95u4
Anthony Spiteri
Senior Global Technologist, Product Strategy
Email: anthony.spiteri@veeam.com | Mobile: +61488335699
Twitter: @anthonyspiteri


Re: Capacity tier help

Post by scottf99 » Mar 05, 2019 4:06 am

Thanks. It was sealing the chain that had me stuck. I can set up the offloading correctly, but the job needs to be changed so the chain gets sealed.
So, given I am using ReFS, I can do weekly synthetic fulls without a major impact on my available storage space (i.e. just the one 5TB file plus incrementals) but still seal the chain. Is this right?


Re: Capacity tier help

Post by anthonyspiteri79 » Mar 05, 2019 5:04 am

That's correct. To get a better idea of what it might look like, you can plug your numbers into the Restore Point Simulator: http://rps.dewin.me/


Re: Capacity tier help

Post by scottf99 » Mar 05, 2019 5:10 am

Thanks again. One last thing you may be able to assist with is network traffic. Is it only the incrementals that get sent to S3 once, or does the full get sent too? And does anything ever get read back or downloaded?


Re: Capacity tier help

Post by anthonyspiteri79 » Mar 05, 2019 5:15 am

We will offload the data from each backup file (full or incremental); however, we have effective source-side dedupe that won't send the same block twice. This results in space savings on the Capacity Tier and more efficient network utilization.
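As a rough illustration of what block-level source-side dedupe means for traffic, here is a hypothetical sketch using content hashing (not Veeam's actual implementation; the block contents and helper are made up):

```python
import hashlib

def blocks_to_send(backup_file_blocks, already_offloaded):
    # Hypothetical dedupe sketch: only blocks whose content hash is not
    # already known to the capacity tier get transmitted.
    to_send = []
    for block in backup_file_blocks:
        digest = hashlib.sha256(block).hexdigest()
        if digest not in already_offloaded:
            already_offloaded.add(digest)
            to_send.append(block)
    return to_send

offloaded = set()
full = [b"block-A", b"block-B", b"block-C"]
incr = [b"block-B", b"block-D"]  # shares block-B with the full

sent_full = blocks_to_send(full, offloaded)
sent_incr = blocks_to_send(incr, offloaded)
print(len(sent_full), len(sent_incr))  # 3 1 -- block-B is never sent twice
```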


Re: Capacity tier help

Post by scottf99 » Mar 05, 2019 5:26 am

Thanks; perhaps an example would help?
The initial VBK is 1TB and the daily incrementals are 100GB, with a synthetic full every Sunday, 90-day retention, and 60 days going to the Capacity Tier.
At day 31, what gets uploaded?
What about day 38 (after the synthetic full)?


Re: Capacity tier help

Post by anthonyspiteri79 » Mar 05, 2019 5:53 am

It's a bit of a moving target because we don't care what is contained inside the backup files; it's all based on the conditions for offload. That is to say, your retention policy dictates when things get aged out.

The Capacity Tier will ebb and flow in terms of size.

Effectively, outside of that operational restore window all data will be offloaded, and you will be left with metadata only in the dehydrated backup files.
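For the 1TB/100GB example in this thread, a very rough back-of-envelope can be sketched. My assumptions: "60 days to Capacity Tier" is read as a 30-day operational restore window within the 90-day retention; weekly synthetic fulls, dedupe savings, and the dehydrated metadata-only stubs are all ignored:

```python
# Back-of-envelope only; all simplifying assumptions noted above apply.
retention_days = 90
window_days = 30                                   # assumed operational restore window
offloaded_points = retention_days - window_days    # restore points aged out to S3

capacity_gb = 1000 + 100 * (offloaded_points - 1)  # oldest full + its aged incrementals
local_gb = 1000 + 100 * (window_days - 1)          # newest full chain kept on-prem

print(offloaded_points, capacity_gb, local_gb)     # 60 6900 3900
```

Pre-dedupe, that is roughly 60 aged restore points (~6.9TB) in the Capacity Tier and ~3.9TB on-prem; actual numbers will be lower thanks to block-level dedupe, which is why the tier "ebbs and flows".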

NTmatter
Influencer
Posts: 19
Liked: 8 times
Joined: Mar 14, 2014 11:16 am
Full Name: Thomas Johnson
Contact:

Re: Capacity tier help

Post by NTmatter » May 14, 2019 9:31 am

anthonyspiteri79 wrote:
Mar 05, 2019 5:15 am
we have effective source side dedupe which won't send the same block twice.
Could you clarify that a little more, specifically in relation to multiple runs of the tiering job?

As a hypothetical scenario, say I do four full backups of a powered-off VM. The final backup size is 1TB, containing incompressible random data. The VM is powered off, so there are zero changes to upload and no churn to consider. Label the backups A-D, with A being the oldest full and D being the newest. My Capacity Tier is initially empty, and my Operational Restore Window is zero days for immediate upload of any sealed chains. Backup retention is set to 999 days.

I start by pushing Backup A to the capacity tier. I have transmitted 1TB to the Capacity Tier, which now holds 1TB of data.

Once the initial upload is done, I run a Tiering Job. Backups B and C should now be pushed to the capacity tier. Backup D should remain in the Performance Tier, as it is the latest restore point and is not sealed.

How much data will this Tiering Job transfer? 1TB, 2TB, or just API Calls + Metadata updates?

How much data will reside in the Capacity Tier? 1TB for a perfectly-deduplicated backup, or 3TB of three identical fulls?

Gostev
SVP, Product Management
Posts: 25837
Liked: 3981 times
Joined: Jan 01, 2006 1:01 am
Location: Baar, Switzerland
Contact:

Re: Capacity tier help

Post by Gostev » May 14, 2019 12:51 pm 1 person likes this post

Transferred is only the metadata describing these 2 additional restore points, B and C.

Stored is the 1TB that was brought in by the offload of A, plus metadata for the 2 additional restore points.
So, in your own words, what's stored is a "perfectly-deduplicated backup" :D
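To make the accounting concrete, here is a tiny sketch with illustrative numbers (the per-point metadata size is a made-up placeholder; real metadata is small but non-zero):

```python
TB_GB = 1000   # size of one full backup, per the scenario above
META_GB = 1    # assumed per-restore-point metadata overhead (placeholder)

transferred_initial = TB_GB + META_GB  # offloading A carries the actual blocks
transferred_tiering = 2 * META_GB      # B and C: metadata only, blocks already present
stored = TB_GB + 3 * META_GB           # one copy of the blocks + metadata for A, B, C

print(transferred_tiering, stored)     # 2 1003
```

So the second tiering job moves on the order of metadata-plus-API-calls, not terabytes, and the tier holds one deduplicated copy of the identical blocks.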


Re: Capacity tier help

Post by NTmatter » May 28, 2019 12:54 pm 2 people like this post

Thanks, that clarifies things. I've increased my retention settings, and initial results are as you've described.
