Discussions related to using object storage as a backup target.
evilaedmin
Expert
Posts: 176
Liked: 30 times
Joined: Jul 26, 2018 8:04 pm
Full Name: Eugene V

Backup chains and calculating cloud capacity

Post by evilaedmin »

When using cloud storage for a backup that has a very low change rate, is there a way to avoid having to generate and upload new full backups?

We have a subset of our data with a very, very low change rate (below 0.5%, often much lower); however, the change rate is not zero. The base data set is approximately 200 TB. We use a deduplicating storage target, so we are used to efficient synthetic fulls.
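
To put rough numbers on it (a back-of-the-envelope sketch; the 4-week window and steady daily change rate are illustrative assumptions, not measurements):

Code: Select all

BASE_TB = 200            # base data set
CHANGE_RATE = 0.005      # ~0.5% daily change (our upper bound)
DAYS = 28                # 4-week sizing window

daily_incr_tb = BASE_TB * CHANGE_RATE            # ~1 TB per day

# Forever incremental: one full upload, then only increments.
forever_incr = BASE_TB + daily_incr_tb * DAYS    # 200 + 28 = 228 TB

# Weekly active fulls: a fresh 200 TB full every week, plus increments.
weekly_fulls = BASE_TB * (DAYS // 7) + daily_incr_tb * DAYS   # 828 TB

print(f"forever incremental: ~{forever_incr:.0f} TB uploaded")
print(f"weekly fulls:        ~{weekly_fulls:.0f} TB uploaded")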

Hypothetically, if we were to move this to either the v9.5 U4 Cloud Tier or the new cloud storage options in v10, what are our options for not having to repeatedly upload the base 200 TB of data?

In a legacy tape environment we would run a level 1-9 backup, which for many products would reset the chain of incrementals: the restore baseline became the latest full plus the level 1, and a new chain of incrementals would be started from there.

This would allow us to go forward without creating a new full for a much longer period than a forever-growing chain of full + incr + incr... would.
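
For clarity, the rule is that a level-N backup captures everything changed since the most recent run at any strictly lower level. A toy sketch of that reference-point selection (my own illustration, not any particular product's logic):

Code: Select all

# `history` holds (day, level) tuples, oldest first.
def reference_run(history, new_level):
    """Return the run a new backup of `new_level` is taken relative to."""
    for day, level in reversed(history):
        if level < new_level:
            return (day, level)
    return None  # no lower-level run yet: a level 0 (full) is needed

history = [(1, 0), (2, 2), (3, 2), (4, 2), (5, 1)]  # full, increments, level 1
print(reference_run(history, 2))  # -> (5, 1): the level 1 run reset the chain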
veremin
Product Manager
Posts: 20284
Liked: 2258 times
Joined: Oct 26, 2012 3:28 pm
Full Name: Vladimir Eremin

Re: Backup chains and calculating cloud capacity

Post by veremin »

How many full backups are there? If there is more than one, you can make the Cloud Tier copy only the latest backup chain (the one that starts from the latest full backup).

This option is offered in VB&R v10 when you enable the Cloud Tier copy mode for the first time.
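
To illustrate what "latest chain" means (a simplified sketch, not the actual product logic): it is the most recent full backup plus every increment created after it.

Code: Select all

# Restore points oldest first: "F" = full (.vbk), "I" = increment (.vib).
def latest_chain(points):
    last_full = max(i for i, p in enumerate(points) if p == "F")
    return points[last_full:]   # the latest full plus all newer increments

points = ["F", "I", "I", "F", "I", "I"]
print(latest_chain(points))     # ['F', 'I', 'I'] -- only this part is copied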

Thanks!
evilaedmin
Expert
Posts: 176
Liked: 30 times
Joined: Jul 26, 2018 8:04 pm
Full Name: Eugene V

Re: Backup chains and calculating cloud capacity

Post by evilaedmin »

@Veremin With our StoreOnce storage target we generate a synthetic full once per week; I think with a ReFS-based repository we might only do a synthetic full once per month. I suppose this is one of the gotchas of using a deduplicating appliance as a "primary" type of storage repository? I know that since our purchase both Veeam and HPE have clarified that deduplicating appliances are not recommended for a "Performance Tier" type of use case.
veremin
Product Manager
Posts: 20284
Liked: 2258 times
Joined: Oct 26, 2012 3:28 pm
Full Name: Vladimir Eremin

Re: Backup chains and calculating cloud capacity

Post by veremin » 1 person likes this post

With such a setup, the best option is to select "copy only the latest chain" when enabling copy mode on your Capacity Tier.

After the initial copy, all offload activity to object storage will be forever incremental. Not only that: blocks within the backup chain that have already been copied to object storage won't be copied again. Basically, the Capacity Tier will behave the way a ReFS-based repository does.
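
Conceptually, the block reuse works like content-addressed storage: each block is hashed, and a block whose hash is already present in the bucket is referenced instead of re-uploaded. A simplified sketch of the idea (not the actual offload implementation):

Code: Select all

import hashlib

uploaded = set()    # hashes of blocks already in the object storage bucket

def offload(blocks):
    sent = 0
    for block in blocks:
        digest = hashlib.sha256(block).hexdigest()
        if digest not in uploaded:   # new block: upload it
            uploaded.add(digest)
            sent += len(block)
    return sent                      # bytes actually transferred

full = [b"A" * 1024, b"B" * 1024, b"C" * 1024]
incr = [b"B" * 1024, b"D" * 1024]    # shares block "B" with the full
print(offload(full))   # 3072 -- every block of the full is new
print(offload(incr))   # 1024 -- only the new "D" block is sent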

Thanks!
RossFawcett
Service Provider
Posts: 18
Liked: 4 times
Joined: Jul 14, 2014 8:49 am
Full Name: Ross Fawcett
Location: Perth, Western Australia

Re: Backup chains and calculating cloud capacity

Post by RossFawcett »

veremin wrote:
After the initial copy, all offload activity to object storage will be forever incremental. Not only that: blocks within the backup chain that have already been copied to object storage won't be copied again. Basically, the Capacity Tier will behave the way a ReFS-based repository does.
I've been testing v10 today, and that's not what we have been seeing in terms of it working in a similar way to ReFS, where we get block clone capabilities. Very simple setup with a fresh v10 install: a single job with two VMs backing up from ESXi. The retention policy on the backup job is set to 7 restore points (for testing purposes), and the job is set to forever forward incremental, i.e. incremental is selected with no synthetic full backup configured. The job sends its backups to a SOBR, which is configured with two extents: a simple NAS share (technically a share off the existing backup server, Windows 2016 to Windows 2016 over SMB) and an Azure Blob container as the capacity tier extent, configured with only copy backups enabled.

What we see from a job perspective: the initial backup job runs, and I have run it a few times to get to the 7 restore points. Backup 8 runs, the job completes and merges the oldest restore point to get back to 7 on disk, and the UI shows approximately 250 MB transferred. Two SOBR tiering jobs then start. The first kicks off and we can see it transferring what appears to be a VIB file, which makes sense since an incremental has run; it sends about 292 MB to Azure. Then the second SOBR tiering job starts, and we see it sending a VBK, a full 30 GB, which is approximately the size of the full backup on disk.
Gostev
Chief Product Officer
Posts: 31533
Liked: 6703 times
Joined: Jan 01, 2006 1:01 am
Location: Baar, Switzerland

Re: Backup chains and calculating cloud capacity

Post by Gostev » 1 person likes this post

This is not expected.
RossFawcett
Service Provider
Posts: 18
Liked: 4 times
Joined: Jul 14, 2014 8:49 am
Full Name: Ross Fawcett
Location: Perth, Western Australia

Re: Backup chains and calculating cloud capacity

Post by RossFawcett » 1 person likes this post

No worries, I've opened a case this morning: #03987430.

It is a test server, so our production Veeam 9.5 U4 instance is still working well, but we are obviously keen to implement v10 for the new features available in the product.