Discussions related to using object storage as a backup target.
isolated_1
Enthusiast
Posts: 44
Liked: 5 times
Joined: Apr 09, 2015 8:33 pm
Full Name: Simon Chan
Contact:

Forever Incremental Issue with Object Storage

Post by isolated_1 »

Hey folks,

Bit of a dilemma here, so I wanted feedback and advice from the community.

We just got off an initial meeting with Wasabi about possibly integrating it into our Veeam solution so that we can dump backup copy jobs to cheaper online storage with minimal fuss. The problem is that our Veeam repository is currently near capacity (46TB, consisting of one SOBR with a single extent). I have about 3TB left on this SOBR. We have also set up a Cloud Connect environment (we are a SP), and I'm offloading additional jobs there; we're actually both the SP and the tenant in this case.

Anyway, after having read about how the capacity tier works, I'm failing to see how we can use it in our environment. We are only using forever incremental for all of our backup jobs. With only 3TB left, I don't see how I will be able to create either synthetic or active fulls to "seal" the forever incremental chain so it can be offloaded to the capacity tier. Am I correct in thinking this, or would there be some other way for us to utilize object storage?
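Here's how I understand the mechanics, as a rough sketch (toy Python, nothing Veeam-specific): the move policy can only pick up chains that have been closed off by a newer full, and a forever incremental job never closes its single chain.

[code]
# Rough sketch (nothing Veeam-specific): only inactive chains,
# i.e. chains closed off by a newer full backup, can be moved.
def inactive_chains(restore_points):
    """Split points into chains at each full; only closed chains count."""
    chains, current = [], []
    for point in restore_points:       # oldest -> newest
        if point == "F" and current:   # a new full seals the previous chain
            chains.append(current)
            current = []
        current.append(point)
    return chains                      # the still-open chain is excluded

# Forever incremental: one full, then increments forever -> nothing seals.
print(inactive_chains(["F", "I", "I", "I", "I"]))    # []

# Periodic fulls: older chains seal and become eligible for offload.
print(inactive_chains(["F", "I", "I", "F", "I"]))    # [['F', 'I', 'I']]
[/code]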

Add me as a +1 for having Veeam utilize S3/object storage directly without needing to go through a SOBR. I'm also currently looking into some sort of storage gateway to use with Wasabi that can present the Wasabi storage as a simple iSCSI mount to the Veeam server. I see StarWind Storage Gateway, but it utilizes VTL. I want something like AWS Storage Gateway, which can do this with S3 storage, but for Wasabi.
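For what it's worth, Wasabi exposes an S3-compatible API, so standard S3 tooling can already talk to it by overriding the endpoint. A minimal connectivity check with boto3 (the access keys are placeholders):

[code]
# Minimal connectivity check against Wasabi's S3-compatible endpoint.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.wasabisys.com",    # Wasabi's S3 endpoint
    aws_access_key_id="WASABI_ACCESS_KEY",      # placeholder
    aws_secret_access_key="WASABI_SECRET_KEY",  # placeholder
)

# List the buckets visible to this key pair.
for bucket in s3.list_buckets()["Buckets"]:
    print(bucket["Name"])
[/code]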
HannesK
Product Manager
Posts: 14839
Liked: 3086 times
Joined: Sep 01, 2014 11:46 am
Full Name: Hannes Kasparick
Location: Austria
Contact:

Re: Forever Incremental Issue with Object Storage

Post by HannesK »

Hello,
As you already mention, you need a sealed chain to move old data to object storage. On ReFS, synthetic fulls don't cost any extra space if you use them, because they are built with block cloning. Of course, reverse incremental also works (but that does not solve your current issue, as it requires a new full backup).
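To illustrate the block cloning point, here is a conceptual sketch (toy Python, not ReFS internals): a synthetic full only references blocks that already exist on disk instead of writing new copies.

[code]
# Conceptual sketch only (not ReFS internals): deduplicated block storage.
blocks_on_disk = {}                     # block_id -> data, stored once

def write_backup(catalog, name, block_ids):
    catalog[name] = block_ids           # a backup file is a list of block refs
    for b in block_ids:
        blocks_on_disk.setdefault(b, f"data-{b}")   # clone = reference, no copy

catalog = {}
write_backup(catalog, "full", ["b1", "b2", "b3"])
write_backup(catalog, "incr", ["b4"])
# Synthetic full: built by block-cloning blocks already in the chain.
write_backup(catalog, "synthetic-full", ["b1", "b2", "b3", "b4"])
print(len(blocks_on_disk))              # 4 unique blocks on disk, not 8
[/code]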

In v10 you can also use object storage with non-sealed chains in copy mode. But copy mode also does not offload data; it duplicates it.

For direct backup to S3 storage: to quote the forum digest... "object storage will become a first class backup target citizen"... but that is not coming in the near future. It also would not help most customers, because most don't have Gbit / 10 Gbit internet connections or local S3 storage with proper bandwidth.
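A quick back-of-envelope calculation (illustrative numbers) shows why bandwidth matters so much here:

[code]
# Back-of-envelope: time to push one full backup to object storage.
full_backup_tb = 3.0                       # illustrative size, TB

for uplink_gbit in (0.1, 1.0, 10.0):       # 100 Mbit, 1 Gbit, 10 Gbit
    # TB -> bits, divided by line rate; assumes full link utilization
    seconds = full_backup_tb * 8e12 / (uplink_gbit * 1e9)
    print(f"{uplink_gbit:>4} Gbit/s -> {seconds / 3600:5.1f} hours")
[/code]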

Best regards,
Hannes
isolated_1
Enthusiast
Posts: 44
Liked: 5 times
Joined: Apr 09, 2015 8:33 pm
Full Name: Simon Chan
Contact:

Re: Forever Incremental Issue with Object Storage

Post by isolated_1 »

Thanks Hannes!

It's times like these when I feel like I need to take a break! I'm overthinking this. You know what's the simplest way to achieve what I want? Simply being the tenant of a third-party Cloud Connect provider! DUHHHH! I just buy the storage I need and send all of my backup copy jobs there. I don't have to mess with my current backup chain, nor do I have to spend any upfront capital to purchase new hardware and software.
Rybakovas
Novice
Posts: 3
Liked: never
Joined: Mar 13, 2020 1:04 pm
Full Name: Victor Rybakovas
Contact:

Re: Forever Incremental Issue with Object Storage

Post by Rybakovas »

Hello!

I'm facing the same issue...
Do you guys still have it, or have you already found some fantastic solution?
In my case, I had to change my backup policy and include more active fulls per week to make the chains eligible for the SOBR move.
But I have a big problem with object storage bandwidth.

Any ideas would be great. :roll:

Wish the Best,
Rybakovas
veremin
Product Manager
Posts: 20406
Liked: 2298 times
Joined: Oct 26, 2012 3:28 pm
Full Name: Vladimir Eremin
Contact:

Re: Forever Incremental Issue with Object Storage

Post by veremin »

What exact issue are you struggling with?

If you want to copy files produced by a backup copy job to object storage, then:

- create an object storage repository (the bucket itself must already exist; see the sketch after this list)
- add it to Scale-Out Backup Repository as Capacity Tier
- enable copy mode in Capacity Tier settings window
- point backup copy jobs to it
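One note on the first step: the bucket has to exist at the provider before you can select it in the wizard. A minimal sketch of that prerequisite (endpoint, keys and bucket name are placeholders); the repository itself is then added in the Veeam console:

[code]
# Create the bucket outside of Veeam first; Veeam then creates its own
# folder structure inside it.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.wasabisys.com",    # or your provider's endpoint
    aws_access_key_id="ACCESS_KEY",             # placeholder
    aws_secret_access_key="SECRET_KEY",         # placeholder
)
s3.create_bucket(Bucket="veeam-capacity-tier")  # hypothetical bucket name
[/code]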

As to the move policy: in the case of backup copy jobs, it works only for GFS restore points.

Thanks!
Rybakovas
Novice
Posts: 3
Liked: never
Joined: Mar 13, 2020 1:04 pm
Full Name: Victor Rybakovas
Contact:

Re: Forever Incremental Issue with Object Storage

Post by Rybakovas »

Thanks for the reply!

Actually, I want to reduce my storage usage.
I don't know if it is possible to move a backup to the capacity tier right after the backup job finishes, using the performance tier as "temporary" storage only.
According to the documentation, only inactive chains can be moved,
so I tried to put more fulls in my backup chain:
backup job -> 7 restore points of retention -> 2 fulls per week
My idea is that my performance tier will always be moving data to the capacity tier.
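Roughly, the rotation I have in mind looks like this (illustrative schedule, retention not modeled):

[code]
# Illustrative schedule: daily backups, fulls on Wednesday and Saturday.
# Each new full seals the previous chain, so roughly twice a week a chain
# becomes inactive and eligible for the move to the capacity tier.
days = ["Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun"]
full_days = {"Wed", "Sat"}

chain = []
for week in (1, 2):
    for day in days:
        if day in full_days and chain:
            print(f"week {week} {day}: full seals a chain of {len(chain)} points")
            chain = []                  # sealed chain can now move off-site
        chain.append("F" if day in full_days else "I")
[/code]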

Wish the Best,
Rybakovas
HannesK
Product Manager
Posts: 14839
Liked: 3086 times
Joined: Sep 01, 2014 11:46 am
Full Name: Hannes Kasparick
Location: Austria
Contact:

Re: Forever Incremental Issue with Object Storage

Post by HannesK »

Yes, I know customers that have been doing that for a year... ReFS + daily synthetic fulls... I don't like the idea, but they are happy.