ctg49
Enthusiast
Posts: 65
Liked: 45 times
Joined: Feb 14, 2018 1:47 pm
Full Name: Chris Garlington
Contact:

Migration of Object Storage

Post by ctg49 »

We've got a fair amount of data in a Wasabi bucket that we're looking to migrate to a new bucket (background legalese reasons). The new bucket will be immutable, unlike the current one, and there is far more data in the existing bucket than we could ever download locally. What method would make the most sense to move our existing jobs to a new bucket (or at least recreate them without losing existing data)? I feel like the ripcord solution is a new folder on the existing system, a new SOBR with that folder as the performance tier and the new Wasabi bucket as the capacity tier, then pointing the jobs at the new SOBR. Is there a solution other than this?
HannesK
Product Manager
Posts: 14836
Liked: 3083 times
Joined: Sep 01, 2014 11:46 am
Full Name: Hannes Kasparick
Location: Austria
Contact:

Re: Migration of Object Storage

Post by HannesK »

Hello,
EDIT 2023-03-14: object lock must not be enabled after the fact. Enabling it on an existing bucket that already contains data is unsupported.

[REMOVED outdated information]

Starting from scratch is always a valid scenario, yes. Depending on the complexity of your retention, it might also be "good enough" to only change the bucket in the SOBR and delete the old bucket after some time.

Migration within the same provider is also supported with 3rd-party copy tools. I tested S3P some time ago (it is designed to copy 500 TB in 24 hours) and that worked fine.
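
For anyone curious what such a copy boils down to, here is a minimal sketch using boto3 against an S3-compatible endpoint. The endpoint, credentials and bucket names are placeholders (not anything from this thread), and a real tool like S3P parallelizes the listing and copying far more aggressively; the point is only that the copy runs server-side, so nothing has to be downloaded to the backup server.

import boto3

# Placeholder endpoint, credentials and bucket names. The credentials are
# assumed to be able to read the source bucket and write the destination.
s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.wasabisys.com",
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket="old-backup-bucket"):
    for obj in page.get("Contents", []):
        # copy() is boto3's managed copy: it issues CopyObject (or a
        # multipart copy for large objects) inside the provider, so no
        # object data travels through the machine running this script.
        s3.copy(
            {"Bucket": "old-backup-bucket", "Key": obj["Key"]},
            "new-backup-bucket",
            obj["Key"],
        )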

Best regards,
Hannes
ctg49
Enthusiast
Posts: 65
Liked: 45 times
Joined: Feb 14, 2018 1:47 pm
Full Name: Chris Garlington
Contact:

Re: Migration of Object Storage

Post by ctg49 »

Unfortunately no, not just object lock. We're actually moving to a different account for billing reasons, so we need to migrate the data to a new bucket.

Can we just change the bucket in the SOBR? Like, will Veeam be smart enough to just 'figure out' that there's old stuff in the old bucket, but that we need to start offloading backups to the new destination now? How does that work with synthetic fulls, if the original full is in the old bucket?

A provider-side copy was something else I was eyeballing, which is also doable. Again, would Veeam be smart enough to just 'figure out' that I sent the data to a new place and pick up offloads where it left off?
HannesK
Product Manager
Posts: 14836
Liked: 3083 times
Joined: Sep 01, 2014 11:46 am
Full Name: Hannes Kasparick
Location: Austria
Contact:

Re: Migration of Object Storage

Post by HannesK »

OK, then S3P should still be valuable. If I remember correctly, the tool was built exactly for that case.

Today, only one bucket is possible in a SOBR (V12 allows multiple), so the old bucket will just keep the data and nothing will ever happen to it. That's why I asked about retention.
ctg49
Enthusiast
Posts: 65
Liked: 45 times
Joined: Feb 14, 2018 1:47 pm
Full Name: Chris Garlington
Contact:

Re: Migration of Object Storage

Post by ctg49 »

Regarding my question, what I meant was: will Veeam see that the old chains exist within another object storage repository and just retain the data there for restore purposes (if the need arises)? Or is it going to get squirrely because the backup chains aren't in the 'active' repository?
HannesK
Product Manager
Posts: 14836
Liked: 3083 times
Joined: Sep 01, 2014 11:46 am
Full Name: Hannes Kasparick
Location: Austria
Contact:

Re: Migration of Object Storage

Post by HannesK »

Something that is not added cannot be seen :-)

But the data can be imported and used for restore (that can take some time depending on the size). Access to that object storage is "read-only".
marcio.defreitas
Veeam Software
Posts: 66
Liked: 9 times
Joined: Mar 06, 2017 1:59 pm
Full Name: Marcio de Freitas
Contact:

Re: Migration of Object Storage

Post by marcio.defreitas »

The procedure for Migrating Data Between Different Cloud Providers says that we must download all data from the Capacity Tier to the Performance Tier and then add the new object storage to the SOBR to offload the data again. The procedure is described here: https://helpcenter.veeam.com/docs/backu ... -providers

However, if a customer has hundreds of TB of data, the Performance Tier may not have enough space. In that case, is it possible to use the other method (Migrating Data Between Different Buckets, described here: https://helpcenter.veeam.com/docs/backu ... nt-buckets) to copy the data from one cloud provider to the other, and then remove the old object storage from the SOBR and add the new one, already populated with data?

Thanks,

Marcio
HannesK
Product Manager
Posts: 14836
Liked: 3083 times
Joined: Sep 01, 2014 11:46 am
Full Name: Hannes Kasparick
Location: Austria
Contact:

Re: Migration of Object Storage

Post by HannesK »

Hello,
it's technically possible, but unsupported.

In V12, VeeaMover will solve that challenge.

Best regards,
Hannes
HDClown
Enthusiast
Posts: 45
Liked: 6 times
Joined: Dec 27, 2012 12:25 pm
Contact:

Re: Migration of Object Storage

Post by HDClown »

Does VeeaMover officially allow and support moving data from a non-immutable bucket to an immutable bucket?

I'm in the same situation as the OP: using Wasabi for the capacity tier on a bucket without object lock, wanting to move this existing capacity tier to a new object-lock-enabled bucket at Wasabi, and without the ability to move data back to the performance tier.

I'm hoping that VeeaMover can handle this move so I don't need to try and string something together with 3rd-party tools.
HannesK
Product Manager
Posts: 14836
Liked: 3083 times
Joined: Sep 01, 2014 11:46 am
Full Name: Hannes Kasparick
Location: Austria
Contact:

Re: Migration of Object Storage

Post by HannesK »

Hello,
Does VeeaMover officially allow and support moving data from a non-immutable bucket to an immutable bucket?
Yes, but you have a different scenario.

One cannot mix immutable and mutable buckets in one scale-out repository's capacity tier. To get data into an immutable capacity tier, one needs a new scale-out repository. With a second scale-out repository, one can use VeeaMover and move "per-job". In the mount server settings of the object storage, you can configure a helper appliance that does the copy between object storage buckets.
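
As a sanity check before pointing the new scale-out repository's capacity tier at the bucket, it is worth confirming that object lock is actually enabled on the new bucket. A minimal sketch with boto3 (endpoint, credentials and bucket name are placeholders, and this assumes the provider exposes the standard S3 Object Lock API):

import boto3
from botocore.exceptions import ClientError

# Placeholder endpoint, credentials and bucket name -- adjust for your
# environment.
s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.wasabisys.com",
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

try:
    cfg = s3.get_object_lock_configuration(Bucket="new-immutable-bucket")
    # Prints "Enabled" when the bucket was created with object lock.
    print(cfg["ObjectLockConfiguration"]["ObjectLockEnabled"])
except ClientError:
    print("Object Lock is not configured on this bucket")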

Best regards,
Hannes
HDClown
Enthusiast
Posts: 45
Liked: 6 times
Joined: Dec 27, 2012 12:25 pm
Contact:

Re: Migration of Object Storage

Post by HDClown »

One cannot mix immutable and mutable buckets in one scale-out repository's capacity tier. To get data into an immutable capacity tier, one needs a new scale-out repository.
My performance tier on the existing SOBR is local disk, so I will create a new SOBR using a new folder on the same local disk for the performance tier plus the new immutable bucket for the capacity tier.
With a second scale-out repository, one can use VeeaMover and move "per-job". In the mount server settings of the object storage, you can configure a helper appliance that does the copy between object storage buckets.
I assume VeeaMover would handle moving the performance tier as well as the capacity tier on the "per-job" move, so I don't need to do the manual move of performance tier jobs and the re-scan process like you would normally do pre-V12?
HannesK
Product Manager
Posts: 14836
Liked: 3083 times
Joined: Sep 01, 2014 11:46 am
Full Name: Hannes Kasparick
Location: Austria
Contact:

Re: Migration of Object Storage

Post by HannesK »

It's automated, yes. As always, I recommend trying things out at small scale before doing it with PBs of data.