Using object storage as a backup target
DDIT
Expert
Posts: 123
Liked: 23 times
Joined: Oct 29, 2015 5:58 pm
Full Name: Michael Yorke
Contact:

Moving BCJ GFS restores to S3: How to?

Post by DDIT »

Hi,

Before v11 my setup was...
Backup Job: on-prem repo (14 restore points) >> BCJ: off-site repo (GFS)

Having recently upgraded to v11, my setup is now simplified to...
Backup Job to SOBR...
Performance Tier: on-prem repo (job settings: 14 restore points + GFS)
Capacity Tier: S3-compatible (copy and move (14 days))

This is working well, so I have disabled the BCJ and now wish to move the off-site GFS restore points to S3 object storage (capacity tier), ideally into the same bucket. Is the best way...

Add the off-site repo as a performance tier, put in maintenance mode, then evacuate/move the restores to object?

Is this possible? And, if doing this, will the evacuation/move happen directly between the off-site repo and S3, or will traffic traverse my VBR server? I have ~50TB in the off-site repo, so would prefer to transfer directly.

Thanks in advance.

HannesK
Veeam Software
Posts: 9736
Liked: 1812 times
Joined: Sep 01, 2014 11:46 am
Full Name: Hannes Kasparick
Location: Austria
Contact:

Re: Moving BCJ GFS restores to S3: How to?

Post by HannesK » 1 person likes this post

Hello,
if you want to bring data from a repository to object storage, then creating a SOBR with performance tier and capacity tier is the way to go, yes.

As these are two different entities/locations, I would use two buckets (there is no hard technical reason behind that; it just sounds better to me). The transfer to object storage goes via the SOBR extent's mount server or the configured gateway server of the object storage.

Best regards,
Hannes

DDIT
Expert
Posts: 123
Liked: 23 times
Joined: Oct 29, 2015 5:58 pm
Full Name: Michael Yorke
Contact:

Re: Moving BCJ GFS restores to S3: How to?

Post by DDIT » 1 person likes this post

@HannesK, thank you. I followed your advice, created the new S3 bucket, added that and the off-site repo to a new SOBR (configuring the off-site server as the gateway). I set the Capacity Tier option to move backups older than 1 day. So far, so good. It's working as expected. Over 50% moved so far.

The backups that have moved are showing up in Home > Backups > Object Storage, as expected.

Thanks again.

veremin
Product Manager
Posts: 19073
Liked: 1947 times
Joined: Oct 26, 2012 3:28 pm
Full Name: Vladimir Eremin
Contact:

Re: Moving BCJ GFS restores to S3: How to?

Post by veremin »

I set the Capacity Tier option to move backups older than 1 day.
Just wondering about the overall goal of this setup: with such a short operational restore window, only the latest backup chain (around 7 restore points) will be present on the local repository, while originally you were planning to have 14 daily restore points on both the performance and capacity tiers, and GFS ones on the capacity tier only.
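[Editor's note] The arithmetic behind "only the latest chain stays local" can be sketched in a toy model. This is illustrative Python, not Veeam code; the function name, the 7-point chain length (weekly full + daily increments), and the simplification that only the active chain plus points inside the window remain local are my assumptions.

```python
# Toy model of a capacity-tier "move" policy with a short operational window.
# Assumption: sealed restore points older than the window move to object
# storage, while the active (unsealed) chain always stays on the performance tier.

def local_points(retention=14, chain_length=7, window_days=1):
    """Count restore points left on the performance tier."""
    ages = list(range(retention))          # ages in days, 0 = newest point
    active_chain = ages[:chain_length]     # current full + its increments
    kept = [a for a in ages if a in active_chain or a <= window_days]
    return len(kept)

print(local_points())  # with 14 daily points and a 1-day window, ~one chain stays
```

So of the 14 configured daily points, roughly 7 (one chain) remain on the performance tier; the rest live only in the capacity tier.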

Thanks!

DDIT
Expert
Posts: 123
Liked: 23 times
Joined: Oct 29, 2015 5:58 pm
Full Name: Michael Yorke
Contact:

Re: Moving BCJ GFS restores to S3: How to?

Post by DDIT »

@Veremin, I simply want to retire the off-site repo, so I need to move its GFS restore points to other storage. Once they have all moved, I will decommission the repo and remove the BCJ job, because I am now handling GFS within a regular backup job.
To offload the GFS restore points, I created a new, separate SOBR just for this. The performance tier is the off-site repo holding the GFS restore points; the capacity tier is S3. The 1-day option effectively means all GFS restore points will get moved.
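[Editor's note] Why the 1-day window empties the retired repo can be shown with a small sketch. This is illustrative Python, not Veeam code; the function name and the rule that only sealed points outside the window move are my assumptions.

```python
# Toy model (not Veeam code): under a move-after-N-days policy, a restore
# point goes to the capacity tier once it is outside the operational window
# and not part of an active (unsealed) backup chain.

def tier_of(point_age_days, in_active_chain, window_days=1):
    """Return which tier a restore point lives on."""
    if in_active_chain or point_age_days <= window_days:
        return "performance"   # stays on the (off-site) repo
    return "capacity"          # moved to the S3 bucket

# The disabled BCJ repo holds only old, sealed GFS chains, so with a
# 1-day window every point qualifies for the move:
gfs_ages = [30, 60, 90, 180, 365]   # example GFS point ages in days
print([tier_of(a, in_active_chain=False) for a in gfs_ages])
```

Because nothing on the retired repo is part of an active chain, every GFS point ends up on the capacity tier, which is the intended outcome here.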

Separate to this, I have another SOBR targeted by regular backup jobs. That backup job retains 14 daily restore points with GFS configured; the performance tier is the on-site repo and the capacity tier is S3 (a separate bucket to the one above).

Apologies if my original post wasn't clear. I hope this explanation makes sense - is my approach the best way?
