Discussions related to using object storage as a backup target.
itbackups
Novice
Posts: 3
Liked: 1 time
Joined: Mar 05, 2018 8:28 pm
Full Name: IT Support

Backup directly to AWS S3 via 3rd party gateways

Post by itbackups » 1 person likes this post

If anyone wants to back up directly to S3, you can, in an unsupported way.

You need a virtual environment (VMware / Hyper-V) where you can deploy an S3 SMB Gateway VM with a large cache (bigger than one full job).
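
If you want to sanity-check the cache sizing before pointing a job at it, something like this rough boto3 sketch works (the gateway ARN, region, and expected full size are placeholders for your own values):

# Rough sanity check that the gateway's cache is bigger than one full job.
# Assumes boto3 credentials are configured; ARN and size are placeholders.
import boto3

EXPECTED_FULL_BYTES = 1.5 * 1024**4  # e.g. a ~1.5 TB weekly full

sgw = boto3.client("storagegateway", region_name="us-east-1")
cache = sgw.describe_cache(
    GatewayARN="arn:aws:storagegateway:us-east-1:123456789012:gateway/sgw-EXAMPLE"
)

allocated = cache["CacheAllocatedInBytes"]
print(f"Cache: {allocated / 1024**4:.2f} TiB allocated, "
      f"{cache['CacheUsedPercentage']:.1f}% used")
if allocated < EXPECTED_FULL_BYTES:
    print("WARNING: cache is smaller than one full job; add cache disks")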

Add it as a repository, then just point a job at it. I set the repository to use per-VM backup files (individual files for each VM).

I found the best setup is a Backup Copy job with weekly GFS enabled and the "copy from source, not increments" option enabled.

Every week it does a full copy of the latest chain to the cache, then uploads it asynchronously. It goes slowly, but it seems to work great; I've been able to pull backups back down to restore when needed without much trouble.
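
If you'd rather get a positive signal that the async upload finished instead of guessing, the gateway API can fire an event once everything written so far has hit the bucket. Rough sketch (the share ARN is a placeholder):

# Ask the gateway to emit a CloudWatch/EventBridge event once every file
# written to the share so far has been uploaded to S3. Catch the event
# with a rule and match on the returned NotificationId.
import boto3

sgw = boto3.client("storagegateway", region_name="us-east-1")
resp = sgw.notify_when_uploaded(
    FileShareARN="arn:aws:storagegateway:us-east-1:123456789012:share/share-EXAMPLE"
)
print("Watch for NotificationId", resp["NotificationId"],
      "in the resulting EventBridge event")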

The new option to use it as scale-out capacity is a nice middle ground between my solution and VTL, since VTL is more archival.

I can see the reason it's scale-out only: you don't want to send backup data directly to S3. The performance tier probably acts like the gateway's cache, giving the system a place to read from that is not a live VM snapshot.

I wanted my latest weekly chain in the cloud; for my setup, recovering from a point a month old or older isn't worth it in a disaster.

I will be testing the new option though, just to see how the performance is.
veremin
Product Manager
Posts: 20343
Liked: 2281 times
Joined: Oct 26, 2012 3:28 pm
Full Name: Vladimir Eremin
Contact:

Re: Add AWS S3 as an external repository

Post by veremin »

You can achieve pretty much the same thing by adding a capacity tier to a SOBR and setting the operational restore window to 7 days. This way, all sealed backups older than 7 days will be automatically transferred to object storage. Thanks!
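
To illustrate the logic (just the date arithmetic, not actual product code; the dates below are made up):

# Only restore points that belong to a sealed (inactive) chain AND are
# older than the operational restore window move to the capacity tier.
from datetime import date, timedelta

WINDOW_DAYS = 7
today = date(2019, 4, 5)

# (creation date, chain is sealed?) -- hypothetical restore points
restore_points = [
    (date(2019, 3, 22), True),
    (date(2019, 3, 29), True),
    (date(2019, 4, 4), False),  # active chain, never moved
]

for created, sealed in restore_points:
    movable = sealed and (today - created) > timedelta(days=WINDOW_DAYS)
    print(created, "-> move to capacity tier" if movable else "-> stays local")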
davide.asts
Influencer
Posts: 13
Liked: 3 times
Joined: Nov 04, 2016 10:12 pm
Full Name: Davide
Contact:

Re: Add AWS S3 as an external repository

Post by davide.asts »

itbackups wrote: Mar 29, 2019 3:36 pm If anyone wants to back up directly to S3, you can, in an unsupported way.

You need a virtual environment (VMware / Hyper-V) where you can deploy an S3 SMB Gateway VM with a large cache (bigger than one full job).
I have tried the same, but in my environment performance for the GFS copy job was very low. How did you mount the AWS gateway: with SMB or NFS?
itbackups
Novice
Posts: 3
Liked: 1 time
Joined: Mar 05, 2018 8:28 pm
Full Name: IT Support

Re: Add AWS S3 as an external repository

Post by itbackups »

v.eremin wrote: Mar 29, 2019 6:43 pm You can achieve pretty much the same thing by adding a capacity tier to a SOBR and setting the operational restore window to 7 days. This way, all sealed backups older than 7 days will be automatically transferred to object storage. Thanks!
OK, so I tried to set it up to achieve the same thing.

The existing setup was:
Basic Repositories on NAS1 and NAS2; AWS gateway cache on NAS3
Primary Backup Job to NAS1, runs daily
Primary Copy Job to NAS2, runs continuously
Secondary Copy Job to the AWS gateway with cache on NAS3, runs every 7 days, with weekly GFS and copy full from source enabled (this prevents injecting increments, which incurs S3 read costs; see the rough cost sketch below)
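
Rough numbers on why that matters (illustrative S3 Standard prices and an assumed read size, not a quote; check current pricing for your region):

# Back-of-the-envelope comparison of the two synthetic-full strategies.
FULL_SIZE_GB = 1536      # ~1.5 TB weekly full
GET_PER_1000 = 0.0004    # $ per 1,000 GET requests (illustrative)
EGRESS_PER_GB = 0.09     # $ per GB transferred out of S3 (illustrative)
PART_SIZE_MB = 4         # assumed average read size per request

# Synthesizing a full from increments would read the prior chain back
# through the gateway: GET requests plus data transfer out of S3.
reads = FULL_SIZE_GB * 1024 / PART_SIZE_MB
cost_synthesize = reads / 1000 * GET_PER_1000 + FULL_SIZE_GB * EGRESS_PER_GB

# "Copy from source" re-reads the chain from the local repo instead, so
# the S3 side only sees uploads.
print(f"Synthesize from S3: ~${cost_synthesize:.2f} per weekly full")
print("Copy from source:   ~$0 in S3 read/egress costs")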

The new setup is:
Basic Repositories on NAS1 and NAS2; Scale-Out Repository for NAS3 with a Capacity Tier in S3, with "Move backups older than 0 days" set (this essentially turns the performance tier into just a cache)
Primary Backup Job to NAS1, runs daily
Primary Copy Job to NAS2, runs continuously
Secondary Copy Job to the Scale-Out Repository (NAS3 + S3), runs every 7 days, with weekly GFS and copy full from source enabled

So hopefully, after the Secondary Copy completes, the system will start uploading to S3. I'll check back in a week, once it's got something to back up. It's essentially the exact same packets to the same devices, just in different containers. NAS3 was a gateway cache; now it's essentially a cache for the scale-out capacity tier.
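
To verify the upload actually starts, I'll watch the bucket with something like this (bucket name and prefix are placeholders):

# Total up what landed in the bucket over the last 24 hours.
import boto3
from datetime import datetime, timezone, timedelta

s3 = boto3.client("s3")
cutoff = datetime.now(timezone.utc) - timedelta(days=1)

total = 0
paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket="my-veeam-capacity-tier", Prefix="Veeam/"):
    for obj in page.get("Contents", []):
        if obj["LastModified"] >= cutoff:
            total += obj["Size"]

print(f"Uploaded in the last 24h: {total / 1024**3:.1f} GiB")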

So far the only real advantage I can see is that I don't have to supply a gateway with its own allocated RAM. Not a huge difference, but not bad either. And it's somewhat simpler to have it all in one place.

P.S. (not directly veeam related)

I use two S3 gateways: one is all NFS mounts for system-level (no credentials, IP whitelist) robocopy file backups to buckets; the other is SMB mounts of the same buckets for AD user-authenticated access. This means I won't actually be able to remove the SMB gateway, but that's just my use case.
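
A quick way to double-check that both gateways really front the same buckets (gateway ARNs are placeholders):

# List each gateway's file shares and print the S3 location behind each.
import boto3

sgw = boto3.client("storagegateway", region_name="us-east-1")

for gw_arn in ["arn:aws:storagegateway:us-east-1:123456789012:gateway/sgw-NFS",
               "arn:aws:storagegateway:us-east-1:123456789012:gateway/sgw-SMB"]:
    for share in sgw.list_file_shares(GatewayARN=gw_arn)["FileShareInfoList"]:
        arns = [share["FileShareARN"]]
        if share["FileShareType"] == "NFS":
            info = sgw.describe_nfs_file_shares(FileShareARNList=arns)
            loc = info["NFSFileShareInfoList"][0]["LocationARN"]
        else:
            info = sgw.describe_smb_file_shares(FileShareARNList=arns)
            loc = info["SMBFileShareInfoList"][0]["LocationARN"]
        print(share["FileShareType"], "->", loc)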
itbackups
Novice
Posts: 3
Liked: 1 time
Joined: Mar 05, 2018 8:28 pm
Full Name: IT Support

Re: Add AWS S3 as an external repository

Post by itbackups »

davide.asts wrote: Apr 04, 2019 1:51 pm I have tried the same, but in my environment performance for the GFS copy job was very low. How did you mount the AWS gateway: with SMB or NFS?
I think slow performance is unavoidable in this case. My job of ~1.5 TB takes 30-50 hours. I think there is some deduplication in the gateway, as the jobs seem to run faster week after week. There's also the small issue that even once Veeam says the job is complete (all data is written to the gateway cache), it's still another few hours before the cache empties into the bucket.
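
You can tell when the cache has finished emptying by polling the gateway's dirty-cache percentage, something like this (gateway ARN is a placeholder):

# Poll until the "dirty" (not-yet-uploaded) share of the cache drops
# to roughly zero, meaning the job's data is actually in the bucket.
import time
import boto3

sgw = boto3.client("storagegateway", region_name="us-east-1")
ARN = "arn:aws:storagegateway:us-east-1:123456789012:gateway/sgw-EXAMPLE"

while True:
    dirty = sgw.describe_cache(GatewayARN=ARN)["CacheDirtyPercentage"]
    print(f"Cache dirty: {dirty:.1f}%")
    if dirty < 1.0:
        break
    time.sleep(300)  # check every 5 minutes; uploads can take hours

print("Cache flushed; the backup should now be in the bucket")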

I had done it with NFS before Amazon added SMB capability, but Veeam wouldn't talk to it directly, so I mounted it on a Linux box. I could then point Veeam at that and it worked, but it was a workaround to a workaround, so I was very glad when SMB was added.
JMcG26
Novice
Posts: 3
Liked: never
Joined: Jun 07, 2019 6:46 pm
Full Name: James McGregor
Contact:

Re: Backup directly to AWS S3 via 3rd party gateways

Post by JMcG26 »

itbackups wrote: Mar 29, 2019 3:36 pm If anyone wants to back up directly to S3, you can, in an unsupported way.

[...]
Excuse my ignorance, and forgive me for bumping an old thread, but I'm having issues adding the gateway as a repository. Is there a specific way to do this? I keep getting the "Failed to get disk space" error, and none of the troubleshooting I have found for it works.
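
One thing worth checking: whether the OS on the server that mounts the share can even report free space for it, since Veeam has to query that. A quick sketch to run on that server (the UNC path is a placeholder):

# If the OS can't report capacity for the SMB share, Veeam can't either.
import shutil

usage = shutil.disk_usage(r"\\gateway-host\backup-share")
print(f"total: {usage.total / 1024**3:.1f} GiB, "
      f"free: {usage.free / 1024**3:.1f} GiB")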
JMcG26
Novice
Posts: 3
Liked: never
Joined: Jun 07, 2019 6:46 pm
Full Name: James McGregor
Contact:

Re: Backup directly to AWS S3 via 3rd party gateways

Post by JMcG26 »

Never mind, I'm an idiot.
l.vinokur
Enthusiast
Posts: 26
Liked: 10 times
Joined: Sep 25, 2017 6:37 am
Full Name: Leonid Vinokur
Contact:

Re: Add AWS S3 as an external repository

Post by l.vinokur »

itbackups wrote: Apr 05, 2019 4:14 pm Secondary Copy Job to the Scale-Out Repository (NAS3 + S3), runs every 7 days, with weekly GFS and copy full from source enabled
Question: how did this work out for you? I would imagine that once the data is copied to the NAS3 repo, it would just sit there and get uploaded to S3 only the next week, when NAS3 receives a new backup chain and the old one is sealed.
