Using tape as a backup target
damadhatter
Service Provider
Posts: 45
Liked: 2 times
Joined: Feb 21, 2014 5:15 am
Full Name: Chris A
Contact:

GFS backups to AWS Deep Archive

Post by damadhatter »

I have a customer who requires long-term data retention for their VMware environment (currently 2 TB, expected to grow 50% in the next 6 months). We are using Veeam B&R to back up to a local repository (an ESXi host dedicated to a Veeam B&R VM, with local storage). We keep about 45 days of data locally but no GFS backups (yet!). We also send this customer's data off to a Cloud Repository (I am the provider) via a backup copy job, keeping only a week of data offsite.

I have gone down the rabbit hole of looking into using GFS with AWS Glacier / Glacier Deep Archive for longer-term storage. The goal is to keep 12 months and 7-10 years of GFS backups. From what I have found, I need to let Veeam manage the data in AWS; I cannot move data from S3 to Glacier using lifecycle policies. That rules out AWS object storage with a SOBR, the first rabbit hole I went down. VTL looks like what I need to be using.
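For reference, this is the kind of S3 lifecycle rule I mean, the sort that would normally transition objects to Deep Archive on its own. This is just a rough sketch (the prefix and day count are example values), and it is exactly what you cannot apply to a Veeam-managed bucket:

```json
{
  "Rules": [
    {
      "ID": "example-archive-old-backups",
      "Status": "Enabled",
      "Filter": { "Prefix": "Veeam/" },
      "Transitions": [
        { "Days": 30, "StorageClass": "DEEP_ARCHIVE" }
      ]
    }
  ]
}
```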

I will need an AWS Tape Gateway VTL appliance. I see you can install it locally as a VMware OVF, or there is an actual physical appliance. Is there an option that does not require anything local aside from Veeam B&R? I saw EC2 mentioned but was not sure whether going that route even made sense. I am leaning toward the VMware appliance.

Is it bad practice to have the VTL Gateway appliance and my Veeam B&R server both reside on the same physical VMware host if I have enough resources? The one positive I see is that Veeam B&R needs an iSCSI connection to the VTL Gateway appliance, so deploying both on the same host should keep that traffic from leaving the host.

I see the data first hits S3 and is then moved to the Glacier or Deep Archive pool depending on how I configure my tapes. How long does this data sit in S3 before being moved over? Any best practices on tape sizing?
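For sizing context, here is my back-of-envelope math on our data growth (just our own numbers projected forward, not a recommendation):

```python
# Capacity projection using our numbers: 2 TB now, +50% expected in 6 months.
current_tb = 2.0
growth_per_half_year = 0.50

after_6_months = current_tb * (1 + growth_per_half_year)       # 3.0 TB
# If that growth rate held for another 6 months:
after_12_months = after_6_months * (1 + growth_per_half_year)  # 4.5 TB

print(after_6_months, after_12_months)
```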

Am I just wasting my time here or doing it all wrong? 😊

soncscy
Veeam Legend
Posts: 370
Liked: 182 times
Joined: Aug 04, 2019 2:57 pm
Full Name: Harvey Carel
Contact:

Re: GFS backups to AWS Deep Archive

Post by soncscy » 1 person likes this post

Hey Chris,

Just FYI, v11 has an Archive Tier which lands in Glacier: https://helpcenter.veeam.com/docs/backu ... ml?ver=110

This is 10x better than dealing with AWS VTL, trust me. Maybe revisit scale-out repos in v11?

damadhatter
Service Provider
Posts: 45
Liked: 2 times
Joined: Feb 21, 2014 5:15 am
Full Name: Chris A
Contact:

Re: GFS backups to AWS Deep Archive

Post by damadhatter »

When creating the backup repository for my SOBR extent, I am using Object Storage > Amazon S3 > Amazon S3 Glacier, correct? When trying to set this up I am able to select my data center and find my bucket and folder. I select Deep Archive and get the error "Insufficient AWS EC2 permissions". What is this doing? Trying to set up an EC2 instance? I don't see any mention of this here: https://helpcenter.veeam.com/docs/backu ... ml?ver=110 Am I going down the wrong path again? :) I tried support, but the person I spoke with didn't know anything about this new feature. Case #04783940

***EDIT***

Well, support actually just got back to me, haha! Looks like this does spin up an EC2 instance, and I was looking in the wrong part of the manual! Do you see this EC2 appliance costing much each month to run? What type of instance do you typically run? I was looking at the VTL because I could keep the compute local (VTL Gateway) instead of using EC2 and paying extra $$$ each month.

veremin
Product Manager
Posts: 18421
Liked: 1822 times
Joined: Oct 26, 2012 3:28 pm
Full Name: Vladimir Eremin
Contact:

Re: GFS backups to AWS Deep Archive

Post by veremin » 1 person likes this post

> I select to use Deep Archive and I get the error "Insufficient AWS EC2 permissions".

Can you confirm that the specified account has all required permissions mentioned here?
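For illustration only, an IAM policy statement granting EC2 actions has this general shape. The actions below are examples, not the actual required list; please take the exact set of permissions from the documentation:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ec2:DescribeInstances",
        "ec2:RunInstances",
        "ec2:TerminateInstances"
      ],
      "Resource": "*"
    }
  ]
}
```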
> Do you see this EC2 appliance costing much each month to run?

A proxy appliance is used to transfer data from the Capacity Tier to the Archive Tier. It takes smaller objects from the Capacity Tier and combines them into bigger ones to avoid additional costs. The proxy appliance is deployed when the offload starts and removed when it finishes, so appliance usage does not cost much.
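To illustrate the batching effect with made-up numbers (object sizes and counts here are purely for demonstration, not real AWS figures):

```python
# Toy model of combining many small objects into fewer large blocks.
# Fewer blocks means fewer requests, which is where the savings come from.
small_object_mb = 1
object_count = 1024            # 1024 x 1 MB objects in the Capacity Tier
block_size_mb = 512            # combined block size used by the appliance

total_mb = small_object_mb * object_count
requests_unbatched = object_count                 # one request per small object
requests_batched = -(-total_mb // block_size_mb)  # ceil(1024 / 512) blocks

print(requests_unbatched, requests_batched)
```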

