wa15
Veteran
Posts: 323
Liked: 25 times
Joined: Jan 02, 2014 4:45 pm
Contact:

Solutions for long-term data archival

Post by wa15 »

I have about 20TB/month that I need to archive for 7 years per business requirements. Currently using tape for this, but tapes are a pain to manage. Restores are pretty rare, maybe 4 times a year at most. We are using Veeam, and I was looking at AWS VTL as one option.

Has anybody used AWS VTL for this purpose? Any other suggestions for long-term archival, besides tapes?
Andreas Neufert
VP, Product Management
Posts: 6748
Liked: 1408 times
Joined: May 04, 2011 8:36 am
Full Name: Andreas Neufert
Location: Germany
Contact:

Re: Solutions for long-term data archival

Post by Andreas Neufert »

Hi wa15,

You can try this:
https://www.veeam.com/wp-using-aws-vtl- ... guide.html

There may be an option to automatically export the tapes after the backup job finishes. This could be a workaround to place Veeam backups in Glacier.
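If you want to script the tape provisioning side, here is a minimal boto3 sketch; the region, gateway ARN, tape size, and barcode prefix are placeholders of mine, not values from the guide:

# Minimal sketch: provision virtual tapes on an AWS Storage Gateway VTL.
# The region, GatewayARN, tape size, and barcode prefix are placeholders.
import uuid
import boto3

sgw = boto3.client("storagegateway", region_name="us-east-1")

response = sgw.create_tapes(
    GatewayARN="arn:aws:storagegateway:us-east-1:123456789012:gateway/sgw-EXAMPLE",
    TapeSizeInBytes=2500 * 1024**3,  # ~2.5 TiB per virtual tape, GiB-aligned
    ClientToken=str(uuid.uuid4()),   # idempotency token required by the API
    NumTapesToCreate=10,
    TapeBarcodePrefix="VBR",         # 1-4 uppercase letters; Veeam sees normal barcodes
)
print(response["TapeARNs"])

Tapes that get exported/ejected from the virtual library are moved to the Virtual Tape Shelf, which is what actually lands the data in Glacier-class storage.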
nmdange
Veteran
Posts: 527
Liked: 142 times
Joined: Aug 20, 2015 9:30 pm
Contact:

Re: Solutions for long-term data archival

Post by nmdange »

Depending on your change rate, using ReFS fast clone could mean you wouldn't need that much storage to hold all those backups.
Andreas Neufert
VP, Product Management
Posts: 6748
Liked: 1408 times
Joined: May 04, 2011 8:36 am
Full Name: Andreas Neufert
Location: Germany
Contact:

Re: Solutions for long-term data archival

Post by Andreas Neufert »

Hi,
ReFS for 7-year backup retention is not a good idea. For example, if you have to migrate storage and need to copy the backup files at the OS level, the ReFS fast clone references are broken up and you end up with fully rehydrated files. There is no way in ReFS yet to recreate or replicate the block cloning during such a copy. Of course there are methods to work around this, but you need to prepare them carefully.
Gostev
Chief Product Officer
Posts: 31559
Liked: 6722 times
Joined: Jan 01, 2006 1:01 am
Location: Baar, Switzerland
Contact:

Re: Solutions for long-term data archival

Post by Gostev »

Yes, a standalone ReFS volume is not really a good fit. ReFS on Storage Spaces Direct, however, is a match made in heaven for long-term data archival purposes...
wa15
Veteran
Posts: 323
Liked: 25 times
Joined: Jan 02, 2014 4:45 pm
Contact:

Re: Solutions for long-term data archival

Post by wa15 »

Thanks all. Will look into ReFS on Storage Spaces Direct. I am not familiar with the technology, so will have to do some reading up on it. As for change rate, incrementals are about 200-300GB per month. Growth rate of 5% per year, max.
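To put rough numbers on nmdange's fast clone suggestion, a back-of-the-envelope sketch in Python (my assumptions: the 20TB is a monthly synthetic full of largely unchanged data, ~250GB of it changes per month, and the 5% yearly growth is ignored for simplicity):

# Rough estimate of a ReFS fast clone repository footprint for 84 monthly
# synthetic fulls. Assumptions: 20 TB initial full, ~250 GB changed per
# month, annual growth ignored.
months = 7 * 12                                    # 84 monthly restore points
full_tb = 20.0
change_tb = 0.25

logical_tb = full_tb * months                      # size if every full were rehydrated
physical_tb = full_tb + change_tb * (months - 1)   # shared blocks stored only once

print(f"logical size:  {logical_tb:,.0f} TB")      # ~1,680 TB
print(f"physical size: {physical_tb:,.0f} TB")     # ~41 TB on the fast clone repo

The gap between those two numbers is also why Andreas's warning matters: a plain OS-level copy to new storage rehydrates the clones, so the destination would need the logical size, not the physical one.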
segfault
Enthusiast
Posts: 48
Liked: 21 times
Joined: Dec 14, 2017 8:07 pm
Full Name: John Garner
Contact:

Re: Solutions for long-term data archival

Post by segfault »

Just as a side consideration: 20TB/mo with a 7-year retention policy works out to a footprint of about 1.68PB.

If I'm doing the math right, that will cost you about $81k/year for storage in AWS Glacier at the current rate ($0.004 per GB-month), not counting any upload or retrieval fees. If you need to do a restore, anything beyond the free 10GB/mo retrieval allowance will cost you extra.
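For anyone who wants to check or adjust the numbers, the whole calculation is just:

# Back-of-the-envelope Glacier storage cost at $0.004 per GB-month.
tb_per_month = 20
months_retained = 7 * 12
rate_per_gb_month = 0.004

footprint_gb = tb_per_month * 1000 * months_retained  # ~1,680,000 GB at steady state
monthly_cost = footprint_gb * rate_per_gb_month       # ~$6,720/month
print(f"${monthly_cost:,.0f}/month, ${monthly_cost * 12:,.0f}/year")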

We had a similar scenario and realized that it was more cost-effective to purchase a tape library loaded up with LTO-7 tapes. The ROI was a few months.

Tape may not be sexy compared to AWS or ReFS w/ S2D, but it sure is cost-effective for long-term archive. An LTO-7 tape currently costs about $70 and holds 6TB of data; AWS Glacier will charge you about $24.50/month to store that much. In terms of raw storage costs, that is $70 for the tape vs roughly $2,050 in Glacier over the 7-year lifespan of the archive. You will need to factor in the hourly cost of having somebody swap tapes once a month, but this is junior-sysadmin-level stuff, so it should be small compared to the AWS storage bill.
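And the per-tape comparison in the same style (tape library hardware and operator time deliberately left out, per the caveat above):

# 7-year cost of keeping 6 TB on one LTO-7 cartridge vs. in Glacier.
tape_cost = 70.0                      # one LTO-7 cartridge, ~6 TB native
glacier_monthly = 6 * 1024 * 0.004    # ~$24.58/month for 6 TiB at $0.004/GB-month
glacier_7yr = glacier_monthly * 84    # ~$2,064 over the retention window

print(f"tape: ${tape_cost:.0f}  vs  Glacier: ${glacier_7yr:,.0f}")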

--john
nmdange
Veteran
Posts: 527
Liked: 142 times
Joined: Aug 20, 2015 9:30 pm
Contact:

Re: Solutions for long-term data archival

Post by nmdange » 1 person likes this post

Andreas Neufert wrote:Hi,
ReFS for 7-year backup retention is not a good idea. For example, if you have to migrate storage and need to copy the backup files at the OS level, the ReFS fast clone references are broken up and you end up with fully rehydrated files. There is no way in ReFS yet to recreate or replicate the block cloning during such a copy. Of course there are methods to work around this, but you need to prepare them carefully.
This is why I've been setting up ReFS repos inside VMs, so I can move them to new hardware without losing fast clone. I haven't moved my main repository yet, but I plan on testing NTFS dedupe on the Hyper-V host, deduplicating the ReFS VHDX files for extra savings!