Discussions related to using object storage as a backup target.

Capacity Tier Architecture and Some Questions

Post by jtupeck »

Well, I typed out this long post and then pressed submit... it all disappeared because I wasn't logged in yet, so here goes attempt #2 at relaying some info about my environment and some questions I have regarding the Veeam Capacity Tier.

I have been doing a lot of reading on the new Capacity Tier in 9.5 U4 and have some thoughts/questions after deploying it in a test scenario in our environment.

Our environment has been set up in the following manner for a number of years, with great success:

1. Backup jobs run either once or twice a day, depending on policy, and are stored on a Cisco UCS S3260 appliance configured as a three-volume SOBR (60 TB/volume); restore points are retained locally on that repository for 14 days.
2. Backup Copy jobs recycle every 24 hours, copying all restore points to a Data Domain appliance where 14 daily, 2 weekly, and either 1, 6, or 12 monthly restore points are retained, based on policy.
3. The Data Domain repository data is automatically deduplicated and replicated to a similarly sized peer Data Domain appliance in the off-site DR facility.
4. All Veeam jobs and retention values are configured via policy-based vSphere tags managed and synced by Veeam ONE.

This architecture has served us VERY well for a number of years now, with incremental benefits as the Veeam software has matured. The Data Domain platform has been used as the longer-term archive tier and as the transport layer for the off-site copy of data, due to its massive deduplication in our environment and the resulting reduction in WAN traffic needed to get data off-site. With DD's ability to connect over Fibre Channel via DD Boost, all our backups stay on the FC layer rather than the IP network, and performance is excellent. The above setup provides us with four copies of data (production, S3260, production-site Data Domain, DR-site Data Domain), two storage platforms for backups (S3260, Data Domain), and one off-site copy.
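To keep all of this straight in my own head, here is a rough Python sketch of where a given restore point lives under the current design. The repository names, retention table, and day counts are my own simplified placeholders, not anything generated by Veeam:

Code:

# Rough model of where a restore point lives under the current design.
# Repository names and retention numbers are simplified placeholders.
RETENTION = {
    "s3260_sobr":    {"daily": 14},                              # performance tier, on-site
    "dd_production": {"daily": 14, "weekly": 2, "monthly": 12},  # Backup Copy target
    "dd_dr":         {"daily": 14, "weekly": 2, "monthly": 12},  # DD-replicated, off-site
}
DAYS_PER_POINT = {"daily": 1, "weekly": 7, "monthly": 30}

def copies_of(point_type, age_days):
    """List the repositories that still hold a restore point of the
    given type and age, per the simplified retention table above."""
    return [repo for repo, policy in RETENTION.items()
            if age_days <= policy.get(point_type, 0) * DAYS_PER_POINT[point_type]]

print(copies_of("daily", 10))    # all three repositories still hold it
print(copies_of("monthly", 90))  # only the two Data Domains still hold it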

Enter Azure as the Capacity Tier.

The new setup I am introducing is mainly just a change to item #3 above. I am currently testing one of my Backup Copy jobs with a SOBR that now consists of the production-site Data Domain and an Azure Blob repository as the Capacity Tier. This is where I want to make sure I understand things, and see whether I can optimize the infrastructure or whether I have gaps in my understanding.

With all settings the same as described above, and the Capacity Tier set with a 14-day threshold: if my understanding is correct, sealed chains older than 14 days (basically the weekly and monthly restore points, right?) will be offloaded from the production Data Domain to Azure. This will also cause a chain reaction, where the files that are moved/dehydrated from the production Data Domain are also removed/dehydrated from the DR-site Data Domain appliance, correct?

In that case, is the DR-site Data Domain even necessary at this point? The only use case I can see for it, with Azure as a Capacity Tier in the SOBR, is that the DR-site Data Domain would still hold off-site copies of my 14 daily backups as well... right? If so, should I eliminate the DR-site Data Domain? Or, as another thought, should/could I instead eliminate the production-site Data Domain, put the DR appliance into the Backup Copy job SOBR, and push that data across the WAN (assuming it can support this without the deduplication we enjoy now), still retaining off-site copies of my daily restore points?
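To make my reading of the offload rule concrete, here is a minimal sketch; the sealed-chain condition and the 14-day window are my interpretation of the U4 move behavior, so please correct me if the real logic differs:

Code:

from datetime import date, timedelta

WINDOW = timedelta(days=14)  # our operational restore window on the SOBR

def offloaded(chain_is_sealed, newest_point, today):
    """My reading of U4 eligibility: only sealed backup chains whose
    points have aged out of the operational window are moved to the
    capacity tier; the active chain always stays on the local extents."""
    return chain_is_sealed and (today - newest_point) > WINDOW

today = date(2019, 3, 1)
print(offloaded(True,  today - timedelta(days=30), today))  # True: old GFS point goes to Azure
print(offloaded(True,  today - timedelta(days=7),  today))  # False: still inside the window
print(offloaded(False, today - timedelta(days=30), today))  # False: chain not yet sealed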

In short, the Capacity Tier is giving me additional things to think about and I wanted to throw this out there to see what thoughts and opinions other users and Veeam employees might have on the matter.

Re: Capacity Tier Architecture and Some Questions

Post by Andreas Neufert » 1 person likes this post

The SOBR Capacity Tier will offload older restore points to the cloud and leave only a pointer file behind on the local SOBR. It is not an additional copy in this first version. It is simply something that allows you to use high-performance (expensive) storage for the latest backups (fast backup and restore) and then offload older restore points to cheaper object storage.
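Conceptually (this is an illustration only, not our actual implementation, and upload_to_object_storage is just a stand-in), the offload is a move:

Code:

def dehydrate(local_path, upload_to_object_storage):
    """Illustration only: after the upload, the local file is replaced
    by a small shell holding metadata, so the data blocks exist exactly
    once, in the object storage, and not as an additional copy."""
    upload_to_object_storage(local_path)  # push the data blocks to the cloud
    with open(local_path, "w") as f:      # same file name, now only a pointer
        f.write("dehydrated shell: metadata only, blocks in object storage\n")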

With your Data Domains, you create real copies that you can use.

Check out the next major version and its Cloud Tier features; there you will have a better feature set for your scenario and requirements.

Re: Capacity Tier Architecture and Some Questions

Post by jtupeck »

@Andreas - Correct. I understand both of your points, and I think I accurately called both of them out in the summary of my understanding in my post.

Mainly I am looking for input on my biggest question: "Is the DR-site Data Domain necessary at this point, with this architecture?" I am leaning towards 'yes' the more I think about it, because if I am right, the DR Data Domain is still the only off-site copy of the most recent 14 days of backups, since those never hit the Capacity Tier at all... only the weekly/monthly copies will be offloaded to Azure.
Andreas Neufert wrote: "With your Data Domains, you create real copies that you can use."
Arguably so, I suppose... but Data Domain restores are extremely slow (read: nearly useless for restoring large amounts of data and/or Instant VM Recovery), and therefore not really usable as a primary backup target, which is why we have a performance-tier SOBR for initial backups. EMC support for a Data Domain is also a very large annual cost after year 3, which is why I am looking to reduce our footprint by offloading some of the older, less-likely-to-be-used backup data to Azure.

I can count on one hand the number of times I have had to restore data from a Data Domain because it fell outside the initial 14-day retention period on performance disk without block dedupe (the block dedupe is what makes the DD arrays so slow). So the cost of cloud-tiering the older data may equate to an overall savings if we can either eliminate one of the DD arrays, or at least move to a reduced DD footprint if we still feel we need one in each data center because the 14 daily backup files need an off-site copy that is more usable than cloud-tiered storage.
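For what it is worth, the back-of-the-envelope math I am running looks like the sketch below; every number is a placeholder I still need to replace with our real quotes, and it deliberately ignores Azure egress/API charges and the cost of restores from the capacity tier:

Code:

# Placeholder figures only; swap in real quotes before drawing conclusions.
DD_SUPPORT_PER_YEAR = 40_000.00  # assumed post-year-3 support cost for one DD ($)
AZURE_PER_GB_MONTH  = 0.01       # assumed blob storage rate ($/GB-month)
OFFLOADED_TB        = 80         # assumed weekly/monthly data moved to Azure

azure_per_year = OFFLOADED_TB * 1024 * AZURE_PER_GB_MONTH * 12
print(f"Azure capacity tier:       ~${azure_per_year:,.0f}/yr")
print(f"One Data Domain's support: ~${DD_SUPPORT_PER_YEAR:,.0f}/yr")
print(f"Savings if one DD retires: ~${DD_SUPPORT_PER_YEAR - azure_per_year:,.0f}/yr")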

Re: Capacity Tier Architecture and Some Questions

Post by jtupeck »

Does anyone have any experience with an ExaGrid solution in the above, or a similar architectural scenario? How did you adopt the Capacity Tier in U4 with an ExaGrid architecture? Could we potentially consolidate from an S3260 and two similarly sized Data Domain arrays down to an ExaGrid build at each site? Or would you still keep the S3260 as the performance backup target with Direct Storage Access over Fibre Channel? ExaGrid does not support FC, if I remember correctly... and that tends to be a big selling point for us.

I have an upcoming discussion with my ExaGrid area rep, and I may begin exploring their technology as well and how it might work for us. The overall goal is to reduce annual maintenance costs and the complexity of the environment, but we have had our current setup for so long that we understand it pretty well, and that makes it hard to adopt a new technology we don't really have any experience with. Any thoughts in this area would also be welcome.

Re: Capacity Tier Architecture and Some Questions

Post by Andreas Neufert »

We have had some really good performance results with the landing zone. If you upload older restore points to S3 or use a Veeam Backup Copy Job, you should do it out of the landing zone to avoid rehydration of deduplicated data. The ExaGrid team can help size the storage correctly, and they know the bottlenecks and how to address them in the design.
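As a very rough rule of thumb (simplified assumptions: forward-incremental chain and a uniform daily change rate; the ExaGrid team will do the real sizing), the landing zone should hold the newest full plus the operational window's incrementals in undeduplicated form:

Code:

def landing_zone_gb(full_backup_gb, daily_change_rate, window_days):
    """Rough sizing only: newest full plus the window's incrementals,
    kept undeduplicated so offloads and backup copies never have to
    read (rehydrate) from the dedupe tier."""
    return full_backup_gb * (1 + daily_change_rate * window_days)

# Example: 20 TB of fulls, 5% daily change, 14-day window:
print(f"{landing_zone_gb(20_480, 0.05, 14):,.0f} GB")  # about 34,816 GB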