Comprehensive data protection for all workloads
StuartW_D
Influencer
Posts: 16
Liked: 1 time
Joined: Oct 21, 2016 7:41 am
Full Name: Stuart Wepener
Contact:

DataDomain Strategy

Post by StuartW_D »

Hi All,

I am sure this has been discussed before, and I did make an attempt to search the forum first before opening a new topic (I got to page 10 out of 88, which I thought was not a bad attempt). Basically, I want to hear some feedback about Data Domains from other customers, particularly around deployment strategy. We use them in our organization and they are great, but I do question whether we are using them correctly, or in the best way that we can.

First off, I want to ask about using DDs as a first-tier storage repository. Currently our backup jobs go to the Data Domain, and then a backup copy job runs straight back to the same unit for GFS. While I see this as a perfectly OK way to use them, performance for the copy job and for restores is not amazing. If we want to improve our RTOs, should we be looking at other storage to achieve this?
I saw a comment from Gostev somewhere about the three attributes of storage: capacity, performance, and cost. You have to pick two of the three for your repository, which suggests a DD should rather be used as a second-tier, retention-focused store than as primary storage where tighter RTOs matter.

The second point I have read somewhere is a recommendation not to use storage replication between Data Domains, but rather backup copy jobs to the second unit. Is this really the case? I have been wondering about storage replication for some time; are others doing backup copies rather than storage replication?

I realize these are more statements than questions, but I would appreciate opinions on the matter. Have others had the same setup and moved to a more efficient one? What sort of improvements did they experience?

Thanks in advance
foggy
Veeam Software
Posts: 21069
Liked: 2115 times
Joined: Jul 11, 2011 10:22 am
Full Name: Alexander Fogelson
Contact:

Re: DataDomain Strategy

Post by foggy »

Hi Stuart, I'm sure your peer customers will chime in with comments based on their real-world deployments, but I want to draw your attention to our reference architecture, which explains why using dedupe storage as a primary target for your backups is against best practices.

And this post should answer your second concern.

Thanks.
DaveWatkins
Veteran
Posts: 370
Liked: 97 times
Joined: Dec 13, 2015 11:33 pm
Contact:

Re: DataDomain Strategy

Post by DaveWatkins »

I've inherited a DD2500 from an acquisition and in my experience so far I'd never use it for primary backup storage.

I see any dedupe device as a replacement for, or supplement to, tape. Tape gets you long-term retention offsite and offline; dedupe devices give you GFS capabilities (which could also be offsite) while still achieving acceptable restore times for old data, without having to retrieve and restore from old tapes. They don't get you offline retention though, which is an important distinction.
StuartW_D
Influencer
Posts: 16
Liked: 1 time
Joined: Oct 21, 2016 7:41 am
Full Name: Stuart Wepener
Contact:

Re: DataDomain Strategy

Post by StuartW_D »

Thanks Foggy & Dave Watkins,

I actually found that second Forum Question before posting, but that blog post of yours is great, thanks.
ChrisSnell
Technology Partner
Posts: 126
Liked: 18 times
Joined: Feb 28, 2011 5:20 pm
Full Name: Chris Snell
Contact:

Re: DataDomain Strategy

Post by ChrisSnell »

Using such systems gets around this issue, of course, as ExaGrid is architected with a plain (non-deduplicated) disk area, called the landing zone. This lets customers get the best of both worlds: benefiting from dedupe while still being able to run copy jobs from the appliance.
DeadEyedJacks
Veeam ProPartner
Posts: 141
Liked: 26 times
Joined: Oct 12, 2015 2:55 pm
Full Name: Dead-Data
Location: UK
Contact:

Re: DataDomain Strategy

Post by DeadEyedJacks » 1 person likes this post

Whether Data Domains suit you is somewhat dependent on the scale of your environment.

We have 2PB of primary storage of which 500TB is file servers with multiple 10TB VMDKs.
Our Data Domains hold circa 8PB of restore points.

During a recent bulk restore exercise of 300 VMs the bottleneck wasn't the Data Domains nor the Veeam Backup infrastructure.
Rather it was the vCenter 52 NFC connections limit.

The Exagrid landing zone is a finite size...
ChrisSnell
Technology Partner
Posts: 126
Liked: 18 times
Joined: Feb 28, 2011 5:20 pm
Full Name: Chris Snell
Contact:

Re: DataDomain Strategy

Post by ChrisSnell »

DeadEyedJacks wrote:The Exagrid landing zone is a finite size...
I'm not sure what you mean by this? Everything is a finite size.

A single ExaGrid grid can currently scale to a 1PB landing zone. With SOBR support, that's a 1PB single repository. OK, that's finite, but pretty big. Each appliance in that grid can complete its ingest in 5 hours (if Veeam and the network can provide the data quickly enough). A 5-hour backup window for 1PB...
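
As a rough sanity check on that claim, here is the back-of-the-envelope arithmetic (a quick sketch only; decimal units are assumed, and the 1PB and 5-hour figures are taken from the post above):

```python
# Back-of-the-envelope: what sustained throughput does ingesting a
# 1 PB landing zone inside a 5-hour backup window actually require?
PB = 10**15            # decimal petabyte, as storage capacity is usually quoted
window_s = 5 * 3600    # 5-hour backup window, in seconds

bytes_per_s = PB / window_s
gb_per_s = bytes_per_s / 10**9          # gigabytes per second
gbit_per_s = bytes_per_s * 8 / 10**9    # gigabits per second

print(f"~{gb_per_s:.1f} GB/s (~{gbit_per_s:.0f} Gbit/s) aggregate ingest")
```

In other words, the grid and the network feeding it would need to sustain roughly 56 GB/s in aggregate to hit that window, which puts the "if Veeam and the network can provide the data" caveat into perspective.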
DeadEyedJacks
Veeam ProPartner
Posts: 141
Liked: 26 times
Joined: Oct 12, 2015 2:55 pm
Full Name: Dead-Data
Location: UK
Contact:

Re: DataDomain Strategy

Post by DeadEyedJacks »

Hi Chris,
That's interesting, as we've previously been told that the ExaGrid landing zone was limited to 4TB per appliance,
the implication being that any backup file larger than 4TB would fail to ingest.

So probably something for a separate thread, but I'm interested to know how ExaGrid appliances would deal with a 50TB+ VM with 5x 10TB VMDKs.
Kev Parr
Novice
Posts: 7
Liked: 1 time
Joined: Nov 17, 2015 2:29 pm
Full Name: Kevin Parr
Contact:

Re: DataDomain Strategy

Post by Kev Parr »

Hi Stuart

One thing to note when using a DD in your environment is that it is very poor at reads compared to other storage appliances. This is by design; it is excellent at writes, and the compression rates we get are phenomenal. We have a DD990 and have had no end of issues with offsite copies of large VMs. Backup and restore work within limits, although restore times for larger VMs can sometimes push the SLA. We have a number of file servers with several 2TB VMDKs holding shares, and restoring files from the larger ones can take over 30 minutes just to mount the backup.

Where we saw the biggest issues was around offsite copies. We have two data centres and have tried both MTree replication and backup copy jobs (BCJs), both of which have their challenges with larger VMs. MTree replication lag appeared after changes to the way Veeam v8 writes its streams. While this worked well for most storage units, DD with DD Boost really didn't like it and we had TBs of replication lag. This was mainly due to the merge process not doing what it used to, so when you hit the retention policy limit, the whole chain of files was set to be replicated. The workaround was to run periodic full backups to avoid the merge, but that still has its challenges. BCJs take an extraordinarily long time to pick up a large VM full backup and copy it when using DD, too. This is mainly due to the rehydration required and the way the files are stored on the primary repo, but we managed to get it working for all but the huge VMs. Some of the BCJs initially took 20+ days to do a full copy, and occasionally some of the jobs seem to reset, which kicks off another full copy. I have one job now that has been running since last Thursday!
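
The merge problem above can be summarised with toy numbers (purely illustrative; the 10TB full and 5% daily change rate are assumptions, not measurements of a real DD): on a normal day only the new increment needs to replicate, but once the retention merge rewrites the .vbk, the appliance may queue most of the rewritten full again.

```python
# Toy model of merge-induced replication lag (illustrative numbers only).
full_tb = 10.0        # size of one full backup file (.vbk), assumed
daily_change = 0.05   # assume 5% of the VM changes per day

# Normal day: only the new increment (.vib) needs to cross the wire.
increment_tb = full_tb * daily_change

# Merge day: the oldest increment is folded into the .vbk, rewriting it.
# Worst case, the replica cannot match the rewritten file against its old
# copy, and the whole full is queued for replication again.
merge_tb = full_tb

print(f"increment only: {increment_tb:.1f} TB/day")
print(f"after merge:    {merge_tb:.1f} TB/day "
      f"({merge_tb / increment_tb:.0f}x the normal daily traffic)")
```

Under these assumptions a single merge day generates 20x the normal replication traffic, which is why periodic active fulls (avoiding the merge entirely) are the usual workaround Kev describes.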

My advice would be to have an interim storage solution to handle the day-to-day backup/restore functions, and use the DD for archive or offsite copies. Veeam -> 'storagex' -> DD via BCJ would work well if you're looking for a use for the DD.

Hope this helps.
wayne
Novice
Posts: 9
Liked: 2 times
Joined: May 18, 2011 5:10 pm
Full Name: Wayne Curnett
Contact:

Re: DataDomain Strategy

Post by wayne » 1 person likes this post

Big fan of GFS in Veeam. Not a fan of having to use backup copy jobs to gain access to the GFS capability. For those that have spent the money on a backup appliance, BCJs kill you. If GFS were available in the primary backup job, all of these frustrations would go away. This isn't new information; it was brought up as soon as GFS was introduced in Veeam.