-
- Service Provider
- Posts: 880
- Liked: 164 times
- Joined: Aug 26, 2013 7:46 am
- Full Name: Bastiaan van Haastrecht
- Location: The Netherlands
- Contact:
EMC Data Domain best practices
Hi,
I would like to discuss the best setup for our situation regarding the use of multiple sites and an EMC Data Domain DD640 unit. We currently have it designed as follows:
Primary site:
VMware production cluster with 7x ESXi 5.1 hosts on EMC FC VNX5300 storage. 7.8 TB of used storage.
Backup: proxy + repository on a physical server with enough fast local storage for 14 restore points. Used for single file restore, application item restore and Instant Recovery.
Secondary site:
Copy 1: proxy + repository on a physical server with enough fast local storage for 7 restore points. Used for DR/Instant Recovery if the primary site fails.
Copy 2: Data Domain via CIFS. This is for archiving purposes. The repository is set to decompress and align data.
600 Mbit/s connection between the sites. No WAN accelerator is used, as the bandwidth is large enough: the accelerator reduces traffic, but increases copy times by about 5x.
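For context, a quick back-of-envelope check (a sketch only; decimal units, ignoring protocol overhead and any dedupe/compression savings) shows why the link is comfortable for this data set even without an accelerator:

```python
# Rough estimate of how long a full copy of the used data would take
# over the inter-site link (assumptions: decimal units, no overhead).
link_mbit_per_s = 600
link_mb_per_s = link_mbit_per_s / 8        # 75 MB/s
used_storage_mb = 7.8 * 1_000_000          # 7.8 TB expressed in MB
hours = used_storage_mb / link_mb_per_s / 3600
print(round(hours, 1))                     # roughly 29 hours for a full seed
```

Daily incrementals are of course far smaller than a full seed, so the link is not the constraint here.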
We have 5 jobs, each with its own guest processing settings, but all with the same backup strategy:
Backup => to primary site local storage (14 restore points; most jobs run forward incremental, some are set to reversed incremental due to their size)
Copy 1 => to secondary site local storage (7 restore points)
Copy 2 => to secondary site dedupe storage (weekly/monthly/yearly schedule in place)
We have noticed the following problems with this design:
- The Copy 2 job copies from the primary site repository, even though its source is set to copy only from the secondary site. Veeam support confirmed that although the source can be set, it isn't honored in this situation. Not a real problem because we have enough bandwidth, but still, it makes sense to copy the data only once to the remote site, right?
- A real problem: transforms on the Data Domain take forever. Is there a better way to archive to the Data Domain, some other approach?
- We currently use CIFS; I could switch to NFS. Would that improve transform performance?
- How do you use your Data Domain, and do you have any thoughts about our setup? Any best-practice recommendations?
Thanks in advance, regards,
Bastiaan
======================================================
Veeam ProPartner, Service Provider and a proud Veeam Legend
-
- Veeam Software
- Posts: 21139
- Liked: 2141 times
- Joined: Jul 11, 2011 10:22 am
- Full Name: Alexander Fogelson
- Contact:
Re: EMC Data Domain best practices
b.vanhaastrecht wrote:
- The Copy 2 job copies from the primary site repository, even though its source is set to copy only from the secondary site. Veeam support confirmed that although the source can be set, it isn't honored in this situation. Not a real problem because we have enough bandwidth, but still, it makes sense to copy the data only once to the remote site, right?

How do you specify the source for your backup copy jobs to the DD (entire repository or individual backup jobs)? Note that currently a backup copy job cannot be used as the source for another backup copy job (a possible workaround would be to use a "dummy" backup job, map it to the backup copy job files, and then use this job as the source for the backup copy job).
b.vanhaastrecht wrote:
- A real problem: transforms on the Data Domain take forever. Is there a better way to archive to the Data Domain, some other approach?
- We currently use CIFS; I could switch to NFS. Would that improve transform performance?
- How do you use your Data Domain, and do you have any thoughts about our setup? Any best-practice recommendations?

Data Domain performance issues and different deployment approaches are discussed in this existing thread, please review.
-
- VeeaMVP
- Posts: 6166
- Liked: 1971 times
- Joined: Jul 26, 2009 3:39 pm
- Full Name: Luca Dell'Oca
- Location: Varese, Italy
- Contact:
Re: EMC Data Domain best practices
Hi Bastiaan,
I'm not sure you can run a backup copy job using another backup copy job as its source. About your other questions:
- Transform has a heavily random I/O pattern and in my opinion is not really usable on dedupe appliances. I would honestly go with forward incremental and let the DD reclaim space using its own dedupe capabilities. After all, it's an archival solution, not the primary restore location, so having a full backup as your latest restore point matters less than it does on your primary repository.
- I've never used a DD, but it seems that with the latest firmware releases CIFS and NFS now run at the same speed. In both cases, if you want to increase performance, it's better to place a Linux or Windows repository server in front of the DD, so the Veeam repository process runs on that machine (close to the DD) rather than on the Veeam server. This way you can also use that machine as a vPower NFS server when attempting Instant Recovery from the DD.
Also, look around these forums for other threads; there are many DD users who have shared their best practices here.
Luca.
Luca Dell'Oca
Principal EMEA Cloud Architect @ Veeam Software
@dellock6
https://www.virtualtothecore.com/
vExpert 2011 -> 2022
Veeam VMCE #1
-
- Veeam Software
- Posts: 21139
- Liked: 2141 times
- Joined: Jul 11, 2011 10:22 am
- Full Name: Alexander Fogelson
- Contact:
Re: EMC Data Domain best practices
dellock6 wrote:
- Transform has a heavily random I/O pattern and in my opinion is not really usable on dedupe appliances. I would honestly go with forward incremental and let the DD reclaim space using its own dedupe capabilities. After all, it's an archival solution, not the primary restore location, so having a full backup as your latest restore point matters less than it does on your primary repository.

Luca, 100% agree here, but Bastiaan is talking about the transform activity performed by the backup copy job on its target repository.
-
- VeeaMVP
- Posts: 6166
- Liked: 1971 times
- Joined: Jul 26, 2009 3:39 pm
- Full Name: Luca Dell'Oca
- Location: Varese, Italy
- Contact:
Re: EMC Data Domain best practices
Oh, you're right, sorry, I misread the post...
Luca.
Luca Dell'Oca
Principal EMEA Cloud Architect @ Veeam Software
@dellock6
https://www.virtualtothecore.com/
vExpert 2011 -> 2022
Veeam VMCE #1
-
- Service Provider
- Posts: 880
- Liked: 164 times
- Joined: Aug 26, 2013 7:46 am
- Full Name: Bastiaan van Haastrecht
- Location: The Netherlands
- Contact:
Re: EMC Data Domain best practices
Foggy, so a copy of a copy won't work. I will look into the dummy backup job option; for now we have enough bandwidth.
I have read the Data Domain thread, but I'm unable to find information about using a copy job to a DD. Would switching to a Linux server with NFS to the DD fix the transform process of the copy job? Or is using a copy job with transforms not a recommended way to store archives on a DD?
======================================================
Veeam ProPartner, Service Provider and a proud Veeam Legend
-
- Veeam Software
- Posts: 21139
- Liked: 2141 times
- Joined: Jul 11, 2011 10:22 am
- Full Name: Alexander Fogelson
- Contact:
Re: EMC Data Domain best practices
As mentioned by Luca, this will allow the transform activity to be performed by the Veeam agent installed on the Linux server rather than by the Veeam B&R server itself. It should have some positive effect on performance.
-
- Service Provider
- Posts: 880
- Liked: 164 times
- Joined: Aug 26, 2013 7:46 am
- Full Name: Bastiaan van Haastrecht
- Location: The Netherlands
- Contact:
Re: EMC Data Domain best practices
foggy wrote:
As mentioned by Luca, this will allow the transform activity to be performed by the Veeam agent installed on the Linux server rather than by the Veeam B&R server itself. It should have some positive effect on performance.

Ok, thanks.
Currently, a transform of 1 TB of data was at 15% after 12 hours of running on the DD, connected via CIFS. I agree NFS on Linux would improve performance, but will it make the transform complete within an acceptable timeframe? I highly doubt it will give an 800% boost.
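A simple linear projection from those numbers (a sketch; it assumes the transform progresses at a constant rate, which in practice it may not) shows why the timeframe looks hopeless:

```python
# Projecting total transform time from observed progress
# (assumption: roughly linear progress over the whole run).
progress = 0.15          # 15% complete
elapsed_hours = 12
projected_total_hours = elapsed_hours / progress
print(round(projected_total_hours))   # about 80 hours end to end
```

Even an 8x speedup would still leave a ~10-hour nightly transform window.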
Are other Data Domain users using the copy-and-archive (weekly/monthly/yearly) transform option? Is it the way to go?
======================================================
Veeam ProPartner, Service Provider and a proud Veeam Legend
-
- VP, Product Management
- Posts: 27377
- Liked: 2800 times
- Joined: Mar 30, 2009 9:13 am
- Full Name: Vitaliy Safarov
- Contact:
Re: EMC Data Domain best practices
There are a couple of existing threads on backup copy jobs and Data Domain target storage, please take a look > http://forums.veeam.com/search.php?keyw ... p+copy+job
-
- Service Provider
- Posts: 880
- Liked: 164 times
- Joined: Aug 26, 2013 7:46 am
- Full Name: Bastiaan van Haastrecht
- Location: The Netherlands
- Contact:
Re: EMC Data Domain best practices
Vitaliy S. wrote:
There are a couple of existing threads on backup copy jobs and Data Domain target storage, please take a look > http://forums.veeam.com/search.php?keyw ... p+copy+job

The last few days I've spent a lot of time reading about other people's setups and doing some testing myself. We currently have a CentOS Linux machine with 4 CPUs and 8 GB of RAM set up as a Linux host in Veeam. We've mounted the Data Domain volume via NFS and applied the performance tunings recommended in the EMC Data Domain integration guide for Veeam v6. In the copy job we've noticed an increase in bandwidth/throughput to the Data Domain; as we add more streams, throughput keeps increasing. But as soon as the transforms kick in, it's very, very slow.
A single transform of 1 TB with 200 GB to commit runs for 8 hours.
If two transforms run at the same time (1 TB with 200 GB to commit, and 1.2 TB with 50 GB to commit), it takes more than 36 hours!
If three or more run at the same time: a never-ending story.
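Putting rough numbers on those scenarios (a sketch; assuming 1 GB = 1024 MB and counting only the data actually committed) makes the collapse under concurrency visible:

```python
# Effective commit throughput the DD sustained in each transform scenario.
def commit_mb_per_s(gb_committed, hours):
    return gb_committed * 1024 / (hours * 3600)

print(round(commit_mb_per_s(200, 8), 1))        # single transform: ~7.1 MB/s
print(round(commit_mb_per_s(200 + 50, 36), 1))  # two in parallel: ~2.0 MB/s
```

So running two transforms concurrently did not just halve the per-job rate, it cut the aggregate commit rate by more than two thirds, which fits the random-I/O bottleneck described below.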
Looking at the Data Domain during the transport and transforms, it fluctuates between 25% and 50% CPU. It doesn't seem to be stressed, certainly not during transforms.
The CentOS machine has almost 95% of its memory dedicated to the VeeamAgent processes. CPU is only high during transport; during transforms it's at 10% per core.
My concerns are:
- What is the actual bottleneck of the transforms? The CPUs of all components don't seem to be stressed. Is there a problem to be fixed?
- There is not a lot of information in the forums about other setups. EMC's Data Domain guide recommends against synthetic fulls, which is what a transform is and what a copy job uses to build archives. It only covers backup jobs with periodic fulls as the way to go. In Veeam's presentations I saw a lot of disk-to-dedupe setups for archive purposes. So we are looking for the correct way to use the DD for archiving, and currently we're stuck.
- Task concurrency is managed by the repository setting, but it doesn't take the type of task into account: transport or transform. I would like multiple transport instances, but only one transform at a time.
======================================================
Veeam ProPartner, Service Provider and a proud Veeam Legend
-
- Veeam Software
- Posts: 21139
- Liked: 2141 times
- Joined: Jul 11, 2011 10:22 am
- Full Name: Alexander Fogelson
- Contact:
Re: EMC Data Domain best practices
b.vanhaastrecht wrote:
- What is the actual bottleneck of the transforms? The CPUs of all components don't seem to be stressed. Is there a problem to be fixed?

Actually, with any deduplicating storage the bottleneck will be the storage's write speed. Transform is a very intensive random I/O operation, while dedupe storage is designed mostly for archiving purposes and is not able to provide good IOPS.
b.vanhaastrecht wrote:
- There is not a lot of information in the forums about other setups. EMC's Data Domain guide recommends against synthetic fulls, which is what a transform is and what a copy job uses to build archives. It only covers backup jobs with periodic fulls as the way to go. In Veeam's presentations I saw a lot of disk-to-dedupe setups for archive purposes. So we are looking for the correct way to use the DD for archiving, and currently we're stuck.

A possible workaround is to use some kind of staging storage to perform the transform on, and then copy backups to the dedupe storage afterwards. In your case you already have the "Sec site local" storage that receives the backups created by the "Copy 1" backup copy job, so you could offload backups to the DD from there rather than using another backup copy job directly to the DD.
b.vanhaastrecht wrote:
- Task concurrency is managed by the repository setting, but it doesn't take the type of task into account: transport or transform. I would like multiple transport instances, but only one transform at a time.

Sounds reasonable, thanks for the feedback.
-
- Service Provider
- Posts: 880
- Liked: 164 times
- Joined: Aug 26, 2013 7:46 am
- Full Name: Bastiaan van Haastrecht
- Location: The Netherlands
- Contact:
Re: EMC Data Domain best practices
Hi foggy, thanks for the reply. So dedupe units are meant for archiving, the copy jobs are meant for archiving, but the process of creating the archives (transforming) is I/O-intensive and therefore not suited to dedupe units. Looks like an impasse.
Using copy scripts after the first offsite copy is one possibility, but we would like an integrated solution in Veeam. Scripts lack good monitoring and are less usable by others.
One thing I've noticed about our copy-to-local jobs: they transform every day, even though they have no archive schedule set. Is this normal? Our backup jobs were initially set to reversed incremental and later changed to forward incremental. Does the copy job follow the reversed/forward incremental principle?
======================================================
Veeam ProPartner, Service Provider and a proud Veeam Legend
-
- Veeam Software
- Posts: 21139
- Liked: 2141 times
- Joined: Jul 11, 2011 10:22 am
- Full Name: Alexander Fogelson
- Contact:
Re: EMC Data Domain best practices
b.vanhaastrecht wrote:
One thing I've noticed about our copy-to-local jobs: they transform every day, even though they have no archive schedule set. Is this normal? Our backup jobs were initially set to reversed incremental and later changed to forward incremental. Does the copy job follow the reversed/forward incremental principle?

For the backup copy job, it doesn't matter whether the source backup jobs are set to forward or reversed incremental mode. The backup copy job synthetically creates restore points in the remote location from the changed blocks extracted from the source storage, and it is always forward incremental.
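The mechanics can be illustrated with a toy model (an illustration only, not Veeam's actual file handling): once a forward-incremental chain reaches its retention limit, every new restore point forces the oldest increment to be merged into the full, and that daily merge is the transform seen on the copy-to-local jobs.

```python
# Toy model of a forward-incremental chain with a retention limit:
# when the chain exceeds `retention` points, the oldest increment is
# merged into the full -- one "transform" per day from then on.
def run_days(days, retention):
    chain = ["full"]
    transforms = 0
    for _ in range(days - 1):
        chain.append("incr")
        if len(chain) > retention:
            chain.pop(1)      # merge oldest increment into the full
            transforms += 1
    return chain, transforms

chain, transforms = run_days(10, retention=7)
print(len(chain), transforms)   # 7 points kept, 3 merges performed
```

This is why the transforms run daily even without an archive (GFS) schedule: they come from ordinary retention, not from archiving.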
-
- Enthusiast
- Posts: 76
- Liked: 22 times
- Joined: Aug 27, 2013 3:44 pm
- Full Name: Jason Tupeck
- Contact:
Re: EMC Data Domain best practices
Hey all... I've been using a Data Domain DD670 unit with one ES30 expansion shelf with Veeam v7 for a little over a year now. I'm really hoping that with v8 and DD Boost the performance issues of the synthetic roll-up will be mitigated (mostly, or at least somewhat), but the setup we have is as follows:
Keep in mind I need 14 daily copies of data and 24 monthly copies to comply with our backup policies.
1. Daily backup jobs (DBJ) landing on primary storage with 5 or 14 days of retention, depending on company policy and on whether a backup copy job to the DD670 infrastructure can be used (more on this in a moment).
2. One backup copy job (BCJ) per DBJ, targeting the Data Domain with GFS retention set to 2/24/0. (The weeklies are only necessary for the job structure itself, not for company policy, or I would drop them.)
What I found was that we really suffered on BCJs tied to DBJs containing VMs with high change rates: once retention was hit and the synthetic full was generated, performance became increasingly terrible, every night. So for these particular servers/jobs I set up a monthly full backup with retention set to 24, written directly to the Data Domain infrastructure. The linear write is MUCH faster than the BCJ synthetic full roll-up, and we just run these jobs on the final weekend of the month. 24 retention points keeps us within the stated data retention policies, and the VMs in these jobs are held on primary storage for 14 days rather than 5 in order to meet company policy. This doesn't eat up too much more storage, because the change rate is still sub-10% on average and Veeam does a fairly good job of deduping data as it is.
DBJs with a low change rate and/or very few servers per job are in BCJs that write to the Data Domain every night, and while performance is not great, it's acceptable, with each BCJ almost always completing before 8am the next morning. I have one exception: a BCJ that includes two DBJs making up 3.9 TB of data. I'm keeping it around while I wait for Veeam v8 and DD Boost integration, because I want to see how DD Boost impacts the job. If performance still isn't great, I will break it up into two monthly jobs, just like the others.
Hope this helps.
-
- Enthusiast
- Posts: 91
- Liked: 10 times
- Joined: Aug 30, 2013 8:25 pm
- Contact:
Re: EMC Data Domain best practices
We won't know how v8 and the DD work together until it's released, but as of this moment, backup copy jobs don't work well with any dedupe appliance (except ExaGrid, which has a staging area) due to the transforms that backup copy jobs require. The information throughout the Veeam website is somewhat misleading on this point.