-
- Veeam Vanguard
- Posts: 636
- Liked: 154 times
- Joined: Aug 13, 2014 6:03 pm
- Full Name: Chris Childerhose
- Location: Toronto, ON
- Contact:
Re: Recommended Backup Job Settings for EMC Data Domain
Anyone with v9 installed - how is DD performing now with regard to backups? Just want to get your thoughts on the improvements implemented for DDBoost and the like in v9.
-----------------------
Chris Childerhose
Veeam Vanguard / Veeam Legend / Veeam Certified Architect / VMCE
vExpert / VCAP-DCA / VCP8 / MCITP
Personal blog: https://just-virtualization.tech
Twitter: @cchilderhose
-
- Influencer
- Posts: 16
- Liked: 4 times
- Joined: Dec 23, 2014 2:22 pm
- Contact:
[MERGED] Recommended Job Settings for EMC DataDomain in v9
Hi,
after upgrading to v9, I'm asking myself again what the best / recommended settings are for backing up to an EMC DataDomain. I missed the prompt introduced in v8 ("your job settings are not recommended for the target in use, do you want to fix them automatically?") that shows up when you play around with job settings that aren't optimal for the target in use.
Since that prompt is missing, I want to ask directly: what are the optimal settings for our EMC DataDomain DD2500 (connected over FC)? I have already upgraded our DD to OS 5.6.0.5-501748, so that side should be fine.
Backup Repository:
Q1: Align backup file data blocks (On / Off)?
Q2: Decompress backup data blocks before storing (On / Off)?
Q3: Use per-VM backup files (On / Off)?
Advanced Job Settings - Maintenance:
Q4: Is storage-level corruption guard the same as backup integrity checks? How often should it run? Will it cause much Random I/O at the target?
Advanced Job Settings - Storage:
Q5: Enable inline data deduplication (recommended) (On / Off)?
Q6: Exclude swap file blocks (recommended) (On / Off)?
Q7: Exclude deleted file blocks (recommended) (On / Off)?
Q8: Is optimal still the "best" (good performance, average dedup) compression level?
Q9: What should I pick in the storage optimization drop-down list? Is it still Local target (16TB+...)?
OK, that should be all
Thanks in advance
Regards
lasr
-
- Chief Product Officer
- Posts: 31806
- Liked: 7299 times
- Joined: Jan 01, 2006 1:01 am
- Location: Baar, Switzerland
- Contact:
Re: Recommended Job Settings for EMC DataDomain in v9
Hi,
There are no changes from v8, see KB1956
Q1: Off
Q2: On
Q3: On
Q4: Not the same. Up to you. Yes.
Q5: Off
Q6: On
Q7: On
Q8: Yes
Q9: Yes
Thanks!
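For quick reference, the recommendations above can be collapsed into a single checklist. A minimal Python sketch follows; the keys are illustrative labels for the UI options, not a Veeam API:

```python
# Recommended Veeam v9 settings for an EMC Data Domain target, collected
# from the answers above (see KB1956). Key names are illustrative only.
DD_RECOMMENDED = {
    # Backup repository settings
    "align_backup_file_data_blocks": False,
    "decompress_before_storing": True,
    "per_vm_backup_files": True,
    # Advanced job settings - Storage
    "inline_data_deduplication": False,
    "exclude_swap_file_blocks": True,
    "exclude_deleted_file_blocks": True,
    "compression_level": "Optimal",
    "storage_optimization": "Local target (16TB+)",
}

def deviations(current: dict) -> dict:
    """Return the settings that differ from the recommendation."""
    return {k: v for k, v in current.items()
            if k in DD_RECOMMENDED and DD_RECOMMENDED[k] != v}

print(deviations({"inline_data_deduplication": True,
                  "compression_level": "Optimal"}))
# -> {'inline_data_deduplication': True}
```

This is essentially what the v8 "fix it automatically?" prompt checked for you; with the prompt gone, a checklist like this is the manual equivalent.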
-
- Influencer
- Posts: 16
- Liked: 4 times
- Joined: Dec 23, 2014 2:22 pm
- Contact:
Re: Recommended Job Settings for EMC DataDomain in v9
Thank you very much, this answers all of my questions.
Regards
-
- Veeam Vanguard
- Posts: 636
- Liked: 154 times
- Joined: Aug 13, 2014 6:03 pm
- Full Name: Chris Childerhose
- Location: Toronto, ON
- Contact:
Re: Recommended Backup Job Settings for EMC Data Domain
v.Eremin wrote: You should be ok with using Optimal level. Thanks.
Is there a difference between Optimal and Dedupe-Friendly for the Data Domain? I know that in version 8 Dedupe-Friendly was the recommended setting for these appliances. Now Optimal is OK? What is the major difference, if any?
I use Dedupe-Friendly, but if Optimal works better to save disk space or something, I may try it in a few jobs.
-----------------------
Chris Childerhose
Veeam Vanguard / Veeam Legend / Veeam Certified Architect / VMCE
vExpert / VCAP-DCA / VCP8 / MCITP
Personal blog: https://just-virtualization.tech
Twitter: @cchilderhose
-
- Chief Product Officer
- Posts: 31806
- Liked: 7299 times
- Joined: Jan 01, 2006 1:01 am
- Location: Baar, Switzerland
- Contact:
Re: Recommended Backup Job Settings for EMC Data Domain
Dedupe-Friendly was never recommended for Data Domain. In both v8 and v9, you want to use Optimal compression in the job settings, but also Decompress before storing checked in the advanced repository settings. Thanks!
-
- Veeam ProPartner
- Posts: 300
- Liked: 44 times
- Joined: Dec 03, 2015 3:41 pm
- Location: UK
- Contact:
Re: Recommended Backup Job Settings for EMC Data Domain
Are those settings correct for Backup Copy Jobs to Data Domains? And what are recommended Tier 1 Backup job settings, if Backup Copy Jobs to DD will later be used?
I read so many posts and articles on the subject, where the advice seemed to change over time (perhaps with DD boost and v8 etc). Although everything seems to work in our infrastructure, I'm still a little unclear on what would give the best performance in a Tier1 DAS/Tier2 DD setup, especially if Veeam Replication is to be implemented as well.
-
- Veeam Software
- Posts: 21138
- Liked: 2141 times
- Joined: Jul 11, 2011 10:22 am
- Full Name: Alexander Fogelson
- Contact:
Re: Recommended Backup Job Settings for EMC Data Domain
The only thing you should keep in mind for the regular backup job, if it has DD as a secondary destination, is the block size (Q9 above), since the backup copy job keeps that of the original job. Everything else can be specified in each job independently, depending on its own target.
-
- Lurker
- Posts: 2
- Liked: 1 time
- Joined: Jan 13, 2014 10:09 am
- Full Name: James Richards
- Contact:
Re: Recommended Backup Job Settings for EMC Data Domain
Are there any recommendations on the number of DDBoost units to use - i.e. is four boost units with a max concurrent task limit set better than one unit unrestricted?
I'm reviewing our configuration prior to an upgrade to v9.
-
- Enthusiast
- Posts: 32
- Liked: never
- Joined: Dec 09, 2014 6:41 am
- Full Name: Lukas Zimmer
- Contact:
Re: Recommended Backup Job Settings for EMC Data Domain
Hey Gostev,
you say that we should not use "Use per-VM backup files". Why shouldn't we use this?
Can Veeam use more parallel streams when writing to the Data Domain if we use per-VM backup files?
-
- Chief Product Officer
- Posts: 31806
- Liked: 7299 times
- Joined: Jan 01, 2006 1:01 am
- Location: Baar, Switzerland
- Contact:
Re: Recommended Backup Job Settings for EMC Data Domain
lukeroom wrote: you say, that we should not use "Use per-VM backup files"
Where did I say this?
-
- Enthusiast
- Posts: 32
- Liked: never
- Joined: Dec 09, 2014 6:41 am
- Full Name: Lukas Zimmer
- Contact:
Re: Recommended Backup Job Settings for EMC Data Domain
I have read it wrong, sorry Gostev!
-
- Enthusiast
- Posts: 85
- Liked: 31 times
- Joined: Apr 22, 2016 1:06 am
- Full Name: Steven Meier
- Contact:
[MERGED] : Veeam 9 Best Prac with Data Domain dedupe devices
Hi all,
I am after best practices, including design and setup, for Data Domain dedupe devices (physical, not virtual) with Veeam 9.
I have read the version 8 material, but I am aware that DDBoost had some updates and improvements in Veeam 9, so some of the old information may need revisiting.
This is backing up a large VMware environment.
I am also interested to hear feedback from folks using these devices in production.
We are looking at fibre connectivity (larger environment), and we back up from production 3PAR arrays (all-SSD 8000 series).
We are currently using 3PAR arrays as backup targets (non-SSD), but would prefer to send the data to a physical appliance away from the arrays, freeing them for DR, dev/test and other tasks rather than backup duty (a bit expensive for that).
Cheers
Steve
-
- Veeam Software
- Posts: 21138
- Liked: 2141 times
- Joined: Jul 11, 2011 10:22 am
- Full Name: Alexander Fogelson
- Contact:
Re: Veeam 9 Best Prac with Data Domain dedupe devices
Hi Steven, here you can find the latest recommendations for Data Domain backup repository settings. You can also search this forum for other users' feedback; there are plenty of existing threads on the topic.
-
- Veeam ProPartner
- Posts: 141
- Liked: 26 times
- Joined: Oct 12, 2015 2:55 pm
- Full Name: Dead-Data
- Location: UK
- Contact:
Re: Recommended Backup Job Settings for EMC Data Domain
Steve,
Below are what I believe to be the currently recommended settings for using Veeam v9 with EMC DataDomain DDBoost storage containers.
It is how we have ours configured anyways
Deployment size
Six DataDomains, Six sites, 4 vCenters, 60 hosts, 600 VMs
2000 VMDKs, 1 PB storage
10 Backup proxies using Network transport mode, mix of physical / virtual
Backup Repository:
Align backup file data blocks = Off
Decompress backup data blocks before storing = On
Use per-VM backup files = On
Advanced Job Settings - Storage:
Enable inline data deduplication = Off
Exclude swap file blocks = On
Exclude deleted file blocks = On
Compression level = Optimal
Storage Optimisation = Local target (16TB+)
We tested direct SAN connectivity 8GbFC from physical proxies, but found network transport 10GbE via vSphere hosts actually produced best overall job throughput. Your mileage may vary.
At largest sites we have multiple proxies and multiple DDBoost storage containers as backup repositories. DataDomain ingest rate more than matches Veeam extraction rate of data from primary storage.
-
- Enthusiast
- Posts: 85
- Liked: 31 times
- Joined: Apr 22, 2016 1:06 am
- Full Name: Steven Meier
- Contact:
Re: Recommended Backup Job Settings for EMC Data Domain
Is this the primary target for your backups?
I keep reading that DD devices should not be primary targets.
Was there much difference between fibre and network data transfer speeds?
What switch gear are you using?
-
- Veeam ProPartner
- Posts: 141
- Liked: 26 times
- Joined: Oct 12, 2015 2:55 pm
- Full Name: Dead-Data
- Location: UK
- Contact:
Re: Recommended Backup Job Settings for EMC Data Domain
Yes, DataDomains are sole backup target. Mounting of Backup sets with vNFS is working fine and responsive. This is Veeam B&R version 9 with DDBoost plug-in integration.
(My reading is that issues others encounter with DataDomains seem to relate to entry-level units, CIFS usage, small block sizes and earlier versions.)
Initially, physical proxies with Direct SAN over 8Gb FC were tested, since VMware snapshots would be held for less time, but due to the time needed to mount the SAN snapshots and lower data extraction rates over FC, overall times were longer.
Virtual proxies, 10GbE and Network transport mode was better able to provide a continuous stream of data to the DataDomains, in our environment.
Synthetic fulls are generated on the DataDomains and incrementals forever taken from vSphere hosts.
Switching is Cisco Nexus for 10GbE, 10GbFCoE and HP Brocade for 8GbFC.
-
- Enthusiast
- Posts: 85
- Liked: 31 times
- Joined: Apr 22, 2016 1:06 am
- Full Name: Steven Meier
- Contact:
Re: Recommended Backup Job Settings for EMC Data Domain
Thanks for the helpful reply, "DeadEye", much appreciated.
I have spoken to EMC/DataDomain about this and said that if we go down this path I will be asking for a box for at least a month to test and evaluate... will have to wait on their reply.
As we need two boxes, I am sure this won't be an issue.
A lot of the reading on this struck me as bad setup and/or understanding of both products, but there was still an element of "not sure".
In Veeam 9, are you using the per-VM chain option to recover more quickly from the dedupe store/device?
And how are your recoveries, time- and speed-wise?
-
- Veeam ProPartner
- Posts: 141
- Liked: 26 times
- Joined: Oct 12, 2015 2:55 pm
- Full Name: Dead-Data
- Location: UK
- Contact:
Re: Recommended Backup Job Settings for EMC Data Domain
Hi,
Yes, we use per VM chains, I'd suggest always using this regardless of the repository type for reasons of manageability.
(The only case in which you can't use it is with a rotating removable hard disk setup.)
Very difficult to give useful metrics on throughput for both backup and recovery.
It depends very much on your environment and what you are measuring and where.
i.e. for our setup
Veeam Enterprise Manager reports throughput of 1TB/s for backups at one site.
Whilst the DataDomains report throughput of between 400 MB/s and 600 MB/s using DDBoost.
But that's because, using CBT and DDBoost, you are only moving unique, still-required changed blocks.
Similarly for restores, CBT will only rollback changed blocks for recovery of a VMDK or VM so can be pretty quick.
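To make the gap between those two numbers concrete: the job-level rate counts all processed source data, while the appliance only sees the unique changed blocks that actually cross the wire. A back-of-the-envelope Python sketch with made-up numbers (illustrative only, not the figures from this deployment):

```python
# Hypothetical nightly incremental; all numbers are invented for illustration.
source_data_tb = 50.0      # total VM data "processed" by the job (via CBT scan)
changed_fraction = 0.05    # CBT: only 5% of blocks changed since the last run
dedupe_ratio = 3.0         # DDBoost: reduction to unique blocks before sending
window_hours = 8.0

processed_gb = source_data_tb * 1024
transferred_gb = processed_gb * changed_fraction / dedupe_ratio

# What the job reports (processed data / window) vs. what the appliance sees.
job_rate_mb_s = processed_gb * 1024 / (window_hours * 3600)
wire_rate_mb_s = transferred_gb * 1024 / (window_hours * 3600)

print(f"job-level rate:   {job_rate_mb_s:.0f} MB/s")
print(f"on-the-wire rate: {wire_rate_mb_s:.0f} MB/s")
```

With these assumptions the job-level figure comes out roughly 60x higher than the wire figure, which is the same kind of spread as the Enterprise Manager vs. DataDomain numbers quoted above.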
-
- Expert
- Posts: 125
- Liked: 3 times
- Joined: Mar 23, 2009 4:44 pm
- Full Name: Matt
- Contact:
[MERGED] Data Domain Performance in v9
We're a mid-sized shop that has about 100 TB of Veeam backups. We currently have a D2D2T strategy, or production -> Veeam Disk storage -> LTO6 Tape. Tape is here whether I like it or not and I'll be stuck with it for the foreseeable future, so I'd prefer not to debate its merits.
We've traditionally used standard enterprise storage from IBM and have had little choice but to mix backups with functional VMware test machines. This is obviously not ideal from a controller perspective or segregation of backups either. That being said, I've been happy with the storage vendor and the near 0 amount of time I spend managing the storage units they sell us.
However, we're finally at the stage in our maturity where we can buy storage for the sole purpose of Veeam backup storage as an intermediary between production data and long term tape retention.
I've been looking into Data Domain for this function. I've read the papers, looked at the forums, and asked the vendor directly. From that research, it appears to have had its share of issues with Veeam, from slow CIFS backup times (without converting to NFS) to rehydration problems for file-level restores, etc.
Is there anybody in my same scenario who's used something other than Data Domain in the past who can speak to its backup performance relative to other enterprise-class storage, what issues still remain in the current product, and whether rehydration to tape is going to be a killer?
Thank you for any help you can offer.
-
- Veteran
- Posts: 354
- Liked: 73 times
- Joined: Jun 30, 2015 6:06 pm
- Contact:
Re: Recommended Backup Job Settings for EMC Data Domain
We use both EMC DD's and Dell DR's. While our DD's are older (620's and 2200's) and 1G connected and our DR's newer (DR4100) and 10G connected, overall functionality is the same. As per the marketing hype, they ingest and dedupe extremely well, but they are in effect a one-way street for storage. Data goes in quickly but does not come back out quickly - especially if you run more than one restore at a time. Try that next time you get the chance and see how much slower it runs. Also please be aware of the marketing skew toward best-case-scenario numbers; do you run full backups of data that changes almost not at all every day? Then you might see their lofty claims. Weekend fulls w/ weekday incrementals w/ 1+TB of churn every day? You'll see the same realistic numbers as the rest of us and might end up disappointed.
Despite marketing hype for deduplicating devices, tape is simply not dead yet. It still does its job extremely well, is safer and better for long-term retention than dedupe, and w/ LTO-6 and now 7 out, it's keeping up w/ ingest speeds pretty well. Take your dedupe budget and throw it at a JBOD SAN, then off-load those backups to tape. You'll get the same speed out as in, you'll know what to expect out of it, any restores from it will be fast, it's more multi-use (need an emergency LUN for something else?), and Veeam has some pretty good dedupe and compression built in.
Oh, and there's also the issue of Veeam's network to tape speed issue a lot of us are experiencing. If your source files are not stored directly on your tape server (again, JBOD SAN or NAS directly connected), and you're pulling your source files from a dedupe device (that's already re-hydrating slowly) across the network, you may not see much better than fast ethernet speeds. Investigate "shoe shining" in regards to tape backups. Hopefully Veeam is hard at work more closely investigating its network to tape data mover service. Our old 1G connected DD's backup to tape around 30MB/s or so to our LTO-6, our new 10G connected DR's will run around 115MB/s or so to tape despite being able to raw copy data out of them much faster than that.
VMware 6
Veeam B&R v9
Dell DR4100's
EMC DD2200's
EMC DD620's
Dell TL2000 via PE430 (SAS)
-
- Expert
- Posts: 125
- Liked: 3 times
- Joined: Mar 23, 2009 4:44 pm
- Full Name: Matt
- Contact:
Re: Recommended Backup Job Settings for EMC Data Domain
Thanks for the great feedback rreed. Much appreciated.
-
- Veteran
- Posts: 370
- Liked: 97 times
- Joined: Dec 13, 2015 11:33 pm
- Contact:
Re: Recommended Backup Job Settings for EMC Data Domain
We were in exactly the same boat: tape was a requirement, and after much back and forth we ended up going with a Dell 3860f configured in RAID10 with big drives. They are only 7200 RPM, but the RAID10 keeps performance up for both read and write, and while we could also use Windows 2012 dedup on top of it, we haven't yet. We will look at native Windows dedup when 2016 comes out, but at the moment we have some files that 2012 would choke on. It honestly didn't end up being that much more expensive, and knowing we aren't going to have rehydration issues when writing to tape was worth it.
We can easily max out the two LTO6 drives in the library from this storage while health checks are running for other jobs, and all our backup jobs' bottlenecks are either network or source (depending on the site they are coming from).
The other consideration was that we knew we had a lot of data that didn't dedup well, which changed the whole pricing situation in favor of native disk.
-
- Enthusiast
- Posts: 61
- Liked: 1 time
- Joined: Feb 04, 2016 12:58 pm
- Full Name: RNT-Guy
- Contact:
Re: Recommended Backup Job Settings for EMC Data Domain
Hello. I realize this post is a few months old but it's very helpful. So thank you.
My question is about active fulls. My understanding is that if we use synthetic fulls, all future SFs will be tied to the same files, because the DD (w/ Boost, btw) will dedupe all SFs against one another at the block level. Thus, if something were to happen in the file system to a block that is shared by future SFs, they could all be lost/corrupted?
Does this hold true if we do periodic active fulls? Are those files somehow unique and thus free of this risk? If they are also at risk of the same thing (meaning AFs are not safe and independent of one another because of DD's deduplication), is there any benefit to creating AFs when using a DD w/ Boost?
Thanks
-
- Veeam Software
- Posts: 21138
- Liked: 2141 times
- Joined: Jul 11, 2011 10:22 am
- Full Name: Alexander Fogelson
- Contact:
Re: Recommended Backup Job Settings for EMC Data Domain
An active full reads all the data from the source storage, so it is not dependent on the previous backup chain. In case some of the existing blocks were corrupt, the newly written blocks will not be deduped against them, since they will be different.
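That answer can be illustrated with a toy content-addressed block store: backups are lists of block hashes, so a synthetic full just references existing physical blocks, while an active full re-reads the source, and any stored block that no longer verifies against its hash is rewritten rather than deduped against. A deliberately simplified Python sketch (a mental model only, not how DD OS actually works internally):

```python
import hashlib

class DedupeStore:
    """Toy content-addressed block store: one physical copy per unique block."""
    def __init__(self):
        self.blocks = {}  # hash -> block data

    def write_backup(self, data_blocks):
        """Store a backup as a list of block hashes (its 'recipe')."""
        recipe = []
        for block in data_blocks:
            h = hashlib.sha256(block).hexdigest()
            stored = self.blocks.get(h)
            # Only dedupe against a block that still verifies; a corrupted
            # copy no longer matches its hash and is rewritten.
            if stored is None or hashlib.sha256(stored).hexdigest() != h:
                self.blocks[h] = block
            recipe.append(h)
        return recipe

store = DedupeStore()
source = [b"block-A", b"block-B"]

full1 = store.write_backup(source)   # first full
full2 = store.write_backup(source)   # unchanged source: identical recipe
assert full1 == full2                # both point at the SAME physical blocks

# Corrupting a shared physical block affects every backup referencing it...
store.blocks[full1[0]] = b"garbage"

# ...but an active full re-reads the source: the clean data no longer matches
# the corrupted stored copy, so a clean block is written instead of deduped.
full3 = store.write_backup(source)
assert store.blocks[full3[0]] == b"block-A"
```

The key point the sketch shows is that the active full's independence comes from re-reading the source, not from its blocks being exempt from deduplication.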
-
- Enthusiast
- Posts: 61
- Liked: 1 time
- Joined: Feb 04, 2016 12:58 pm
- Full Name: RNT-Guy
- Contact:
Re: Recommended Backup Job Settings for EMC Data Domain
Thanks foggy. I forgot to reply back to confirm that.
In the meantime we have another question. It might warrant a new thread, but I figured here might be the best stop first.
We're deploying dd in our cloud host as the copy job destination for some of our customers. There are two goals:
1. As small as possible – they pay per GB backed up and budget is tight.
1a. This means we want the DD to back it up as efficiently as possible
2. Within the window (compute resources aren’t the bottleneck, bandwidth is)
So as small as possible and fast as possible. Easy, right?
That being said from our veeam experience we have a few questions:
1. In the Backup OnPrem job there’s a compression setting and a storage optimization setting.
2. In the Copy Job there’s a compression setting.
3. If we set compression to NONE on the backup job and then extreme on the copy job (while turning on Decompress on the DD repository), will that be the best for DD's own compression and deduplication algorithms?
4. Or if we turn on compression for the on-prem backup job and then also for the copy job (which will get decompressed upon arrival, like above), is that ideal? Our concern is whether, when it decompresses upon arrival in the copy job, it decompresses just to how it was when the on-prem job finished, or as if no compression was ever used in either job.
5. How does the storage optimization setting factor in? My understanding is this is some sort of chunk/block-size setting. Should this be extreme or Local target IF the top goal is the highest compression and deduplication by the DD?
-
- Veeam Software
- Posts: 21138
- Liked: 2141 times
- Joined: Jul 11, 2011 10:22 am
- Full Name: Alexander Fogelson
- Contact:
Re: Recommended Backup Job Settings for EMC Data Domain
Compression settings in backup and backup copy jobs are independent, so you can set them as required (for example, optimal for the regular job and extreme for the backup copy, to minimize traffic). Set backup copy target repository to decompress. Set storage optimization to Local 16TB+. Basically, set everything as recommended for DD in the thread above. Don't forget that you need a gateway on your side to enable point-to-point transfer of compressed data and decompression on target, and consider using WAN acceleration.
-
- Enthusiast
- Posts: 61
- Liked: 1 time
- Joined: Feb 04, 2016 12:58 pm
- Full Name: RNT-Guy
- Contact:
Re: Recommended Backup Job Settings for EMC Data Domain
thanks foggy.
so is there no advantage in terms of dedupe and compression efficiency on the DD if I send it backups that were not compressed in the initial onsite backup job?
My understanding is that the decompress setting just undoes the compression done by the copy job, not that done by the onsite backup job as well.
-
- Veeam Software
- Posts: 21138
- Liked: 2141 times
- Joined: Jul 11, 2011 10:22 am
- Full Name: Alexander Fogelson
- Contact:
Re: Recommended Backup Job Settings for EMC Data Domain
Data is not compressed twice. If compression is enabled on the source job and backup copy job has different compression setting, it will decompress data and then compress it with its own level, prior the transfer. So source job compression settings do not affect how data is stored on backup copy target.
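The behavior described here (decompress, then recompress once at the copy job's own level, never compression stacked on compression) can be sketched with zlib standing in for Veeam's compressor; this is purely illustrative:

```python
import zlib

payload = b"VM disk blocks " * 1000  # stand-in for raw backup data

# Source job compresses at its own level (e.g. "optimal", here zlib level 6).
stored_primary = zlib.compress(payload, 6)

# Backup copy job with a different level: decompress first, then recompress
# at its own level prior to transfer -- the levels do not stack.
recompressed = zlib.compress(zlib.decompress(stored_primary), 9)

# Target repository has "decompress before storing" enabled, so the DD
# receives the raw data and applies its own compression and dedupe.
landed_on_dd = zlib.decompress(recompressed)
assert landed_on_dd == payload
```

This is why the source job's compression level doesn't affect what lands on the backup copy target: the copy job always starts from the decompressed data.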
-
- Enthusiast
- Posts: 61
- Liked: 1 time
- Joined: Feb 04, 2016 12:58 pm
- Full Name: RNT-Guy
- Contact:
Re: Recommended Backup Job Settings for EMC Data Domain
Awesome. Thanks!
So, I guess to save time I should set the source job to whatever compression we want (highest, to use the least space on the local Veeam repository drives), then match it in the copy job but make sure the decompress option is checked. This way the DD can compress the data using its own algorithms and dedupe better.
Will "Local target (16TB+)" give the best use of space on the local disks? Or, if I use something else that is better locally, will it be worse when the DD's turn comes?