-
- Novice
- Posts: 8
- Liked: never
- Joined: Oct 19, 2009 2:43 pm
- Contact:
V4.0 and Data Domain
Hello
I've just upgraded to version 4 and initial impressions were great - our full backups (sent to a CIFS share on a Data Domain DD510) were much quicker than with version 3.1.1, and initially incrementals were also fast (at least the ones done immediately after a full, when there are hardly any changes). However, as time has gone on our incrementals have been getting slower and slower - each day's backup is slower than the last, to the point where jobs were down to 2 MB/s. Switching back to doing a nightly full has got us back to where we were originally, so there seems to be some problem with incrementals.
We've logged a call with support, who say 'This is a Data Domain issue'. That can't be right, can it? I'm told it has something to do with how Data Domain 'treats' the data, but if that's right, how is a full as fast as it is? And why is an incremental slower when less data is being sent to the Data Domain? I just can't see why this is the issue... I had thought support for CIFS on Data Domain was much improved in 4.0, but if I can't use it, does anyone have any idea how I can still use these very expensive boxes as backup targets? (I know we can mount them as NFS datastores on the ESX hosts, but performance with that method was always poor for me on 3.1.1.)
-
- Chief Product Officer
- Posts: 31748
- Liked: 7251 times
- Joined: Jan 01, 2006 1:01 am
- Location: Baar, Switzerland
- Contact:
Re: V4.0 and Data Domain
Hello. In version 4.0, we implemented all the feedback we received from the DataDomain compatibility testing team around the way we write data to backup files, which is why you are seeing improvements in full backups compared to previous versions. We have also added the ability to completely disable our own deduplication, as some customers requested.
However, it looks like the synthetic backup we are using (which involves backup file reconstruction during each pass) does not suit DataDomain well. Most likely, DataDomain was not designed with this use case in mind (updating a significant amount of the content of large files sitting on the DataDomain device). Apparently the DataDomain deduplication and data processing algorithms are not well prepared for this scenario, and it looks like the more of the original file is rebuilt, the slower DataDomain's data processing is for the corresponding file.
Our support is correct in saying this is a DataDomain issue, because you will not see such a decrease when backing up to regular storage.
As for ideas on how to use these devices, I would use DataDomain as a long-term backup repository. For example, have a weekly script in the Veeam Backup job copy all files to the DataDomain device. This way, you only need to keep a minimal amount of data on your "primary" fast backup storage (just the latest week). The rest of the data will sit on the DataDomain device (months/years of backups). Because the difference in the same VBK file will be quite minor week to week, the deduplication ratios should be exceptional. It is hard to imagine a better long-term backup repository than DataDomain, if you ask me.
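The weekly copy idea above could be sketched roughly like this (my own illustration in Python, not a Veeam feature; the repository paths and the date-stamped destination naming are assumptions I've made up for the example):

```python
import shutil
from datetime import date
from pathlib import Path

# Hypothetical locations: fast local repository and the DataDomain CIFS share.
SOURCE = Path(r"D:\VeeamBackups")
TARGET = Path(r"\\dd510\backups")

def copy_weekly(source: Path = SOURCE, target: Path = TARGET) -> list:
    """Copy every .vbk in the local repository to the dedupe target,
    date-stamping each copy so the weekly fulls coexist side by side."""
    stamp = date.today().isoformat()
    copied = []
    target.mkdir(parents=True, exist_ok=True)
    for vbk in sorted(source.glob("*.vbk")):
        # e.g. server.vbk -> server-2009-12-18.vbk on the DataDomain share
        dest = target / f"{vbk.stem}-{stamp}{vbk.suffix}"
        shutil.copy2(vbk, dest)  # metadata-preserving copy
        copied.append(dest)
    return copied
```

Run weekly (e.g. from Task Scheduler or a post-job step); because consecutive weekly VBK files are largely identical, the appliance's inline dedupe should store little more than the week-to-week difference.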
-
- Novice
- Posts: 8
- Liked: never
- Joined: Oct 19, 2009 2:43 pm
- Contact:
Re: V4.0 and Data Domain
OK, I've also had a longer response from support which I think means that I understand the issue now a little better. Another benefit of datadomain is off site replication (haven't been able to test Veeam 4 replicas out yet), so I guess we'll have to copy our nightly backups to DD from a NAS, as we need them on this device to get them offsite (threw our tape drive away ages ago!). Thanks for the reply.
-
- Novice
- Posts: 5
- Liked: never
- Joined: Oct 08, 2009 7:55 am
- Full Name: Morten Authen
- Contact:
Re: V4.0 and Data Domain
At VMworld 2009, we were PROMISED by Veeam sales reps and tech reps (David) that the issue with Data Domain and CIFS was fixed. Now we are sitting here with version 4.0 and the problem is still NOT fixed. Your technical support confirms this. What happened here?!
We have bought a product (Veeam Backup & Replication) that does not work as intended; you can't seriously mean that we must run backups via NFS? It's as slow as it can get. We have invested a lot of time and money in hardware and software that can't be used to its full extent, since Veeam does not support CIFS with Data Domain...
* Why is it still not supported?
* What is Veeam's official statement on this issue?
* Will it be fixed? When?
I was told to post this here instead of starting a new thread; done, so please answer my questions...
-
- Enthusiast
- Posts: 31
- Liked: never
- Joined: Feb 15, 2009 8:31 pm
- Contact:
Re: V4.0 and Data Domain
Hi, we also use a Data Domain DD530 device and are seeing speed issues.
I agree that it's a good idea to run a weekly script to copy the VBK files to the CIFS share from something like Openfiler.
It would be ideal if, rather like vRanger (which works well with DD), you could set the VBK file to take a day/week/month/year naming format; when the data is processed inline on the DD device, it will then dedupe and create what look like multiple full backups, each with its own date stamp.
The way Veeam works at the moment, the VBK file always takes the same name and only the increments are time-stamped, so you would need to point multiple jobs at different directories so as not to overwrite the <server.vbk> file.
Could this be something that gets looked at?
-
- Chief Product Officer
- Posts: 31748
- Liked: 7251 times
- Joined: Jan 01, 2006 1:01 am
- Location: Baar, Switzerland
- Contact:
Re: V4.0 and Data Domain
Morten, basically your questions are answered above. In version 4.0, we implemented what DataDomain engineering requested of us. The remaining issue sits in DataDomain's data processing logic, which does not support synthetic backups well (details above). You will not see this performance issue when backing up to regular storage.
Floss, yes - we will have traditional backup option (full + inc/diff) in version 4.5
-
- Enthusiast
- Posts: 31
- Liked: never
- Joined: Feb 15, 2009 8:31 pm
- Contact:
Re: V4.0 and Data Domain
Hi, can someone explain how the call function works for the post-activity command? I am trying to get a .bat or .cmd file to run to copy the VBK files to a DD unit, but it is not running. If I run the .bat manually it works fine, and the logs seem to indicate Veeam is trying to run it.
-
- Novice
- Posts: 3
- Liked: never
- Joined: Jul 03, 2009 2:16 pm
- Full Name: Pavel Shterlyaev
- Contact:
Re: V4.0 and Data Domain
floss:
You can put "C:\example.bat" in the post-job window inside the "Advanced" properties of the job.
Probably you just forgot the quotation marks "".
Also, it could be a permissions problem. Try setting "FULL" permissions for "Everyone" on both folders, target and source. If that works, then the user the backup service runs under doesn't have enough privileges to copy the files.
-
- Enthusiast
- Posts: 31
- Liked: never
- Joined: Feb 15, 2009 8:31 pm
- Contact:
Re: V4.0 and Data Domain
Hi, yes, I have put the .bat path in quotation marks, and I have also given it full permissions. I also ran Process Explorer and the script does not even start, which makes me think the issue is elsewhere; there are also no event log entries pertaining to permissions.
Has anyone running Veeam 4.0 run a post script successfully?
-
- Expert
- Posts: 106
- Liked: 11 times
- Joined: Jun 20, 2009 12:47 pm
- Contact:
Re: V4.0 and Data Domain
My Copy2ExternalStorage.bat works every night with Veeam 4.
-
- VP, Product Management
- Posts: 27343
- Liked: 2785 times
- Joined: Mar 30, 2009 9:13 am
- Full Name: Vitaliy Safarov
- Contact:
Re: V4.0 and Data Domain
Floss,
I've just run a backup job with a post-backup script in my lab, and the script worked as it should. Can you create a test script that looks like this:
Then create a Temp folder if needed (don't forget to grant Everyone FULL permissions on the Temp folder, just to check that the script works), choose this script in the post-backup job box, and re-run the job. It should work.
Code: Select all
Echo I_WANT_THIS_SCRIPT_TO_WORK > C:\Temp\output.txt
-
- Novice
- Posts: 5
- Liked: never
- Joined: Dec 14, 2009 9:36 pm
- Full Name: Tim
- Contact:
Re: V4.0 and Data Domain
Just wanted to add my own displeasure to this issue.
We are a large Data Domain customer, utilize replication extensively, and were planning on becoming a larger Veeam customer. Our test cluster deployment of Veeam has now halted, since we can only get pathetic backup rates to our large DD690 units (around 5 MB/s). Copying everything "locally" and then copying to the Data Domain does not really work for us, as it just adds additional time and space requirements - we need to be able to take advantage of CBT with vSphere, but also utilize the storage that was purchased specifically for backups and offsite replication.
Anything new with this since November? It doesn't appear that 4.1 helped at all. Any thoughts on when/how this might be fixed?
This is a critical issue for us, and if there is not at least a plan to resolve it in a timely fashion, we will likely have to move away from the product to something else.
-
- Chief Product Officer
- Posts: 31748
- Liked: 7251 times
- Joined: Jan 01, 2006 1:01 am
- Location: Baar, Switzerland
- Contact:
Re: V4.0 and Data Domain
Have you tried connecting the DataDomain to Linux/ESX so that writes go via NFS? Reportedly, this improves performance significantly.
To give you some history on this issue: when the DataDomain compatibility testing team tested our solutions together (a year ago), they identified some issues with the DataDomain CIFS implementation which only our product, with its synthetic backup over CIFS, was able to uncover. The only thing DataDomain engineers suggested to us was to write to storage with larger blocks - and this was fully implemented in our 4.0 release; however, as you can see, the issue is still there.
Incidentally, just a few weeks ago I heard from a common partner that DataDomain is about to release new beta firmware for testing, and that it resolves some performance issues, so I am currently waiting for feedback on that.
Anyhow, it is important to realize that the issue is NOT on our product's side. Our product does not have performance issues writing to *any* other storage device, in either CIFS or NFS integration modes, or directly. So obviously the ideal and "proper" solution would be for DataDomain to fix their bug.
From our side, we are not in a position to commit to any resolution dates for this issue, because the bug is in DataDomain firmware. Let's imagine that we somehow fixed the issue; there is always a possibility that their new firmware breaks something again - yes, unlikely, but you get my idea. I just don't want to potentially set false expectations about something reliant on code we do not own.
What I can do, however, is explain what we are working on in an attempt to address this from our side. As I've already mentioned above, our next release will have an option for traditional backup (full + incrementals), in addition to regular synthetic. Assuming the performance issue is caused by synthetic backup over CIFS (where the large backup file is rebuilt during each backup), we are hoping that DataDomain will not have a similar issue with "traditional" backup, where files are written once and never modified afterwards.
Hope this information helps with your planning. I can certainly include you in the early beta for this functionality, as right now we do not have anyone with DataDomain on the closed beta program.
-
- Novice
- Posts: 5
- Liked: never
- Joined: Dec 14, 2009 9:36 pm
- Full Name: Tim
- Contact:
Re: V4.0 and Data Domain
I understand your position; however, I don't understand the conclusion or your suggestions...
If I run Veeam and use a CIFS share directly to the Data Domain, the problem occurs
If I run Veeam and use a NFS share (MS client for NFS to map a drive) to the Data Domain, the problem occurs
If I run Veeam and have it go to an ESX host which then uses NFS to the Data Domain, the problem does NOT occur, but it uses up valuable resources on the host.
So why does it work fine writing THROUGH ESX using NFS, but not directly using NFS? What is being handled differently?
-
- Chief Product Officer
- Posts: 31748
- Liked: 7251 times
- Joined: Jan 01, 2006 1:01 am
- Location: Baar, Switzerland
- Contact:
Re: V4.0 and Data Domain
My suggestion is to use ESX host with NFS to the DataDomain as I know this is what other customers found to be working well, and this method was also recommended by DataDomain compatibility testing team. The ESX host will not be affected if you choose to use no deduplication and Low compression (this compression mode is specifically optimized to run in the ESX service console and uses very little CPU). You can change these settings in the Advanced job settings.
Alternatively, I suppose you could designate a Linux host and mount DataDomain to that host instead of ESX.
I am not sure why Windows NFS client does not help, because this was not part of DataDomain compatibility testing program. DataDomain only tested CIFS and NFS/ESX methods, and recommended to use NFS/ESX until CIFS issues are resolved.
-
- Novice
- Posts: 5
- Liked: never
- Joined: Dec 14, 2009 9:36 pm
- Full Name: Tim
- Contact:
Re: V4.0 and Data Domain
A few more comments and observations (and GOOD NEWS!):
1. Veeam on a 64-bit W2K8 box writing directly to our Data Domain DD690 (CIFS or NFS) was getting about 6 MB/s max: 16 GB file, 45 minutes, initial full.
2. Veeam on the same 64-bit W2K8 box could back up the same test file to local storage at 62 MB/s: same 16 GB file, 4 minutes 22 seconds, initial full.
3. One of our engineers advised me this morning that he had a similar issue with a different product and the Data Domain units when they were upgraded to the 4.7.x.x software. This was resolved by staying with 4.6.x.x. Our DD690 units are currently on 4.7.1.3.
4. Data Domain has an "integration guide", published on 12/18/2009, for Veeam 4.0 and the Data Domains: https://my.datadomain.com/custom-view/i ... 10&index=0 As a Data Domain partner, I would expect Veeam to be aware of this and of the information (fix) below.
5. One piece of information in the above guide states the following:
Ensure that inline deduplication is disabled and that compression is set to None for each configured job.
To get better CIFS backup and restore performance with DD OS versions earlier than 4.7.3, run the following commands on the Data Domain system:
cifs disable
cifs option set "dd rw balance enabled" no
cifs enable
Beginning with DD OS 4.7.3, this option is enabled by default.
Note: The Veeam synthetic full backup method may result in a smaller increase in the Data Domain system's compression factor over time compared to other VMware backup applications.
6. After making this suggested change on our DD690, the same test file backed up with Veeam 4.1 directly to the unit did 31 MB/s, 8 minutes 26 seconds!!!! YEAH! A big improvement, but still about half the speed of going locally - and dedupe and compression were accidentally left at the defaults... (anyone know how to change the defaults for all new jobs?)
7. Second test, direct to the DD690 with dedupe and compression disabled in the Veeam job: not quite as good. Will have to explore why, but still much better than 6 MB/s - it did 24 MB/s and took 11:12. Not as good as local storage, or even as going to the DD690 through an ESX host (test below).
8. Test of the same file backed up to the Data Domain through an ESX host: 28 MB/s, 9:55.
9. The Data Domain "application compatibility list DD OS 4.7", released on 12/18/2009, states that the minimum DD OS supporting Veeam 4.0 is 4.7.3.
10. We will test with other (newer) versions of the Data Domain software and post results when available.
HOPE THIS HELPS EVERYONE (INCLUDING VEEAM) WITH THIS ISSUE!!
-
- Chief Product Officer
- Posts: 31748
- Liked: 7251 times
- Joined: Jan 01, 2006 1:01 am
- Location: Baar, Switzerland
- Contact:
Re: V4.0 and Data Domain
Thank you very much, Tim, for this useful information; we were not made aware of this KB article. I too heard that the new firmware addresses the performance issues, but I was under the impression that it was still in beta.
-
- Novice
- Posts: 5
- Liked: never
- Joined: Dec 14, 2009 9:36 pm
- Full Name: Tim
- Contact:
Re: V4.0 and Data Domain
4.7.3 is currently in "limited release", which is interesting, since the compatibility list published on 12/18 references it.
Happy to help. Just want to get this solved for benefit of us all!
-
- Chief Product Officer
- Posts: 31748
- Liked: 7251 times
- Joined: Jan 01, 2006 1:01 am
- Location: Baar, Switzerland
- Contact:
Re: V4.0 and Data Domain
Tim, I am pretty confident that the second pass (with compression and dedupe disabled in Veeam) is slower solely because much more data needs to be written to the target storage. Many customers have reported this effect even with regular storage, which is generally much faster than DataDomain, as it does not have to do any extra data processing. Reportedly, the target storage speed is almost always the primary bottleneck in direct SAN backups (because of how fast Veeam Backup is able to retrieve and process the data), so minimizing the amount of data that needs to be written to the target storage always helps overall backup performance.
-
- Novice
- Posts: 9
- Liked: 1 time
- Joined: Apr 19, 2010 10:25 pm
- Full Name: Dan S
- Contact:
Re: V4.0 and Data Domain
All,
Sorry to dig up an old thread, but I have a question on best practices. I am running Veeam 4.1 (w/ ESX 4) from within a VM, and backing up to a DD510. I am running 4.7.1 firmware on the DD510. It is my understanding that the best performance in this configuration is to backup in "Virtual Appliance" mode via NFS. Is that correct?
My incrementals with changed block tracking are taking much longer than I would think they should be. For example, I am getting 9MB/sec on a 20Gig VM (40 minutes) on an incremental, but I can reach about 43MB/sec on a full. Are incrementals just slower when hitting a Data Domain box?
I just put in the "fix" listed above, but since that is for CIFS, I am thinking it should not really have any effect on my NFS backup performance. At this point, I am wondering if I should just do fulls every day. Do incrementals take longer because the old blocks have to be read in from the old .vbk file and written to the .vrb file, and then the new blocks written into the .vbk?
--Dan
-
- Novice
- Posts: 9
- Liked: 1 time
- Joined: Apr 19, 2010 10:25 pm
- Full Name: Dan S
- Contact:
Re: V4.0 and Data Domain
I forgot to mention, of course de-dupe and compression are off in Veeam.
As it stands now, the Veeam incremental backups take at least 20-30% longer than _FULL_ backups took with vRanger.
--Dan
-
- Chief Product Officer
- Posts: 31748
- Liked: 7251 times
- Joined: Jan 01, 2006 1:01 am
- Location: Baar, Switzerland
- Contact:
Re: V4.0 and Data Domain
I would recommend upgrading to 4.7.3 or later first, before doing anything else. My understanding is that it features some fixes that better accomodate synthetic backup workload.
The incrementals take longer because they update synthetic full backup file, and DD engine has (or had, before 4.7.3) issues with such workload. I can also confirm that this issue is (was) specific to DD engine - I have tested a competitive dedupe NAS solution couple of weeks ago, and there were no similar (bad) issues with the incremental backup speed (50-60MB/s fulls, 200-300MB/s incremental to device). I have another competitive NAS testing coming in May, so it will be interesting to see how it compares.
In any case, we are introducing a traditional backup option (full + incrementals) in our next release, so that should help address this issue from our side.
-
- Novice
- Posts: 9
- Liked: 1 time
- Joined: Apr 19, 2010 10:25 pm
- Full Name: Dan S
- Contact:
Re: V4.0 and Data Domain
Gostev,
Thanks. I just updated to 4.7.4. I should know this weekend if that helps...
--Dan
-
- Novice
- Posts: 9
- Liked: 1 time
- Joined: Apr 19, 2010 10:25 pm
- Full Name: Dan S
- Contact:
Re: V4.0 and Data Domain
I just ran a test backup on a 20GB VM after upgrading to 4.7.4.
Initial full - 8 mins 35 sec.
Subsequent incremental (with ~700 MB worth of changed blocks, judging by the VRB file) - 7 mins 30 sec.
This was in Virtual Appliance mode with NFS. It doesn't look like there were any changes of significance in this new release.
--Dan
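For reference, the throughput these timings imply can be checked with quick arithmetic (sizes are approximate, taken from the numbers above):

```python
# Back-of-the-envelope check of the timings above (sizes approximate).
full_mb, full_s = 20 * 1024, 8 * 60 + 35   # 20 GB full in 8 min 35 sec
incr_mb, incr_s = 700, 7 * 60 + 30         # ~700 MB of changes in 7 min 30 sec

print(f"full:        {full_mb / full_s:.1f} MB/s")   # roughly 40 MB/s
print(f"incremental: {incr_mb / incr_s:.1f} MB/s of changed data")
```

The incremental moves about 1/30th of the data yet takes almost as long as the full, which is consistent with the synthetic-full rebuild, not the data transfer, dominating the run time.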
-
- Novice
- Posts: 9
- Liked: 1 time
- Joined: Apr 19, 2010 10:25 pm
- Full Name: Dan S
- Contact:
Re: V4.0 and Data Domain
Also, I only have 2 vCPUs in the VM running Veeam, but since I am not using compression and the CPUs are not maxed out during backup, I assume going to 4 vCPUs is pointless, right?
--Dan
-
- Chief Product Officer
- Posts: 31748
- Liked: 7251 times
- Joined: Jan 01, 2006 1:01 am
- Location: Baar, Switzerland
- Contact:
Re: V4.0 and Data Domain
Correct, you don't need much CPU if compression is disabled. Would you like to try CIFS integration mode instead, to see how it compares?
-
- Novice
- Posts: 9
- Liked: 1 time
- Joined: Apr 19, 2010 10:25 pm
- Full Name: Dan S
- Contact:
Re: V4.0 and Data Domain
Gostev,
Thanks. I did try CIFS, and it is a bit slower (it was a LOT slower before I ran the commands listed in this thread).
Anyhow, over the weekend, this is what I found:
A Full backup of all my VMs takes 4 hrs. 55 min.
An incremental backup of those same VMs takes almost 8 hours. Yes, that's right: the incremental takes longer than a full.
It sounds like the best practice for backing up to a Data Domain device is to just always run full backups for now. I look forward to the "incremental" backup feature mentioned earlier (in lieu of the synthetic full, as is now the case).
Also, it might be a good idea for Veeam to try and get DataDomain to address this issue from their side. I'm sure they are more likely to listen to you than listen to me.
Thanks,
--Dan