-
- Chief Product Officer
- Posts: 31806
- Liked: 7300 times
- Joined: Jan 01, 2006 1:01 am
- Location: Baar, Switzerland
- Contact:
Re: v5 Backup Speed
Sure, this is expected (and also covered in the FAQ) - unlike NBD, the legacy network mode relies on our proprietary service console agent, so it is the better choice for ESX 3.5 performance-wise. However, our agent is not aware of changed block tracking, and it does not support ESXi... so with ESX(i) 4.x, NBD is still the better choice. Even though a full backup is significantly slower with NBD, incrementals are 10x faster thanks to changed block tracking (and with Veeam, a full backup is performed only once anyway).
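For anyone curious what "leveraging changed block tracking" means against the vSphere API, here is a minimal Python (pyVmomi-style) sketch - illustrative only, not our actual engine code; the vm managed object, the snapshot, and the saved changeId bookkeeping are assumed to exist:
Code:
# Enumerate disk areas changed since the last backup using VMware CBT.
# 'vm' is a pyVmomi VirtualMachine managed object; passing changeId='*'
# would instead return all allocated areas (i.e. what a full backup reads).
def changed_extents(vm, snapshot, disk_key, capacity_bytes, last_change_id):
    start = 0
    while start < capacity_bytes:
        info = vm.QueryChangedDiskAreas(snapshot=snapshot,
                                        deviceKey=disk_key,
                                        startOffset=start,
                                        changeId=last_change_id)
        for area in info.changedArea:
            yield (area.start, area.length)  # only these bytes need reading
        if info.length == 0:
            break
        start = info.startOffset + info.length
An incremental run then transfers just these extents over NBD instead of the whole disk, which is where the ~10x speedup comes from.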
-
- VP, Product Management
- Posts: 6035
- Liked: 2860 times
- Joined: Jun 05, 2009 12:57 pm
- Full Name: Tom Sightler
- Contact:
Re: v5 Backup Speed
But I think what he may be referring to is the same thing I'm referring to above. Users are saying that they use "Network" mode in V4 and then "Network" mode in V5, but I believe "Network" mode in V4 still uses an agent, while "Network" mode in V5 is NBD-based. That would explain why users of "Network" mode are seeing significantly slower full backup performance with V5.
Am I not remembering correctly? Did V4 also use NBD mode when a user selected "Network" backup?
-
- Influencer
- Posts: 13
- Liked: never
- Joined: Oct 21, 2010 1:47 pm
- Full Name: Kent Herr
- Contact:
Re: v5 Backup Speed
I have opened a case on this issue, because I believe something is wrong.
Kent
-
- Novice
- Posts: 9
- Liked: never
- Joined: Feb 20, 2009 4:37 pm
- Contact:
Re: v5 Backup Speed
Thank you for your help, and here are my results. One or two of you mentioned that my test platforms were not identical, and you were right.
I'm backing up to a physical server and using NFS volumes on my ESX hosts. These tests are of course full backups, not incrementals. I have also tried virtual appliance mode, and the performance is the same.
So I have installed v5 and v4.1.2 on each of my platforms, and here are the results:
1) Test platform: Dell 1950 - dual CPU, dual-core 2.6 GHz - Windows 2003 R2 32-bit
- Veeam 4.1.2 - Mode: Network backup - average speed over 22 VMs = 12 MB/s
- Veeam 5 - Mode: Network backup - average speed over 22 VMs = 5 MB/s
As soon as v5 is installed on any of my platforms, the backup speed is divided by 2.
-
- Chief Product Officer
- Posts: 31806
- Liked: 7300 times
- Joined: Jan 01, 2006 1:01 am
- Location: Baar, Switzerland
- Contact:
Re: v5 Backup Speed
I would say this is much slower performance than expected in both cases. Even your best-case scenario shows pretty much unacceptable performance. For example, here is another v5 speed report, also using network mode, with several times faster results. Have you checked the CPU load on the Veeam Backup server with Task Manager while the full backup is running? What does it show?
When you say "Network backup" for v4, do you mean the "vStorage API > Network" mode, or the legacy network mode that uses service console agents (the 3rd radio button on the processing mode selection step)? Also, what ESX version exactly do you run?
BTW, you never posted your support case number, so I cannot check on how the research is going... I hope you have already had a chance to send our support the logs for both v4 and v5 processing the same VM?
-
- Influencer
- Posts: 13
- Liked: never
- Joined: Oct 21, 2010 1:47 pm
- Full Name: Kent Herr
- Contact:
Re: v5 Backup Speed
My support case number is 53585. I just had to cancel a job that had been running since 10/23, having started at 1:00 am. Stats to date are below.
Status: None              Start time: 10/23/2010 1:00:07 AM
Total VMs: 43             End time: -
Processed VMs: 39         Duration: -
Successful VMs: 30        Total size: 2.64 TB
Failed VMs: 0             Processed size: 1.27 TB
VMs in progress: 1        Processing rate: 5 MB/s
Before, on 4.1.2, although with less data, the speed was almost 3 times as fast.
Session Details
Status: Success           Start time: 10/13/2010 8:57:13 AM
Total VMs: 34             End time: 10/14/2010 8:52:09 AM
Processed VMs: 34         Duration: 23:54:56
Successful VMs: 34        Total size: 1.18 TB
Failed VMs: 0             Processed size: 1.18 TB
VMs in progress: 0        Processing rate: 14 MB/s
Because my vRanger installation is not working properly and I must have backups, I am going to go back to version 4 until these issues can be resolved. I still think the upside for this software is excellent, and that is why I think we are going to go ahead and buy, but I might wait.
-
- Chief Product Officer
- Posts: 31806
- Liked: 7300 times
- Joined: Jan 01, 2006 1:01 am
- Location: Baar, Switzerland
- Contact:
Re: v5 Backup Speed
Hi, your support case number should have 6 digits - can you please double-check? I just came back from our weekly status update meeting, and the devs are very interested in investigating the issue you are having.
-
- Influencer
- Posts: 13
- Liked: never
- Joined: Oct 21, 2010 1:47 pm
- Full Name: Kent Herr
- Contact:
Re: v5 Backup Speed
Oops. I failed cut and paste school. ID#535851
-
- Novice
- Posts: 9
- Liked: never
- Joined: Feb 20, 2009 4:37 pm
- Contact:
Re: v5 Backup Speed
Seems I'm not the only one having this performance issue.
And I'm definitely using "vStorage API > Network". I've also tried the legacy service console mode, and the performance is exactly the same on version 4.1.2:
12 MB/s on NBD and 12 MB/s on service console.
-
- Novice
- Posts: 9
- Liked: never
- Joined: Feb 20, 2009 4:37 pm
- Contact:
Re: v5 Backup Speed
I wanted to post a screenshot of esxtop, but unfortunately we cannot do this on this forum.
vmnic3 is the virtual machine port group + service console.
vmnic2 is the VMkernel for the NFS network (isolated on a physical switch).
esxtop reports:
vmnic3 => PKTTX/s = 14549.12 | MbTX/s = 167.48 | PKTRX/s = 2715.88 | MbRX/s = 1.29
vmnic2 => PKTTX/s = 2800.52 | MbTX/s = 1.76 | PKTRX/s = 14864.69 | MbRX/s = 169.36
I also have a line reading "USED-BY: vmk0 TEAM-PNIC: vmnic2" - I do not know what to make of that one!
vmnic3 and vmnic2 are 1 Gb/s full duplex.
-
- Influencer
- Posts: 13
- Liked: never
- Joined: Oct 21, 2010 1:47 pm
- Full Name: Kent Herr
- Contact:
Re: v5 Backup Speed
Gostev wrote: Hi, your support case number should have 6 digits - can you please double-check? I just came back from our weekly status update meeting, and the devs are very interested in investigating the issue you are having.
No one on your end seems "very interested" in this case at all. I have had zero contact on this case in 24 hours.
Kent
-
- Chief Product Officer
- Posts: 31806
- Liked: 7300 times
- Joined: Jan 01, 2006 1:01 am
- Location: Baar, Switzerland
- Contact:
Re: v5 Backup Speed
Hi Kent, the problem is that we put the new major release out just a few days ago, and as expected, our support is overloaded at the moment handling upgrades for thousands of existing customers. So, all support cases are processed according to severity level. I am sure you will be contacted soon. Thank you for your patience!
-
- Chief Product Officer
- Posts: 31806
- Liked: 7300 times
- Joined: Jan 01, 2006 1:01 am
- Location: Baar, Switzerland
- Contact:
Re: v5 Backup Speed
Actually, it appears that I already had the log investigation results sitting in my inbox when I wrote the above... I am totally swamped with email and only just got to it now.
Anyway, according to the log files, the developers have identified the following differences between your v4 and v5 testing. I am sure you will get more information from support, but since other people subscribed to this thread have asked for updates on our findings, I am also posting this here:
1. Different backup targets are used (and the backup target is often the top reason for performance issues).
2. Different backup modes are used (reversed incremental in v4, regular incremental in v5).
3. The data write speed to the v5 target share fluctuates significantly over time (usually a sign that the share is already busy with something else).
4. The data read speed from the v5 target share looks to be extremely slow, so the backup storage opening operation takes an extremely long time, which skews the overall job performance counter. And because the v5 incremental backup mode opens more backup storage files than v4 reversed incremental, these times really add up... essentially, out of 50 min of total job run time, 36 min (!) were spent just on opening the various backup storage files. (A quick way to check read speed on a target share yourself is sketched right below.)
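For reference, here is a rough Python sketch of how you could measure small-block read speed on a target share yourself - the UNC path, block size, and read count are placeholders, and this is an ad-hoc check, not a Veeam tool:
Code:
# Time scattered small-block reads against a file on the backup target share.
import os
import time

TARGET = r"\\backuptarget\share\test.bin"   # placeholder: any large file on the share
BLOCK = 64 * 1024                           # small reads, similar to metadata access
COUNT = 256

def timed_random_reads(path, block, count):
    size = os.path.getsize(path)
    start = time.perf_counter()
    with open(path, "rb") as f:
        for i in range(count):
            f.seek((i * 7919 * block) % max(size - block, 1))  # scattered offsets
            f.read(block)
    return time.perf_counter() - start

elapsed = timed_random_reads(TARGET, BLOCK, COUNT)
print("%.1f MB/s effective random-read rate" % (COUNT * BLOCK / elapsed / 2**20))
A healthy disk target typically shows tens of MB/s here; numbers in the low single digits would be consistent with the slow opens described in (4).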
Based on (4), you might think the issue is that the v5 incremental backup mode design makes it so much slower... but this is definitely not expected, and it is not in line with what other customers are reporting after switching to the incremental backup mode. In fact, I just got a report today from one customer who sees a 3x speed improvement compared to the v4 reversed incremental backup mode. Here is the quote and the actual numbers:
OK, so I've sent you a few bugs since the release, and I'm sure you're working on many others, and I've probably got a few more to send after I complete some more testing, but I did want to give you guys some props for this release.
Now that I've had a few days running the full production backups with Veeam 5, all I can say is "It's great!". The move to forward incrementals has been a huge performance boost for our nightly backups, especially for the big servers with high change rates like our Exchange server. With 4.1, our Exchange server was taking 2.5-4 hours every night all by itself, with speeds around 30 MB/s. With the switch to forward incrementals, our Exchange backups now finish in about 1 hour, with average performance above 110 MB/s. The entire backup job, which used to start at 6 PM and run until after midnight, now finishes before 10 PM every night, in some cases before 9 PM. Not only that, but with the smaller block sizes, nightly backups are now 75-100 GB instead of 150-175 GB.
I understand that these results were somewhat expected based on the betas, but it's nice to see the actual results in our production backups of almost 8 TB of VMs.
Thanks to you and your team for all of your hard work - now to start sending you all my suggestions for the next release!
-
- Influencer
- Posts: 13
- Liked: never
- Joined: Oct 21, 2010 1:47 pm
- Full Name: Kent Herr
- Contact:
Re: v5 Backup Speed
Gostev,
Actually, I have received NONE of this from support. It is pretty bad when I receive the results of my case in a forum, but I digress.
1. The backup target is identical, just a different subdirectory on the same DDR. You may see an IP address rather than a name in the UNC path.
2. That is correct. I was unaware that the default backup mode in v5 is actually different from v4. When I run in reversed incremental mode it is better, but still not the same as version 4. I am doing more testing on this over the next several days.
3. Fluctuations are normal, no issue here.
4. No idea why read speeds are slow. This would explain why instant restore is unusable for me. I have several other backup applications, TSM and Commvault, that are not afflicted with this issue. TSM can literally open and close, write and read, in tenths of a second. Apparently, Veeam needs to get hold of a Data Domain DDR and do some testing.
I have a case open with Data Domain as well. An excerpt from the case notes that I find interesting is below.
Veeam software creates backup files a bit differently than most other backup software vendors. It sends small blocks of data to the target, as opposed to the large blocks most vendors use.
Here is what we found when working directly with Veeam support:
This is an email that we received when we worked with them on performance issues:
Data Domain devices have a known issue when reading/writing small blocks of data from storage over a CIFS share.
1. Backup index rebuilds take a lot of time. This operation reads and writes small chunks of data describing the content of the backup.
2. Performance degradation when writing actual backup data to the storage. The actual data write speed remains nearly the same, but due to the way we calculate backup speed for the user, it looks like a degradation.
Kent
-
- Chief Product Officer
- Posts: 31806
- Liked: 7300 times
- Joined: Jan 01, 2006 1:01 am
- Location: Baar, Switzerland
- Contact:
Re: v5 Backup Speed
Correct - we worked with them on joint testing almost 2 years ago (when we had v3, and even before they were acquired by EMC). Based on the joint testing results, we rearchitected our backup storage design in v4 specifically to enable writing data in large blocks. I am surprised that DataDomain support is still using information from almost 2 years ago that is specific to the v3 release, now that we have v5.
Anyway, it is now clear that your performance problems come from using a very special type of backup storage, one that clearly has some pretty bad performance issues reading data back, which in turn affects Veeam Backup performance. Given that all the new vPower-based functionality requires good read performance from the backup storage, I would recommend that you back up to disk instead, using the DDR only as a long-term repository by copying backup files there.
-
- VP, Product Management
- Posts: 6035
- Liked: 2860 times
- Joined: Jun 05, 2009 12:57 pm
- Full Name: Tom Sightler
- Contact:
Re: v5 Backup Speed
itherrkr wrote: Veeam software creates backup files a bit differently than most other backup software vendors. It sends small blocks of data to the target, as opposed to the large blocks most vendors use.
No doubt Veeam creates files a lot differently than most vendors. Veeam does not just create a sequential, compressed dump of the VMDK files. Veeam's file format is effectively a custom database designed to store compressed blocks and their hashes for reasonably quick access. The hashes allow for dedupe (blocks with matching hashes are the same), and there's some added overhead to provide additional transactional safety so that your VBK file is generally recoverable after a crash. That means Veeam files have a storage I/O pattern more like a busy database than a traditional backup file dump.
I guess part of your issue is that you're attempting to use a product that does compression and dedupe on top of a product that does compression and dedupe. Veeam assumes that it is the "dedupe" engine and expects the storage to provide enough IOPS to give decent performance for all of its features. A dedupe appliance is built for dedupe/archival, not active I/O performance.
Have you tried turning off compression and in-line dedupe and just letting your DataDomain handle that part? This still won't quite give you a 100% sequential dump of your VMDKs, but it will be much closer. I really don't know if it would be that much better, but it might be.
The best option is probably to just back up to traditional storage and then copy/move the files to the dedupe appliance. That way the access will be sequential.
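To make the "custom database of compressed blocks and their hashes" idea concrete, here's a toy sketch in Python - purely illustrative; the block size, hash choice, and layout are my assumptions, not the actual VBK format:
Code:
# Toy hash-indexed block store: identical blocks are stored once (dedupe),
# and restores look blocks up by hash - random access, like a database.
import hashlib, zlib

class BlockStore:
    def __init__(self):
        self.index = {}          # block hash -> (offset, length) in the blob
        self.data = bytearray()  # stands in for the backup file on disk

    def put(self, block):
        digest = hashlib.sha1(block).hexdigest()
        if digest not in self.index:          # dedupe: store each block once
            compressed = zlib.compress(block)
            self.index[digest] = (len(self.data), len(compressed))
            self.data += compressed
        return digest

    def get(self, digest):
        offset, length = self.index[digest]   # seek-and-read, not sequential
        return zlib.decompress(bytes(self.data[offset:offset + length]))

store = BlockStore()
a = store.put(b"A" * 1024)
b = store.put(b"A" * 1024)                    # duplicate block, deduped away
assert a == b and len(store.index) == 1
This seek-and-read access pattern is exactly what favors plain disk over an archival dedupe appliance.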
-
- Chief Product Officer
- Posts: 31806
- Liked: 7300 times
- Joined: Jan 01, 2006 1:01 am
- Location: Baar, Switzerland
- Contact:
Re: v5 Backup Speed
tsightler wrote: Have you tried turning off compression and in-line dedupe and just letting your DataDomain handle that part? This still won't quite give you a 100% sequential dump of your VMDKs, but it will be much closer. I really don't know if it would be that much better, but it might be.
Disabling dedupe does not really change our backup storage or how we interact with it, so I do not believe this will help.
tsightler wrote: The best option is probably to just back up to traditional storage and then copy/move the files to the dedupe appliance. That way the access will be sequential.
Yes, exactly what I suggested. At least, this is how I would do it personally.
-
- VP, Product Management
- Posts: 6035
- Liked: 2860 times
- Joined: Jun 05, 2009 12:57 pm
- Full Name: Tom Sightler
- Contact:
Re: v5 Backup Speed
Gostev wrote: Disabling dedupe does not really change our backup storage or how we interact with it, so I do not believe this will help.
I agree this won't change much, but I'm pretty sure I noticed significantly less "randomness" when I disabled it just playing around, although that was a while ago. I assumed that marking a block as a duplicate produces a higher "random access" pattern than when every block is new, but I didn't analyze it or anything - I just watched the I/O pattern. I think I even had a scatter plot at one time, although maybe that was version 3.
-
- Influencer
- Posts: 13
- Liked: never
- Joined: Oct 21, 2010 1:47 pm
- Full Name: Kent Herr
- Contact:
Re: v5 Backup Speed
tsightler wrote: No doubt Veeam creates files a lot differently than most vendors. Veeam does not just create a sequential, compressed dump of the VMDK files.
I have never let Veeam do the dedupe. Data Domain always recommends letting it do the dedupe. Sending already compressed data to a DDR is a killer on dedupe rates.
-
- Chief Product Officer
- Posts: 31806
- Liked: 7300 times
- Joined: Jan 01, 2006 1:01 am
- Location: Baar, Switzerland
- Contact:
Re: v5 Backup Speed
Well, this comes down to whom you choose to listen to. DataDomain only cares about dedupe rates and pays no attention to anything else - so of course that is what they will recommend. We, as a backup vendor, mostly care about backup performance and the shortest backup windows, so we recommend leaving our dedupe enabled.
In fact, we did pass DataDomain's requirements test for how well the backup files we produce still dedupe (v3 did not even have an option to disable dedupe, and that is when the testing took place). This is because our dedupe is done at a fairly large block level, so it does not really hurt the appliance's more advanced dedupe that much. At the same time, because we are now writing 5x to 10x less data to a very slow target, the backup window shrinks significantly. And I am not talking specifically about DataDomain here - right now we are testing v5 with a couple of other major vendors as well, and we actually have agreement there (from the dedupe vendor's side) that keeping our dedupe enabled provides more benefit when using these devices as backup targets.
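To put rough, illustrative numbers on that (my round figures, not from any specific test): pushing 1 TB of source data to a target that sustains 20 MB/s takes about 14.5 hours, while writing 5x less data (roughly 200 GB) to the same target takes under 3 hours - that is the backup window effect described above.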
-
- Influencer
- Posts: 13
- Liked: never
- Joined: Oct 21, 2010 1:47 pm
- Full Name: Kent Herr
- Contact:
Re: v5 Backup Speed
Well, I believe a test with Veeam dedupe is in order. Some very good points.
-
- Novice
- Posts: 7
- Liked: never
- Joined: Oct 21, 2010 4:13 pm
- Full Name: Kolbjørn Jensen
- Contact:
Re: v5 Backup Speed
Gostev wrote: Disabling dedupe does not really change our backup storage or how we interact with it, so I do not believe this will help.
How does Veeam write backup data when dedupe and compression are turned off? How far off is it from a sequential dump of the original VMDK file?
-
- Chief Product Officer
- Posts: 31806
- Liked: 7300 times
- Joined: Jan 01, 2006 1:01 am
- Location: Baar, Switzerland
- Contact:
Re: v5 Backup Speed
For a full backup, it is no different from when dedupe and compression are turned on - just more data and blocks to write, that is it. It is a pretty sequential dump in both cases, with occasional metadata dumps into a designated area of the file.
Not sure if you have followed the thread, but the issue here is not really with how the data is written by Veeam, but rather with the random read I/O performance of the DDR device.
-
- VP, Product Management
- Posts: 6035
- Liked: 2860 times
- Joined: Jun 05, 2009 12:57 pm
- Full Name: Tom Sightler
- Contact:
Re: v5 Backup Speed
Fulls in V4/V5 and forward incrementals in V5 are reasonably close to a stream, whether compression/dedupe is turned on or off. (I realized after posting some of this that my performance numbers were from Veeam Backup 3.) There are still some metadata flushes, and thus some random element, but it's pretty much a stream. Reversed incrementals are quite another story, as they are heavily impacted by read latency; a toy sketch of the difference is at the end of this post.
I still think turning off compression and dedupe would likely improve performance for the DataDomain, based on how Veeam's compression works, but right now that's just a theory, and I need to find time to analyze it at the block level to determine how it really works. I'm hoping to do that soon, but time hasn't been on my side lately.
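A toy Python sketch of the two update patterns, simplified from the description above (my simplification, not Veeam's actual engine):
Code:
# Forward incremental: changed blocks are appended to a new increment file.
# Write-only and essentially sequential, so target read latency barely matters.
def forward_incremental(changed_blocks, increment_file):
    for offset, block in changed_blocks:
        increment_file.write(block)

# Reversed incremental: the full backup file is updated in place. Each changed
# block forces a random READ of the old block (saved into a rollback file so
# earlier restore points survive) before the random write-in-place - on a
# target with poor read latency, those reads dominate the run time.
def reversed_incremental(changed_blocks, full_backup, rollback_file):
    for offset, block in changed_blocks:
        full_backup.seek(offset)
        old = full_backup.read(len(block))
        rollback_file.write(old)
        full_backup.seek(offset)
        full_backup.write(block)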