-
- Veeam Legend
- Posts: 351
- Liked: 36 times
- Joined: Oct 24, 2016 3:56 pm
- Full Name: Marco Sorrentino
- Location: Ancona - Italy
- Contact:
EMC - poor GFS Backup to tape performance
Hi,
we have Veeam B&R 9.5 Update 1 installed on a physical server, with an EMC Data Domain DD2200 (DD OS 5.6) as the primary target, connected over 1 Gb Ethernet.
We recently configured a backup to tape job with GFS retention. The library is a Dell PowerVault with an LTO-6 drive, connected directly to the Veeam server via 4 Gb FC.
Today we ran a test, and the processing rate is about 30 MB/s, with the bottleneck reported as the source (about 49%).
Is this figure normal in your experience, or can we improve the performance in some way?
What can we verify or optimize to reach this goal?
Thank you
Marco
-
- Product Manager
- Posts: 14773
- Liked: 1719 times
- Joined: Feb 04, 2013 2:07 pm
- Full Name: Dmitry Popov
- Location: Prague
- Contact:
Re: EMC - poor GFS Backup to tape performance
Hi Marco,
So, the tape server component is installed on top of the Veeam B&R server, is that correct? How is the Data Domain gateway configured?
-
- Veeam Legend
- Posts: 351
- Liked: 36 times
- Joined: Oct 24, 2016 3:56 pm
- Full Name: Marco Sorrentino
- Location: Ancona - Italy
- Contact:
Re: EMC - poor GFS Backup to tape performance
Hi Dima,
it's correct, the Veeam server holds all roles.
The Data Domain is on the same network as the Veeam server, with the same gateway.
-
- Veeam Legend
- Posts: 351
- Liked: 36 times
- Joined: Oct 24, 2016 3:56 pm
- Full Name: Marco Sorrentino
- Location: Ancona - Italy
- Contact:
Re: EMC - poor GFS Backup to tape performance
Hi,
we have upgraded the Data Domain OS to 6.0; this is the EMC compatibility guide http://compatibilityguide.emc.com:8080/ ... idePage.do (the Veeam server is Windows 2008 R2 x64).
Now the backup to tape performance has worsened to 1 MB/s!
Does anyone have the same problem?
Thank you.
Marco
-
- Product Manager
- Posts: 14773
- Liked: 1719 times
- Joined: Feb 04, 2013 2:07 pm
- Full Name: Dmitry Popov
- Location: Prague
- Contact:
Re: EMC - poor GFS Backup to tape performance
Marco,
I believe it’s time to open a support case and investigate this performance issue with our engineers. Please update this thread with the case ID if you have one.
-
- Veeam Legend
- Posts: 351
- Liked: 36 times
- Joined: Oct 24, 2016 3:56 pm
- Full Name: Marco Sorrentino
- Location: Ancona - Italy
- Contact:
Re: EMC - poor GFS Backup to tape performance
Hello Dima,
yes, I've already done it: Case # 02063279
Thank you
Marco
-
- Enthusiast
- Posts: 38
- Liked: 2 times
- Joined: Nov 07, 2014 3:51 pm
- Full Name: Miles Lott
- Contact:
Re: EMC - poor GFS Backup to tape performance
Our performance is similar on an 8 Gb FC network with a DD2500 running OS 5.4.2.1. I have heard that the copy to tape uses Ethernet instead of FC, but I am not certain this is true.
-
- Veeam Legend
- Posts: 351
- Liked: 36 times
- Joined: Oct 24, 2016 3:56 pm
- Full Name: Marco Sorrentino
- Location: Ancona - Italy
- Contact:
Re: EMC - poor GFS Backup to tape performance
After several days of tests, log collection and WebEx sessions, the last instruction Veeam support gave us is to open a case with EMC support.
It's a very unpleasant situation, because we have verified that the problem is the interaction between the Veeam backup software and the EMC DD Boost feature.
I would have preferred that Veeam engineers had worked with EMC engineers; I hope they have a "preferred contact line", considering that they have developed a lot of features together.
In short, I'm very disappointed with this situation.
-
- Product Manager
- Posts: 14773
- Liked: 1719 times
- Joined: Feb 04, 2013 2:07 pm
- Full Name: Dmitry Popov
- Location: Prague
- Contact:
Re: EMC - poor GFS Backup to tape performance
Hello Marco,
I reviewed the case history and wanted to say thank you for opening the case with EMC support. I believe our support team together with EMC will find the root cause of this performance issue – investigation goes faster when all the vendors are involved.
-
- Veeam Legend
- Posts: 351
- Liked: 36 times
- Joined: Oct 24, 2016 3:56 pm
- Full Name: Marco Sorrentino
- Location: Ancona - Italy
- Contact:
Re: EMC - poor GFS Backup to tape performance
Hello Dima,
I hope it will be as you said.
This is the last EMC response: https://support.emc.com/kb/470374
Issue
Read performance is poor on a DD2500 system.
Cause: When using systems with no external shelves and only disks in the head unit, sustained read performance of 45-50 MB/s is common. The reason for this low performance is that I/O bandwidth on such systems is extremely limited, as they lack the additional spindles/controllers across which multiple I/O operations can be performed in parallel. In addition, the internal head-unit disks are shared with the DD OS operating system, so reads of deduplicated data from these disks also have to compete with normal I/O performed by the operating system (for example, the writing of log files and so on).
Note that the DD OS performance guides publish figures for expected sustained read throughput on DD2500 systems (for example, single-stream reads via CIFS on DD OS 5.5 are shown at 129 MB/s). However, these figures were obtained from testing on clean systems with a minimum of three external enclosures. As a result, systems configured with fewer than three external enclosures will not be able to obtain similar throughput, and as the enclosure count drops, read performance degrades rapidly.
Resolution:
RECOMMENDATIONS: To allow systems to achieve read performance loosely in line with the figures documented in the DD OS performance guides, DD2500 systems should be configured with a minimum of three external enclosures. Where this is not possible, customers should be made aware of the likely impact on read performance, to ensure that the system will still be able to meet baseline performance expectations.
WORKAROUND: For systems which have already been implemented with low shelf count and are experiencing poor read throughput, the first step is to perform normal investigations to rule out common causes of poor performance (such as issues in the environment or configuration of the DDR, file system fragmentation, system load, or poor data locality).
Once it has been determined that poor read performance is due to the systems becoming I/O bound during read, the only way in which performance can be improved is to:
- Add additional external enclosures (to give increased I/O bandwidth)
- Expire existing backup data and run clean (to remove existing data from the system)
- Re-write backup data to the DDR
Note that if enclosures are added but data is not rewritten, the system's read performance may not significantly improve. This is because the DDR will not automatically rebalance data across enclosures; as a result, the majority of data will still be read from the existing enclosures, so the lack of I/O bandwidth to those enclosures will still be apparent.
For Example:
The documented CIFS single stream read performance for DD2500 using a 10Gbps network with the recommended 3 shelves is 129MB/s. However, the multiplier for having 0 external shelves is 0.34. From this we get:
129 * 0.34 = 43.86MB/s which is in line with the number reported in the "CAUSE" section above.
In our case, our read performance is about 1 MB/s (when we are lucky), and only with the DD Boost feature; if we use a CIFS share instead, the performance climbs to the fantastic value of 30 MB/s!
Our system is a DD2200; I don't think it supports adding shelves.
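As a sanity check on the KB's arithmetic, the shelf-count derating can be sketched in a few lines of Python. The 129 MB/s documented figure and the 0.34 zero-shelf multiplier are taken from the KB text quoted above; any other multiplier values would have to come from the DD OS performance guides, so this is illustrative only:

```python
# Sketch of the read-throughput derating described in the EMC KB article.
# The documented figure and the zero-shelf multiplier come from the quoted
# KB text; this is an illustration, not an official sizing tool.
DOCUMENTED_READ_MBPS = 129.0  # CIFS single-stream read, DD2500, 3 shelves


def expected_read_mbps(shelf_multiplier: float) -> float:
    """Derate the documented throughput by the shelf-count multiplier."""
    return DOCUMENTED_READ_MBPS * shelf_multiplier


# Zero external shelves -> multiplier 0.34 per the KB:
print(round(expected_read_mbps(0.34), 2))  # 43.86
```

That result lands squarely in the 45-50 MB/s range the KB quotes for shelf-less systems, which is why the 30 MB/s CIFS figure above is at least plausible for a head-unit-only box, while 1 MB/s over DD Boost is not.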
-
- Product Manager
- Posts: 14773
- Liked: 1719 times
- Joined: Feb 04, 2013 2:07 pm
- Full Name: Dmitry Popov
- Location: Prague
- Contact:
Re: EMC - poor GFS Backup to tape performance
Hi Marco,
I checked with our support team. I was told that a bug was found on the EMC side and, as far as I know, they are investigating it. Please let us know how it goes or if any assistance from our side is required. Thank you.
-
- Veeam Legend
- Posts: 351
- Liked: 36 times
- Joined: Oct 24, 2016 3:56 pm
- Full Name: Marco Sorrentino
- Location: Ancona - Italy
- Contact:
Re: EMC - poor GFS Backup to tape performance
Hello Dmitry,
it's correct; the issue has been reproduced in EMC's labs and is under investigation now.
I'll update this post when I have news.
Thank you
Marco
-
- Novice
- Posts: 4
- Liked: 1 time
- Joined: Mar 27, 2012 9:00 am
- Contact:
Re: EMC - poor GFS Backup to tape performance
Hi All,
any updates here? We have the same problem: newest DD OS and tape performance around 1-2 MB/s. It seems that transfers happen only from time to time (the peaks are high, but the gaps with no transfer are very long, so the average speed is 1-2 MB/s). If I run the same job from local disk, it goes up to 120 MB/s. I know that rehydration takes its time on the older Data Domains, but the expected speed is around 20-30 MB/s.
Thanks for any update.
Sven
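Sven's pattern (high peaks separated by long idle gaps) averages out exactly as he describes. A quick duty-cycle calculation shows how 120 MB/s peaks can still yield a 2 MB/s average; the burst and gap durations below are made-up illustration values, not measured figures:

```python
# Hypothetical duty-cycle sketch: short bursts at peak speed with long idle
# gaps in between produce a far lower average throughput. The burst and gap
# durations are illustrative assumptions, not measurements from this thread.
PEAK_MBPS = 120.0     # peak rate observed when data is actually flowing
BURST_SECONDS = 1.0   # hypothetical: one second of transfer per cycle
GAP_SECONDS = 59.0    # hypothetical: 59 seconds waiting on the source

average_mbps = PEAK_MBPS * BURST_SECONDS / (BURST_SECONDS + GAP_SECONDS)
print(average_mbps)  # 2.0 MB/s average despite 120 MB/s peaks
```

In other words, the tape drive is fast enough; the averages collapse because the source (the Data Domain read path) leaves it starved most of the time.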
-
- Veeam Legend
- Posts: 351
- Liked: 36 times
- Joined: Oct 24, 2016 3:56 pm
- Full Name: Marco Sorrentino
- Location: Ancona - Italy
- Contact:
Re: EMC - poor GFS Backup to tape performance
Hello,
we have found a workaround with EMC support:
# system show serialno
# priv set se
# reg set system.NFS_TCP_SNDSIZE=2097168
# filesys restart
These commands increase the TCP send window from the default of 1 MB to 2 MB and restart the filesystem.
It seems to work for me!
Bye
Marco S.
-
- Novice
- Posts: 4
- Liked: 1 time
- Joined: Mar 27, 2012 9:00 am
- Contact:
Re: EMC - poor GFS Backup to tape performance
Hi,
thanks for the fast reply, I will try it right away.
Thanks & Bye
Sven
-
- Novice
- Posts: 4
- Liked: 1 time
- Joined: Mar 27, 2012 9:00 am
- Contact:
Re: EMC - poor GFS Backup to tape performance
It works! Write speed is now around 50 MB/s with a 7-disk DD2200. Thanks for sharing this information.
Sven
-
- Veeam Legend
- Posts: 351
- Liked: 36 times
- Joined: Oct 24, 2016 3:56 pm
- Full Name: Marco Sorrentino
- Location: Ancona - Italy
- Contact:
Re: EMC - poor GFS Backup to tape performance
You're welcome! Have a nice day!
-
- Product Manager
- Posts: 14773
- Liked: 1719 times
- Joined: Feb 04, 2013 2:07 pm
- Full Name: Dmitry Popov
- Location: Prague
- Contact:
Re: EMC - poor GFS Backup to tape performance
Hi folks,
Thanks a lot for sharing and glad to hear that this performance issue was resolved!