- Service Provider
- Posts: 2
- Liked: 1 time
- Joined: Jun 25, 2018 3:14 am
- Full Name: Dean Foley
- Location: New Zealand
We're having tape performance issues since Update 4.
Our largest job by data covers 140 servers, approximately 13 TB of data, going to LTO-6 tape on Dell PowerVault TL4000 libraries (re-badged IBM 3573-TL) with 4 drives in each library.
The backups are per-VM on scale-out repositories.
The processing rate is fine, around 125 MB/s.
What I am seeing, though, is an almost uniform 15-minute gap between each VM going to tape, during which nothing is being written. Multiply that by 140 servers and we're looking at around 35 hours over the weekend where nothing is being written to tape, and we are now blowing out our tape window by almost a day. I don't recall seeing these 15-minute intervals before, or maybe I never noticed them. Is this a new thing? I believe it has something to do with building the next synthetic restore point.
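For what it's worth, the idle-time arithmetic above checks out. A quick sketch (the 15-minute gap and 140-server count are taken from the observation; the per-run total is just their product):

```python
# Back-of-envelope estimate of cumulative tape idle time per job run.
# Inputs come from the observed behaviour described above.
servers = 140        # VMs processed by the job
gap_minutes = 15     # observed idle gap between consecutive VMs

idle_hours = servers * gap_minutes / 60
print(idle_hours)    # 35.0 hours of dead time per run
```

That 35-hour figure alone is larger than the entire pre-Update-4 job duration at the low end (30 hours), which is consistent with the gaps, rather than throughput, being the problem.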
Prior to Update 4, the job took anywhere between 30 and 50 hours to complete. It is now taking over 80 hours.
About a month prior to installing Update 4, we also upgraded the repository from spinning disk to SSD, so you would expect some performance improvement, not the other way around.
We have tried an active full on the job, removing the backup folders from the job, re-scanning the repository, and re-adding the folders to the job.
We have yet to upgrade to 4a. We've been advised to, but we haven't really been given a reason why. Quote from support: "I did find that the tape logic was changed in update 4 then some of the logic was corrected and adjusted back in the hotfix patch 4a". What are these changes?
Of course we will update in good time. As a service provider with change control procedures, we like to leave it for a few weeks to see if there are any issues before upgrading.
Case reference is 03468837.
- Product Manager
- Posts: 11268
- Liked: 958 times
- Joined: Feb 04, 2013 2:07 pm
- Full Name: Dmitry Popov
- Location: Prague
I do see that QA folks are investigating the performance decrease, so please keep working with our support team. Cheers!