-
- Veteran
- Posts: 354
- Liked: 73 times
- Joined: Jun 30, 2015 6:06 pm
- Contact:
Re: v9 - Tape performance reports
Case #01715660 for us. 10G network infrastructure for the most part; a Dell TL2000 connected via 6G SAS to a Dell PE430 physical server, pulling from EMC Data Domain and Dell DR dedupe devices - the DR's are 10G and the DD's are 1G.
We were seeing around 20-30 MB/s on file to tape - don't even try GFS w/ a dedupe source because of the transformations it does; those were 2-3 MB/s, so stick to file to tape and live w/ it. I think our DR file to tape jobs (10G) were maybe around 50-60 MB/s.
Before anyone writes it off to pulling files from a dedupe device, please: we can robocopy files from dedupe to the local tape server HDD at around 100 MB/s from the old 1G-connected DD's and ~200-375 MB/s from our 10G-connected DR's, so the network speed appears to be there. We can back up files from the tape server HDD to tape at around 160 MB/s, so the tape speed appears to be there (hardware encryption and compression enabled - they're tapes, and coming from dedupe the files aren't compressed). Pulling across the network straight to tape: dismal speeds.
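For anyone who wants to reproduce this kind of source-speed check, here's a minimal sketch; the host, path and file names are placeholders for your own dedupe share and a large test .vbk:
# copy one large backup file from the dedupe share to the tape server's local disk
# /J = unbuffered I/O (closer to a streaming backup read; needs the Server 2012+ robocopy), /NP = no per-file progress
robocopy \\dedupe-host\backups D:\staging big-test.vbk /J /NP
The "Speed :" line in robocopy's summary is the average throughput for the copy.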
I've been working w/ patient and helpful first-tier support for a week or two, experimenting w/ different block sizes at the tape drive, w/ mixed and inconclusive results. Overall it seemed like v9's default of 1 MB blocks might be too large, since our setup performed better somewhere around a middle-of-the-road block size. We're currently set at 131072 (128 KB) in the Veeam drive properties, but per below, I think one of the registry hacks a higher-tier engineer applied set a different block size. Our fresh v9 install had set the drives to 1048576 (1 MB). Veeam's tape performance testing utility did seem to confirm that, on average, a middle or lower block size gave the best overall performance.
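If you want a rough feel for how block size interacts with your source before burning tape time, here's a hypothetical PowerShell probe ($path is a placeholder for any multi-GB file on the dedupe share). It only measures sequential reads from the source at each candidate block size, not the drive itself, and OS caching can flatter repeat runs, so use a file bigger than the tape server's RAM:
# read the same large file at each candidate block size and report MB/s
$path = '\\dedupe-host\backups\big-test.vbk'   # placeholder: any multi-GB file on the source
foreach ($bs in 65536, 131072, 262144, 524288, 1048576) {
    $buf = New-Object byte[] $bs
    $fs  = [System.IO.File]::OpenRead($path)
    $sw  = [System.Diagnostics.Stopwatch]::StartNew()
    $total = 0L
    # stop at end-of-file or after 30 seconds, whichever comes first
    while (($n = $fs.Read($buf, 0, $bs)) -gt 0 -and $sw.Elapsed.TotalSeconds -lt 30) { $total += $n }
    $fs.Close()
    '{0,8}-byte blocks : {1,7:N1} MB/s' -f $bs, ($total / 1MB / $sw.Elapsed.TotalSeconds)
}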
Last week support graciously got us escalated to an engineer who, after a webex, very respectfully threw a slew of registry hacks at the box and jacked w/ the tape server's network card, getting us down to around 12 MB/s copying files from the network to the tape server HDD and around 823 KB/s to tape, at which point he declared we clearly have network issues that we need to straighten out before calling them back.
After putting the tape server's NIC back the way it was before the webex, we have our network speed back, and surprisingly this morning our tape jobs are running at pretty decent speeds! Around 109-133 MB/s off our 10G DR's; we were getting much lower before. I don't know if it was the block size experiments in Veeam or all the registry hacks that did it, but we're doing reasonably well this morning. I would argue it was not our network, since nothing really changed there (I had to put it back the way it was before the engineer touched it). I'm going to let the monthly tape jobs run and see how overall performance comes out, and I may even try the next smaller block size to see what difference that makes. We're much better, though I can't say exactly what fixed it, other than: experiment w/ block size, run a few jobs, and see what works best for you.
VMware 6
Veeam B&R v9
Dell DR4100's
EMC DD2200's
EMC DD620's
Dell TL2000 via PE430 (SAS)
-
- Veteran
- Posts: 354
- Liked: 73 times
- Joined: Jun 30, 2015 6:06 pm
- Contact:
Re: v9 - Tape performance reports
Forgot: it's a files to tape job, pulling .vbk's anywhere from 12-13 GB to 4-5 TB, only around 2-3 files at a time. File size never seemed to matter - same performance. As mentioned though, this morning we're at least at three-digit speeds; not quite LTO-6's 160 MB/s yet, but way better than 20-30 MB/s.
At that, it would be great if GFS jobs simply did a straight file pull rather than synthetic transform. Kind of like a file to tape GFS job.
VMware 6
Veeam B&R v9
Dell DR4100's
EMC DD2200's
EMC DD620's
Dell TL2000 via PE430 (SAS)
-
- Lurker
- Posts: 2
- Liked: never
- Joined: Apr 01, 2016 3:31 pm
- Full Name: matt smith
- Contact:
Re: v9 - Tape performance reports
Thanks, as requested
0. Support case ID
01742008
1. Background on your infrastructure setup (i.e. type of the library, how it's connected to tape proxy, generation of tape drives)
Physical backup server backing up to a SAN, with the tape drive connected directly to the backup server; the drive is a Dell TL2000 using LTO5 tapes
2. What type of tape job were you using during the performance testing (file to tape / backup to tape / file from tape restore / backup from tape restore)
Backup to tape
3. Bottleneck stats from job details
Source 0% > Proxy 0% > Network 0% > Target 72%
4. Average size of the file and number of backed up/restored files (a rough estimate is good enough)
18 objects, transferring around 17.3 GB, 19h 19m @ 15 MB/s
5. Any other useful information:
When backing up to tape, it looks as if the data rate is being capped at 14 MB/s
-
- Enthusiast
- Posts: 32
- Liked: never
- Joined: Apr 15, 2015 7:19 am
- Full Name: Dan
- Contact:
Re: v9 - Tape performance reports
0. Support case ID - 01745678
1. Background on your infrastructure setup (i.e. type of the library, how it's connected to tape proxy, generation of tape drives)
- HP StoreEver MSL4048 G3 Series LTO5 tape library, connected to a backup proxy server
2. What type of tape job were you using during the performance testing (file to tape / backup to tape / file from tape restore / backup from tape restore)
- Backup to Tape
3. Bottleneck stats from job details
- Source 0% > Proxy 9% > Network 0% > Target 0%
4. Average size of the file and number of backed up/restored files (a rough estimate is good enough)
- Size of the file - Around 600GB
5. Any other useful information:
When backing up to tape, backups don't seem to go higher than 4 MB/s
-
- Enthusiast
- Posts: 83
- Liked: 9 times
- Joined: Oct 31, 2013 5:11 pm
- Full Name: Chris Catlett
- Contact:
Re: v9 - Tape performance reports
Following this article and setting the reg key gave me about a 65% increase in tape speed.
Before: 30Mbit/s
After: 50Mbit/s (with spikes to 65Mbit/s)
http://www.v-strange.de/index.php/veeam ... o-veeam-v8
Locally attached (SCSI) LTO4 autoloader, reading from a Synology RS2414RP+ on the same switch.
-
- Veteran
- Posts: 370
- Liked: 97 times
- Joined: Dec 13, 2015 11:33 pm
- Contact:
Re: v9 - Tape performance reports
I'm not entirely sure I've "lost" performance after upgrading to v9, because we've only just migrated to Veeam, but we're also getting pretty poor transfer speeds to Tape.
Tape Drive: SAS connected Dell TL4000 with 2 x LTO6 drives
Job Type: Backup to Tape
Bottleneck Stats: 0% on everything except 4% on target
Average file size: about 100GB I guess; we have some 3TB files from the Exchange server and lots of 40GB files for basic Windows servers
Tape drives and SAN repo are connected to the same machine
With a single job using both drives in parallel, we get about 60 MB/s sustained, so 30 MB/s for each drive. BE was doing double that when it was reading directly from the SAN over FC - this is from a dedicated SAN that holds our backups. Oddly, even the jobs we're putting to tape from remote repos (over 1Gb WAN connections) run at the same speed.
I haven't logged a case yet as I still have some testing to do, but so far it doesn't look good.
-
- Veteran
- Posts: 354
- Liked: 73 times
- Joined: Jun 30, 2015 6:06 pm
- Contact:
Re: v9 - Tape performance reports
Blargh... Today I found some old .vbk files floating around on a plain old Windows share (VM, EqualLogic SAN, all-10Gbps infrastructure, LTO-6 tape). No problem - it was one of my old proxies, so I added it back to the console as a Windows server, created a file to tape job, and pointed it at the proxy's HDD (server/local HDD as source). At 2:20 hr/min now, 54 MB/s. Around 5-10 .vbk files, 1.4TB total. Ugh. Oh well. My other gripe, and I haven't checked whether U1 fixes it (not yet installed; waiting for this job to finish, haha), is that the Statistics window only shows read speed!!! W/ utmost love and respect to all, I don't care how fast it reads - I only want to know how fast it writes to tape.
VMware 6
Veeam B&R v9
Dell DR4100's
EMC DD2200's
EMC DD620's
Dell TL2000 via PE430 (SAS)
-
- Veteran
- Posts: 511
- Liked: 68 times
- Joined: Oct 17, 2014 8:09 am
- Location: Hypervisor
- Contact:
Re: v9 - Tape performance reports
Our current configuration on one of our VBR servers:
Veeam B&R v9.0.0.1491 installed on a physical Windows Server 2012 R2 box (ProLiant DL380 G6) with an HP MSL4048 library, currently fitted with 2x FC-connected LTO4 tape drives. One 'File to Tape Backup' job and one 'File from Tape Restore' job were run. The backup files are located on the same VBR 9.x server.
Unfortunately I cannot restore files from LTO6 tape media written with VBR 8.x at the moment, as the library will still be fitted with LTO4 drives for a couple more days. After that I can try restoring VBR 8.x LTO6 tape media with VBR 9.x!
Here are the results for backup and restore on LTO4 with VBR 9.x ...
Pretty good, if you ask me.
Write and restore performance on a physical VBR 8.x server onto LTO6 tape media in an MSL2024 SAS tape library gives me 160 MB/s throughput, by the way.
Regards,
Didi7
Using the most recent Veeam B&R in many different environments now and counting!
-
- Veteran
- Posts: 354
- Liked: 73 times
- Joined: Jun 30, 2015 6:06 pm
- Contact:
Re: v9 - Tape performance reports
Having everything located on the same physical server - and FC connected! - I wish we could do that here. But, alas, we have to pull our source files from across the local network.
VMware 6
Veeam B&R v9
Dell DR4100's
EMC DD2200's
EMC DD620's
Dell TL2000 via PE430 (SAS)
-
- Veteran
- Posts: 511
- Liked: 68 times
- Joined: Oct 17, 2014 8:09 am
- Location: Hypervisor
- Contact:
Re: v9 - Tape performance reports
Well, whether the tape drives are FC or SAS connected really doesn't make any difference, as even LTO6 doesn't go faster than 160 MB/s, so I already get the maximum on VBR 8.x with LTO6. I was even more impressed by how fast LTO4 is on a VBR 8.x server with a local backup repository, saving data to tape media physically connected to the VBR server itself.
Using the most recent Veeam B&R in many different environments now and counting!
-
- Enthusiast
- Posts: 85
- Liked: 31 times
- Joined: Apr 22, 2016 1:06 am
- Full Name: Steven Meier
- Contact:
Re: v9 - Tape performance reports
We have a support case open; tape throughput from the repository is rubbish.
Case #01719917
We did find some minor things when looking very closely and have resolved those, but it's unlikely these minor issues were affecting tape throughput.
-
- Veteran
- Posts: 511
- Liked: 68 times
- Joined: Oct 17, 2014 8:09 am
- Location: Hypervisor
- Contact:
Re: v9 - Tape performance reports
How is the tape library connected to the Veeam backup server, and is the Veeam backup server virtual? Is the repository data local to the Veeam backup server? What kind of tape hardware do you have, and how is it connected to the Veeam backup server?
Using the most recent Veeam B&R in many different environments now and counting!
-
- Enthusiast
- Posts: 85
- Liked: 31 times
- Joined: Apr 22, 2016 1:06 am
- Full Name: Steven Meier
- Contact:
Re: v9 - Tape performance reports
Hi.
The tape library is a 4-drive LTO6 library, SAS-attached to two cards, so the library is logically partitioned.
The server is a DL380, and its only role in life is to service Veeam tape jobs.
It's connected via LACP Cisco EtherChannels to switches (2 NICs), which are then EtherChanneled to the core switches (2 NICs) where the repository server sits; its storage is SAN-attached.
It's all Veeam 9 at the latest patch levels.
Testing has shown that for backup copy jobs between these two servers the network is not the issue - the jobs fly. (We did this by setting up a temporary repository on the tape server and sending the data from the repository server.) So the only different thing in the equation is the tape device and Veeam's tape data mover mechanism... and it's very slow.
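If anyone wants to rule the network in or out the same way without building a temp repository, a quick alternative - assuming you can drop the binary on both boxes - is iperf3:
# on the repository server: start a listener
iperf3 -s
# on the tape server: push 4 parallel TCP streams for 30 seconds to the repository's IP
iperf3 -c <repository-ip> -P 4 -t 30
If that reports multi-gigabit while the tape job crawls, the network is effectively ruled out.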
Cheers
Steve
-
- Veteran
- Posts: 354
- Liked: 73 times
- Joined: Jun 30, 2015 6:06 pm
- Contact:
Re: v9 - Tape performance reports
Hey Steve, do I understand you to indicate network source --> tape server = fast; tape server --> tape = fast; network source --> tape (directly to tape, bypassing local tape server storage) = slow?
VMware 6
Veeam B&R v9
Dell DR4100's
EMC DD2200's
EMC DD620's
Dell TL2000 via PE430 (SAS)
-
- Enthusiast
- Posts: 85
- Liked: 31 times
- Joined: Apr 22, 2016 1:06 am
- Full Name: Steven Meier
- Contact:
Re: v9 - Tape performance reports
The physical server with the SAN-attached repositories sits on the core switches; this server has an LACP Cisco EtherChannel set up (2 NICs).
The core switches are connected via a Cisco EtherChannel to the edge switches where the physical server with the tape library attached resides.
This server does no Veeam role other than tape jobs.
Testing via a backup copy job from the main repository server to this tape server (by creating a test Veeam repository) has shown that the network can shift enough data to keep the tape devices fed.
Unfortunately the tape jobs run very slowly - the throughput stays well under what the backup copy job can do.
This library is a 4-drive library with 4 LTO6 drives in it (we are only using two at the moment for this testing).
It appears that the tape data mover mechanism is not very efficient.
How can we tell this? Well:
We are still using our Backup Exec infrastructure on another library (dual LTO6).
It backs up data off the same repository server (not at the same time as the other library) to a server with that library attached, running Backup Exec 2012 - which we want to decommission, but it looks like we cannot. The data is backed up to that box with no network issues.
-
- Lurker
- Posts: 2
- Liked: never
- Joined: Apr 25, 2016 5:08 pm
- Full Name: Chris Ma
Re: v9 - Tape performance reports
We're experiencing performance issues too. As requested:
0. Support case ID
01778189
1. Background on your infrastructure setup (i.e. type of the library, how it's connected to tape proxy, generation of tape drives)
StorageTek SL24 tape autoloader, direct-attached to a Dell PE1850 server (Windows Server 2008 R2 Standard), LTO4 tape drive
2. What type of tape job were you using during the performance testing (file to tape / backup to tape / file from tape restore / backup from tape restore)
file to tape
3. Bottleneck stats from job details
Source: 0%
Proxy: 4%
Network: 0%
Target: 27%
4. Average size of the file and number of backed up/restored files (a rough estimate is good enough)
Average file size 133GB and approximately 20 files
5. Any other useful information
Prior to v9 U1, the backup job was averaging 80 MB/s. We are now averaging 25 MB/s.
Observation: not sure if this is related or coincidental, but prior to the upgrade to v9 U1 our bottleneck stats always had values for all 4 areas (Source, Proxy, Network, Target). With v9 U1 we only appear to get stats for Proxy and Target; Source and Network are always 0%. Is there a new method/algorithm in v9 for measuring these stats?
-
- Veteran
- Posts: 354
- Liked: 73 times
- Joined: Jun 30, 2015 6:06 pm
- Contact:
Re: v9 - Tape performance reports
What's your source, Chris (CIFS share, Windows share, direct SAN, etc. - on the network, or the local tape server HDD)? Reason I ask is that U1 introduced a bug in CIFS shares getting piped to tape. I don't know whether it would have affected speed, but if you look at the properties of a file to tape job that's sourced off a CIFS share, it can lock the domain account credentials.
I do hope they fixed the goofy tape speed calculator in U1, though. It was usually reasonably accurate when checked against Resource Monitor, but sometimes my LTO-6 would randomly report 320+ MB/s! We wish.
VMware 6
Veeam B&R v9
Dell DR4100's
EMC DD2200's
EMC DD620's
Dell TL2000 via PE430 (SAS)
-
- Lurker
- Posts: 2
- Liked: never
- Joined: Apr 25, 2016 5:08 pm
- Full Name: Chris Ma
Re: v9 - Tape performance reports
Hi rreed, thanks for the info on the CIFS bug. I'm aware of the bug as it impacted one of our backup jobs. We have since received the hotfix and that resolved that issue.
I have been working with Veeam support on our performance issue. After reviewing our logs, they confirmed the slowness and had me check the following:
1. Please edit your File to Tape job and under Options, do you have Microsoft Volume Shadow Copy enabled?
No, it is not enabled.
2. Under Tape Infrastructure, Right click on your tape library and go to properties, do you have Use Native SCSI commands instead of Windows Driver enabled?
No, it is not enabled.
3. Under Tape Infrastructure, right click on Media Pool used for this job and go to properties. Under Options, do you have Parallel Processing for jobs using this media pool enabled?
No, it is not enabled.
I'm waiting to hear back.
Chris
-
- Enthusiast
- Posts: 35
- Liked: 7 times
- Joined: Jun 24, 2013 9:43 am
- Full Name: Hussain Mahfood
- Contact:
Re: v9 - Tape performance reports
+1
I am facing the same issue: even with v9 Update 1, processing maxes out at 67 MB/s, compared to 120 MB/s with other tape backup software.
Tapes: LTO5
Tape Library: TL2000
-
- Enthusiast
- Posts: 85
- Liked: 31 times
- Joined: Apr 22, 2016 1:06 am
- Full Name: Steven Meier
- Contact:
Re: v9 - Tape performance reports
We are still working on tape issues. I am beginning to get very concerned that there does not seem to be a Veeam post about this issue.
It is obviously affecting a number of folks and symptoms seem very similar.
This lack of communication is a worry.
-
- Product Manager
- Posts: 14726
- Liked: 1707 times
- Joined: Feb 04, 2013 2:07 pm
- Full Name: Dmitry Popov
- Location: Prague
- Contact:
Re: v9 - Tape performance reports
Hi Steven,
We are collecting and reviewing every single report in this thread. While some 'configuration issues' can be resolved together with the support team, the devs are now working on improving overall performance across all existing tape functionality (F2T, B2T and GFS jobs), so all these posts and support cases help a lot to identify the exact areas that need to be investigated by the R&D folks.
-
- Expert
- Posts: 106
- Liked: 11 times
- Joined: Jun 20, 2009 12:47 pm
- Contact:
Re: v9 - Tape performance reports
0. 01796032
1. Quantum Scalar i500 with LTO7 drives, connected via FC over Brocade switches. The repository is on an IBM V3700 SAN with 12 SAS drives. Separate FC HBAs for SAN and library
2. Backup from tape to repository
3. The restore session does not have a bottleneck display
4. One 3.4TB VBK
5. Resource Monitor in Windows shows high disk usage but no actual data transfer (I/O 100% on the repository disk, but no bytes actually transferred for a long period of time)
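One hedged way to tell a seek-bound disk from a streaming one (100% active time with few bytes moved usually means lots of small random I/O) is to sample the throughput and latency counters together on the repository server, for example with PowerShell:
# 12 samples, 5 seconds apart: bytes/sec shows real throughput, sec/write shows seek latency
Get-Counter -Counter '\PhysicalDisk(*)\Disk Write Bytes/sec','\PhysicalDisk(*)\Avg. Disk sec/Write' -SampleInterval 5 -MaxSamples 12
A high Avg. Disk sec/Write (say, over 20 ms) with low bytes/sec would point at random I/O on the 12-disk array rather than at the tape side.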
-
- Product Manager
- Posts: 14726
- Liked: 1707 times
- Joined: Feb 04, 2013 2:07 pm
- Full Name: Dmitry Popov
- Location: Prague
- Contact:
Re: v9 - Tape performance reports
McClane,
Can you clarify the average performance rate in your case?
-
- Expert
- Posts: 106
- Liked: 11 times
- Joined: Jun 20, 2009 12:47 pm
- Contact:
Re: v9 - Tape performance reports
It seems the performance counters showed false data for about an hour. After that, the rate was a constant 150 MB/s for the next 6 hours. That only happens with big jobs; a 100GB restore displayed over 100 MB/s from the beginning.
-
- Novice
- Posts: 4
- Liked: never
- Joined: Jun 16, 2016 12:38 pm
- Full Name: Kees Voortman
- Contact:
[MERGED] Backup files to tape very very slow
Hello Veeam users,
I am using Veeam Backup & Replication version 9.0.0.1491.
I'm backing up 2 folders to tape in a backup job.
One folder contains all VM backups.
The other folder contains about 400GB with tons of individual files (pdf, xls, doc, txt etc etc).
When backing up these files to tape, I have to stop the job because it takes so long.
Without this folder, the backup of the folder with the VMs takes about 2 hours, so something is going wrong with the file folder...
Does anyone have an idea?
Thank you in advance.
-
- Product Manager
- Posts: 14726
- Liked: 1707 times
- Joined: Feb 04, 2013 2:07 pm
- Full Name: Dmitry Popov
- Location: Prague
- Contact:
-
- Enthusiast
- Posts: 26
- Liked: 2 times
- Joined: Jun 09, 2016 11:20 am
- Full Name: Nicolas FOUGEROUX
- Contact:
[MERGED] File to Tape Job is very slow to back up CIFS shares
Hello,
I have a lot of files to back up in full: about 1.5 TB, more than 1 million files.
Backing up all these files is very slow... Veeam is not really fast for that... no drivers...
How can I proceed?
-
- VP, Product Management
- Posts: 27377
- Liked: 2800 times
- Joined: Mar 30, 2009 9:13 am
- Full Name: Vitaliy Safarov
- Contact:
Re: v9 - Tape performance reports
Can you please provide a bit more detail on your setup (see Dima's post above)? Also, what version of Veeam B&R are you using?
-
- Enthusiast
- Posts: 26
- Liked: 2 times
- Joined: Jun 09, 2016 11:20 am
- Full Name: Nicolas FOUGEROUX
- Contact:
Re: v9 - Tape performance reports
It is the latest version, 9.0.0.1715. The Veeam server is a VM.
0. Support case ID
1. Background on your infrastructure setup (i.e. type of the library, how it's connected to tape proxy, generation of tape drives) - PV124T autoloader with an LTO4 drive, connected to a physical Windows 2008 server
2. What type of tape job were you using during the performance testing (file to tape / backup to tape / file from tape restore / backup from tape restore) - File To tape Job
3. Bottleneck stats from job details - Proxy
4. Average size of the file and number of backed up/restored files (a rough estimate is good enough) - 1.4 TB, about 1,285,000 files
5. Any other useful information
Source: 0%
Proxy: 35%
Network: 0%
Target: 0%
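With the proxy as the bottleneck and over a million small files, per-file overhead usually dominates long before the drive speed matters. One common workaround - a suggestion, not an official Veeam fix - is to batch the small files into a few large containers first and point the file to tape job at those instead, for example:
# paths are placeholders; assumes 7-Zip is installed (-mx0 stores without compression, so this is I/O-bound only)
& 'C:\Program Files\7-Zip\7z.exe' a -mx0 'E:\TapeStaging\fileshare.7z' 'D:\FileShare\*'
One archive per top-level folder keeps the staging step restartable and the archives a manageable size.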
-
- Veteran
- Posts: 387
- Liked: 97 times
- Joined: Mar 24, 2010 5:47 pm
- Full Name: Larry Walker
- Contact:
Re: v9 - Tape performance reports
LTO7 drive with LTO7 tape, local disk as source: backup speed 303 MB/s
LTO7 drive with LTO6 tape, local disk as source: backup speed 153 MB/s
LTO4 drive with LTO4 tape, local or remote server: 53 MB/s
LTO7 drive with LTO6 tape, remote server backup: 148 MB/s. I got this after making sure the data stayed on the 10 gig LAN. The tape job would use the wrong path given a chance, so I used IP addresses for everything (source, tape server).
CPU on the tape server backing up local data: VeeamAgent.exe < 2% - server idle.
CPU when backing up from remote: VeeamAgent.exe at 25-30% on the remote source server and 6% on the tape server. Not sure why the CPU is so high when I am doing files and folders to tape; I picked the remote server from the drop-down.
CPU when backing up from remote using a UNC path like \\174.46.44.248\f$\Backups when selecting the source (when picking the source server I click Add, then type the UNC path): VeeamAgent.exe at 0% on the remote source and 0% on the tape server - both servers under a light CPU load when typing the UNC instead of using the pull-down. LTO7 drive with LTO6 tape, remote server backup: 146 MB/s.
My test files are Veeam backups, about 1 TB in 40 files, using files and folders to tape.
I switched my tape backups to a Veeam backup copy to local disk, then local disk to tape. Both jobs finish before one tape job from remote does. The good side effect is that now I get all my restore points on tape. My main speed issue was that the data would take a 1 gig link at times, and using IP addresses for everything fixed that; a quick way to verify the path is shown below. But I can see where, at night with other Veeam jobs running, high CPU may become an issue.
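A quick way to confirm which interface Windows actually picks for a given source (on 2012+ boxes; SourceAddress in the output is the local NIC that will carry the traffic):
# test the SMB port on the source and show which local address Windows chose
Test-NetConnection 174.46.44.248 -Port 445 | Select-Object RemoteAddress, SourceAddress, TcpTestSucceeded
If SourceAddress comes back on the 1 gig NIC instead of the 10 gig one, that's the wrong-path problem showing up.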
Just sharing.