-
- Influencer
- Posts: 16
- Liked: never
- Joined: Jun 27, 2011 1:50 pm
- Full Name: ISExpress
- Contact:
Poor performance from D2D2Tape backup utilising Veeam v6
Hi
I am currently investigating some performance issues we are having backing up to disk using Veeam and then to tape using Symantec Backup Exec 2010 R3. We recently added some more critical servers to our Veeam backup job and are now running into problems.
We use a physical Windows 2008 32-bit server as our backup server: a single quad-core CPU with 4 GB RAM. It has a direct 4 Gb fibre connection to a NexSAN SATABoy with dual controllers, which is our backup storage device. This is configured as RAID6 with one 11 TB volume. The SATABoy's dual controllers support 4 Gb, and the backup server has two 8 Gb fibre cards. The solution is designed to be fault tolerant, so any faults should be alleviated by alternative paths.
We back up approximately 25 VMs using Veeam B&R v6, in a single job that kicks off at 8pm. We use the backup-from-SAN option but find the processing rate poor: approximately 40-50 MB/s, and on a good day about 65 MB/s. All our VMs bar two have Changed Block Tracking (CBT) enabled, so incremental backups should be faster.
The backup server's fibre plugs into two SAN fibre switches (with redundant paths), and the ESX hosts also connect to these switches. When Veeam backs up and restores, it should be using the fibre connectivity to talk directly to the SAN, eliminating any host involvement for better performance.
I am trying to understand these bottlenecks, because when we back up to tape we are backing up data that sits on the SATABoy, which is directly fibre-connected to our Windows backup server, and again we get poor performance from Backup Exec. The tape library is LTO-5, connected via a 6 Gb SAS card in the backup server. We found that when we upgraded from LTO-4 to LTO-5 the performance was exactly the same. Basically we are getting 4,300 MB/min when, with the design of our backup infrastructure, we should be getting at least double that.
Things we have tried
• Copying the data we normally back up from the SATABoy to the backup server's local disks (450 GB SAS) and backing up to tape from there: it's a lot quicker, taking half the time. The rate doubles to about 8,000 MB/min.
• Copying a file/folder from the SATABoy to local disk on the backup server gets approximately 60 MB/s (and this is over fibre!). As a test we unplugged the Ethernet cables to make sure the copy wasn't going over Ethernet; the copy carried on, which proved it was using the fibre connection.
• We have tried a direct fibre connection from the SATABoy to the backup server, bypassing the SAN fibre switches, and noticed a slight improvement, but nowhere near what we expected: approximately 115 MB/s.
• We have updated the server firmware and all drivers. It's an IBM server, so we used UpdateXpress to update all components of the server.
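To put these figures on a common footing, it helps to convert everything to MB/s (Backup Exec reports MB/min, Veeam reports MB/s). A minimal sketch; the nominal rates used for comparison (roughly 400 MB/s usable on a 4 Gb FC link, 140 MB/s LTO-5 native) are generic ballpark figures, not measurements from this setup:

```python
# Convert the figures in this thread to a common unit (MB/s) and
# compare them against nominal hardware limits.

NOMINAL = {
    "4Gb FC link (approx usable)": 400.0,   # MB/s, ballpark figure
    "LTO-5 native (uncompressed)": 140.0,   # MB/s, drive spec figure
}

def mb_min_to_mb_s(rate_mb_min):
    """Backup Exec reports MB/min; convert to MB/s."""
    return rate_mb_min / 60.0

observed = {
    "Veeam full backup": 50.0,                       # MB/s
    "Tape from SATABoy": mb_min_to_mb_s(4300.0),     # ~72 MB/s
    "Tape from local SAS": mb_min_to_mb_s(8000.0),   # ~133 MB/s
    "File copy SATABoy -> local": 60.0,              # MB/s
}

for name, rate in observed.items():
    pct = 100.0 * rate / NOMINAL["LTO-5 native (uncompressed)"]
    print(f"{name}: {rate:.0f} MB/s ({pct:.0f}% of LTO-5 native)")
```

Every observed rate except the local-SAS tape run sits well below LTO-5 native speed, which points at the SATABoy read path as the common factor, and would also explain why the LTO-4 to LTO-5 upgrade changed nothing: the drive was never the bottleneck.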
Things we are looking to try next
• Upgrade the firmware on the SATABOY
• Upgrade the firmware on the SAN fibre switches
• Upgrading the server to 64-bit Windows 2008 R2, installing approximately another 16 GB RAM and maybe another processor. Ideally we don't want to do this, as it's a massive piece of work and we don't see what significant gain we would get from it.
I am hoping someone who has had a similar problem can assist. I apologise in advance that this is not all directly related to Veeam, but I believe that if I can sort out the issues with Veeam and/or the hardware, the rest will also get sorted.
Thank you
-
- Veeam Software
- Posts: 21133
- Liked: 2140 times
- Joined: Jul 11, 2011 10:22 am
- Full Name: Alexander Fogelson
- Contact:
Re: Poor performance from D2D2Tape backup utilising Veeam v6
Hello! What do the bottleneck statistics show for your job?
-
- Enthusiast
- Posts: 36
- Liked: never
- Joined: Feb 09, 2010 8:26 pm
- Full Name: Chad
- Contact:
Re: Poor performance from D2D2Tape backup utilising Veeam v6
It seems very telling that the tape jobs are twice as fast when copying from local SAS instead of from the SATABoy. SATA is slow, regardless of whether or not it is behind fibre channel. We made a mistake one day that sent our backups to our tier-3 SATA storage instead of a mix of 15k and 10k SAS, and it was 3-4 times slower!
-Chad
-
- Influencer
- Posts: 16
- Liked: never
- Joined: Jun 27, 2011 1:50 pm
- Full Name: ISExpress
- Contact:
Re: Poor performance from D2D2Tape backup utilising Veeam v6
foggy wrote: Hello! What do the bottleneck statistics show for your job?
The bottleneck statistics for the whole job show approximately 50 MB/s, and the bottleneck varies between Target and Proxy. E.g. in my last job the bottleneck was Target, with a total processing rate of 48 MB/s.
The previous day the overall processing rate was 58 MB/s, and that time the bottleneck was Proxy.
In Veeam 6 we run one job with 25 VMs. Would it be worth splitting this job into 3-4 parts? Would that improve the overall processing rate?
It's very unlikely that I'll be able to buy new storage with SAS disks. Surely you shouldn't need SAS disks for backups?
Could it be because we are using a 32-bit OS? I would have thought the fibre connection to the backup server would have really helped keep the speed up. Obviously I know SATA disks won't perform as well as SAS, but surely I should see better than 50 MB/s?
thanks
-
- Veeam Software
- Posts: 21133
- Liked: 2140 times
- Joined: Jul 11, 2011 10:22 am
- Full Name: Alexander Fogelson
- Contact:
Re: Poor performance from D2D2Tape backup utilising Veeam v6
Is this the performance of a full backup or an incremental run? It would be decent for a full run but should normally be faster for an incremental. I would like to see the performance rates for separate VMs (available in the real-time stats window by selecting each particular VM on the left) and whether all of them are processed using Direct SAN mode.
Also, check your backup proxy's CPU load while a backup is running, since a "Proxy" bottleneck points to an under-powered backup proxy. As for the target, note that the backup method also plays a big role: reversed incremental puts 3x the I/O load on the target compared to forward incremental.
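The 3x figure comes from how reversed incremental updates the full backup file in place: for every changed block it reads the old block out of the .vbk, writes it into a rollback (.vrb) file, and writes the new block into the .vbk. A rough sketch of the resulting target I/O, with the simplified per-block accounting stated as an assumption in the comments:

```python
def target_io_ops(changed_blocks, mode):
    """Rough count of target-storage I/O operations per incremental run.

    Assumed (simplified) accounting:
      forward incremental:  1 write per changed block (append to .vib)
      reversed incremental: 1 read  (old block from .vbk)
                          + 1 write (old block into .vrb rollback)
                          + 1 write (new block into .vbk)
    """
    if mode == "forward":
        return changed_blocks          # 1x I/O, largely sequential
    if mode == "reversed":
        return 3 * changed_blocks      # 3x I/O, and much of it random
    raise ValueError(f"unknown mode: {mode}")

# Example: 10 GB of changed data in 1 MB blocks.
changed = 10 * 1024
print("forward :", target_io_ops(changed, "forward"), "ops")
print("reversed:", target_io_ops(changed, "reversed"), "ops")
```

On a RAID6 SATA volume the extra random writes hurt twice over, because RAID6 adds its own parity write penalty on top.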
-
- Influencer
- Posts: 16
- Liked: never
- Joined: Jun 27, 2011 1:50 pm
- Full Name: ISExpress
- Contact:
Re: Poor performance from D2D2Tape backup utilising Veeam v6
foggy wrote: Is this performance of a full backup or incremental run? It would be decent for the full run but normally should be faster for incremental. I would like to see the performance rates for separate VMs (these are available in the realtime stats window if selecting each particular VM to the left) and whether all of them are processed using Direct SAN mode. Also, you can check your backup proxy CPU load while backup is running, as bottleneck proxy says for the under-powered backup proxy. As for the target, note that backup method also plays a big role as reversed incremental backup puts 3x I/O load on target comparing to forward incremental.
Hi, this is the performance when running a full backup. An incremental run is about 100 MB/s.
I have noticed that while a Veeam backup is running, CPU usage is normally high (between 80% and 95%). We run reversed incremental (as we only want a single file) because we back this file up to tape as well. We perform a full backup every Sunday. Inline data deduplication is on, and compression is set to Optimal, optimised for local target.
Looking at last night's incremental backup, the overall processing rate was 92 MB/s and the bottleneck was the Target.
Looking at the processing mode for our three busiest servers (Exchange, SQL and file server), the stats are as follows:
I think it's using Direct SAN mode, as it says "Using source proxy VMware Backup Proxy (san;nbd)" just before it creates the Veeam snapshot.
Thanks
-
- Veeam Software
- Posts: 21133
- Liked: 2140 times
- Joined: Jul 11, 2011 10:22 am
- Full Name: Alexander Fogelson
- Contact:
Re: Poor performance from D2D2Tape backup utilising Veeam v6
I would note that forward incremental mode is a better choice for disk-to-disk-to-tape backup, as it copies only the incremental changes rather than the full backup file, which takes less time and requires less tape. It is also much less stressful for the target storage (which in your case is the bottleneck).
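The tape-saving part of this advice is simple arithmetic. A sketch with illustrative sizes (the full and incremental sizes below are assumptions, not this poster's real data):

```python
# Compare how much data lands on tape per week under the two schemes
# discussed in this thread. Sizes are illustrative assumptions.

FULL_GB = 2000   # size of one full backup (.vbk), assumed
INCR_GB = 100    # size of one daily incremental (.vib), assumed
DAYS = 7

# Scheme A: reversed incremental on disk, the full .vbk copied to tape daily.
tape_reversed = DAYS * FULL_GB

# Scheme B: forward incremental, weekly full + daily incrementals to tape.
tape_forward = FULL_GB + (DAYS - 1) * INCR_GB

print(f"daily full to tape  : {tape_reversed} GB/week")
print(f"full + incrementals : {tape_forward} GB/week")
```

The catch, raised in the replies that follow, is restore convenience: with incrementals on tape a restore may need more than one tape, which is why a daily active full can still win when tapes rotate off-site every day.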
-
- Influencer
- Posts: 16
- Liked: never
- Joined: Jun 27, 2011 1:50 pm
- Full Name: ISExpress
- Contact:
Re: Poor performance from D2D2Tape backup utilising Veeam v6
foggy wrote: I would note that forward incremental mode is a better choice for disk-to-disk-to-tape backup as it copies only incremental changes, not the full backup file, which takes less time and requires less tape. Also, it is much less stressful for the target storage (which is in your case a bottleneck).
We have to change our tapes every day though, and each tape needs a full backup on it. At the moment it's great because we have one .vbk file. If we used forward incremental, am I right in assuming we would have a large .vbk (active full backup) plus smaller files, and if we ran the active full every Sunday we would have one small incremental backup for each day (assuming we run this for 7 days)?
I assume we would also need to append to the tape; we currently erase each tape and fill it with the latest backup file.
-
- VeeaMVP
- Posts: 6165
- Liked: 1971 times
- Joined: Jul 26, 2009 3:39 pm
- Full Name: Luca Dell'Oca
- Location: Varese, Italy
- Contact:
Re: Poor performance from D2D2Tape backup utilising Veeam v6
If you need to change tapes every day, and you have a sufficient backup window, the best choice is to do an active full backup every day. That way every tape has a full backup on it, and when you need to restore you only have to pick one tape.
Instead, if you can leave tapes inside the tape loader for more than one day, full + incremental is still the best choice.
Luca.
Luca Dell'Oca
Principal EMEA Cloud Architect @ Veeam Software
@dellock6
https://www.virtualtothecore.com/
vExpert 2011 -> 2022
Veeam VMCE #1
-
- VP, Product Management
- Posts: 27356
- Liked: 2788 times
- Joined: Mar 30, 2009 9:13 am
- Full Name: Vitaliy Safarov
- Contact:
Re: Poor performance from D2D2Tape backup utilising Veeam v6
ISExpress wrote: If we did forward incremental am I right in assuming we would have a large vbk (active full backup) and then smaller files and if we ran the active full backup every sunday we would have 1 small incremental backup for each day (assuming we run this for 7 days).
Yes, that's correct. But given that you need a full backup on each tape every day, Luca's advice seems like the way to go to me.
-
- Novice
- Posts: 3
- Liked: 1 time
- Joined: Feb 01, 2012 9:05 pm
- Full Name: chris hall
- Contact:
Re: Poor performance from D2D2Tape backup utilising Veeam v6
Hope this helps: Backup Exec 2010 in a single job won't do more than about 4,000 MB/min. What you need to do is create four separate jobs using the same tape target. Separate your servers into four groups (don't change the location; just tick the first set of servers in the first Backup Exec job, and so on), browsing to the same folder each time but selecting different servers. Start all four backup jobs at exactly the same time and you will get a combined throughput of at least 15,000 MB/min.
thanks
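The grouping cghall11 describes can be sketched as a simple round-robin split of one server list into N jobs; the server names below are placeholders, not anything from this thread:

```python
# Round-robin split of one big server list into N Backup Exec jobs that
# all run against the same tape target in parallel. Names are placeholders.

def split_into_jobs(servers, n_jobs=4):
    """Return n_jobs lists; job i gets every n_jobs-th server."""
    return [servers[i::n_jobs] for i in range(n_jobs)]

servers = [f"server{i:02d}" for i in range(1, 26)]   # 25 VMs, as in this thread
jobs = split_into_jobs(servers, 4)

for i, group in enumerate(jobs, start=1):
    print(f"Job {i}: {len(group)} servers -> {group}")
```

The design idea is that several concurrent streams keep the tape drive fed when no single source stream can sustain the drive's native rate on its own.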
-
- Veteran
- Posts: 392
- Liked: 33 times
- Joined: Jul 18, 2011 9:30 am
- Full Name: Hussain Al Sayed
- Location: Bahrain
- Contact:
Re: Poor performance from D2D2Tape backup utilising Veeam v6
cghall11 wrote: Hope this helps, backup exec 2010 in a single job wont do more than about 4000/mb/min, what you need to do is create 4 separate jobs using the same tape target, separate your servers in to 4 groups, browsing the same folder each time, but selecting different servers. Start all 4 backup jobs at exactly the same time and you will get a combined throughput of at least 15000/mb/min.
Has anyone tried what cghall11 suggested, and did it improve the D2D2Tape backup performance?
I'm doing something similar, but over iSCSI initiator. I have an EMC AX4-5i iSCSI SAN where the VMs sit. All the disk pools are RAID5. The VMs are created on different LUNs, with no more than 7 VMs per LUN.
Veeam 6 is installed on a VM on one of these LUNs. The backup target connected to the Veeam VM is a different SAN storage array with 2.7 TB, attached via iSCSI initiator. The ESX servers don't connect to the IBM array.
Veeam 6 connects to the EMC SAN storage via iSCSI initiator, so backups are processed through the SAN. The processing speed varies from VM to VM, around 30-40 MB/s; incremental runs sometimes reach even 1 Gb/s, but some only reach 300 MB/s.
The Symantec Backup Exec agent is installed on the Veeam VM, and that backup runs over the network. The backup takes a hell of a long time to complete.
Any idea?
-
- Veteran
- Posts: 392
- Liked: 33 times
- Joined: Jul 18, 2011 9:30 am
- Full Name: Hussain Al Sayed
- Location: Bahrain
- Contact:
Re: Poor performance from D2D2Tape backup utilising Veeam v6
Hi,
Veeam v6 is installed on a VM: 64-bit Enterprise, 2 quad-core vCPUs and 8 GB memory.
Veeam is installed on one of the LUNs that comes from the EMC AX4-5i, and all the VMs reside on this SAN storage.
1 target LUN is attached to the Veeam VM via iSCSI initiator from the EMC SAN and formatted as NTFS.
2 target LUNs are attached to the Veeam VM via iSCSI initiator from the IBM DS3500 SAN and formatted as NTFS.
1 target LUN from OpenFiler is attached to the Veeam VM via iSCSI initiator and formatted as NTFS.
The Veeam VM has 4 vNICs connected to the iSCSI port group on the iSCSI vSwitch in ESX 5.0; all the iSCSI SAN storage connects into the same switches.
All the LUNs where the VMs reside are presented to the Veeam VM and show up in Disk Management.
The ESX host has two pNICs attached to the vSwitch, and these two pNICs attach to the physical iSCSI switches. MPIO is configured on ESX: one pNIC is active for the first port group with the second unused, and vice versa for the second port group.
How can I increase the performance? It is really slow at the moment. Are there any tricks to make it faster?
Thanks,
Hussain