-
- Influencer
- Posts: 16
- Liked: never
- Joined: Jan 07, 2011 8:50 pm
- Full Name: Jeromy Hensley
- Contact:
Veeam Backup Server iSCSI speed issues
Right now I'm benchmarking our Veeam backup server's iSCSI speeds. I'm using a 120GB SQL VM for my tests, and I'm consistently seeing around 27-32 MB/s on all of my full backups.
Here's my setup:
Veeam B & R 5.0.2.224 x64
3 x Lefthand iSCSI storage arrays
1 X Dell Powerconnect 5424
1 x Veeam Backup server, HP dl120 (xeon x3460 @ 2.8GHz, 8 gig of RAM, 2 dedicated 1gig NICs for iSCSI MPIO with the Lefthand DSM for MPIO installed, Jumbo Frames enabled, 4 x 2TB Raid5 local sata drives)
For the backup job I'm using the default settings (Direct SAN Access, Compression: Optimal, Storage: local target, application-aware and system indexing are disabled).
I tried the recommendations on these threads, http://forums.veeam.com/viewtopic.php?f ... 534#p20534 and http://forums.veeam.com/viewtopic.php?t=5126&p=19540
From what I've been seeing on the forums I "should" be seeing somewhere around 60-90 MB/s on my backup speeds. Anyone got any ideas on what I can do to improve my speed?
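One sanity check worth running here, for anyone following along, is to confirm that jumbo frames actually pass end to end from the backup server to the iSCSI ports. The SAN IP below is just a placeholder, and 8972 assumes a 9000-byte MTU minus the 28 bytes of ICMP/IP headers:
# Send a non-fragmentable 8972-byte ping from the backup server to a Lefthand iSCSI interface
ping -f -l 8972 10.0.0.10
If that ping fails while a normal ping works, jumbo frames are not enabled consistently somewhere along the path.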
-
- VP, Product Management
- Posts: 6035
- Liked: 2860 times
- Joined: Jun 05, 2009 12:57 pm
- Full Name: Tom Sightler
- Contact:
Re: Veeam Backup Server iSCSI speed issues
You might try disabling or uninstalling the MPIO, as there have been quite a large number of reports of performance issues when using various MPIO solutions with the vStorage API.
Are you sure you are using Direct SAN mode and the system is not failing over to network mode?
-
- Influencer
- Posts: 16
- Liked: never
- Joined: Jan 07, 2011 8:50 pm
- Full Name: Jeromy Hensley
- Contact:
Re: Veeam Backup Server iSCSI speed issues
tsightler wrote: You might try disabling or uninstalling the MPIO, as there have been quite a large number of reports of performance issues when using various MPIO solutions with the vStorage API.
I tried uninstalling the MPIO and used a single 1gig NIC instead. I still get the same amount of throughput.
tsightler wrote: Are you sure you are using Direct SAN mode and the system is not failing over to network mode?
Verified that the backup job is using Direct SAN mode and NBD is disabled.
-
- Novice
- Posts: 9
- Liked: never
- Joined: Oct 25, 2011 4:48 pm
- Full Name: Trey
- Contact:
Re: Veeam Backup Server iSCSI speed issues
I am having the exact same issue, and actually have a ticket open with support. Their only suggestion is that I need to add another CPU to my backup server. The server has two quad-core CPUs... it isn't even being touched.
-
- VP, Product Management
- Posts: 6035
- Liked: 2860 times
- Joined: Jun 05, 2009 12:57 pm
- Full Name: Tom Sightler
- Contact:
Re: Veeam Backup Server iSCSI speed issues
So what happens if you run two jobs simultaneously, do you get a similar speed from both jobs or is the bandwidth split evenly between the two jobs?
What version of Windows are you running?
-
- Novice
- Posts: 9
- Liked: never
- Joined: Oct 25, 2011 4:48 pm
- Full Name: Trey
- Contact:
Re: Veeam Backup Server iSCSI speed issues
Server 2008 R2.
I am running another Direct SAN Access job at another site, and the exact same thing happens. The jobs are not running at the same time or on the same backup server, but the same thing happens on both of my Veeam backup boxes.
-
- VP, Product Management
- Posts: 6035
- Liked: 2860 times
- Joined: Jun 05, 2009 12:57 pm
- Full Name: Tom Sightler
- Contact:
Re: Veeam Backup Server iSCSI speed issues
Right, but I'm trying to figure out whether there's some inherent limit in how fast you can process data. In many cases storage arrays are not optimized to provide full performance for low queue depth sequential reads, which is generally what happens when running a full backup. In other words, if a single job gives ~30MB/sec and two jobs give ~60MB/sec, then the problem is obviously related to the throughput of a single, low queue depth sequential read and not some issue with hardware or the network. Due to this issue it was very difficult to get our older SATA EqualLogic arrays much past 40-50MB/sec "per-job", but we could run three jobs and get 120-150MB/sec fairly easily, using 2 load-balanced 1Gb iSCSI adapters.
SAN/iQ versions prior to 8.5 were known to be fairly poor performers when it came to low queue depth sequential reads. Versions from 8.5 and later had tweaks to improve the performance of this scenario, but only by 10-25%.
http://h30507.www3.hp.com/t5/Around-the ... ba-p/80710
-
- Novice
- Posts: 9
- Liked: never
- Joined: Oct 25, 2011 4:48 pm
- Full Name: Trey
- Contact:
Re: Veeam Backup Server iSCSI speed issues
I am running two jobs now on my backup server, and it is still doing the same thing: 3 MB/s and 7 MB/s.
-
- Chief Product Officer
- Posts: 31814
- Liked: 7302 times
- Joined: Jan 01, 2006 1:01 am
- Location: Baar, Switzerland
- Contact:
Re: Veeam Backup Server iSCSI speed issues
Something is seriously wrong with the environment. The worst storage I have is a software iSCSI target built on a very old PC located a few switches away from my desktop - even that always pushes at least 22 MB/s. Less than 10 MB/s with a proper iSCSI SAN sounds like a failover to a 100Mb link somewhere in the iSCSI network, or something similar.
-
- VP, Product Management
- Posts: 6035
- Liked: 2860 times
- Joined: Jun 05, 2009 12:57 pm
- Full Name: Tom Sightler
- Contact:
Re: Veeam Backup Server iSCSI speed issues
thavener wrote: I am running two jobs now on my backup server, and it is still doing the same thing: 3 MB/s and 7 MB/s.
Your issues are significantly worse than the user that started this thread. 3-7 MB/s are both horribly bad. I get better speeds than that in my laptop lab, which is reading and writing to the same laptop hard drive using the Microsoft iSCSI Target (modified to install on Windows 7). I would think there almost has to be some type of duplex mismatch or negotiation issue in your environment.
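For a quick way to spot a negotiation problem from the backup server itself (Server 2008 R2 ships with PowerShell 2.0), something along these lines should list the speed each enabled NIC actually negotiated - a value of 100000000 (100Mb) instead of 1000000000 (1Gb) would point straight at a bad link:
# List enabled adapters and their negotiated speed in bits per second
Get-WmiObject Win32_NetworkAdapter | Where-Object { $_.NetEnabled } | Select-Object Name, Speed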
-
- Influencer
- Posts: 16
- Liked: never
- Joined: Jan 07, 2011 8:50 pm
- Full Name: Jeromy Hensley
- Contact:
Re: Veeam Backup Server iSCSI speed issues
***Update***
Opened a ticket with Veeam support and they sent me an excerpt from one of their best practices (see below).
Source of the issue is the write speed for the local storage on the Veeam Backup Server (HP DL120: Xeon X3460 @ 2.8GHz, 8 gig of RAM, 4 x 2TB RAID5 local SATA drives). Ended up creating a 1 TB test LUN on our Lefthand cluster (15k SAS drives) and went from 27-32 MB/s on local storage to 130 MB/s on the Lefthand.
Veeam support wrote:
Symptom: Backup and/or replication rates are slower than typical. Tips apply to situations where basic infrastructure issues have been ruled out.
Problem: Backup and/or replication rates are slower than typical in version 5.x of Veeam Backup & Replication.
Cause: These tips address a variety of causes.
Solution: There are many reasons that jobs may not have the performance you desire.
*Select the best job type.
Job type affects how we read data and is declared at the job, not global, level.
If the Backup and Replication computer has access to SAN fabric, select SAN mode.
If the backup server is a VM you may want to try changing the job to "Virtual Appliance" mode. This mode avoids reading from the network to retrieve data. However, the host that is running the Backup & Replication VM must be able to see the source VM's VMDKs for this option to work.
If none of the above options can be used, select Network mode.
*Check compression level and block size.
If you increase the compression and block size you will be making a tradeoff between CPU cycles and amount of data to push to the target location. You can increase the compression level by:
1. Open the Backup & Replication console
2. Open the jobs view
3. Right click on the slow job and select properties
4. Click next three times until you reach the "Backup Destination" screen
5. Click on the "Advanced” button
6. Open the "storage" tab
7. Select a different compression level as well as a different block size by changing the "storage: optimize for" value. While here, make sure the "Enable inline deduplication" checkbox is selected
8. Press “ok” and “next” through the rest of the properties
*Prevent bulky, unneeded OS files from being backed up.
If you are using Windows you can move the computer's page file to a separate VMDK dedicated to the page file. That separate disk can then be excluded from the job, preventing it from being backed up. Another thing you may want to try is disabling hibernation (see the one-liner below) and restarting the VM to flush the hiberfil.sys file and save disk space equal to however much RAM that VM has.
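For the hibernation part, assuming the guest runs Windows Vista/2008 or later, the usual command is run from an elevated prompt inside the guest, followed by a restart:
# Turn off hibernation so Windows deletes hiberfil.sys and frees space equal to the VM's RAM
powercfg -h off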
*Defragment the VM from within the operating system, and then run sdelete on the OS.
You can find sdelete here:
http://technet.microsoft.com/en-us/sysi ... 97443.aspx
Sdelete works by writing zeros to the disk where "deleted" data resides. In order for this tip to have an effect on job speeds, inline deduplication needs to be enabled (inline deduplication will not copy zeroed blocks over the network).
The command you want to run would be:
sdelete -c DIRECTORY
where DIRECTORY is a drive letter or a folder that has data removed from it frequently.
After running sdelete you can perform a full backup and track the speed of the backup. Incremental and full backups should benefit greatly from these actions, especially if you have a heavily fragmented disk. If you do not do a full backup after running sdelete you will have an extremely large transfer (we would be tracking the unused "1"s that have been changed to "0"s), but then subsequent backups should be faster.
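As a concrete example (run inside the guest OS, with SDelete downloaded from the Sysinternals page above), zeroing the free space on the C: drive looks like this:
sdelete -c c:
One caveat: switch behavior has changed between SDelete releases, so check "sdelete /?" on the version you download - in some newer versions zeroing free space is a separate -z switch.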
*If the performance is still not up to your expectations, the bottleneck test below can be used to help isolate where the slowdown is occurring.
First, disable change block tracking under the vSphere tab in the advanced options of a job (this is not required if you are running a VI3 environment). It is best to choose a small guest so the test is faster and more accurate. After that, run a full backup, and then immediately after the full backup has completed, start the job once more to perform an incremental backup.
Monitor the data throughput/performance in the summary of the job during both the full and incremental backups. If the speeds are about the same for both runs, then data retrieval from your datastores is likely the bottleneck. If the incremental is significantly faster than the full, then write speed is the likely culprit.
Also, if using a SAN-based data retrieval over fiber via VCB or vStorage API, make sure that your HBA drivers are up to date.
More Information
Check the network connection to the destination storage/host.
Try uploading a file to the target datastore using both FastSCP (the directory browser in Backup and Replication) and the vSphere datastore browser. Based on how large the file is and how long it takes with each program, you can see whether there is a network bottleneck. If you are going over a WAN connection, try uploading both across the WAN and from the destination's local LAN.
Also, monitor the performance of the destination host during a job. See if RAM, CPU, or the network connection on the host is being heavily used.
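If the destination happens to be a plain Windows share rather than a datastore, a rough way to put a number on the copy (the paths below are placeholders) is:
$src = "C:\temp\testfile.iso"                 # any large existing file
$dst = "\\backup-target\share\testfile.iso"   # placeholder destination path
$sizeMB = (Get-Item $src).Length / 1MB
$elapsed = (Measure-Command { Copy-Item $src $dst -Force }).TotalSeconds
"{0:N1} MB/s" -f ($sizeMB / $elapsed)
For uploads to a VMFS datastore you would still use FastSCP or the datastore browser as described above and simply time the transfer by hand.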
---Major Difference!
The question I raise now is what can I do to improve my write speeds but without using costly "Enterprise" level storage such as the Lefthand?
-
- Chief Product Officer
- Posts: 31814
- Liked: 7302 times
- Joined: Jan 01, 2006 1:01 am
- Location: Baar, Switzerland
- Contact:
Re: Veeam Backup Server iSCSI speed issues
Check your RAID controller settings, if you use one (write-back vs. write-through)... I'd say the controller is the only thing left to blame here, because a single "green" (5400rpm) 2TB consumer-grade hard drive in my home desktop does 90MB/s sequential writes (just checked).
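If you want to put a rough number on the repository's sequential write speed without installing extra tools, a PowerShell sketch along these lines works - the path is a placeholder, and Windows caching can still flatter the result a bit, so treat it as a ballpark only:
$path = "E:\Backups\writetest.tmp"    # placeholder path on the repository volume
$blockMB = 4; $totalMB = 4096         # write 4 GB in 4 MB chunks
$buffer = New-Object byte[] ($blockMB * 1MB)
$sw = [System.Diagnostics.Stopwatch]::StartNew()
$fs = [System.IO.File]::Create($path)
for ($i = 0; $i -lt ($totalMB / $blockMB); $i++) { $fs.Write($buffer, 0, $buffer.Length) }
$fs.Flush(); $fs.Close(); $sw.Stop()
"{0:N0} MB/s" -f ($totalMB / $sw.Elapsed.TotalSeconds)
Remove-Item $path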
-
- VP, Product Management
- Posts: 6035
- Liked: 2860 times
- Joined: Jun 05, 2009 12:57 pm
- Full Name: Tom Sightler
- Contact:
Re: Veeam Backup Server iSCSI speed issues
I agree with Anton; you really have to focus on the RAID controller. If you don't have a battery-backed RAID with write-back then write performance on RAID5 will be pretty abysmal, especially if you're not using a good RAID controller with a stripe size that makes sense. If you don't have write-back caching then you'll definitely want to disable "last access time" updates for NTFS.
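For reference, disabling the last-access-time updates is a one-liner from an elevated prompt. It is a system-wide NTFS setting, and a reboot is recommended for it to fully take effect:
# Stop NTFS from updating last-access timestamps on every read
fsutil behavior set disablelastaccess 1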
-
- Influencer
- Posts: 16
- Liked: never
- Joined: Jan 07, 2011 8:50 pm
- Full Name: Jeromy Hensley
- Contact:
Re: Veeam Backup Server iSCSI speed issues
Anton and Tom thanks for the insight.
Here's the controller card that I'm using in the server (HP Smart Array P212, http://h18000.www1.hp.com/products/serv ... index.html).
I'll take a closer look at my controller configs and let you know what I find out.
-
- VP, Product Management
- Posts: 6035
- Liked: 2860 times
- Joined: Jun 05, 2009 12:57 pm
- Full Name: Tom Sightler
- Contact:
Re: Veeam Backup Server iSCSI speed issues
Right, so the 256MB BBWC (battery backed write cache) is a required option for RAID5. Verify that you have this option installed and that it is enabled and configured for write-back.
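If the HP Array Configuration Utility CLI (hpacucli) is installed on the backup server, something along these lines shows whether the cache module is present and how it is configured - exact command names and output fields vary a bit between ACU versions, so take this as a rough guide:
# Dump the controller, cache and logical drive configuration
hpacucli ctrl all show config detail
# Look for lines like "Cache Board Present", "Total Cache Size" and "Accelerator Ratio"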
-
- Influencer
- Posts: 16
- Liked: never
- Joined: Jan 07, 2011 8:50 pm
- Full Name: Jeromy Hensley
- Contact:
Re: Veeam Backup Server iSCSI speed issues
tsightler wrote: Right, so the 256MB BBWC (battery backed write cache) is a required option for RAID5. Verify that you have this option installed and that it is enabled and configured for write-back.
Thanks Tom. I checked the controller settings and found the write cache was not enabled. I can adjust the "array accelerator (cache) ratio", and currently I have it set to 0% Read and 100% Write.
I have the "array accelerator" enabled on the local drive the VBKs are being saved to. Currently testing things out right now so I'll keep you updated.
Any other recommendations?
-
- VP, Product Management
- Posts: 6035
- Liked: 2860 times
- Joined: Jun 05, 2009 12:57 pm
- Full Name: Tom Sightler
- Contact:
Re: Veeam Backup Server iSCSI speed issues
It's really hard to tell as different controllers behave differently. I'd probably suggest leaving at least some small amount to the read cache, but it's up to you. Setting read cache to 0% can cause low performance during restores, especially file level restores, at least on some controllers. Also, assuming you're using reverse incremental backups, or performing synthetic fulls, having 0% read cache could potentially slow performance of those operations significantly. Everything in life is a balance.
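If you end up adjusting the ratio from the command line rather than from ORCA or the ACU GUI, hpacucli can also set it - again, the exact syntax may differ slightly between versions, and the 25/75 split below is only an example of leaving some read cache:
# Give 25% of the cache to reads and 75% to writes on the controller in slot 0
hpacucli ctrl slot=0 modify cacheratio=25/75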
-
- Influencer
- Posts: 16
- Liked: never
- Joined: Jan 07, 2011 8:50 pm
- Full Name: Jeromy Hensley
- Contact:
Re: Veeam Backup Server iSCSI speed issues
Thanks again for the feedback Tom. I've discovered that a 50-50 Read/Write balance is probably the best way to go right now.
Right now I'm testing different combinations of the array accelerator to see what works best (enabled on both OS and Veeam backup partitions, etc.)
After that it's on to tweaking the NIC settings. I'll keep you guys up to date.
-
- Veteran
- Posts: 391
- Liked: 39 times
- Joined: Jun 08, 2010 2:01 pm
- Full Name: Joerg Riether
- Contact:
Re: Veeam Backup Server iSCSI speed issues
...did a little research: HP calls this controller an "entry level product"; it's the cheapest RAID controller I found in their portfolio.
Could you afford the version with 1 gig of cache? I found it here: http://h18000.www1.hp.com/products/serv ... index.html
But you also have to check the PCIe port and the speed of the slot the card sits in. I have seen funny things in the past, such as controllers that need x16 being put in x8 slots or even worse - that point also has to be taken care of. As this is extremely important when using dual-port 10Gb NICs (Intel X520-DA2, for example), it is even more important for your high-performance local RAID system.
best regards,
Joerg
-
- Influencer
- Posts: 16
- Liked: never
- Joined: Jan 07, 2011 8:50 pm
- Full Name: Jeromy Hensley
- Contact:
Re: Veeam Backup Server iSCSI speed issues
Thanks for the research Joerg! I'll keep you posted.
Yeah... the server that we're currently using for processing and storing our Veeam backups is kind of a "beginner" server, since we just deployed Veeam a few months ago. We're still figuring out what requirements need to be met for the long term with Veeam. When version 6 comes out (which I can't wait for), it will add more things (proxy and repository servers, etc.) to take into consideration for our backup design.
The whole journey of this thread has been trying to figure out the ballpark performance numbers we should expect out of Veeam B&R. We all know that a direct SAN connection with a dedicated server offers the best performance. However, having some basic performance guidelines or cues would let us, "the customer", know whether we're headed in the right direction when trying to figure this stuff out. The excerpt from Veeam tech support is great stuff and is something that should be included in the Veeam B&R resources section, along with any other juicy tidbits you can offer.
All of you guys (joergr, tom, gostev, rickvanover, vmdoug) are doing a great job! I love how Veeam has employees that take part in their user community. It's one of the major reasons why Veeam as a company is doing so well and plus you guys make a KICK ASS product!
-
- Veeam Vanguard
- Posts: 238
- Liked: 55 times
- Joined: Nov 11, 2010 11:53 am
- Full Name: Ian Sanderson
- Location: UK
- Contact:
Re: Veeam Backup Server iSCSI speed issues
Not the same setup, but just to give some insight into the speeds from the last Veeam install I set up.
Veeam server: Repurposed Dell PE 2950, quad core CPU, 16GB ram
Backup target: Iomega PX4 in raid 5
This setup has two 1Gb NICs configured for MPIO in the server. I was seeing 60MB/s on 3 concurrent jobs with the CPU hitting 95%. That was 60MB/s per job, not in total. Network utilisation was hitting 90% on both NICs.
Ian
Check out my blog at www.snurf.co.uk
-
- Influencer
- Posts: 16
- Liked: never
- Joined: Jan 07, 2011 8:50 pm
- Full Name: Jeromy Hensley
- Contact:
Re: Veeam Backup Server iSCSI speed issues
*Quick Update*
joergr wrote: ...did a little research: HP calls this controller an "entry level product"; it's the cheapest RAID controller I found in their portfolio. Could you afford the version with 1 gig of cache? I found it here: http://h18000.www1.hp.com/products/serv ... index.html
Ordered a Smart Array P812 controller for the server. It should be in within the next couple of weeks, so I'll send out another update after it's been installed to let you know how much it improved things.