- Influencer
- Posts: 19
- Liked: 1 time
- Joined: Jan 27, 2012 11:37 am
- Full Name: Russell Watkins
Performance issues over Fibre Channel
Hi all,
I wondered if anyone had any guidance on a performance issue I am having.
I have a physical Veeam server (Windows 2008 R2) - an HP DL370, dual quad-core, with 40 GB RAM and a dual-port 8 Gb FC HBA. This connects via two Brocade SAN switches to our production ESX 4 blade systems (4 x BL460c, dual quad-core, 32 GB memory each). For storage we use an FC-attached HP EVA4400 populated with FC drives.
I logged a support call (ID# 5168877) for an issue with the system routing backups over the network as opposed to Fibre Channel. The backups were running at about 35 MB/s and caused our main file server to grind to a halt in the morning when people started logging in!
This issue was solved last night by applying the latest patch to our Veeam backup server, and it now backs up over Fibre Channel. Last night I got a peak of 115 MB/s and an average of around 50-80 MB/s on the backup of a few of our production servers, which are sized around the 40 GB mark. However, when it came to our file server it still ran at 30 MB/s, even though it was now going over FC and not the network. The storage for our file server is the same EVA4400 we use for everything else. I did notice one or two of our other VMs were also backing up rather slowly as well - around 12 MB/s. There was little work going on at the office and minor load on the network.
I'm new to Veeam and wondered if anyone had any guidance as to what could be causing this? I initially put the file server backup performance down to the fact it was running over the network, but the system seemed to grind to a halt even when it was backing up over FC, resulting in a rather irate management team!
Many thanks
Russell
- VP, Product Management
- Posts: 27325
- Liked: 2778 times
- Joined: Mar 30, 2009 9:13 am
- Full Name: Vitaliy Safarov
Re: Performance issues over Fibre Channel
russwatkins wrote: However when it came to our file server it still ran at 30 MB/s even though it was now going over FC and not the network. The storage for our file server is the same EVA4400 we use for everything else. I did notice one or two of our other VMs were also backing up rather slowly as well - around 12 MB/s.
Is it the first run of the backup job? What rates do you get while running an incremental pass with CBT enabled?
By the way, what do the bottleneck statistics show for your job?
- Veeam Software
- Posts: 21128
- Liked: 2137 times
- Joined: Jul 11, 2011 10:22 am
- Full Name: Alexander Fogelson
Re: Performance issues over Fibre Channel
Also, keep in mind that varying processing speeds for different VMs are expected.
- Influencer
- Posts: 19
- Liked: 1 time
- Joined: Jan 27, 2012 11:37 am
- Full Name: Russell Watkins
Re: Performance issues over Fibre Channel
Thanks for your replies.
I can understand certain VMs taking longer than others, but the only reason I can see for FC giving the same speed as the network method on our file server is a bottleneck at the source (i.e. the EVA). Could it be a fragmentation issue? We have 15k FC drives, so they should be able to keep up with Veeam!
At current rates it will likely take nearly a day to back up our file server - is this correct?
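For a rough sense of scale - the file server size is not stated anywhere in the thread, so the 2 TB figure below is assumed purely for illustration - the transfer times at the rates mentioned work out roughly like this:

```python
# Back-of-the-envelope estimate of backup duration at a sustained throughput.
# The 2 TB size is an assumption for illustration only.
def backup_hours(size_gb, rate_mb_per_s):
    """Return the approximate hours needed to move size_gb at rate_mb_per_s."""
    return (size_gb * 1024) / rate_mb_per_s / 3600

if __name__ == "__main__":
    for rate in (30, 80, 140):  # MB/s figures mentioned in the thread
        print(f"2 TB at {rate} MB/s: ~{backup_hours(2048, rate):.1f} hours")
```

At 30 MB/s that comes out to roughly 19 hours for 2 TB, which is consistent with "nearly a day" if the file server is in that size range.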
- VP, Product Management
- Posts: 27325
- Liked: 2778 times
- Joined: Mar 30, 2009 9:13 am
- Full Name: Vitaliy Safarov
Re: Performance issues over Fibre Channel
russwatkins wrote: At current rates it will likely take nearly a day to back up our file server - is this correct?
I would say yes, as backup job performance depends on many variables such as VM size, fragmentation inside the guest OS, source and target performance, etc.
Please wait until the end of the backup job and post back your bottleneck statistics - they should show us clearly where the problem lies.
- Influencer
- Posts: 19
- Liked: 1 time
- Joined: Jan 27, 2012 11:37 am
- Full Name: Russell Watkins
Re: Performance issues over Fibre Channel
Hi,
Thank you very much for your help on this everyone.
I've looked into this problem and noticed that the automatic defragmentation job that was set up has not run for over a year - grr! Consequently, file fragmentation is up at 60%.
I'm going to run some tests on other VMs that I know have file fragmentation and see whether it makes a difference to backup speed.
- Influencer
- Posts: 19
- Liked: 1 time
- Joined: Jan 27, 2012 11:37 am
- Full Name: Russell Watkins
Re: Performance issues over Fibre Channel
Just thought I would mention that I will be going live with these backups in the next few weeks and will post an update then.
- Chief Product Officer
- Posts: 31707
- Liked: 7212 times
- Joined: Jan 01, 2006 1:01 am
- Location: Baar, Switzerland
Re: Performance issues over Fibre Channel
OK, good luck!
- Influencer
- Posts: 19
- Liked: 1 time
- Joined: Jan 27, 2012 11:37 am
- Full Name: Russell Watkins
Re: Performance issues over Fibre Channel
Ok,
I've been live with my backups now for a few weeks and thought I would share my experiences.
I couldn't get my backup throughput above 40 MB/s and was at a loss as to why. The penny finally dropped when I tried backing up to my mirrored system drive: it is set up on slow 5.4k SATA disks, yet I got a backup throughput of over 80 MB/s! Compared to the 10k SAS drives in RAID 6 that I had set up, this was quite a shock. Defragmenting the drive made a difference, but nowhere near what was expected.
The issue turned out to be the stripe size - I experimented and found that the optimal size for my system was 64 KB. I now get throughput of around 140 MB/s, which is superb!
I hope this helps others to tune their systems.
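One simple way to reproduce this kind of comparison before settling on a stripe size is to time large sequential writes against each candidate repository volume. The sketch below is only an illustration, not a Veeam tool: the drive letters and file names are hypothetical, and the 1 MB write size is chosen to roughly mirror the backup job's default block size.

```python
import os
import time

def sequential_write_mb_s(path, total_mb=4096, block_kb=1024):
    """Write total_mb of data in block_kb chunks and return throughput in MB/s.

    block_kb defaults to 1024 (1 MB) to roughly match the default backup
    block size; adjust as needed.
    """
    buf = os.urandom(block_kb * 1024)          # incompressible test data
    start = time.time()
    with open(path, "wb") as f:
        for _ in range(total_mb * 1024 // block_kb):
            f.write(buf)
        f.flush()
        os.fsync(f.fileno())                   # force the data out to disk
    elapsed = time.time() - start
    os.remove(path)                            # clean up the test file
    return total_mb / elapsed

# Example: compare two repository volumes built with different stripe sizes.
# The paths are placeholders for whichever arrays are being tested.
for target in (r"D:\stripe_test.bin", r"E:\stripe_test.bin"):
    print(target, f"{sequential_write_mb_s(target):.0f} MB/s")
```

The numbers are only rough (filesystem caching still has some effect), but a large gap between volumes usually points at the array layout rather than the backup software.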
- VP, Product Management
- Posts: 6027
- Liked: 2855 times
- Joined: Jun 05, 2009 12:57 pm
- Full Name: Tom Sightler
Re: Performance issues over Fibre Channel
Can you share with us what your stripe sizes were previously? And what performance you were seeing from it then?
- Influencer
- Posts: 19
- Liked: 1 time
- Joined: Jan 27, 2012 11:37 am
- Full Name: Russell Watkins
Re: Performance issues over Fibre Channel
I am running an HP Smart Array P800 SAS card with 1 TB SAS drives - the standard stripe size for RAID 6 is 16 KB, and this is how I initially set things up. However, I couldn't get more than 40 MB/s backup speed out of the system. After pulling my hair out I finally found the solution described above. I forgot to add that I also have the controller set at 25% read cache, 75% write cache.
The P800 supports up to a 256 KB stripe size, but anything bigger than 64 KB seemed to drop the performance back a little, although it was still better than the standard 16 KB stripe size.
- VP, Product Management
- Posts: 6027
- Liked: 2855 times
- Joined: Jun 05, 2009 12:57 pm
- Full Name: Tom Sightler
Re: Performance issues over Fibre Channel
Thanks for the excellent information. How many drives did you have in the array? Typically, larger stripe sizes lower the IOPS load on the underlying disks. For example, with a 16 KB stripe size and the Veeam default of 1 MB blocks, a single reverse incremental block will require 3 MB of I/O (this assumes no compression, which is not likely, but it is the worst case). That's 3072 KB / 16 KB, or 192 IOPS. If you have 12 drives in the RAID, that's 16 IOPS per drive. That's about 10-12% of a 10k SAS drive, and we've only moved a single Veeam block, so it won't take much to top out.
On the other hand, if you increase the stripe size to 64 KB, you can retrieve the data with one quarter of the IOPS, since each I/O moves 64 KB instead of 16 KB. I'd expect to see roughly 4x the performance from the larger stripe size, and it looks like you're pretty close to that.
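To make the arithmetic above explicit, here is the same calculation as a small sketch. The 3x I/O multiplier per reverse incremental block and the 12-drive array come from the post above and are illustrative assumptions, not measured values:

```python
# Worked version of the arithmetic in the post above.
VEEAM_BLOCK_KB = 1024   # default 1 MB backup block
IO_PER_BLOCK = 3        # reverse incremental: roughly 3x the block size in I/O
DRIVES = 12             # assumed drive count from the example

def iops_per_drive(stripe_kb):
    """I/O operations each drive must serve to process one backup block."""
    total_io_kb = VEEAM_BLOCK_KB * IO_PER_BLOCK
    return total_io_kb / stripe_kb / DRIVES

for stripe in (16, 64, 256):
    print(f"{stripe:>3} KB stripe: {iops_per_drive(stripe):.0f} IOPS per drive")
```

This prints 16 IOPS per drive for the 16 KB stripe and 4 IOPS per drive for the 64 KB stripe, which matches the roughly 4x improvement described above.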