Comprehensive data protection for all workloads
kmccubbin
Influencer
Posts: 17
Liked: never
Joined: Apr 10, 2009 7:27 pm
Full Name: Kelly McCubbin
Contact:

Windows 2008 Multipathing (mpio) support?

Post by kmccubbin »

I've been reading a lot on these boards about disabling multipathing for increased speed, and I was wondering whether that means the best practice for Veeam is to uninstall (or never install in the first place) Windows 2008's MPIO feature?
I'm trying to track down where my particular performance hit is and, while having it installed or not seems to make very little difference, I'd be happy to hear if there is an official recommendation.
Thanks.
Gostev
Chief Product Officer
Posts: 31707
Liked: 7212 times
Joined: Jan 01, 2006 1:01 am
Location: Baar, Switzerland
Contact:

Re: Windows 2008 Multipathing (mpio) support?

Post by Gostev »

Hi Kelly, VCB reportedly has issues with quite a few multipathing implementations, so yes, it is best to disable multipathing unless the software you use is explicitly mentioned in the VCB documentation as fully supported (which, as I recall, is not the case for Windows MPIO).

To track down where the performance hit is in the case of VCB, the best approach is to create a new job with a powered-off VM and perform two runs (the first will be full; the second will be incremental, but with no changes to process). Then compare the results (a rough scripting sketch follows the list):
1. First run is slow, second run is slow too: bottleneck is VCB data retrieval speed.
2. First run is slow, second run is fast: bottleneck is target storage speed.
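
A rough sketch of how one might script this comparison (the job launcher name below is a hypothetical placeholder, not an actual Veeam or VCB command, and the 0.8 cutoff is an arbitrary illustrative threshold):

```python
# Rough sketch: time two consecutive runs of the same backup job and apply
# the comparison above. The job command is a hypothetical placeholder --
# substitute your actual VCB/Veeam job invocation.
import subprocess
import time

BACKUP_CMD = ["run_backup_job.cmd", "TestVM"]  # hypothetical job launcher

def timed_run(cmd):
    """Run the backup command and return elapsed seconds."""
    start = time.monotonic()
    subprocess.run(cmd, check=True)
    return time.monotonic() - start

full_secs = timed_run(BACKUP_CMD)   # run 1: full backup
incr_secs = timed_run(BACKUP_CMD)   # run 2: incremental, no changed data

if incr_secs > 0.8 * full_secs:     # illustrative cutoff, not an official rule
    print("Both runs slow: bottleneck is VCB data retrieval speed.")
else:
    print("Second run much faster: bottleneck is target storage speed.")
```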

Hope this helps!
kmccubbin
Influencer
Posts: 17
Liked: never
Joined: Apr 10, 2009 7:27 pm
Full Name: Kelly McCubbin
Contact:

Re: Windows 2008 Multipathing (mpio) support?

Post by kmccubbin »

Thanks. That has been my testing methodology for a few days. MPIO is uninstalled and the second fiber card is physically unplugged, so Windows shows only one path.
With a relatively small VM, about 20 GB, the first backup runs at 27-37 MB/s and the second at 50-57 MB/s. This is from a Compellent storage array at its highest tier of disks. I have also tried a much larger VM (about 820 GB), and that peaks out, on the incrementals, at about 47 MB/s.
I have tried two different targets. The first is an MD1000 array attached to a Dell PERC 6/E card on the backup server. The second is local storage in the backup server.
This just seems slow to me. I think I must be missing something.
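
For scale, here is the simple arithmetic behind figures like these (a quick sanity check, not from the original post):

```python
# Quick sanity check: elapsed time implied by VM size and observed throughput.
def backup_minutes(size_gb, mb_per_sec):
    return size_gb * 1024 / mb_per_sec / 60

print(f"{backup_minutes(20, 30):.0f} min")   # 20 GB VM at ~30 MB/s: ~11 min
print(f"{backup_minutes(820, 47):.0f} min")  # 820 GB VM at 47 MB/s: ~5 hours
```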
kmccubbin
Influencer
Posts: 17
Liked: never
Joined: Apr 10, 2009 7:27 pm
Full Name: Kelly McCubbin
Contact:

Re: Windows 2008 Multipathing (mpio) support?

Post by kmccubbin »

I know some of this has been covered before, but I'm just trying to run down where the logjam is.
So, in Compellent, the options are not Write-Back and Write-Through cache; they are "Write Cache", "Read Cache" and "Read Ahead". Does anyone have experience with the best way to set these?
Also, all of the LUNs are attached to the backup server as read-only. Is that hurting my performance?
Thanks.
Gostev
Chief Product Officer
Posts: 31707
Liked: 7212 times
Joined: Jan 01, 2006 1:01 am
Location: Baar, Switzerland
Contact:

Re: Windows 2008 Multipathing (mpio) support?

Post by Gostev »

Read-only should not be an issue, since VCB only reads the data...
dlove
Influencer
Posts: 18
Liked: never
Joined: May 15, 2009 1:51 pm
Full Name: darren
Contact:

Re: Windows 2008 Multipathing (mpio) support?

Post by dlove »

kmccubbin wrote:I know some of this has been covered before, but I'm just trying to run down where the logjam is.
So, in Compellent, the options are not Write-Back and Write-Through cache; they are "Write Cache", "Read Cache" and "Read Ahead". Does anyone have experience with the best way to set these?
Also, all of the LUNs are attached to the backup server as read-only. Is that hurting my performance?
Thanks.
I would call Compellent support and ask them. In my wonderful CLARiiON world of CXs (wish I had Compellent), it only has the lame selection of enabling read and write cache along with the page size. It then allows you to spread the 8 GB of memory per SP however you would like (depends on business needs), so you can dedicate X amount to read and X amount to write. The only time I see "write-back" is when dealing with local server RAID controllers; every server I've come across has had write-back enabled on the RAID controller by default. Multipathing is supported with VCB but has a very limited list of qualified equipment. I personally don't multipath; I LUN-mask the VMware LUNs onto the proxy instead, because it's easier to control, and I prefer not to let the OS handle the decision making by disabling stuff via Device Manager...

BTW, you only want them as read-only, and I hope you are disabling the diskpart automount feature built into the OS prior to mounting your VMware LUNs!
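
For reference, `automount disable` and `automount scrub` are the standard diskpart commands for this; a minimal sketch of scripting them follows (the Python wrapper is just for illustration, and it needs an elevated prompt on the proxy):

```python
# Minimal sketch: disable automount so Windows does not assign drive letters
# to (or write signatures on) newly seen VMFS LUNs. "automount disable" and
# "automount scrub" are standard diskpart commands; "diskpart /s" runs a
# script file. Run from an elevated prompt; running diskpart interactively
# works just as well.
import os
import subprocess
import tempfile

with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
    f.write("automount disable\nautomount scrub\n")
    script_path = f.name
try:
    subprocess.run(["diskpart", "/s", script_path], check=True)
finally:
    os.unlink(script_path)
```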
kmccubbin
Influencer
Posts: 17
Liked: never
Joined: Apr 10, 2009 7:27 pm
Full Name: Kelly McCubbin
Contact:

Re: Windows 2008 Multipathing (mpio) support?

Post by kmccubbin »

I think I may have misunderstood. For some reason I thought the recommendation was to enable write-back cache on the VMware LUNs, but I gather that it's actually the target storage we're talking about. In my case, the write-back cache on the MD1000 is enabled, so that's not the issue.
So this leaves me still hanging. I followed Gostev's suggestion of running a straight VCB backup from the command line and, indeed, the results are about the same. My VM LUNs on the Compellent device are attached by 4 Gb fiber to the backup server. My targets are fast MD1000s attached to the PERC 6/i card in my PowerEdge 2950. The processor and memory are coming nowhere NEAR maxing out.

So why do my backups, full or incremental, never crest 60 MB/s?
Gostev
Chief Product Officer
Posts: 31707
Liked: 7212 times
Joined: Jan 01, 2006 1:01 am
Location: Baar, Switzerland
Contact:

Re: Windows 2008 Multipathing (mpio) support?

Post by Gostev »

Kelly, from the testing results it sounds like source storage speed is the bottleneck, not the target storage. What SAN storage are you using? Is it FC or iSCSI, and what is the link speed (FC1/2/4/8 or 1Gb iSCSI)?
tsightler
VP, Product Management
Posts: 6027
Liked: 2855 times
Joined: Jun 05, 2009 12:57 pm
Full Name: Tom Sightler
Contact:

Re: Windows 2008 Multipathing (mpio) support?

Post by tsightler »

Gostev wrote:Kelly, from the testing results it sounds like source storage speed is the bottleneck, not the target storage. What SAN storage are you using? Is it FC or iSCSI, and what is the link speed (FC1/2/4/8 or 1Gb iSCSI)?
I've heard you make this claim several times, but it just doesn't hold water. I'm lucky to get 30-40 MB/s on individual backups via iSCSI, but I can easily get 100 MB/s+ with simple file copies, and when running multiple backups simultaneously I can sustain 150 MB/s. If I were really limited by my source storage, running multiple backups wouldn't help. While I have no way to prove this, I believe VCB doesn't use a very high queue depth, and this limits performance when reading from arrays that have average latency. My guess is that arrays that either do very aggressive read-ahead or have very low read latency perform much better than arrays with average latency.
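
Back-of-the-envelope numbers support the queue-depth theory (the queue depths, I/O sizes, and latencies below are illustrative assumptions, not measurements from this thread):

```python
# With a shallow queue, sequential read throughput is roughly
# queue_depth * io_size / latency (Little's law applied to storage).
def mb_per_sec(queue_depth, io_size_kb, latency_ms):
    return queue_depth * (io_size_kb / 1024) / (latency_ms / 1000)

print(mb_per_sec(1, 64, 2.0))   # ~31 MB/s: one 64 KB read in flight, 2 ms latency
print(mb_per_sec(8, 64, 2.0))   # ~250 MB/s: a deeper queue hides the same latency
print(mb_per_sec(1, 64, 0.5))   # ~125 MB/s: a low-latency array helps even at QD1
```

Under these assumptions, a single outstanding 64 KB read against ~2 ms latency lands right around the 30-40 MB/s range reported above, which is consistent with the theory.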
Gostev
Chief Product Officer
Posts: 31707
Liked: 7212 times
Joined: Jan 01, 2006 1:01 am
Location: Baar, Switzerland
Contact:

Re: Windows 2008 Multipathing (mpio) support?

Post by Gostev »

Yes, by "source storage speed" I meant VCB data retrieval speed and not the actual SAN speed. I wanted to find our exact make and model of the SAN so that I can verify if we have received similar complaints from some other users.
tsightler
VP, Product Management
Posts: 6027
Liked: 2855 times
Joined: Jun 05, 2009 12:57 pm
Full Name: Tom Sightler
Contact:

Re: Windows 2008 Multipathing (mpio) support?

Post by tsightler »

OK, I misunderstood, sorry about that.

The VCB speed does seem to be limited by many factors; the memory bandwidth of the actual backup host appears to be a significant bottleneck as well. We are currently running our backups on a server with Intel 5160 processors, so they're fairly old, but they are 3 GHz. We recently ran a test with a newer host that uses 5450 processors, which have nearly twice the memory bandwidth, and VCB performance was nearly twice as fast. We were amazed at how much difference this made, since the backup host only indicates about 35-50% CPU usage even on the slower box, but the difference was quite significant. Just something else to consider.
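
A crude way to compare two candidate hosts on this axis (a rough relative check only; a dedicated tool such as STREAM is more rigorous):

```python
# Crude memory-bandwidth probe: time a large in-memory copy and report MB/s.
# Run the same script on each candidate backup host and compare the numbers.
import time

SIZE = 512 * 1024 * 1024        # 512 MB buffer
src = bytearray(SIZE)

start = time.monotonic()
dst = bytes(src)                # one full copy pass through memory
elapsed = time.monotonic() - start

print(f"~{SIZE / elapsed / 2**20:.0f} MB/s copy bandwidth")
```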
Gostev
Chief Product Officer
Posts: 31707
Liked: 7212 times
Joined: Jan 01, 2006 1:01 am
Location: Baar, Switzerland
Contact:

Re: Windows 2008 Multipathing (mpio) support?

Post by Gostev »

Tom, you are certainly the first to find this out... great info. Could you please clarify whether you are talking about VCB performance standalone (from the command line) or in conjunction with Veeam Backup? I assume the latter, since Veeam Backup does quite a lot of in-memory data processing. What are the exact numbers (before/after) you are seeing?
tsightler
VP, Product Management
Posts: 6027
Liked: 2855 times
Joined: Jun 05, 2009 12:57 pm
Full Name: Tom Sightler
Contact:

Re: Windows 2008 Multipathing (mpio) support?

Post by tsightler »

I will perform more testing, but in our initial test a VM which backed up consistently at ~35 MB/s on the old host ran at ~60 MB/s on the new host. Storage connectivity was the same: a QLogic iSCSI HBA connected to an EqualLogic PS400E. I just realized there was another difference between the systems that I didn't think about: one system was Windows 2003 64-bit, while the other was 32-bit, so I guess it's possible that this had something to do with it as well. We plan to perform additional testing and I'll post more info, but our initial impression is that the host can certainly have a significant impact on overall backup performance even if the CPUs are not showing 100% busy.
kmccubbin
Influencer
Posts: 17
Liked: never
Joined: Apr 10, 2009 7:27 pm
Full Name: Kelly McCubbin
Contact:

Re: Windows 2008 Multipathing (mpio) support?

Post by kmccubbin »

Just an update... I discovered that the QLogic fiber adapters we were using only ran at up to 2 Gb/s. Our fiber switches supported better, so we got new cards that go up to 4 Gb/s. That improved our speed only slightly: our best speeds went from 60 to around 80 MB/s. I then found updated drivers, which in my first few tests boosted us up another 10 MB/s or so. I'm currently testing on a really large VM to see whether the performance looks better or worse. Will let you know.
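
For context, 1/2/4/8 Gb FC uses 8b/10b encoding (roughly 10 raw bits per payload byte), so the usable ceiling works out to about the line rate in Gb/s times 100 MB/s:

```python
# Usable FC payload ceiling: 8b/10b encoding means ~10 raw bits per byte,
# so N-gigabit FC tops out around N * 100 MB/s of payload.
for gbps in (1, 2, 4, 8):
    print(f"{gbps}G FC: ~{gbps * 100} MB/s usable")
```

Worth noting: the observed 60-90 MB/s is well below even the 2 Gb ceiling (~200 MB/s), so the gains here may owe as much to the newer HBA's firmware and drivers as to the raw link rate.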