Discussions specific to the VMware vSphere hypervisor
Post Reply
lohelle
Service Provider
Posts: 77
Liked: 15 times
Joined: Jun 03, 2009 7:45 am
Full Name: Lars O Helle
Contact:

vPower NFS performance with ssd (or high-end SAS-raid)

Post by lohelle » Jan 15, 2012 4:08 pm

How does vPower NFS (instant recovery) scale with backup storage performance?

If I had the latest 6-7 days of backups on SSD storage, and moved them to SATA after that, what kind of performance could I expect when restoring/powering on 15-20 terminal servers (Server 2008 R2, each with 10-15 users of medium-heavy usage)?

The server would be an existing dual 8-core Opteron server with 16 SSDs in RAID 10 (LSI 9265-8i + SAS2 expander), and the secondary storage on the same server would be the same type of controller + 16 SATA drives (different SAS expander).

I guess the main question is: What would be the primary bottleneck on FAST SSD + 16-core Opteron + 10G network?
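For context, here is a rough back-of-envelope for the load side of that question: the aggregate IOPS 15-20 terminal servers might push at the vPower NFS layer. The per-user IOPS and boot-storm multiplier below are assumptions for illustration, not measured values.

```python
# Hypothetical sketch: estimate aggregate IOPS from 15-20 restored TS VMs.
# per-user IOPS and the boot-storm multiplier are assumed figures.

def aggregate_iops(vm_count, users_per_vm, iops_per_user, boot_multiplier=1.0):
    """Steady-state (or boot-storm) IOPS across all restored VMs."""
    return vm_count * users_per_vm * iops_per_user * boot_multiplier

# Assumed: 10-15 "medium" users per TS at ~10 IOPS each.
steady = aggregate_iops(vm_count=20, users_per_vm=15, iops_per_user=10)
boot = aggregate_iops(vm_count=20, users_per_vm=15, iops_per_user=10,
                      boot_multiplier=3.0)  # boot storms often run several x
print(steady, boot)  # 3000.0 9000.0
```

Even the boot-storm figure is far below what a 16-SSD RAID 10 can deliver raw, which is why the question about the software layer (vPower engine, network latency) matters more than the disks here.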

Gostev
SVP, Product Management
Posts: 24818
Liked: 3574 times
Joined: Jan 01, 2006 1:01 am
Location: Baar, Switzerland
Contact:

Re: vPower NFS performance with ssd (or high-end SAS-raid)

Post by Gostev » Jan 15, 2012 10:55 pm

lohelle wrote:I guess the main question is: What would be the primary bottleneck on FAST SSD + 16-core Opteron + 10G network?
That is some nice hardware there, I tell you! Would love to hear what performance and experience you get! My guess is that the bottleneck will be either the vPower data processing engine, or the network latency... hard to say!

By the way, if you are running v6, you may want to get the vPower NFS hotfix from our support. Apparently, the v6 NFS service is so chatty in its logging that it may even affect its performance.

tsightler
VP, Product Management
Posts: 5425
Liked: 2246 times
Joined: Jun 05, 2009 12:57 pm
Full Name: Tom Sightler
Contact:

Re: vPower NFS performance with ssd (or high-end SAS-raid)

Post by tsightler » Jan 15, 2012 11:03 pm

For Instant Restore you would want to make sure that your vPower NFS cache is also on the SSDs. For SureBackup, you'd want to use a VMFS datastore that's on the SSDs. I suspect you may hit some limits to vPower NFS scalability, as my experience indicates that it doesn't make very good use of multiple processors. That being said, I think that performance in this circumstance could still be pretty good. While I normally recommend only 4-5 servers running from vPower NFS, when backed by fast storage I've managed to see decent performance with 10-12 VMs, and that was just fast SAS disk. I'd suspect SSDs would be even better.

As Anton mentioned, latency would likely also play a part. The work vPower NFS does to read a compressed/deduplicated block is significant. How much I/O do your TS systems perform once they're booted?
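The point above can be sketched as a simple pipeline model: a read through vPower NFS pays both the storage latency and a CPU cost to locate and decompress each block, so the effective rate is capped by whichever stage saturates first. All figures below are assumptions for illustration.

```python
# Hedged sketch: effective read rate through a decompress-on-read layer is
# bounded by the slower of the storage and the CPU-side block processing.
# Both numbers below are assumed, not measured.

def effective_read_iops(storage_iops, blocks_per_sec_decompress):
    # Whichever stage saturates first caps the whole pipeline.
    return min(storage_iops, blocks_per_sec_decompress)

# Assumed: SSD array good for ~100k random read IOPS, but a mostly
# single-threaded decompression path handling ~8k blocks/s wins.
print(effective_read_iops(100_000, 8_000))  # 8000
```

This is why faster disks stop helping past a certain point: once the CPU-bound block processing is the limit, only the software layer (or more vPower NFS servers) moves the ceiling.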

lohelle
Service Provider
Posts: 77
Liked: 15 times
Joined: Jun 03, 2009 7:45 am
Full Name: Lars O Helle
Contact:

Re: vPower NFS performance with ssd (or high-end SAS-raid)

Post by lohelle » Jan 16, 2012 6:16 pm

Thanks for the replies.

The storage is actually ordered as a node for a StarWind iSCSI SAN (HA mode, 36-disk Supermicro chassis + expanders), but I want to test it as a Veeam backup server first.
If Instant Recovery performance is great (SSD for NFS/backups), we might use a similar server for Veeam.

I guess I just have to test! :)

nsimao
Veeam Software
Posts: 40
Liked: 2 times
Joined: Oct 18, 2011 3:47 am
Full Name: Nelson Simao
Contact:

Re: vPower NFS performance with ssd (or high-end SAS-raid)

Post by nsimao » Mar 19, 2014 11:17 pm

tsightler wrote:For Instant Restore you would want to make sure that your vPower NFS cache is also on the SSDs. For SureBackup, you'd want to use a VMFS datastore that's on the SSDs. I suspect you may hit some limits to vPower NFS scalability, as my experience indicates that it doesn't make very good use of multiple processors. That being said, I think that performance in this circumstance could still be pretty good. While I normally recommend only 4-5 servers running from vPower NFS, when backed by fast storage I've managed to see decent performance with 10-12 VMs, and that was just fast SAS disk. I'd suspect SSDs would be even better.

As Anton mentioned, latency would likely also play a part. The work vPower NFS does to read a compressed/deduplicated block is significant. How much I/O do your TS systems perform once they're booted?
Great post Tom, interested in more information on this that might help with a potential larger customer at the moment who wants to understand the scalability requirements for powering on 1000 VMs (20% would be large file servers in the range of 2-3TB, the remaining 80% on average 100GB per VM). I know it's really hard to throw numbers around, and I was hoping they would at least test it a bit further in their environment, but they want some guidance at this stage.

All their backup storage would be SSD, and they want an idea of the architectural recommendation to meet this scale of recovery.

We have proposed two different solutions, vPower and replicas, and they are keen on vPower for the moment due to the savings on storage, but of course a combination of both is most likely the best approach.

My question is, are there any benchmark figures I could provide them on the number of VMs that they could power on using a single vPower NFS server?

From your post Tom, it sounds like 15 or more VMs would not be unrealistic, but of course hard to say without actual testing.

If I go back to them with the number of vPower NFS servers and repository resources they would need, I think that's what they ultimately want to know.
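To make the scale of that ask concrete, here is illustrative sizing arithmetic for the 1000-VM scenario. The VM counts and sizes come from the post above; the per-server VM ceiling is an assumption based on Tom's 10-12 VM experience, not a Veeam-stated limit.

```python
# Illustrative sizing sketch for the 1000-VM scenario described above.
# The 12-VM-per-server ceiling is an assumed figure, not official guidance.

large_vms, large_avg_tb = 200, 2.5   # 20% file servers, 2-3 TB each (midpoint)
small_vms, small_avg_tb = 800, 0.1   # 80% at ~100 GB each
vms_per_nfs_server = 12              # assumed ceiling per vPower NFS server

total_source_tb = large_vms * large_avg_tb + small_vms * small_avg_tb
nfs_servers = -(-(large_vms + small_vms) // vms_per_nfs_server)  # ceil division

print(total_source_tb)  # 580.0
print(nfs_servers)      # 84
```

Powering on all 1000 VMs concurrently would need on the order of 80+ vPower NFS servers under this assumption, which suggests either staged recovery waves or the proposed combination with replicas for the largest/most critical machines.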
