How does vPower NFS (instant recovery) scale with backup storage performance?
If I had the latest 6-7 days of backups on SSD storage, and moved them to SATA after that, what kind of performance could I expect when restoring/powering on 15-20 terminal servers (Server 2008 R2, each with 10-15 users under medium-heavy usage)?
The server would be an existing dual 8-core Opteron server with 16 SSDs in RAID 10 (LSI 9265-8i + SAS2 expander), and the secondary storage on the same server would be the same type of controller + 16 SATA drives (different SAS expander).
I guess the main question is: What would be the primary bottleneck on FAST SSD + 16-core Opteron + 10G network?
-
- Service Provider
- Posts: 77
- Liked: 15 times
- Joined: Jun 03, 2009 7:45 am
- Full Name: Lars O Helle
- Contact:
-
- Chief Product Officer
- Posts: 31793
- Liked: 7295 times
- Joined: Jan 01, 2006 1:01 am
- Location: Baar, Switzerland
- Contact:
Re: vPower NFS performance with ssd (or high-end SAS-raid)
lohelle wrote: I guess the main question is: What would be the primary bottleneck on FAST SSD + 16-core Opteron + 10G network?
That is some nice hardware there, I tell you! Would love to hear what performance and experience you will get! My guess is that the bottleneck will be either the vPower data processing engine, or the network latency... hard to say!
By the way, if you are running v6, you may want to get the vPower NFS hotfix from our support. Apparently, the v6 NFS service logs so verbosely that it may even affect its performance.
-
- VP, Product Management
- Posts: 6035
- Liked: 2860 times
- Joined: Jun 05, 2009 12:57 pm
- Full Name: Tom Sightler
- Contact:
Re: vPower NFS performance with ssd (or high-end SAS-raid)
For Instant Recovery you would want to make sure that your vPower NFS cache is also on the SSDs. For SureBackup, you'd want to use a VMFS datastore that's on the SSDs. I suspect you may hit some limits to vPower NFS scalability, as my experience indicates that it doesn't make very good use of multiple processors. That being said, I think that performance in this circumstance could still be pretty good. While I normally recommend only 4-5 servers running from vPower NFS, when backed by fast storage I've managed to see decent performance with 10-12 VMs, and that was just fast SAS disk. I'd suspect SSDs would be even better.
As Anton mentioned, latency would likely also play a part. The work vPower NFS does to read a compressed/deduplicated block is significant. How much I/O do your TS systems perform once they're booted?
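Tom's question about steady-state I/O is the key input for sizing. A minimal back-of-envelope sketch, assuming a per-VM IOPS figure purely for illustration (the thread gives no measured numbers):

```python
def aggregate_read_iops(num_vms, iops_per_vm):
    """Total I/O per second the vPower NFS service would have to
    serve for num_vms instantly-recovered VMs running concurrently."""
    return num_vms * iops_per_vm

# Assumption: each Server 2008 R2 terminal server settles at roughly
# 100 IOPS once booted; real medium-heavy RDS load varies widely.
low = aggregate_read_iops(15, 100)   # 1500 IOPS
high = aggregate_read_iops(20, 100)  # 2000 IOPS
```

Even at these modest assumed rates, every read that misses the cache costs a decompress/rehydrate step in the vPower engine, which is why CPU and latency, not raw disk speed, tend to dominate.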
-
- Service Provider
- Posts: 77
- Liked: 15 times
- Joined: Jun 03, 2009 7:45 am
- Full Name: Lars O Helle
- Contact:
Re: vPower NFS performance with ssd (or high-end SAS-raid)
Thanks for the replies.
The storage is actually ordered as a node for a StarWind iSCSI SAN (HA mode, 36-disk Supermicro chassis + expanders), but I want to test it as a Veeam backup server first.
If the performance is great (SSD for NFS/backups) using Instant Recovery, we might use a similar server for Veeam.
I guess I just have to test!
-
- Veeam Software
- Posts: 67
- Liked: 3 times
- Joined: Oct 18, 2011 3:47 am
- Full Name: Nelson Simao
- Contact:
Re: vPower NFS performance with ssd (or high-end SAS-raid)
tsightler wrote: For Instant Recovery you would want to make sure that your vPower NFS cache is also on the SSDs. For SureBackup, you'd want to use a VMFS datastore that's on the SSDs. I suspect you may hit some limits to vPower NFS scalability, as my experience indicates that it doesn't make very good use of multiple processors. That being said, I think that performance in this circumstance could still be pretty good. While I normally recommend only 4-5 servers running from vPower NFS, when backed by fast storage I've managed to see decent performance with 10-12 VMs, and that was just fast SAS disk. I'd suspect SSDs would be even better.
As Anton mentioned, latency would likely also play a part. The work vPower NFS does to read a compressed/deduplicated block is significant. How much I/O do your TS systems perform once they're booted?
Great post Tom. I'm interested in more information on this, as it might help with a potential larger customer at the moment who wants to understand the scalability requirements for powering on 1000 VMs (20% would be large file servers in the range of 2-3 TB; the remaining 80% average 100 GB per VM). I know it's really hard to throw numbers around, and I was hoping they would at least test it a bit further in their environment, but they want some guidance at this stage.
All their backup storage would be SSD, and they want an idea of the architectural recommendations needed to meet this scale of recovery.
We have proposed two different solutions, vPower and replicas, and they are keen on vPower for the moment due to the savings on storage, but of course a combination of both is most likely the best approach.
My question is, are there any benchmark figures I could provide them on the number of VMs that they could power on using a single vPower NFS server?
From your post Tom, it sounds like 15 or more VMs would not be unrealistic, but of course hard to say without actual testing.
If I go back to them with the number of vPower NFS servers and repository resources they would need, I think that's what they ultimately want to know.
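Taking Tom's 10-12 concurrent VMs per vPower NFS server as a working assumption (fast SAS disk; SSD may do better), a rough sketch of the server count for 1000 VMs would look like this. The per-server figure is from this thread's anecdotal experience, not an official benchmark:

```python
import math

def vpower_servers_needed(total_vms, vms_per_server):
    """Number of vPower NFS servers needed to power on total_vms
    concurrently, given an assumed per-server VM limit."""
    return math.ceil(total_vms / vms_per_server)

conservative = vpower_servers_needed(1000, 10)  # 100 servers
optimistic = vpower_servers_needed(1000, 12)    # 84 servers
```

Either way the count is large, which supports the point that a combination of Instant Recovery and replicas, rather than vPower alone, is likely the more practical design at this scale.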