-
- Veteran
- Posts: 261
- Liked: 29 times
- Joined: May 03, 2011 12:51 pm
- Full Name: James Pearce
- Contact:
vPower NFS Performance
While restoring a number of VMs for a test environment, I noticed that the vPower NFS service (this is under v5) isn't really exercising the hardware as much as it could. Concurrently moving 3 VMs from vPower NFS to other host storage, I see an average queue depth < 1, quite a bit of IO on the system volume (SQL Server?), heavy disk cache expansion, and IO sizes averaging 42KB.
So these are really just observations. I wondered whether the NFS server implements any (configurable?) read-ahead, whether there is some limit to its concurrency (nfsd threads, in Linux terms), whether it would make sense to give preference to SQL Server RAM over the file cache, and whether the IO size could be configured to match the array stripe size.
Many thanks!
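For anyone who wants to reproduce these observations, here is a minimal sketch of how the average IO size and read throughput can be derived from the OS disk counters (Python with psutil, run on the vPower NFS server while an instant recovery is active; the 30-second sampling window and the per-disk loop are arbitrary choices, not anything Veeam-specific):

```python
# Rough sketch: derive average read IO size and read throughput on the
# machine running the vPower NFS service, using psutil's cumulative
# per-disk counters sampled over a short window.
import time
import psutil

SAMPLE_SECONDS = 30  # arbitrary window; run while an instant recovery is active

before = psutil.disk_io_counters(perdisk=True)
time.sleep(SAMPLE_SECONDS)
after = psutil.disk_io_counters(perdisk=True)

for disk, end in after.items():
    start = before.get(disk)
    if start is None:
        continue
    reads = end.read_count - start.read_count
    read_bytes = end.read_bytes - start.read_bytes
    if reads == 0:
        continue
    avg_io_kb = read_bytes / reads / 1024
    throughput_mb_s = read_bytes / SAMPLE_SECONDS / (1024 * 1024)
    print(f"{disk}: {reads} reads, avg IO size {avg_io_kb:.1f} KB, "
          f"{throughput_mb_s:.1f} MB/s read")
```

Queue depth isn't exposed this way, so for that figure Performance Monitor's "Avg. Disk Queue Length" counter is still the place to look.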
-
- Veteran
- Posts: 295
- Liked: 59 times
- Joined: Sep 06, 2011 8:45 am
- Full Name: Haris Cokovic
- Contact:
Re: vPower NFS Performance
The documentation says that vPower NFS performance is limited, but gives no numbers on how much or in what way. I'm actually in touch with a Veeam Systems Engineer to check on that and get some more information, as we plan to put 10Gbit NICs in our Veeam servers to improve performance on instant restores. We just don't know if it will make sense, given that vPower NFS is limited; this is why I'm in touch with this engineer. If I get more information I will post it here. Or maybe someone from Veeam here can enlighten us.
-
- Veteran
- Posts: 295
- Liked: 59 times
- Joined: Sep 06, 2011 8:45 am
- Full Name: Haris Cokovic
- Contact:
Re: vPower NFS Performance
It seems I may have misunderstood something in the documentation, as the answer from the engineer is that there is no known performance limitation on vPower NFS. But I'm still waiting on the final answer and will report back here when I get it.
-
- Veeam Software
- Posts: 21139
- Liked: 2141 times
- Joined: Jul 11, 2011 10:22 am
- Full Name: Alexander Fogelson
- Contact:
Re: vPower NFS Performance
What is meant in the documentation is that vPower NFS is limited from the storage I/O perspective (no exact numbers, as they would vary for different storage). So putting in faster NICs will not help to improve IR; you should rather consider using faster disks for that.
vPower NFS does not allow, and was not designed for, powering on dozens of VMs at a time, but only the most critical ones (the realistic number of VMs running decently from vPower NFS in most cases does not exceed 10). As we like to say, vPower NFS is like a "spare tire" in case of disaster, not a basis for your DR strategy.
Some more considerations in these threads: vPower NFS performance with ssd (or high-end SAS-raid) and Planning my deployement? Any advice, some questions.
-
- VP, Product Management
- Posts: 6035
- Liked: 2860 times
- Joined: Jun 05, 2009 12:57 pm
- Full Name: Tom Sightler
- Contact:
Re: vPower NFS Performance
Alexander is correct: vPower NFS is largely limited by storage I/O. However, it is also not multi-threaded, in the sense that it will top out at a single CPU even when multiple CPUs are available. vPower NFS was not designed to be a high-performance NFS server, but rather an acceptable-performance solution that allows you to restore a VM quickly when a system is down. Of course, as with almost any technology, you can always stretch it beyond its design, but at some point it will break.
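A quick way to see that single-CPU ceiling for yourself is to watch the service process during an instant recovery. A rough sketch (the process-name filter is an assumption; check the actual executable name of the vPower NFS service on your Veeam server):

```python
# Sketch: watch whether a process is pegged at roughly one core.
# The process-name filter is an assumption; adjust it to match the actual
# vPower NFS service executable on your Veeam server.
import time
import psutil

NAME_HINT = "nfs"  # assumed substring of the service's process name

procs = [p for p in psutil.process_iter(["name"])
         if NAME_HINT in (p.info["name"] or "").lower()]

if not procs:
    raise SystemExit("No matching process found; adjust NAME_HINT")

cores = psutil.cpu_count(logical=True)
for p in procs:
    p.cpu_percent(None)  # prime the counter

while True:
    time.sleep(5)
    for p in procs:
        total = p.cpu_percent(None)  # percentage of one core, can exceed 100
        print(f"{p.info['name']} (pid {p.pid}): {total:.0f}% of one core, "
              f"{total / cores:.0f}% of the whole box ({cores} logical CPUs)")
```

If the figure hovers around 100% of one core while the other cores sit idle, you are seeing the behaviour described above.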
-
- Veteran
- Posts: 295
- Liked: 59 times
- Joined: Sep 06, 2011 8:45 am
- Full Name: Haris Cokovic
- Contact:
Re: vPower NFS Performance
Ah, OK. Thanks for the info, guys. Anyway, we will put a 10Gbit NIC into our development Veeam server for testing purposes.
Also, my goal was never to Instant Restore as many VMs as possible at a time. I'm just trying to improve restore times of large (or VERY large) VMs.
-
- Service Provider
- Posts: 14
- Liked: 1 time
- Joined: Jul 30, 2009 4:08 pm
- Full Name: Paul Hardy
- Location: Cambridge
- Contact:
Re: vPower NFS Performance
Cokovic - any news on how the testing went? I'm very interested in this, as we have a customer with very large VMs whose SvMotion times after instant recovery we would like to reduce. Thanks
-
- Veteran
- Posts: 295
- Liked: 59 times
- Joined: Sep 06, 2011 8:45 am
- Full Name: Haris Cokovic
- Contact:
Re: vPower NFS Performance
Hi Paul,
Not yet, but yesterday our two new physical Veeam servers arrived and I'm going to install them today (as far as I know, both with 10Gbit NICs). So hopefully within the next few days I'll be able to provide you with more info on that.
Cheers,
Haris
-
- Service Provider
- Posts: 14
- Liked: 1 time
- Joined: Jul 30, 2009 4:08 pm
- Full Name: Paul Hardy
- Location: Cambridge
- Contact:
Re: vPower NFS Performance
Hi Haris
Excellent, I look forward to that. I'm having trouble driving a single 1Gb NIC to more than 50% when using NFS on 6.1, but if I do a traditional restore (non-NFS) I can push the link to 80% plus. Unfortunately this has a massive impact when using instant recovery.
Cheers,
Paul
-
- Veteran
- Posts: 295
- Liked: 59 times
- Joined: Sep 06, 2011 8:45 am
- Full Name: Haris Cokovic
- Contact:
Re: vPower NFS Performance
So far I've had exactly the same experience. There was also a small difference depending on whether the backup file was compressed or not (uncompressed was slightly faster). I'll give 10Gbit a shot and come back soon with some results.
Cheers,
Haris
-
- Service Provider
- Posts: 14
- Liked: 1 time
- Joined: Jul 30, 2009 4:08 pm
- Full Name: Paul Hardy
- Location: Cambridge
- Contact:
Re: vPower NFS Performance
Haris,
I've also logged a call with support; I'll post what I get back.
Thanks,
Paul
-
- VP, Product Management
- Posts: 6035
- Liked: 2860 times
- Joined: Jun 05, 2009 12:57 pm
- Full Name: Tom Sightler
- Contact:
Re: vPower NFS Performance
As stated above, the primary factors for vPower NFS are disk I/O latency, network latency (between the ESX host and the vPower NFS proxy, as well as between the vPower NFS proxy and the repository), and finally CPU horsepower. vPower NFS was not designed to deliver high throughput; it was designed to be your "spare tire" when things go wrong, allowing a server to be brought online quickly at a reduced performance level.
If you want to get the absolute maximum performance out of vPower NFS, you should back up to a physical server with fast local disk (15K SAS works great, but some SSDs are even better), and that server should be both the repository and the vPower NFS proxy. That keeps the latency down to the minimum possible level. With many SAS drives I've seen vPower NFS deliver 60-70MB/s, and with SSD I've seen that number approach saturation of the 1Gb link, but not much faster due to its single-threaded behavior.
Of course, if you're throwing all of this at your backup storage, and you really need that level of fast restores, then it's really time to start considering replication instead.
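To make the latency point concrete, here is a rough back-of-envelope model: a single-threaded server issuing synchronous IOs pushes one request at a time, so its throughput is roughly the IO size divided by the end-to-end service time. The 42KB below matches the average IO size reported earlier in the thread; the latency values are purely illustrative assumptions, not measurements:

```python
# Back-of-envelope: the throughput ceiling of a single-threaded, synchronous
# IO path is roughly (IO size) / (per-request service time).
# 42 KB matches the average IO size reported earlier in the thread;
# the latency values below are illustrative assumptions.
IO_SIZE_KB = 42

for service_time_ms in (0.5, 1, 2, 5, 10):
    mb_per_sec = (IO_SIZE_KB / 1024) / (service_time_ms / 1000)
    print(f"{service_time_ms:>4} ms per request -> ~{mb_per_sec:5.1f} MB/s")
```

That is why shaving disk and network latency (local repository, repository and vPower NFS proxy on the same box) buys more than adding raw bandwidth.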
-
- Veteran
- Posts: 295
- Liked: 59 times
- Joined: Sep 06, 2011 8:45 am
- Full Name: Haris Cokovic
- Contact:
Re: vPower NFS Performance
tsightler wrote: If you want to get the absolute maximum performance out of vPower NFS, you should back up to a physical server with fast local disk (15K SAS works great, but some SSDs are even better), and that server should be both the repository and the vPower NFS proxy.
That's exactly the setup I've got:
2x Intel Xeon E5640 2.67GHz
8GB RAM
Local repository with 13x 1TB 15k SAS hard disks
Restoring the largest VM we have (13.5TB) took 28hrs. As far as I can see from these figures, I've saturated the Gbit link to its max, and this is why I wanted to test with a 10Gbit interface, as it seems to be the bottleneck (don't know if I can call it a real bottleneck).
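As a rough sanity check on those figures (this treats the full 13.5TB as data that actually crossed the wire, which it may not have been if parts of the disks were empty):

```python
# Sanity check: sustained throughput implied by 13.5 TB in 28 hours,
# compared with the practical ceiling of a 1 Gbit/s link.
size_tb = 13.5
hours = 28

bytes_moved = size_tb * 1024**4          # TiB -> bytes
seconds = hours * 3600
mb_per_sec = bytes_moved / seconds / 1024**2
gbe_ceiling_mb_s = 1_000_000_000 / 8 / 1024**2  # ~119 MB/s before protocol overhead

print(f"~{mb_per_sec:.0f} MB/s sustained vs ~{gbe_ceiling_mb_s:.0f} MB/s for 1 GbE")
```

That works out to roughly 140MB/s sustained, which is at or slightly above what a single 1GbE link can carry, so the link being the limiting factor looks plausible; the excess over line rate most likely reflects sparse or zeroed blocks that never had to be transferred, or rounding in the quoted figures.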
Sadly, we got the wrong servers delivered, so I have to wait a few more weeks to be able to test with 10Gbit.
-
- VP, Product Management
- Posts: 6035
- Liked: 2860 times
- Joined: Jun 05, 2009 12:57 pm
- Full Name: Tom Sightler
- Contact:
Re: vPower NFS Performance
Yep, I would say you definitely saturated the link with 1Gb, so I'll be interested to hear how far you can push it with 10Gb. In the environment where I tested with SSD we had 10Gb, but we appeared to hit the saturation point of a single CPU core at ~140-150MB/s. That being said, your CPU cores are newer and faster than the ones I tested with, so it will be nice to see what happens.
-
- Service Provider
- Posts: 14
- Liked: 1 time
- Joined: Jul 30, 2009 4:08 pm
- Full Name: Paul Hardy
- Location: Cambridge
- Contact:
Re: vPower NFS Performance
I've done some basic tests on the performance of the locally attached storage on the Veeam proxy, which is an MDS 600 fully loaded with 72 x 2TB SATA disks configured into 2 arrays. Copying data from one array to the other I'm seeing between 450-500MB/s, so I don't think the bottleneck is the local storage, at least not for a 1Gb link, and I'm not sure why the link isn't being pushed harder. I will do some more testing.
-
- Veteran
- Posts: 261
- Liked: 29 times
- Joined: May 03, 2011 12:51 pm
- Full Name: James Pearce
- Contact:
Re: vPower NFS Performance
As I said before, my observations are that the vPower NFS service is absolutely single-threaded, probably has no read-ahead, and is likely performing IO entirely synchronously. Personally I'm not too fussed about powering on lots of VMs together, but I would like faster migration from vPower to production; GbE speeds would be helpful.
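To illustrate what read-ahead would buy here: the idea is simply to keep the next blocks in flight while the consumer is still busy with the current one, so that a synchronous back end no longer serialises everything. A generic, minimal sketch of the pattern (nothing Veeam-specific; the file name, block size, and queue depth are arbitrary placeholders):

```python
# Generic read-ahead sketch: a background thread keeps a bounded queue of
# blocks pre-fetched ahead of the consumer, so back-end read latency and
# consumer processing overlap instead of strictly alternating.
import threading
import queue

BLOCK_SIZE = 256 * 1024   # arbitrary
READ_AHEAD_BLOCKS = 8     # arbitrary read-ahead depth

def prefetch(path, out_q):
    with open(path, "rb") as f:
        while True:
            block = f.read(BLOCK_SIZE)
            out_q.put(block)            # blocks once READ_AHEAD_BLOCKS are queued
            if not block:               # empty bytes object signals EOF
                break

def consume(path):
    q = queue.Queue(maxsize=READ_AHEAD_BLOCKS)
    threading.Thread(target=prefetch, args=(path, q), daemon=True).start()
    total = 0
    while True:
        block = q.get()
        if not block:
            break
        total += len(block)             # stand-in for real processing/forwarding
    return total

if __name__ == "__main__":
    print(consume("some_large_file.bin"))  # hypothetical file name
```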
-
- Service Provider
- Posts: 14
- Liked: 1 time
- Joined: Jul 30, 2009 4:08 pm
- Full Name: Paul Hardy
- Location: Cambridge
- Contact:
Re: vPower NFS Performance
If I carry out multiple IRs (Storage vMotions) to different hosts I can easily drive the physical 1Gb link on the proxy to its maximum throughput (100MB/s), but with a single IR (Storage vMotion) I only see 50-60MB/s max. I can do a full restore direct to the same host (non-IR) which drives the link to maximum throughput, so I know it's a performance issue related to the proxy server or the overall NFS service at the proxy end.
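For comparing the single-IR and multiple-IR cases, a small sampler like the one below makes it easy to log the sustained throughput on the proxy's link during each test (Python with psutil; the interface name is an assumption, list psutil.net_io_counters(pernic=True) to find yours):

```python
# Log sustained send/receive throughput of one NIC while an Instant Recovery /
# Storage vMotion test is running. The interface name is an assumption;
# print psutil.net_io_counters(pernic=True).keys() to find the right one.
import time
import psutil

NIC = "Ethernet"        # assumed interface name on a Windows proxy
INTERVAL = 5            # seconds between samples

prev = psutil.net_io_counters(pernic=True)[NIC]
while True:
    time.sleep(INTERVAL)
    cur = psutil.net_io_counters(pernic=True)[NIC]
    sent_mb_s = (cur.bytes_sent - prev.bytes_sent) / INTERVAL / (1024 * 1024)
    recv_mb_s = (cur.bytes_recv - prev.bytes_recv) / INTERVAL / (1024 * 1024)
    print(f"{NIC}: {sent_mb_s:6.1f} MB/s out, {recv_mb_s:6.1f} MB/s in")
    prev = cur
```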
-
- Service Provider
- Posts: 14
- Liked: 1 time
- Joined: Jul 30, 2009 4:08 pm
- Full Name: Paul Hardy
- Location: Cambridge
- Contact:
Re: vPower NFS Performance
paulhardy wrote: ...so I know it's a performance issue related to the proxy server or the overall NFS service at the proxy end.
Sorry - I know it's not a performance issue related to the proxy server, doh!