Host-based backup of VMware vSphere VMs.
george@itb
Influencer
Posts: 14
Liked: never
Joined: Apr 16, 2019 9:39 pm
Full Name: George Lavrov

UNC path option for vPowerNFS

Post by george@itb »

Hi Folks,
I opened a support call 03517176 regarding an "invalid root folder" error when trying to point vPowerNFS to a CIFS share. Veeam support informed me that the current vPowerNFS configuration does not allow selecting a UNC path that points to a network share. I thought this was possible - I seem to recall it from previous versions - but even if I am wrong, my question is: why not?

vPowerNFS is designed to be temporary cache storage for the scenarios where we perform item-level restores or instant VM restores. So here is a very realistic scenario: a DR situation where I need to start a large number of critical VMs from the same backup repository, and those VMs now need to write their changes to a vPowerNFS location. My point is that with today's dynamics of VM deployment and lifecycle, it is hard to know how many VMs will be identified as "critical" and how long those VMs will need to keep running as such. Therefore we do not know how much actual space will be required in, say, 24-48-72 hours. To accommodate the changed data, we may need 100 GB in 8 hours or 1 TB in 72 hours while waiting for a production storage replacement. You can see how it would be a nightmare to assign, in advance, a fixed disk of "unknown" size for this purpose - and definitely not on the primary storage. That means the vPowerNFS destination must be flexible and elastic enough to expand on the fly. PS: iSCSI is not always an available option.
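To make the sizing concern concrete: as far as I can tell, the only workaround today is to keep watching the cache location while the recovered VMs run. Below is a minimal sketch of that kind of watchdog (the path and threshold are hypothetical placeholders, not anything Veeam ships):

Code: Select all

import shutil
import time

# Hypothetical local vPower NFS write cache folder on the mount server.
CACHE_PATH = r"D:\vPowerNFS"
MIN_FREE_GB = 100          # assumed alerting threshold in GB
CHECK_INTERVAL_SEC = 300   # check every 5 minutes

def free_gb(path: str) -> float:
    """Return free space at 'path' in gigabytes."""
    return shutil.disk_usage(path).free / (1024 ** 3)

while True:
    remaining = free_gb(CACHE_PATH)
    print(f"Free space on {CACHE_PATH}: {remaining:.1f} GB")
    if remaining < MIN_FREE_GB:
        # Hook in whatever alerting you use (mail, ticket, etc.)
        print("WARNING: vPower NFS write cache is running low - plan to migrate or expand")
    time.sleep(CHECK_INTERVAL_SEC)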

Two questions here:
1. Is a UNC path really not an option for vPowerNFS?
2. If so, what is Veeam suggesting to handle the instant restore load on vPowerNFS in a dynamic environment (read: a dynamic/unknown number of VMs, an unknown rate of data change, and a questionable time span to run from the backup repository)?

Thanks!
George
HannesK
Product Manager
Posts: 14287
Liked: 2877 times
Joined: Sep 01, 2014 11:46 am
Full Name: Hannes Kasparick
Location: Austria

Re: UNC path option for vPowerNFS

Post by HannesK »

Hello,
using a UNC path makes no sense from a performance perspective. The vPower service is hosted on the mount server, and you want every write IO to be fast - not going back and forth over the SMB protocol :-)

Are you aware that in most situations the vPower cache location is not even used, because people want better performance? Usually the write cache is set to a production datastore in the Instant Recovery configuration (see screenshot below).

[Screenshot: Instant Recovery configuration with the write cache redirected to a production datastore]

Or a local SSD. And if you expect a 1 TB change rate, then I would put a 2 TB SSD in the mount server as the write cache.
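If you want to see the difference for yourself, here is a rough sketch of a synchronous write-latency comparison between a local disk and a UNC path (both target paths are placeholders, not a Veeam tool):

Code: Select all

import os
import time

BLOCK = b"\0" * 4096   # simulate small, synchronous guest write IO
ITERATIONS = 1000

# Placeholder targets - both folders must already exist; adjust to your environment.
TARGETS = {
    "local SSD": r"C:\temp\latency_test.bin",
    "UNC share": r"\\nas01\cache\latency_test.bin",
}

for name, path in TARGETS.items():
    start = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(ITERATIONS):
            f.write(BLOCK)
            f.flush()
            os.fsync(f.fileno())   # force every write to the target, as a running VM would
    elapsed = time.perf_counter() - start
    os.remove(path)
    print(f"{name}: {elapsed / ITERATIONS * 1000:.2f} ms average per 4 KiB write")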

And just a note: for a huge number of VMs, we recommend replication instead of Instant Recovery - running hundreds of VMs with Instant VM Recovery is not the best idea from a performance perspective :-)

Best regards,
Hannes
Andreas Neufert
VP, Product Management
Posts: 6707
Liked: 1401 times
Joined: May 04, 2011 8:36 am
Full Name: Andreas Neufert
Location: Germany

Re: UNC path option for vPowerNFS

Post by Andreas Neufert »

I think it is pretty simple: we need block storage on the same server where our vPower NFS service runs so that our functionality works.

Even if that were not the case, using CIFS for this would be a bad idea, as you would add 2x network latency to the IO path and get bad performance just because of that.
george@itb
Influencer
Posts: 14
Liked: never
Joined: Apr 16, 2019 9:39 pm
Full Name: George Lavrov

Re: UNC path option for vPowerNFS

Post by george@itb »

Thanks for the reply, folks. I respect everyone's opinions, but I was not discussing performance here. For discussion's sake: in the old days, yes, it would have been slow. But with proper hardware and networking specs, today is another story. It is far more cost-effective to use a NAS (with CIFS or NFS) over 10-40 Gb networking for this purpose. We could even use an NFS client if that makes the discussion easier. In the case of a true disaster, the "performance" factor for end users is not always a critical one, and most businesses understand that.

Replication is not always cost-effective, since a unified backup repository also lets us store longer periods of historical data, not only the latest copy. Historical data availability and quick restore options are the critical points here. In our case, we have storage for the backup repository that is powerful enough to serve as a vPowerNFS mount point - why can't we simply be able to choose that option?
This is a simple scenario: a customer has only a storage array and a backup array. If the storage array dies, the expectation is that critical VMs should run from the backup array, using the instant restore option, while waiting for the "multi-day" process of getting a storage replacement. Isn't that what Veeam software should be providing with the options it sells?
HannesK
Product Manager
Posts: 14287
Liked: 2877 times
Joined: Sep 01, 2014 11:46 am
Full Name: Hannes Kasparick
Location: Austria

Re: UNC path option for vPowerNFS

Post by HannesK »

I mean really slow. It would get unusably slow. :-)

An SMB share as a repository is the worst option anyway, so making performance even worse by putting the write cache on the other side of the network really makes no sense for the end user who has to work with that setup in the end.
Gostev
Chief Product Officer
Posts: 31457
Liked: 6647 times
Joined: Jan 01, 2006 1:01 am
Location: Baar, Switzerland

Re: UNC path option for vPowerNFS

Post by Gostev »

While performance may be OK for certain use cases (that can be argued), I would be more concerned about the reliability aspects and the data corruption that the SMB/CIFS stack is famous for on non-CA (non-continuously-available) shares. Running your production workload off such a stack, even temporarily, is really a bad idea, especially under the high-load conditions of a real DR situation.
george@itb
Influencer
Posts: 14
Liked: never
Joined: Apr 16, 2019 9:39 pm
Full Name: George Lavrov

Re: UNC path option for vPowerNFS

Post by george@itb »

Thanks for the reply, Gostev. I don't mean to debate this, just to note that since SMB3 and multichannel support arrived, I have not seen those issues anymore across the many storage projects I have been involved with. In fact, some of my large-scale enterprise customers, even in the financial sector, are using SMB3 (and up) for some quite intense operations, with reliability and performance that is quite surprising - and I have seen many platforms, having architected enterprise-level backup and storage solutions since the late 90s. Of course, if you have different data, that could be a good point as well. Sometimes customers, after spending major $$$ on upgrades for their primary systems, are looking for ways to cut costs on the secondary systems, and for a simplicity of operations that lets them recoup some of those costs down the road. Thank you for your considerations.
george@itb
Influencer
Posts: 14
Liked: never
Joined: Apr 16, 2019 9:39 pm
Full Name: George Lavrov

Re: UNC path option for vPowerNFS

Post by george@itb »

Veeam folks, I came across this article while working on another project and thought it might be a good example to bring to this subject, just FYI. You may want to take a look - just the first couple of pages, plus pages 6 and 10; the rest is vendor specific. It will give you some idea of what SMB share performance and reliability can deliver these days (with Windows OS as the client) with the correct architecture: SMB 3.0 Multichannel - Accelerate SMB 3.0 Performance for Applications, by NetApp: https://www.netapp.com/us/media/tr-4740.pdf
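As a side note, for anyone who wants to verify whether multichannel is actually in use on a given Windows client, the built-in Get-SmbMultichannelConnection cmdlet lists the active connections. A tiny sketch that just shells out to it from Python:

Code: Select all

import subprocess

# Run on the Windows mount server; relies on the built-in SMB cmdlets.
result = subprocess.run(
    ["powershell", "-NoProfile", "-Command",
     "Get-SmbMultichannelConnection | Format-Table -AutoSize"],
    capture_output=True, text=True,
)
print(result.stdout or "No multichannel connections reported.")
if result.returncode != 0:
    print(result.stderr)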
Cheers.
HannesK
Product Manager
Posts: 14287
Liked: 2877 times
Joined: Sep 01, 2014 11:46 am
Full Name: Hannes Kasparick
Location: Austria

Re: UNC path option for vPowerNFS

Post by HannesK »

I agree that performance with multichannel and proper storage (NetApp) is getting better.

In general, a NetApp FAS with SMB is not a typical Veeam repository from a price/value perspective (NetApp customers usually go for the E-Series as a repository). Well, maybe if someone uses an SMB share on a MetroCluster as a repository, then I see it as a valid use case. But that is a rare situation. For most other SMB storage, this will not work reliably because of their implementations of the SMB protocol.
jmmarton
Veeam Software
Posts: 2092
Liked: 309 times
Joined: Nov 17, 2015 2:38 am
Full Name: Joe Marton
Location: Chicago, IL

Re: UNC path option for vPowerNFS

Post by jmmarton »

Also, if the NAS supports iSCSI, that will likely yield better performance than using SMB/CIFS. You could use in-guest iSCSI on the mount server to connect to the NAS. Most enterprise-grade NAS devices support iSCSI, so why not leverage that instead?
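For reference, connecting the mount server to an iSCSI target can be done with the built-in Windows iSCSI cmdlets. A rough sketch (the portal address and IQN are hypothetical placeholders):

Code: Select all

import subprocess

PORTAL = "192.168.10.50"                    # NAS iSCSI portal IP (placeholder)
IQN = "iqn.2000-01.com.example:cachelun"    # target IQN (placeholder)

def ps(command: str) -> None:
    """Run a PowerShell command on the mount server and echo its output."""
    out = subprocess.run(["powershell", "-NoProfile", "-Command", command],
                         capture_output=True, text=True, check=True)
    print(out.stdout)

ps(f"New-IscsiTargetPortal -TargetPortalAddress {PORTAL}")
ps(f"Connect-IscsiTarget -NodeAddress {IQN} -IsPersistent $true")
# After connecting, bring the new disk online, initialize and format it
# (Initialize-Disk, New-Partition, Format-Volume), then point the vPower NFS
# cache folder at the new volume.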

Joe
george@itb
Influencer
Posts: 14
Liked: never
Joined: Apr 16, 2019 9:39 pm
Full Name: George Lavrov

Re: UNC path option for vPowerNFS

Post by george@itb »

Thank you, folks, for the suggestions. But it seems that my point is not getting across, so I'll make this my last post on the subject and let's just leave it at that. This request was not about getting "better" performance, and it is not about technology advantages. It is about a less problematic experience during a true disaster recovery.

Yes, block-level connectivity typically outperforms file-level connectivity; I am not arguing that point. I am simply explaining that today, if properly architected, file-level connectivity can deliver more than sufficient performance, and it can also (sometimes) remove unwanted "gotchas" during highly stressful events. For example, where the data growth is "unknown", a block-level LUN and its file system (usually) must be watched over and grown manually, while some storage technologies allow file-level volumes to grow automatically, on demand.

Part of my job as a DR architect is to design solutions that are simple and can be executed regardless of "dynamically changing" factors: data growth, personnel changes, lack of documentation or internal knowledge of processes, etc. Using vPowerNFS on CIFS (properly architected) can simplify some of the tasks during DR, in my opinion, and getting this function to perform adequately for the environment would be part of the design process. This is one area where I see Veeam could improve, catching up with the latest enterprise trends in protocol performance, such as SMB3 with multichannel support (which is not just a NetApp array feature - this comment is a short reply to HannesK's last post above). Thank you all for the interesting discussion.