By "Disk export" in the title and going forward in this post, I'm referring to the "Export content as virtual disks..." function.
I've stumbled across some pretty weird behavior (at least as I see it) and would like a better answer than I can get from support. I might be missing something incredibly obvious.
We have a pretty basic environment.
- A single B&R server which functions as little more than the central management console for the couple of backup admins (of which I'm one) and, of course, kicks off jobs per their schedules.
- A few vCenter servers in separate remote sites (relative to the B&R server), each with their own VMware backup proxy hot-add appliances/VMs (Windows-based)
- A handful of Nutanix AHV clusters, each with their own AHV proxies
- A couple of SOBRs (separated by business division/location); performance extents are immutable/hardened repos, capacity tiers are Azure Blob
I backed up a VM from an AHV cluster as a VeeamZIP. Out of curiosity (and as a nice restore test), I wanted to see what it would be like to restore the VM to an ESXi cluster. Now of course, the VM won't necessarily boot when the virtualized hardware changes, but that's OK - I just want to see if Veeam can do it. I've done basically this exact thing before, except restoring from a normal backup job, not a VeeamZIP (the latter of which shows as an exported backup, as expected).
Now, with an AHV VM you can't restore directly to ESXi, which is fine - you only get the disk export option (well, technically you also have instant recovery, but I'm not into that). That's fine, I've done this before, and I understand the implications. So I click through the wizard to restore the VM disk to my ESXi host and browse to the destination folder (I also specify the exact proxy I want to use, which is the correct proxy for the given ESXi host). What happens next confuses me - the job fails. It fails with the below (portions redacted):
DATE TIME Error NFC storage connection is unavailable. Storage: [stg:datastore-73361,nfchost:host-71523,conn:FQDN-OF-VCENTER-SERVER]. Storage display name: [REDACTED-DATASTORE].
Failed to create NFC upload stream. NFC path: [nfc://conn:FQDN-OF-VCENTER-SERVER,nfchost:host-71523,stg:datastore-73361@temp/VeeamRestore_REDACTED-VM-NAME_xladof4nqvj.vmx]. Error: NFC storage connection is unavailable. Storage: [stg:datastore-73361,nfchost:host-71523,conn:FQDN-OF-VCENTER-SERVER]. Storage display name: [REDACTED-DATASTORE].
Failed to create NFC upload stream. NFC path: [nfc://conn:FQDN-OF-VCENTER-SERVER,nfchost:host-71523,stg:datastore-73361@temp/VeeamRestore_REDACTED-VM-NAME_xladof4nqvj.vmx].
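If I'm reading the NFC path right, it actually encodes two different endpoints: the `conn:` token names the vCenter connection the session was brokered through, while `nfchost:` is the MoRef of the ESXi host the upload stream actually targets - which would line up with what our firewall saw (more on that below). A quick sketch to pull the path apart (Python; the hostname and MoRefs here are stand-ins for the redacted values, not our real ones):

```python
import re

def parse_nfc_path(nfc_path: str) -> dict:
    """Split an NFC path of the form
    nfc://conn:HOST,nfchost:host-NNN,stg:datastore-NNN@relative/path
    into its components."""
    m = re.match(
        r"nfc://conn:(?P<conn>[^,]+),"      # vCenter connection endpoint
        r"nfchost:(?P<nfchost>[^,]+),"      # ESXi host MoRef (stream target)
        r"stg:(?P<stg>[^@]+)"               # datastore MoRef
        r"@(?P<path>.*)",                   # path relative to the datastore
        nfc_path,
    )
    if not m:
        raise ValueError("not an NFC path: " + nfc_path)
    return m.groupdict()

parts = parse_nfc_path(
    "nfc://conn:vcenter.example.com,nfchost:host-71523,"
    "stg:datastore-73361@temp/VeeamRestore_vm.vmx"
)
# parts["conn"] is the vCenter FQDN the log message surfaces;
# parts["nfchost"] is the ESXi host object the data actually lands on.
```

So the error text isn't necessarily lying, but it leads with the vCenter FQDN while the traffic goes elsewhere.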
- This communication didn't come from my backup proxy (hot-add appliance VM). It came from the B&R server, which is not in the same physical site as the vCenter server/ESXi hosts it was trying to reach. The B&R server itself was trying to talk to the "vCenter" server directly, as opposed to having the backup proxy do this communication - and having the proxy do it makes the most logical sense. The communication was over port 902. The backup proxies are normally responsible for all such traffic to ESXi hosts, whether it's a backup job, a restore job, etc. But for whatever reason, in this particular use case, that doesn't happen.
- The logs mention the FQDN of the vCenter server, but that simply wasn't what our firewall logs showed: the B&R server was trying to reach port 902 on the ESXi hosts themselves. This seems to me to be a bug, a failure in logging, or - if I'm being charitable - "technically correct but badly communicated". At least for us, we try to keep access to the ESXi hosts very limited. We don't even want the B&R server talking directly to ESXi hosts, and my understanding was that this is one of the (many) reasons to have backup proxies: let them bridge this control gap, keep security easier to manage, and keep throughput between ESXi hosts and proxies fast in case hot-add fails and you have to fall back to NBD.
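For anyone trying to reproduce this, the quickest way we confirmed which machine originates the traffic was a plain TCP probe to port 902 run once from the B&R server and once from the proxy, then compared against the firewall logs. A minimal sketch (Python; the ESXi hostnames are placeholders for your own):

```python
import socket

def can_reach(host: str, port: int = 902, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds
    within the timeout, False on refusal/timeout/DNS failure."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Run on the B&R server and again on the proxy; in a locked-down
# setup like ours, only the proxy should get through on 902.
for h in ["esxi01.example.com", "esxi02.example.com"]:
    print(h, "902 open" if can_reach(h) else "902 blocked/filtered")
```

This doesn't prove who *should* be talking, only who *can* - but it made the mismatch between the Veeam log (vCenter FQDN) and the actual destinations (ESXi hosts) easy to demonstrate to support.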
Maybe my view is wrong or technically impossible, but I view the B&R server itself as the configuration/management/orchestration system only. There shouldn't* be any requirement for the B&R server to also act as the "data" plane. All those functions should be delegated by the B&R server to proxies/mount servers/gateway servers/etc.
* Exception of course is in small environments where those functions/roles/proxies are explicitly installed on the B&R server.
I'd really appreciate an answer to this, because it's kind of annoying to have what appears to be an inconsistency in how the product operates, and to have to maintain such a firewall exception.