I was expecting it to work like everything else in Veeam: I set up a Linux server, add it to the managed server list, then deploy the Proxmox worker role to it. To my surprise, that's not at all how it worked. It's odd, but I persisted.
My setup is a VBR server behind a NAT connecting to proxies on various leased hosts on the internet. With a bog-standard Linux managed server there is the "Run server on this side" option to make sure VBR always connects to the proxy rather than the other way around, but as discussed, Proxmox Workers are not managed Linux servers, and that option is not present. No problem, I figured: Veeam has always operated on DNS names, so I'd just make sure there was a public DNS record for the VBR server's name and forward the appropriate ports.
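To be concrete, that plan amounted to nothing more than this (names and addresses here are just examples, and the exact ports to forward depend on which Veeam components are involved):

```
; public DNS zone: point the VBR server's FQDN at the NAT's public IP
vbr01.example.com.   300   IN   A   203.0.113.10

; edge router: forward the required Veeam TCP ports from the public
; address (203.0.113.10) to the VBR server's internal address
; (192.168.1.20) -- see Veeam's port reference for the exact list
```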
When that didn't work I started monitoring traffic from the worker, only to discover it's trying to connect to the VBR server's internal IP address. So, instead of resolving the name like _every other part of Veeam does_, this particular connection acts more like an FTP server and passes a literal IP for the reverse connection. That's... terrible. Why isn't it resolving the VBR server's name to find the correct IP to use? I'm at a wall. To get past this I'd need customized NAT rules under Proxmox to forcefully rewrite the destination to the correct IP (roughly the rule sketched below), which would be a management nightmare.
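For anyone wondering what that workaround looks like, it's roughly this on each Proxmox host (addresses are placeholders, and it assumes the worker's traffic actually passes through the host's netfilter, i.e. routed networking or br_netfilter enabled on the bridge):

```
# Rewrite connections the worker makes to the VBR server's internal
# address (192.168.1.20, unreachable from here) so they go to the
# public address (203.0.113.10) instead.
iptables -t nat -A PREROUTING -p tcp -d 192.168.1.20 \
    -j DNAT --to-destination 203.0.113.10
```

Now repeat that on every host, and update it every time an address changes.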
This whole setup is such a major departure from how Veeam operates. It loses all of the flexibility and power of the architecture. Please tell me a more standard deployment scenario is in the works. This is not something I can use either personally or professionally as it stands.
