I'm currently running 9.5 U1 (will be updating once we get another couple of CPU licenses I'm waiting on procurement for)
I need to transport a couple of VMs to another site, so I was looking to utilise VeeamZip as the easiest mechanism, but I've hit an interesting performance issue that I wasn't expecting, so I thought I'd run it by here first.
Site A: VBR server (VM), local vCenter, local VMs, local physical repository/tape server, SAN-attached.
Site B: Local vCenter, local VMs, local physical repository/tape server, SAN-attached, managed by the VBR server in Site A.
100Mbps point-to-point (P2P) link between the two sites.
So, from the console of the VBR server in Site A, I VeeamZip a VM running in Site A to the SAN-attached repo server in Site A; the destination is a "local" (SAN) drive on the repo server, so everything stays local. As expected for my environment, I get an average rate of around 156MB/s and I'm quite happy with that.
When I repeat the same process for Site B (again from the VBR server in Site A that manages it), I VeeamZip a VM hosted in Site B and managed by the vCenter in Site B. The job picks the SAN-attached repo/tape server in Site B and sends the data to local disk in Site B, but this time a network throttling rule kicks in for some reason, even though no rules should affect this process: the only rules we have cover sites A-B and sites B-C, and site C isn't involved here.
I could see that throttling was capped at 6MB/s, and I know I have two rules configured at 6MB/s between sites A and B, so I disabled the first one (by blocking out the hour I was currently working in) and retried, with no change. I then cancelled the job, blocked out the same hour in the second 6MB/s throttling rule, and this time no throttling rule was applied, but backup performance was still terrible at around 15MB/s, as if the data were flowing across the inter-site link.
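For context on why ~15MB/s makes me suspect the data is crossing the WAN rather than staying local to Site B, here's the back-of-the-envelope arithmetic (plain maths, nothing Veeam-specific; the ~1.2x in-flight compression ratio is my assumption, not a measured figure):

```python
# Theoretical payload capacity of the 100Mbps inter-site link:
# 100 megabits per second / 8 bits per byte = 12.5 MB/s raw.
link_mbps = 100
raw_mb_s = link_mbps / 8   # ~12.5 MB/s before any compression

# Veeam compresses data in flight, so an observed job rate can
# slightly exceed the raw link rate. 15 MB/s over a 12.5 MB/s
# link implies only a modest effective compression ratio:
observed_mb_s = 15
effective_ratio = observed_mb_s / raw_mb_s

print(f"Raw link capacity:  {raw_mb_s} MB/s")
print(f"Implied in-flight compression: ~{effective_ratio:.1f}x")
```

In other words, 15MB/s is almost exactly a saturated 100Mbps link plus light compression, which is why I think the backup traffic is traversing the P2P link instead of staying SAN-local in Site B.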
In addition, whilst I was monitoring the job with both throttling rules disabled, the console of the VBR server became VERY unresponsive, eventually complaining about SQL Server availability (local SQL instance) before coming back to life a couple of minutes later.
So, does this sound like expected behaviour? (I'd say the unresponsiveness, at least, is not!) If not, I'll look to get a case opened...
[New Sig: PLEASE get GFS tape support for incrementals!!!]