I have tried twice now and each time this is what I get:
Code:
2/11/2015 10:19:25 AM Starting restore job
2/11/2015 10:19:26 AM Locking required backup files
2/11/2015 10:20:18 AM Starting restore agents on server "10.10.10.10"
2/11/2015 10:20:15 AM Queued for processing at 2/11/2015 10:20:15 AM
2/11/2015 10:20:15 AM Required backup infrastructure resources have been assigned
2/11/2015 10:20:17 AM Preparing next VM for processing
2/11/2015 10:20:17 AM Using target proxy VMware Backup Proxy [nbd]
2/11/2015 12:44:32 PM Error Powering off restored VM
2/11/2015 12:44:47 PM Restoring disk Hard Disk (500.0 GB) 62%
2/11/2015 12:44:48 PM Error Restore job failed Error: Client error: An existing connection was forcibly closed by the remote host
Cannot process [restoreDiskContent] command. Target descriptor is [vddk://<vddkConnSpec><viConn name="10.10.10.10" authdPort="902" vicPort="443" /><vmxPath vmRef="2" datacenterRef="ha-datacenter" datacenterInventoryPath="ha-datacenter" snapshotRef="2-snapshot-4" datastoreName="vm" path="server01.xxx.xxx/server01.xxx.xxx.vmx" /><vmdkPath datastoreName="vm" path="server01.xxx.xxx/server01.xxx.xxx
The only thing "strange" about my operation is that the backup repository is stored on an external USB drive. We are kind of in disaster recovery mode and needed to move this off, and the USB drive was our only option.
Each time, the failure occurs at 62%. Since it takes over two hours to get that far, I'm not thrilled about having to keep trying.
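Doing some rough math: 62% of 500 GB is about 310 GB, and the job ran from roughly 10:20 AM to 12:44 PM, call it 2 hours 24 minutes, which works out to around 36 MB/s. That's about what I'd expect an external USB drive to sustain, so the speed at least looks plausible; it's the hard stop at the same point every time that bothers me.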
Right now, I am migrating data off the local Veeam server to free up enough space on its local drives to copy this 500 GB backup onto the Veeam server itself. That should rule out the USB drive as the source of the problem.
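Once the copy lands, I'm planning to hash both files before I retry, just to make sure the USB drive isn't silently corrupting data in transit. A rough sketch of what I have in mind is below (the paths and the .vbk file name are placeholders, not my actual repository layout):

Code:
import hashlib

def sha256sum(path, chunk_size=16 * 1024 * 1024):
    # Stream the file in chunks so a 500 GB backup never has to fit in RAM.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            h.update(chunk)
    return h.hexdigest()

# Placeholder paths -- not the real repository layout.
usb_copy = r"E:\Backups\server01.vbk"
local_copy = r"D:\VeeamLocal\server01.vbk"

print("Match" if sha256sum(usb_copy) == sha256sum(local_copy)
      else "Mismatch -- the copy (or the USB source) may be corrupt")

If the hashes match and the restore still dies at 62%, at least the drive is off the suspect list.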
We had another 250 GB drive backed up and restored the same way without issue (though its backup wasn't on the USB drive), and about 60 GB of other full VMs restored fine *from* the USB drive.
I'm hoping someone can tell me what's going on, so that once my 500 GB backup is copied locally I can do whatever else might be necessary to avoid this issue. The network infrastructure in this environment is very direct, and the fact that it dies at 62% each time doesn't seem random.