- Posts: 81
- Liked: 2 times
- Joined: Jan 27, 2010 2:25 pm
- Full Name: Allan Nelson
We just found a bug with Veeam which occurs under the following scenario (and I quote)...
"This bug appeared earlier than 4.1.2, and hasn't been fixed in version 5. This issue comes to light only when the job is configured via VC, the backed up machine has snapshots and one of its disks is located on a different NFC storage apart from the main configuration files, so coincidence of all these factors causes this failure."
The vast majority of our VMs are set up to have different disks on different volumes (as per best practice).
I've been offered two workarounds...
1) Check that there are no snapshots before the backup runs (right!)
2) Add each ESX host (we have 5) to the Veeam console by IP address, create new jobs, and add the 90-odd VMs to those jobs through the newly added stand-alone ESX hosts.
As we run DRS/HA on the cluster, option 2 would be pretty useless: if a VM moved from one host to another, the job would fail.
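For what it's worth, the pre-backup snapshot check in workaround 1 could in principle be scripted. Here is a minimal sketch in plain Python of the filtering logic only, over an assumed inventory structure (a list of dicts with hypothetical `name` and `snapshots` keys); a real implementation would pull these records from the vSphere API (e.g. via PowerCLI or pyVmomi) rather than hard-coding them.

```python
def vms_with_snapshots(vms):
    """Return the names of VMs that still have at least one snapshot.

    `vms` is assumed to be a list of dicts shaped like
    {"name": str, "snapshots": list} -- a stand-in for whatever the
    real inventory query returns, not an actual vSphere object type.
    """
    return [vm["name"] for vm in vms if vm.get("snapshots")]


if __name__ == "__main__":
    # Hypothetical inventory records, for illustration only.
    inventory = [
        {"name": "web01", "snapshots": ["pre-patch"]},
        {"name": "db01", "snapshots": []},
    ]
    offenders = vms_with_snapshots(inventory)
    if offenders:
        print("VMs with snapshots to remove before backup:", offenders)
```

Running this before each job and failing (or alerting) on a non-empty result would at least make workaround 1 automatic instead of a manual check.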
No info yet on when it might be fixed.
- Product Manager
- Posts: 25891
- Liked: 2417 times
- Joined: Mar 30, 2009 9:13 am
- Full Name: Vitaliy Safarov
Yes, you're correct, this is a known issue. It occurs only when all of the conditions below are met at the same time:
1. Virtual disks are located on NFS storage separate from the main configuration files
2. vCenter Server connection is used to connect to VMs
3. VMs have snapshots
4. NFC protocol is used to connect to source hosts (agentless method)
As I can see from your post, you do have ESX hosts in your environment, so as a workaround please specify the service console credentials for your hosts in the Veeam console; this will deploy small agents in the COS. Also please note that specifying SSH credentials should give you much better job performance.