This started happening a few days ago: jobs (specifically, VMs within the jobs) are randomly failing with "Error: Failed to open VDDK disk [blah-blah-etc. enter the VM disk name/path here], failed to open disk for read, failed to upload disk." I understand this to indicate that Veeam thinks these existing drives have suddenly become IDE HDDs, which they haven't; they're plain, straightforward SCSI HDDs. It happens randomly and sporadically: the same job/VM might run fine the next time, and tomorrow it happens somewhere else.
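For what it's worth, this is roughly how I've been sanity-checking that the disks really are still on SCSI controllers; a minimal pyVmomi sketch, where the vCenter hostname and credentials are placeholders for our environment:

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab shortcut: skip cert validation
si = SmartConnect(host="vc.example.local",             # placeholder vCenter
                  user="administrator@vsphere.local",  # placeholder creds
                  pwd="secret",
                  sslContext=ctx)
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)

for vm in view.view:
    if vm.config is None:  # skip templates / inaccessible VMs
        continue
    devices = vm.config.hardware.device
    # Map controller keys to controller devices so each disk can be
    # matched to the controller it actually sits on.
    controllers = {d.key: d for d in devices
                   if isinstance(d, vim.vm.device.VirtualController)}
    for dev in devices:
        if isinstance(dev, vim.vm.device.VirtualDisk):
            ctrl = controllers[dev.controllerKey]
            kind = ("IDE" if isinstance(ctrl, vim.vm.device.VirtualIDEController)
                    else type(ctrl).__name__)
            print(vm.name, dev.deviceInfo.label, kind)

Disconnect(si)

Every affected VM reports its disks on a SCSI controller, never IDE, which is why the error text has me scratching my head.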
These are existing jobs that were working about as well as they can in our environment; this particular issue just started. The VMs were added through vCenter, not through a host. We're fully clustered/SAN, so source and Veeam storage should all be commonly accessible; otherwise we'd know we had bigger issues. We're currently running under a single vCenter and a single datacenter within the VMware environment.
VMware 6.0 build 3247720 (CBT fix applied previously; this started happening some days/weeks after; a sketch for double-checking CBT state is below), VBR 8.0 build 2084.
Veeam B&R v9
Dell TL2000 via PE430 (SAS)
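Since the CBT fix is part of the history here, this is the kind of pyVmomi sketch I'd use to double-check CBT state per VM, and to reset CBT on an affected VM if it comes to that. Same placeholder host/credentials as above; the reset pattern (disable, snapshot cycle, re-enable) is my own assumption about the usual procedure, not something Veeam support prescribed for us:

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab shortcut: skip cert validation
si = SmartConnect(host="vc.example.local", user="administrator@vsphere.local",
                  pwd="secret", sslContext=ctx)
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)

# Report current CBT state for every VM in the inventory.
for vm in view.view:
    if vm.config is not None:
        print(vm.name, "CBT enabled:", vm.config.changeTrackingEnabled)

def reset_cbt(vm):
    # Disable CBT, then re-enable it; after each reconfigure, create and
    # delete a snapshot so the change takes effect on a running VM and the
    # -ctk tracking files get rebuilt. The VM should have no snapshots first.
    for enabled in (False, True):
        spec = vim.vm.ConfigSpec(changeTrackingEnabled=enabled)
        WaitForTask(vm.ReconfigVM_Task(spec))
        WaitForTask(vm.CreateSnapshot_Task(name="cbt-reset",
                                           memory=False, quiesce=False))
        WaitForTask(vm.snapshot.currentSnapshot.RemoveSnapshot_Task(
            removeChildren=False))

# Call reset_cbt(<vm>) deliberately, one affected VM at a time.
Disconnect(si)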