Last week I was working with support (Evgeniya) to recover a file from a SharePoint infrastructure, and I ran into a strange behaviour that I forgot to mention before closing the ticket.
To sum up, I had a backup job with an excluded disk, but I had not ticked the "Remove excluded disks from configuration" checkbox, so I could not get my SureBackup job running to recover the file from SharePoint (the virtual lab was failing because the VM with the excluded disk could not boot). Support gave me a trick to get it working (I was totally sure you would save my day):
- the VMs with no excluded disks running in the virtual lab
- the VM with the excluded disk in Instant Recovery, with automatic boot disabled so its configuration could be changed
For the record, this worked perfectly and I was able to recover the file; the end user (my client) was so happy that he told me Veeam was his best investment of 2011.

During the recovery process I needed to restart the jobs multiple times, and I noticed the following:
if you run a SureBackup job and an Instant Recovery session simultaneously with VMs from the same backup job, and you stop one of the two running jobs, you lose the other one because the NFS share goes into a strange state. On the VMware side, all the VMs running from that NFS share become stale, and disk latency for those VMs climbs to 20,000 ms.
I ran the test four times (first stopping the SureBackup job and then the Instant Recovery, and then in the reverse order); the behaviour was the same each time, with NFS latency stuck at 20,000 ms.
Known bug, or just a very bad day?