Please contact our support team to troubleshoot technical issues, as advised by the forum rules. Once you have done so, please add your case ID to the thread; otherwise, this topic will be removed by a moderator.
Thanks for sharing the case ID. Keep working with support; they will guide you toward the root cause.
From the logs, I see a connection failure around 11:45:
|Der angegebene Netzwerkname ist nicht mehr verfügbar [The specified network name is no longer available]
|Failed to read data from the file [\\?\XXX\XXXXXXX\XXXXXXX\XXXXXXX\XXXXXXX\XX\XXXXXXXXXX.iso].
|--tr:Error code: 0x00000040
|--tr:Failed to read from block stream. Stream offset: [3972005888]. Stream size: [4715950080]. Block offset: [0]. Data size: [131072]
|--tr:Failed to read next chunk. StreamId: [Type: [1] Id: [0]]
which is most often caused by a connectivity break between Veeam and the source NAS.
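If you want to sanity-check the connectivity theory yourself while support digs through the debug logs, something along the lines of the sketch below can log whether the NAS SMB port stays reachable during the backup window. This is not a Veeam tool; the hostname and polling interval are placeholders you would need to adjust.

```python
# Minimal sketch (not part of Veeam): periodically probe the SMB port on the
# NAS and log whether the TCP connection succeeds, so you can correlate drops
# with the job failure time. Hostname and interval are placeholders.
import socket
import time
from datetime import datetime

NAS_HOST = "nas.example.local"  # placeholder, replace with your NAS address
SMB_PORT = 445                  # SMB/CIFS share access
INTERVAL_SECONDS = 30

def probe(host: str, port: int, timeout: float = 5.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    while True:
        reachable = probe(NAS_HOST, SMB_PORT)
        print(f"{datetime.now().isoformat()} SMB reachable: {reachable}")
        time.sleep(INTERVAL_SECONDS)
```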
Thanks for the fast answer. In which log could I find this? What I mean is that the corruption error I can see as a "normal" user in the log is a bit misleading in this case. A connection break could well be the cause.
That would be under your logs\Backup\[JobName]\Agent.XXXXXXX.log (the full log name contains your job name and server DNS name, so I have redacted it here). If you just want to pull out the relevant lines, see the sketch below.
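Purely as an illustration, a short script like this can scan an agent log for the failure signatures quoted above, so you don't have to scroll through the whole file. The log path is a placeholder for your own logs\Backup\[JobName] folder, and the search strings are simply taken from the excerpt in my earlier post.

```python
# Minimal sketch, not a Veeam utility: scan an Agent log for the read-failure
# signatures quoted earlier in this thread. LOG_PATH is a placeholder; point
# it at your actual logs\Backup\[JobName]\Agent.*.log file.
from pathlib import Path

LOG_PATH = Path(r"C:\path\to\logs\Backup\JobName\Agent.JobName.Server.log")  # placeholder
SIGNATURES = (
    "Failed to read data from the file",
    "Failed to read from block stream",
)

def find_matches(path: Path) -> list[str]:
    """Return log lines containing any of the failure signatures."""
    matches = []
    with path.open("r", encoding="utf-8", errors="replace") as fh:
        for line in fh:
            if any(sig in line for sig in SIGNATURES):
                matches.append(line.rstrip())
    return matches

if __name__ == "__main__":
    for hit in find_matches(LOG_PATH):
        print(hit)
```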
Anyhow, try running the job again in the meantime to see whether it breaks again, and wait for the support engineer's verdict.
Mind me asking if this is a production setup or your lab? Build 10.0.0.4442 is the RTM version; the GA build number is 10.0.0.4461. I've passed the case details to the R&D folks for review anyway. Cheers!
I actually found that the network to the NAS was saturated, so I fixed that. After that I was still not able to start or retry the job; I always got the error stated two posts above, even on the health check. The only option was to delete the backup files from the job, after which I could start it without any problem. It has now run through.
The only thing left is a comment: the error message did not fit the error that was actually causing the problem.
Thanks for your help pointing me in the right direction.
Hi Trish, doing this usually does not guarantee the issue won't come back after some time... I would have our support team review the debug logs, as they may have a patch for the actual issue behind this behavior. Most of these issues were fixed in 10a, but a few hotfixes came too late to make it there.