I understand not wanting to ignore it, but it's difficult to see how Veeam support could be of much help. Veeam is just a heavy filesystem consumer: yes, we generate a lot of I/O, but we have no insight into the hardware underneath; our knowledge stops at OS-level system calls.
Now, I could theorize about why you might see the issue with v11 vs. prior versions. In v11 we switched to unbuffered I/O, which, as its name implies, bypasses the OS buffer cache. This generally improves performance on systems with hardware controllers that have good buffers (i.e. commercial/enterprise-grade hardware), but it also puts significantly more stress on the storage subsystem than buffered I/O. And, as always, every version includes performance optimizations, so it's possible v11 is simply stressing the I/O more than prior versions did. You could even try the UseUnbufferedAccess=0 registry key to go back to the v10 I/O behavior and see if that changes anything.
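If you want to try that, it's just a DWORD value on the backup server. A rough sketch, assuming the usual Veeam Backup and Replication key under HKLM (please double-check the exact path and value with support before changing anything):

reg add "HKLM\SOFTWARE\Veeam\Veeam Backup and Replication" /v UseUnbufferedAccess /t REG_DWORD /d 0 /f

You'll most likely need to restart the Veeam Backup Service for it to take effect, and you can simply delete the value again to return to the default behavior.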
Earlier I asked whether the message always involves the same enclosure/device or a different device each time. If it's the same one every time (I'm assuming it's not), then I'd suspect that specific device is having an issue. Otherwise it seems more likely to be a bus timeout issue that could be triggered by load. Either way, it's difficult to see how anyone other than the storage vendor's support engineers could solve it, since the Veeam workload is just the catalyst.
In my case, the messages happened on random devices/drives, always during heavy I/O, but without much consistency: sometimes it would go a few days without a message, other times I'd get several in an hour, always on a random drive, and sometimes dozens or even hundreds in a night. I tried new drive firmware, the latest controller firmware, even tweaks to the timeout settings in the LSI BIOS, but I never managed to impact the messages in any measurable way. I eventually just ignored them, and it's been operating that way for years.
Another question: do you have any SSDs in this setup at all? Just curious.
In my case I was able to reproduce the issue using the iozone benchmark in throughput mode with many parallel tasks, usually something like:
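iozone -t 8 -s 2g -r 512k -I -i 0 -i 1

(Exact test selection may vary; -i 0 and -i 1 run the write and read tests, -t is the thread count, and -s is the per-thread file size.)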
This is only 8 parallel threads, each reading/writing a 2G file, but you can tweak those. The -r 512k is the record size, which matches an average Veeam block size, and the -I parameter tells it to use direct I/O, similar to how v11 works. Try it at 99 (or 50) tasks and see how it behaves, although you'll probably want to shrink the file size (-s) if you don't want to wait forever; something like the line below, for example.
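This is purely illustrative; the thread count and per-thread file size are just placeholder numbers to scale up parallelism while keeping the run time sane:

iozone -t 50 -s 512m -r 512k -I -i 0 -i 1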
These are difficult problems to solve for sure, so I wish you luck and I hope you keep us updated on what you find.