-Daily incrementals are regular old forward incrementals from a weekly active full. I am aware that 5400 RPM 8TB drives are not particularly performant; that is irrelevant, since they worked well enough before.
-See above
-Aside from the write caching on the disks themselves, I would be very surprised if the array did any significant caching with its 2GB of RAM. And again, this was working well enough before.
-B&R is a VM on a vSphere cluster, the Repo WAS a docker container on a Synology array. I've taken the advice of support and installed a new repo, a 2.4Ghz 8 core 48GB Ubuntu server that accesses the array via NFS. See the included image for how that's working out.
-Nothing about the jobs was changed, especially not the settings I can't change because they're only available to enterprise customers.
-Yes, as I have said several times, everything worked well enough in v8.
As mentioned above, I installed a new Linux repository and switched the backups over to use it. It is backed by the same storage, only now it is accessed through NFS by a 2.4GHz, 8-core, 48GB RAM Ubuntu server. This server does nothing but run as the Veeam repo. So, since talking to support, I have added a proxy (3.4GHz, 4-core, 16GB RAM) and this repo server on top of the original B&R server, more than tripling the compute resources of the Veeam infrastructure. Sadly, this has not had the intended effect:
As you can see, the job started off well enough. It had some weird spike/trough pattern to the transfer, but it averaged out to 60+ MB/s, so I was OK with it. I even started a second job that seemed to be running OK too. Then I went home. Around midnight one of the jobs simply stopped transferring data. Around 1:30AM, so did the other one. Even a replication job stopped working. The storage device registers no activity; these jobs are simply hung. Also notice that Veeam is blaming the source this time; that's a new twist.
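To rule Veeam out entirely, my plan is to leave a simple write probe running on the repo server overnight: it streams data to a file on the NFS mount and prints a throughput figure once a minute, so if the storage path itself stalls around midnight the gap will show up in the timestamps. A rough sketch (the mount point /mnt/veeamrepo is my assumption, adjust to the actual repo path):

```python
#!/usr/bin/env python3
"""Log sequential write throughput to the NFS-mounted repo path over time."""
import os
import time

MOUNT_POINT = "/mnt/veeamrepo"       # assumed NFS mount point, not the real path
TEST_FILE = os.path.join(MOUNT_POINT, "nfs_write_probe.bin")
CHUNK = b"\0" * (4 * 1024 * 1024)    # 4 MiB per write
INTERVAL = 60                        # report once a minute
MAX_FILE_BYTES = 8 * 1024**3         # recycle the probe file at 8 GiB

def main() -> None:
    written_since_report = 0
    last_report = time.monotonic()
    f = open(TEST_FILE, "wb")
    try:
        while True:
            f.write(CHUNK)
            f.flush()
            os.fsync(f.fileno())     # force the data through to the NFS server
            written_since_report += len(CHUNK)

            if f.tell() >= MAX_FILE_BYTES:
                f.close()            # keep the probe file bounded
                f = open(TEST_FILE, "wb")

            now = time.monotonic()
            if now - last_report >= INTERVAL:
                mbps = written_since_report / (now - last_report) / 1024**2
                print(f"{time.strftime('%Y-%m-%d %H:%M:%S')}  {mbps:8.1f} MB/s",
                      flush=True)
                written_since_report = 0
                last_report = now
    finally:
        f.close()
        os.remove(TEST_FILE)

if __name__ == "__main__":
    main()
```

If a write call hangs, no further lines get printed, so a timestamp gap in the output would point at the NFS path rather than at Veeam or the source.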
I'll be adding these logs and info to the ticket.