Maybe this is slightly related to vmware-vsphere-f24/health-check-on-copy-backup-job-stalled-the-job-t22409.html.
Since we moved our remote backup to ReFS we get quite good data reduction and we are rather pleased with it! But of course the previously sequential data gets more and more fragmented on disk because of the block cloning. While we have a dedicated SAN on the main site, the remote site makes do with a rather simple Synology 1524 NAS, with only a 1Gb connection in our setup and a bunch of relatively slow 5400RPM 2TB spindles. We now do a weekly health check on our jobs, spread out over as many days as we have jobs, so only one job does its health check on any given day, to keep the IO from filling up. Still it takes a bit longer than I'd like. Since the health check runs BEFORE the actual copy job, and the amount of data keeps growing, in our setup the health check sometimes runs for several hours.
Wouldn't it be nice to first do the copy job and THEN do the health check? I think Veeam's primary job is to get the data to a safe place as quickly as possible. Given that TCP SHOULD cover for transmission errors, it's rather unlikely that corruption would occur often anyway. And even if errors were found, which I've never seen in our setup so far, the regular repair functionality could kick in.