The scenario is:
Main site: 3 backups of around 1TB each to local storage, plus 3 similar backups (one slightly smaller due to excluding a few VMs) to a DR site over a 100Mb/s link.
Smaller site: 2 backups of around 500GB each to local storage and, again, two subset jobs sending around 400GB & 300GB to the same DR site over a separate 10Mb/s link.
This was mostly running OK, taking 2-3hrs for the main->DR site backups and 5hrs or so for the smaller-site->DR backups.
I'll skip over what I've tried so far, as it seems unusable to me; I'm about to walk out of the door and head to the DR site with a fresh copy of each local backup (from last Friday's synthetic full plus a couple of incrementals).
My plan for today/tomorrow is simply to re-instate the site->DR site backup jobs so that I get things going off-site again, whilst I work out what I need to do differently to get backup copy jobs working. Given the sizes above, how long would one reasonably expect a first-run copy job to take? I'm not planning on deleting the WAN accelerators for now, so am I right to assume that these should still be useful?
Storage is all reasonably fast FC SAN (local copies of large files run at 250MB/s or so, for reference).
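
For reference, here's my own quick back-of-envelope sketch of worst-case first-run times, assuming raw transfer at full line rate with no dedup/compression savings from the WAN accelerators (the helper function and rounded sizes below are just a rough illustration, nothing official):

def transfer_hours(size_gb, link_mbps):
    # Hours to move size_gb (decimal GB) over a link_mbps megabit/s link
    return size_gb * 8e9 / (link_mbps * 1e6) / 3600

# Main -> DR: roughly 3 x 1TB over the 100Mb/s link
print(transfer_hours(3 * 1000, 100))   # ~67 hours, i.e. nearly 3 days

# Smaller site -> DR: roughly 400GB + 300GB over the 10Mb/s link
print(transfer_hours(400 + 300, 10))   # ~156 hours, i.e. about 6.5 days

So if the accelerators don't help much on a first pass, I'm looking at days rather than hours - which is partly why I'd like to confirm whether they still get used for the initial copy.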
Another thing I'm struggling with on copy jobs is establishing exactly where a job is in the process - I've seen jobs with a run time of 11+hrs that have only copied 5GB of actual data. That job will have tied up the WAN accelerator for all that time, whilst one of the two other copy jobs could otherwise have been working?
After how excited I've been about getting this all working, I've been bitterly disappointed with where I've got to so far...

Paul