Stan, "Target WAN" being the bottleneck means that your target WAN accelerator is busy most of the time processing the data it receives from other components in the processing chain (the WAN accelerator performs a lot of disk I/O), while those components wait for it to accept more data.
Stan, what is your primary concern here? Those are incremental runs, am I right?
I would also note that a WAN accelerator can typically make effective use of links up to about 100 Mbps; depending on the exact setup, on links close to that speed or faster it can actually slow the backup copy process down, yielding lower performance than direct mode would. You are probably close to that edge.
These are both 'full' backups (with the cache pre-staged). My network link is 70 Mbps on both ends, and I was hoping to get closer to 70 Mbps than 16 Mbps. It also seems odd that the non-WAN-accelerated job is roughly 2x faster than the WAN-accelerated one.
Are your WAN accelerators on the same server as the repository, or on a separate server? Is your global cache on a separate disk from the repository or the same? Have you looked at the I/O latency on the target WAN accelerator?
No, we are not saying that. The actual data transfer ran at 70 Mbps and only took a few seconds. Most of the time, the job spent preparing that 32MB data package (reading the source data to identify the changes, and performing lookups in the target WAN accelerator cache and target repository). Faster storage would reduce this time significantly, and the primary bottleneck (according to the job statistics) is the target WAN accelerator's cache speed. Generally speaking, WAN acceleration is all about trading bandwidth for disk I/O. Thanks!
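To illustrate the bandwidth-for-I/O trade-off described above, here is a minimal sketch of block-level deduplication against a target cache. All names, the block size, and the in-memory dict standing in for the on-disk global cache are illustrative assumptions, not Veeam's actual implementation; the point is that every block requires a hash computation and a cache lookup (disk I/O on the real target accelerator), while only cache misses consume WAN bandwidth.

```python
import hashlib

BLOCK_SIZE = 4  # tiny block size for illustration; real accelerators use far larger blocks

def split_blocks(data, size=BLOCK_SIZE):
    """Split the source data into fixed-size blocks."""
    return [data[i:i + size] for i in range(0, len(data), size)]

def plan_transfer(source_data, target_cache):
    """Decide, per block, whether the payload must cross the WAN.

    Returns (blocks_to_send, cache_hits). Every block costs a hash plus a
    cache lookup (this is the disk I/O the accelerator trades for bandwidth);
    only blocks missing from the target cache are sent in full.
    """
    to_send, hits = [], []
    for block in split_blocks(source_data):
        digest = hashlib.sha256(block).hexdigest()
        if digest in target_cache:
            hits.append(digest)                  # cache hit: send only a short reference
        else:
            to_send.append((digest, block))      # cache miss: full block crosses the WAN
            target_cache[digest] = block         # target caches it for the next run
    return to_send, hits

cache = {}
first_sent, _ = plan_transfer(b"AAAABBBBCCCC", cache)       # cold cache: all 3 blocks sent
second_sent, second_hits = plan_transfer(b"AAAABBBBDDDD", cache)  # warm cache: only DDDD sent
```

With a pre-staged (warm) cache, the second run ships one block instead of three, but it still performs a lookup for every block, which is why slow cache storage on the target can make the accelerated job slower than direct mode on a fast enough link.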