foggy wrote: A "Source" bottleneck means that data cannot be retrieved from the storage any faster. Currently the source data reader is the slowest component in the data processing chain; the other components could process more data, but are just sitting and waiting for it. Giving them the ability to process more data will result in an overall backup performance increase (and the bottleneck will probably shift to another component).
OK, that makes sense - it's reading from the source 99% of the time and is always making the other components wait. But then how did allowing more concurrent tasks manage to get more data from the host? How can the host send data more quickly just by serving multiple streams?
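My guess is something like the toy model below (my own made-up numbers and assumptions, not anything Veeam publishes): if each individual read stream is limited by per-request round-trip latency rather than by the storage's raw bandwidth, then several streams in flight add up, and the overall job rate climbs until the storage's real ceiling is reached and the bottleneck moves elsewhere.

    # Toy throughput model, NOT Veeam's actual engine.
    # Assumption: each read stream is latency-bound, not bandwidth-bound.

    REQUEST_SIZE_MB = 1.0         # data returned per read request (assumed)
    LATENCY_S = 0.01              # round-trip time per request (assumed)
    STORAGE_CEILING_MBPS = 400.0  # hard limit of the source storage (assumed)
    DOWNSTREAM_MBPS = 1000.0      # what proxy/network/target could absorb (assumed)

    def job_rate(concurrent_read_tasks):
        """Overall job rate in MB/s: the pipeline runs at its slowest stage."""
        per_stream = REQUEST_SIZE_MB / LATENCY_S                        # 100 MB/s each
        source = min(concurrent_read_tasks * per_stream, STORAGE_CEILING_MBPS)
        return min(source, DOWNSTREAM_MBPS)

    for n in (1, 2, 4, 8):
        print(n, "task(s):", job_rate(n), "MB/s")
    # 1 task  -> 100 MB/s (latency-bound: source is the bottleneck)
    # 4 tasks -> 400 MB/s (storage ceiling reached; more tasks won't help)

Is that roughly what is going on, or is the gain coming from somewhere else?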
You can look up previous jobs' stats in the sessions History.
I can see duration, processing rate, processed, read and transferred for all sessions, but they don't tell me everything that's on the graphs. I'm interested in the maximum transfer rate we achieved and whether there were periods of low speed.
The processing rate also seems a bit deceptive. It's the amount of data processed divided by the job duration, but the swap file and deleted blocks don't actually get read, which makes that rate higher than the actual read rate. And all the statistics to do with actual read rates are unavailable for all but the most recent job.
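To put made-up numbers on what I mean (purely illustrative figures, not from any real session), here is the gap between the two rates:

    # Hypothetical job: 500 GB processed, of which 120 GB (swap + deleted
    # blocks) was skipped and never read, finishing in 2 hours.
    processed_gb = 500.0
    skipped_gb   = 120.0
    read_gb      = processed_gb - skipped_gb
    duration_s   = 2 * 3600

    processing_rate = processed_gb * 1024 / duration_s  # reported: ~71 MB/s
    read_rate       = read_gb * 1024 / duration_s       # actual:   ~54 MB/s

    print("processing rate:", round(processing_rate), "MB/s")
    print("actual read rate:", round(read_rate), "MB/s")

So the headline number looks a fair bit better than what the source reader really sustained, and I can only see the real read figures for the latest run.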