We already confirmed with support that shared-memory mode is being used properly (#01100148).
Code:
[03.11.2015 23:12:08] < 9392> cli| Creating shared memory device with name: [SharedMemDev_{8c492f15-3142-4f50-8049-5551b2b4d3d4}].
[03.11.2015 23:12:08] < 9392> cli| Opening process handle for PID: [18608].
[03.11.2015 23:12:08] < 9392> cli| Creating ring buffer: [SharedMemDev_{8c492f15-3142-4f50-8049-5551b2b4d3d4}_ForwardBuf_SharedMem].
[03.11.2015 23:12:08] < 9392> cli| Creating ring buffer: [SharedMemDev_{8c492f15-3142-4f50-8049-5551b2b4d3d4}_BackwardBuf_SharedMem].
[03.11.2015 23:12:08] < 9392> cli| Creating shared memory device with name: [SharedMemDev_{8c492f15-3142-4f50-8049-5551b2b4d3d4}]. ok.
[03.11.2015 23:12:08] < 9392> cli| Shared memory connection has been successfully accepted.
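For context, the log suggests one ring buffer per direction (ForwardBuf / BackwardBuf) so that two processes on the same host can exchange data through shared memory instead of loopback TCP/IP. A minimal sketch of that mechanism in Python, purely for illustration — the buffer names, header layout, and capacity here are my own assumptions, not Veeam's actual on-host format:

```python
# Sketch of a single-producer/single-consumer ring buffer in shared memory.
# Real implementations pair two of these (forward and backward) per connection,
# as the ForwardBuf/BackwardBuf names in the log imply.
from multiprocessing import shared_memory

CAP = 16  # data capacity in bytes (tiny, for demonstration only)

class RingBuf:
    """Layout (assumed for this sketch):
    byte 0 = read index, byte 1 = write index, bytes 2.. = data."""
    def __init__(self, name, create=False):
        self.shm = shared_memory.SharedMemory(name=name, create=create, size=2 + CAP)
        if create:
            self.shm.buf[0] = 0  # read index
            self.shm.buf[1] = 0  # write index

    def put(self, b):
        r, w = self.shm.buf[0], self.shm.buf[1]
        if (w + 1) % CAP == r:
            raise BufferError("ring buffer full")
        self.shm.buf[2 + w] = b
        self.shm.buf[1] = (w + 1) % CAP  # publish the write

    def get(self):
        r, w = self.shm.buf[0], self.shm.buf[1]
        if r == w:
            raise BufferError("ring buffer empty")
        b = self.shm.buf[2 + r]
        self.shm.buf[0] = (r + 1) % CAP  # consume the byte
        return b

# In-process demo: one buffer for the forward direction.
fwd = RingBuf("demo_ForwardBuf", create=True)
for byte in b"hi":
    fwd.put(byte)
received = bytes(fwd.get() for _ in range(2))
fwd.shm.close()
fwd.shm.unlink()
print(received)  # b'hi'
```

The point of the two-buffer design is that each direction has exactly one writer and one reader, which keeps the synchronization trivial compared to a single shared queue.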
Question: Is it really possible for a 2 x 8-core Xeon with 144 GB of RAM to be bottlenecked by memory transfer speed rather than by disk reads/writes or even tape writes?
We see the bottleneck reported as "Network" even in tape jobs running from the repository to a tape server on the same machine.
I have to admit that we have quite large backup jobs, with VBKs around 6 TB.
Feature request: I would find it helpful to indicate correct usage of shared memory instead of on-host TCP/IP by replacing the caption "Network" with "Shared-Mem" in the bottleneck section of the backup log, or even in the GUI.
Thanks,
Mike