I've done some testing over the past day or so with different targets.
I have two ESXi hosts connected with Cisco gigabit switches, all brand-new, top-of-the-line IBM kit, but using DAS rather than SAN/NAS storage.
Veeam is running on a Windows Server 2003 VM with 4 vCPUs and 4GB RAM.
My test VM is being backed up using VA mode, best compression, local target.
Linux target
Now, if I point this backup at a Linux share on a VM on the other ESXi server I get about 55MB/sec across the LAN, peaking at about 80MB/sec. This fully loads all 4 CPUs on the Ubuntu VM.
I tried increasing the CPU count to 8 on the Ubuntu VM and performance did increase slightly, to an average of around 65MB/sec, peaking at 100.5MB/sec.
With one CPU in the Ubuntu VM the performance was around 17MB/sec.
CPU usage on my Veeam server never rises above about 40%, and always seems to hover around 25%.
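
To make the CPU scaling easier to eyeball, here's a quick Python sketch of the per-vCPU arithmetic, using nothing but the averages I measured above:

# Rough throughput-per-vCPU arithmetic from the Ubuntu target tests above.
results = {1: 17, 4: 55, 8: 65}  # vCPUs -> average MB/sec observed

for vcpus, mbs in sorted(results.items()):
    print(f"{vcpus} vCPU(s): {mbs} MB/sec avg = {mbs / vcpus:.1f} MB/sec per vCPU")

The per-vCPU figure falls from 17 to about 13.8 to about 8.1, so whatever work the Linux target is doing stops scaling linearly well before 8 vCPUs.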
Windows target
If the backup is pointed at a physical Windows file server (capable of 120MB/sec across the LAN) I get an average of around 13MB/sec, peaking at 25MB/sec. At the same time the Veeam server is using around 75% CPU, peaking at 100%.
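
(For anyone wanting to reproduce this and rule out the raw link first, something like the following rough Python sketch could measure plain TCP throughput between the Veeam server and a target. It's a hypothetical helper, not part of Veeam, and the port and transfer size are arbitrary.)

import socket
import sys
import time

PORT = 5001              # arbitrary test port
CHUNK = 1024 * 1024      # 1 MB per send/recv
TOTAL = 512 * CHUNK      # push 512 MB per run

def server():
    # Receive bytes until the client shuts down its side, then report.
    with socket.socket() as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        s.bind(("", PORT))
        s.listen(1)
        conn, _ = s.accept()
        with conn:
            received = 0
            while True:
                data = conn.recv(CHUNK)
                if not data:
                    break
                received += len(data)
            print(f"received {received // CHUNK} MB")

def client(host):
    # Time how long it takes to push TOTAL bytes to the server.
    buf = b"\0" * CHUNK
    start = time.time()
    with socket.create_connection((host, PORT)) as s:
        sent = 0
        while sent < TOTAL:
            s.sendall(buf)
            sent += CHUNK
        s.shutdown(socket.SHUT_WR)  # signal EOF so the server's recv loop ends
        s.recv(1)                   # wait for the server to finish draining
    elapsed = time.time() - start
    print(f"{sent / CHUNK / elapsed:.1f} MB/sec")

if __name__ == "__main__":
    # Usage: "python3 tput.py server" on the target,
    #        "python3 tput.py client <host>" on the Veeam box.
    if sys.argv[1] == "server":
        server()
    else:
        client(sys.argv[2])

Running the client against both targets would show whether the link itself, rather than CPU, explains the gap between 120MB/sec raw and 13MB/sec during backup.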
So it's clear to me that, in my environment, backing up to a Linux target is way faster than backing up to a Windows target.
After that long-winded intro, I guess my question is: why does the CPU load apparently get transferred to the Linux server during backup? I'm curious about the mechanics behind the backup process; if I understand it better, it may help with our deployment decisions in future. My other question is: how do I get 'Linux performance' using only Windows?

Regards
Steve