Yuki wrote:Hmmm i want to hear from Veeam guys on this subject. We have a proxy VM deloyed on all our hosts. And even for local replication we have compression enabled. Proxies have 8 cores and are never pegged at 100% CPU (except when we push to remote site).
I think you'll find that I count as one of the "Veeam guys".

I currently work for Veeam as a Solutions Architect focused primarily on B&R. I spend my days training customers and partners on how to deploy B&R.
The way our architecture works is very simple, a VeeamAgent.exe process is started on the source proxy, and a VeeamAgent.exe process is started on the target proxy and they connect to each other. The source proxy compresses the data, sends it to the target proxy, which then decompresses the data. If the same server is used for both the source and target proxy it doesn't really change the process, the server will have two VeeamAgent.exe processes which connect to each other locally. It doesn't make a lot of sense to waste the CPU cycles to send compressed data between two processes on the same VM.
If you need more proof, here's a quick log of a full replication job of a small VM with compression enabled:
Code: Select all
2/5/2013 11:14:59 PM :: Job started at 2/5/2013 11:14:56 PM
2/5/2013 11:14:59 PM :: Building VM list
2/5/2013 11:15:43 PM :: VM size: 45.0 GB (26.6 GB used)
2/5/2013 11:15:43 PM :: Changed block tracking is enabled
2/5/2013 11:15:45 PM :: Preparing next VM for processing
2/5/2013 11:15:45 PM :: Processing 'srv01'
2/5/2013 11:29:40 PM :: All VMs have been processed
2/5/2013 11:29:41 PM :: Load: Source 10% > Proxy 66% > Network 17% > Target 99%
2/5/2013 11:29:41 PM :: Primary bottleneck: Target
2/5/2013 11:29:41 PM :: Job finished at 2/5/2013 11:29:41 PM
and then with compression disabled:
Code: Select all
2/5/2013 11:34:42 PM :: Job started at 2/5/2013 11:34:40 PM
2/5/2013 11:34:42 PM :: Building VM list
2/5/2013 11:35:25 PM :: VM size: 45.0 GB (26.6 GB used)
2/5/2013 11:35:25 PM :: Changed block tracking is enabled
2/5/2013 11:35:27 PM :: Preparing next VM for processing
2/5/2013 11:35:27 PM :: Processing 'srv01'
2/5/2013 11:48:55 PM :: All VMs have been processed
2/5/2013 11:48:57 PM :: Load: Source 5% > Proxy 32% > Network 38% > Target 99%
2/5/2013 11:48:57 PM :: Primary bottleneck: Target
2/5/2013 11:48:57 PM :: Job finished at 2/5/2013 11:48:57 PM
Notice that the second job with compression disabled finished slightly faster (it could have been even faster, but the target, a slow iSCSI array, was the bottleneck for both runs) however, the CPU load was cut almost in half by disabling compression.