pinkerton
Enthusiast
Posts: 82
Liked: 4 times
Joined: Sep 29, 2011 9:57 am

Replication: Question regarding amount of transferred data

Post by pinkerton »

Hi,

I just performed an initial replication of a VM with an attached vRDM that holds about 1.2 TB of data. Only about 800 GB were transferred:

[Image: screenshot of the replication job statistics]

Does the "1,9x" shown after the amount of transferred data indicate the compression ratio? The notification email for this job, however, shows a compression ratio of 1.0. Is this a bug?

I have another question: how does compression for replication actually work when there is no agent on the target ESXi host that can decompress the data? I thought compression was not available for replication at all.

Thanks!
dellock6
VeeaMVP
Posts: 6166
Liked: 1971 times
Joined: Jul 26, 2009 3:39 pm
Full Name: Luca Dell'Oca
Location: Varese, Italy

Re: Replication: Question regarding amount of transferred data

Post by dellock6 »

From what I understood, that value is a combination of the compression and deduplication ratios; I don't know if it also considers CBT. Anyway, it basically tells you how much you saved compared to moving the whole VM.
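
If that reading is right, the multiplier is just data read divided by data transferred. A quick sketch of that assumption (my interpretation, not confirmed Veeam internals):

# Assumption (not confirmed): the "x" multiplier is simply
# data read from source / data actually sent over the wire,
# i.e. the combined effect of dedupe + compression + filtering.

def savings_multiplier(read_bytes, transferred_bytes):
    return read_bytes / transferred_bytes

read = 1.2 * 1024**4   # ~1.2 TB read from the source disks
sent = 800 * 1024**3   # ~800 GB actually transferred
print(f"{savings_multiplier(read, sent):.1f}x")  # prints "1.5x"

Note that with the round numbers from your post (1.2 TB read, ~800 GB sent) this comes out around 1.5x, not 1,9x, so other factors are clearly in play.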

PS: why did you place the image on a website full of spam and other bad code?
Luca Dell'Oca
Principal EMEA Cloud Architect @ Veeam Software

@dellock6
https://www.virtualtothecore.com/
vExpert 2011 -> 2022
Veeam VMCE #1
tsightler
VP, Product Management
Posts: 6035
Liked: 2860 times
Joined: Jun 05, 2009 12:57 pm
Full Name: Tom Sightler

Re: Replication: Question regarding amount of transferred data

Post by tsightler »

pinkerton wrote: I have another question: how does compression for replication actually work when there is no agent on the target ESXi host that can decompress the data? I thought compression was not available for replication at all.
For replication you should have two proxies, one on the source side and another on the target side. These proxies handle the compression between the two sides. If you don't have this, you are not getting the most out of V6 replication (except perhaps in the case of local replication).
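
Conceptually the data path is something like the sketch below. This is illustrative only, not Veeam's actual implementation; plain zlib stands in for whatever codec the proxies really use:

import zlib

# Illustrative only: a source/target proxy pair trading CPU for bandwidth.
def source_proxy_send(block: bytes) -> bytes:
    # The source-side agent compresses before data crosses the link.
    return zlib.compress(block)

def target_proxy_receive(payload: bytes) -> bytes:
    # The target-side agent decompresses and writes to the replica.
    return zlib.decompress(payload)

block = bytes(1048576)  # a 1 MB block of zeros compresses extremely well
wire = source_proxy_send(block)
assert target_proxy_receive(wire) == block
print(f"sent {len(wire)} bytes instead of {len(block)}")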
pinkerton
Enthusiast
Posts: 82
Liked: 4 times
Joined: Sep 29, 2011 9:57 am

Re: Replication: Question regarding amount of transferred data

Post by pinkerton »

Sorry about the screenshot; unfortunately, I copied the wrong link.

Regarding compression: we are only using local replication, so there is only one proxy. I just had a look at the documentation, and it seems that in this case - as expected - no compression is performed:

Note that in on-site replication scenarios, the source-side agent and the target-side agent may run on the same backup proxy server. In this case, no compression is performed.
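
Which makes sense: compressing and immediately decompressing inside the same proxy would only waste CPU. Illustratively (my own sketch, not product code):

# Sketch of the documented behavior: compression only pays off when the
# source-side and target-side agents run on different proxies.
def should_compress(source_proxy: str, target_proxy: str) -> bool:
    return source_proxy != target_proxy

print(should_compress("proxy1", "proxy1"))  # False: on-site, single proxy
print(should_compress("proxy1", "proxy2"))  # True: remote, two proxies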

I guess the lower amount of transferred data is due to the following:

While copying, the source-side agent performs additional processing - it consolidates the content of virtual disks by filtering out overlapping snapshot blocks, zero data blocks and blocks of swap files

So I guess this is due to the filtering of overlapping blocks then, though I'm not sure what "snapshot blocks" actually means.

However, the VM holds 1.2 TB of data and only about 800 GB were transferred - and the replica isn't missing anything. So something must have been going on!
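
For what it's worth, the zero-block part of that filtering could look roughly like this. Purely illustrative; it ignores the swap-file and snapshot-consolidation parts:

BLOCK_SIZE = 1048576            # assume 1 MB blocks for illustration
ZERO_BLOCK = bytes(BLOCK_SIZE)  # 1 MB of zeros to compare against

def blocks_to_transfer(blocks):
    # Skip blocks that contain only zeros so they never hit the wire.
    for block in blocks:
        if block == ZERO_BLOCK:
            continue
        yield block

disk = [bytes(BLOCK_SIZE), b"data" * 262144, bytes(BLOCK_SIZE)]
sent = list(blocks_to_transfer(disk))
print(f"{len(sent)} of {len(disk)} blocks transferred")  # 1 of 3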
foggy
Veeam Software
Posts: 21139
Liked: 2141 times
Joined: Jul 11, 2011 10:22 am
Full Name: Alexander Fogelson

Re: Replication: Question regarding amount of transferred data

Post by foggy »

Hello!

It's not surprising that the amount of actually transferred data is much less than the total amount of read data (the new replication engine is in many ways about saving your network resources). What is more interesting is that the VM Size numbers in the Action list do not correspond to the processed data estimates. It would be much appreciated if you could open a support case with the full logs and post the case ID here so we can review them and understand the reason for the mismatch. Thanks.