Host-based backup of VMware vSphere VMs.
Tijz
Service Provider
Posts: 34
Liked: 4 times
Joined: Jan 20, 2012 10:03 am
Full Name: Mattijs Duivenvoorden
Contact:

After splitting one job into two, overall backup is larger

Post by Tijz »

Hi,

I had one backup job for all my vSphere virtual machines (41).
I've set compression to "none" and storage optimization to "local target" (I'm using a deduplication appliance).

A full backup is about 4.5TB.
For months, daily incrementals were about 240GB.

Now I've split the job into two jobs. One job backs up 19 VMs and the other 22.
A full backup of each job is about 2.25TB, so 4.5TB together.

But the daily incremental backups now amount to about 400GB (200GB each)! Almost twice as large as when I was using just one job.
How is this possible?

I'm using Veeam B&R 6.1.0.181 with vSphere 5.0.0
I'm backing up directly from an iSCSI SAN.

Thanks.
foggy
Veeam Software
Posts: 21139
Liked: 2141 times
Joined: Jul 11, 2011 10:22 am
Full Name: Alexander Fogelson
Contact:

Re: After splitting one job into two, overall backup is larger

Post by foggy »

Mattijs, Veeam B&R performs inline data deduplication on the job level (not between different jobs). Since you've split your job into several ones, data is not deduped to the same level anymore.
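To illustrate the effect foggy describes, here is a minimal sketch (not Veeam's actual algorithm) of block-level inline deduplication whose scope is limited to a single job: a block shared by VMs in the same job is stored once, but if those VMs land in different jobs, each job stores its own copy. All names and block contents below are hypothetical.

```python
import hashlib

def backup_size(vm_disks):
    """Deduplicated size of one job: each unique block is counted once."""
    seen = set()
    total = 0
    for disk in vm_disks:
        for block in disk:
            digest = hashlib.sha256(block).hexdigest()
            if digest not in seen:      # dedup scope = this job only
                seen.add(digest)
                total += len(block)
    return total

# Two hypothetical VMs that share an identical 1KB OS block:
os_block = b"A" * 1024
vm1 = [os_block, b"B" * 1024]
vm2 = [os_block, b"C" * 1024]

one_job = backup_size([vm1, vm2])                   # shared block stored once
two_jobs = backup_size([vm1]) + backup_size([vm2])  # shared block stored twice
print(one_job, two_jobs)  # 3072 4096
```

Splitting the job loses cross-VM deduplication between the two groups, so the combined backups grow even though the source data is unchanged.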

Re: After splitting one job into two, overall backup is larger

Post by Tijz »

Oh yes, you might be right. Although I disabled compression and optimized for "local target", I still had "inline deduplication" checked.

But how do I determine the amount of deduped data then? I assume it's the difference between the amount of data "read" and "transferred" (when you look at the job statistics).
For both backup jobs, the data "read" and "transferred" are about the same, so, as you said, no dedupe (anymore?).
But the amount "read" by both jobs together is way more than when I had just one job.

So now:
Job 1 "read" is 208GB
Job 1 "transferred" is 206GB

Job 2 "read" is 185GB
Job 2 "transferred" is 166GB

Together, about 393GB is "read".

When I had just one job it was like this:
Job "read" was 240GB
Job "transferred" was 219GB.

So that's still a lot less than it is now. I don't think this is due to deduplication, do you?
Or is deduplication performed at an earlier stage?
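Assuming the dedup (plus compression) savings show up as the gap between "read" and "transferred", the figures quoted above can be compared directly. The arithmetic suggests the split jobs save roughly the same absolute amount as the single job did; the real growth is in how much data is read in the first place:

```python
# (read GB, transferred GB), taken from the job statistics quoted above
split = {"job1": (208, 206), "job2": (185, 166)}
single = (240, 219)

split_read = sum(r for r, t in split.values())      # total read by both jobs
split_saved = sum(r - t for r, t in split.values()) # read - transferred per job
single_saved = single[0] - single[1]

print(split_read)    # 393 GB read, vs 240 GB for the single job
print(split_saved)   # 21 GB saved across both split jobs
print(single_saved)  # 21 GB saved by the single job
```

On these numbers, the savings within each job are unchanged; what differs is the ~150GB of additional data being read after the split.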
Gostev
Chief Product Officer
Posts: 31814
Liked: 7302 times
Joined: Jan 01, 2006 1:01 am
Location: Baar, Switzerland
Contact:

Re: After splitting one job into two, overall backup is larger

Post by Gostev »

Yes, deduplication is performed both at the source and at the target - just on different scopes of data.