Comprehensive data protection for all workloads
PKaufmann
Enthusiast
Posts: 49
Liked: 5 times
Joined: Oct 05, 2016 8:00 am
Contact:

full backup file size too big

Post by PKaufmann »

Hi,

I have a strange issue here; I hope someone has an idea or can help.

We are using a DataDomain as the repository.
Today I was checking the backup file sizes via "Files" in the Backup & Replication console.

The full backup files (VBK) of all jobs using forever forward incremental (no synthetic fulls) are way too large.

Examples:
VM size: used storage in vCenter: 892.64 GB
VBK file on DataDomain: 10.2 TB

VM size: used storage in vCenter: 49.99 GB
VBK file on DataDomain: 2.0 TB

I don't know what's going on. It looks a bit like the merging process is not working correctly and the VBK file just keeps getting bigger and bigger.
As dedup on the DataDomain works pretty well, I can't tell whether we lose any disk space because of this.

Is there some kind of cleanup task for this, or do I have to create new active/synthetic fulls to get rid of these large files?

Thanks for your help.

PTide
Product Manager
Posts: 5894
Liked: 594 times
Joined: May 19, 2015 1:46 pm
Contact:

Re: full backup file size too big

Post by PTide »

Hi,

That is definitely something you should show to our support team; have you done that? Also, have you tried running the compact operation?

Thanks

foggy
Veeam Software
Posts: 20106
Liked: 1878 times
Joined: Jul 11, 2011 10:22 am
Full Name: Alexander Fogelson
Contact:

Re: full backup file size too big

Post by foggy »

When performing merges on a forever forward incremental chain to a DDBoost repository, Veeam B&R appends data to the full backup file rather than reusing the unused space inside it, for performance reasons. That's why periodic synthetic fulls are recommended in this case, since they prevent the full from growing indefinitely.
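As an illustration only (a toy model, not Veeam code, with made-up sizes): on a dedupe appliance the merge appends the oldest increment's blocks to the .vbk and never reclaims the stale blocks they replace, so the file on disk grows with every merge cycle even while the VM's logical size stays flat:

```python
# Hypothetical model of forever forward incremental merges on an
# append-only repository. Each merge appends the oldest increment's
# blocks to the full file instead of overwriting stale blocks in place,
# so the .vbk grows even though the logical VM size is constant.

def merge_append_only(vbk_size_gb, increment_sizes_gb):
    """On-disk .vbk size after merging each increment by appending."""
    for inc in increment_sizes_gb:
        vbk_size_gb += inc  # appended; stale blocks are never reclaimed
    return vbk_size_gb

def merge_in_place(vbk_size_gb, increment_sizes_gb):
    """Generic repository: merged blocks overwrite stale ones in place,
    so the file stays roughly at the size of the live data."""
    return vbk_size_gb  # rewrites reuse the freed space inside the file

full = 900                 # ~VM used space in GB
daily_inc = [30] * 365     # a year of daily ~30 GB merges
print(merge_append_only(full, daily_inc))  # 11850 GB, i.e. ~11.6 TB
print(merge_in_place(full, daily_inc))     # 900 GB
```

With ~900 GB of used space and a year of daily merges, the appended file reaches the 10+ TB range reported above, while an in-place merge would keep the full near the source size.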

PKaufmann
Enthusiast
Posts: 49
Liked: 5 times
Joined: Oct 05, 2016 8:00 am
Contact:

Re: full backup file size too big

Post by PKaufmann »

foggy wrote:When performing merges on a forever forward incremental chain to a DDBoost repository, Veeam B&R appends data to the full backup file rather than reusing the unused space inside it, for performance reasons. That's why periodic synthetic fulls are recommended in this case, since they prevent the full from growing indefinitely.
So that is a known issue with the combination of DataDomain and Veeam?

PTide wrote:That is definitely something you should show to our support team; have you done that? Also, have you tried running the compact operation?
If it's a known issue, it makes no sense to open a support case, right?
I will try the compact operation with one backup job for testing. Thanks.

foggy
Veeam Software
Posts: 20106
Liked: 1878 times
Joined: Jul 11, 2011 10:22 am
Full Name: Alexander Fogelson
Contact:

Re: full backup file size too big

Post by foggy »

PKaufmann wrote:So that is a known issue with the combination of DataDomain and Veeam?
Not only DataDomain, but other dedupe appliances as well.

PKaufmann
Enthusiast
Posts: 49
Liked: 5 times
Joined: Oct 05, 2016 8:00 am
Contact:

Re: full backup file size too big

Post by PKaufmann »

Compact is working.

Virtualredse
Influencer
Posts: 12
Liked: 1 time
Joined: Sep 13, 2018 8:38 am
Full Name: Matthew Oldham
Contact:

[MERGED] Backup Copy and Forever Incremental maintenance

Post by Virtualredse »

Hi

Now I know this area is probably not a new topic or point of discussion, but I am trying to get it clear in my head so we can better manage some secondary destination capacity.

We have a rather large Linux job which backs up around 200 VMs nightly, and we then run an offsite backup copy job to a DataDomain device with 14 days' retention. What I have noticed over the past few months is that the rate of consumption has been incredibly high. After digging a bit deeper, I found a fairly large number of VMs with a huge "Backup Size" on disk, an example being a 2 TB RHEL VM with a backup size listed as 78 TB under the copy job properties. This is the largest I could find, but many VMs are listed at between 15 TB and 30 TB for 500 GB on disk, which when pooled together represents a very large proportion of the storage.

So I am wondering how other people are managing such growth as incrementals are continually merged.
Are people scripting a scheduled active full to refresh the full in the chain, or are people using full backup maintenance for removing deleted items / defragmenting? And if so, how effective is that in keeping things at a manageable level?

A customer I am working with is growing fairly rapidly, so requirements will only increase in the near future, and I would like to get some maintenance practices in place ASAP.

Appreciate any thoughts :D

foggy
Veeam Software
Posts: 20106
Liked: 1878 times
Joined: Jul 11, 2011 10:22 am
Full Name: Alexander Fogelson
Contact:

Re: full backup file size too big

Post by foggy »

Hi Matthew, please see above for the explanation. To avoid this behavior, it is recommended to enable the periodic compact operation or have GFS retention in place (new fulls are created there as well, while old ones go away according to GFS retention).
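A minimal sketch of what the compact operation conceptually achieves: rewrite the full, keeping only the blocks still referenced by a restore point and discarding the stale copies left behind by merges. The block layout here is a made-up illustration, not Veeam's file format:

```python
# Toy model (assumption, not Veeam internals): a compact pass rewrites
# the full backup file keeping only blocks that are still live, which
# is why it resets the append-only growth described above.

def compact(blocks):
    """blocks: list of (offset, live) tuples; size after compaction
    counted in blocks, keeping only the live ones."""
    return sum(1 for _, live in blocks if live)

# A bloated full: for every 10 blocks on disk, 4 are still live and
# 6 are stale copies left behind by earlier merges.
bloated = [(i, i % 10 < 4) for i in range(100)]
print(len(bloated), compact(bloated))  # 100 blocks on disk, 40 live
```

The same effect is what a periodic synthetic/active full or a GFS-triggered rebuild gives you: a fresh file containing only live blocks.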

Virtualredse
Influencer
Posts: 12
Liked: 1 time
Joined: Sep 13, 2018 8:38 am
Full Name: Matthew Oldham
Contact:

Re: full backup file size too big

Post by Virtualredse »

OK, thanks for that, Alexander. So those are the options available to us... now to start up a cleanup operation and get one of those practices in place!

Cheers

chsuscale
Novice
Posts: 7
Liked: never
Joined: Jan 09, 2020 9:21 am
Contact:

[MERGED] Backup Size of a VM is a lot bigger than data size

Post by chsuscale »

Hi there,

I've got a VM (*130*) which is configured with 100 GB. It's a Linux VM. Can anyone help me explain why the VBK file is so much bigger than the data?
On the initial backup job all is correct. This "problem" only occurs on the copy job to the StoreOnce.

Image

DGrinev
Expert
Posts: 1943
Liked: 247 times
Joined: Dec 01, 2016 3:49 pm
Full Name: Dmitry Grinev
Location: St.Petersburg
Contact:

Re: Backup Size of a VM is a lot bigger than data size

Post by DGrinev »

Hi and welcome to the Veeam community!

The issue comes from the StoreOnce Catalyst storage, which doesn't support cleaning up removed data blocks. Thanks!

chsuscale
Novice
Posts: 7
Liked: never
Joined: Jan 09, 2020 9:21 am
Contact:

Re: Backup Size of a VM is a lot bigger than data size

Post by chsuscale »

Thanks for your answer; I did not realize that I had gotten one. :S
Is there a solution for that?
Or should I test CIFS instead of the StoreOnce Catalyst protocol?

foggy
Veeam Software
Posts: 20106
Liked: 1878 times
Joined: Jul 11, 2011 10:22 am
Full Name: Alexander Fogelson
Contact:

Re: full backup file size too big

Post by foggy »

Since you already have GFS retention enabled, this prevents the latest full (which is part of the regular chain) from growing infinitely: each time a GFS full backup is offloaded, the regular one is created from scratch, copying only those blocks that comprise the required VM state and skipping unused ones. GFS restore points, though, will still contain unused data blocks (you can see this in your picture), so there is some space overhead.
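The cycle described above can be sketched as follows (a rough model under the assumption of append-only merges; the sizes are hypothetical):

```python
# Rough model (assumption, not Veeam code): when a GFS full is
# offloaded, the regular full is rebuilt from scratch with only the
# blocks of the current VM state, so the active chain stops growing;
# the archived GFS fulls keep the bloat they accumulated.

def weekly_cycle(regular_full, live_data, daily_growth, days=7):
    """Simulate one week of append-only merges, then a GFS-triggered
    rebuild. Returns (new regular full size, archived GFS point size)."""
    for _ in range(days):
        regular_full += daily_growth  # each merge appends stale blocks
    gfs_point = regular_full          # bloated copy kept under GFS retention
    regular_full = live_data          # rebuilt from scratch, live blocks only
    return regular_full, gfs_point

full, gfs = weekly_cycle(regular_full=100, live_data=100, daily_growth=20)
print(full, gfs)  # 100 240: the regular full resets, the GFS point keeps the overhead
```

This matches the pattern in the screenshots: the weekly/monthly GFS points carry the unused blocks, while the freshly rebuilt regular full stays close to the live data size.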

chsuscale
Novice
Posts: 7
Liked: never
Joined: Jan 09, 2020 9:21 am
Contact:

Re: full backup file size too big

Post by chsuscale »

I'm not sure I understand you correctly; to me it looks like the growth is more or less infinite.
Image

veremin
Product Manager
Posts: 18421
Liked: 1822 times
Joined: Oct 26, 2012 3:28 pm
Full Name: Vladimir Eremin
Contact:

Re: full backup file size too big

Post by veremin »

But the screenshot proves foggy's point:

- GFS full backups (the ones marked with the M(onthly) and W(eekly) labels) are growing in size, since they contain unused data blocks
- The R(egular) full backup occupies much less space

Thanks!

foggy
Veeam Software
Posts: 20106
Liked: 1878 times
Joined: Jul 11, 2011 10:22 am
Full Name: Alexander Fogelson
Contact:

Re: full backup file size too big

Post by foggy »

And the overall growth of the full backup file over time is also expected, since the data inside the guest OS tends to grow.

chsuscale
Novice
Posts: 7
Liked: never
Joined: Jan 09, 2020 9:21 am
Contact:

Re: full backup file size too big

Post by chsuscale »

So in combination with StoreOnce there is no way to "reset", "compact", or anything similar?
The machine size itself in VMware is 100 GB.

foggy
Veeam Software
Posts: 20106
Liked: 1878 times
Joined: Jul 11, 2011 10:22 am
Full Name: Alexander Fogelson
Contact:

Re: full backup file size too big

Post by foggy »

Right, compact is not supported on StoreOnce. Could you please check the size of the regular full at the moment it is created, on the day the GFS full is scheduled (prior to any increment merges)? It should be less than the amount of original data due to compression.

chsuscale
Novice
Posts: 7
Liked: never
Joined: Jan 09, 2020 9:21 am
Contact:

Re: full backup file size too big

Post by chsuscale »

For clarification:
You mean, if the "Full Backup Restore Point" is set up for Sunday, I should check the latest size of the full backup before Sunday?

veremin
Product Manager
Posts: 18421
Liked: 1822 times
Joined: Oct 26, 2012 3:28 pm
Full Name: Vladimir Eremin
Contact:

Re: full backup file size too big

Post by veremin »

Nope, the size of the regular full backup right after the GFS restore point is created on Sunday. Thanks!

chsuscale
Novice
Posts: 7
Liked: never
Joined: Jan 09, 2020 9:21 am
Contact:

Re: full backup file size too big

Post by chsuscale »

So this is the view:
Image
You're right that the first full after GFS is correct, but the others keep growing.
Last time we were at 223 GB max; now we are at 272 GB.

foggy
Veeam Software
Posts: 20106
Liked: 1878 times
Joined: Jul 11, 2011 10:22 am
Full Name: Alexander Fogelson
Contact:

Re: full backup file size too big

Post by foggy »

As I mentioned above, the overall growth of the full backup file over time is expected, depending on the data change pattern inside the guest OS.

mytsk
Service Provider
Posts: 4
Liked: never
Joined: Oct 02, 2018 2:20 pm
Full Name: Martin
Contact:

Re: full backup file size too big

Post by mytsk »

Sorry for necroing this thread, but I just realized we're facing the same issue with our StoreOnce, and I have a follow-up question.

After a merge is completed and the old data is removed, I presume what is left in the file is white space (zeros), yes/no?
If there really are white-space gaps in the files, how does StoreOnce handle this: would they be deduped, or will they consume space?

When looking at the Catalyst items from the StoreOnce perspective, all we can see is the data size; the actual file size after deduplication can't be seen. Any ideas how to acquire such info (actual or a guesstimate)?

Thx!
