-
- Enthusiast
- Posts: 51
- Liked: 5 times
- Joined: Oct 05, 2016 8:00 am
- Contact:
full backup file size too big
Hi,
I have a strange issue here and hope someone has an idea or can help.
We are using a DataDomain as the repository.
Today I was checking the backup file sizes via "Files" in the Backup & Replication console.
The full backup files (vbk) of all jobs with incremental forever (no synthetic full) are way too large.
Examples:
VM size (used storage in vCenter): 892.64 GB
VBK file on DataDomain: 10.2 TB
VM size (used storage in vCenter): 49.99 GB
VBK file on DataDomain: 2.0 TB
I don't know what's going on. It looks a little like the merge process is not working correctly and the VBK file just keeps growing.
Since dedup on the DataDomain works pretty well, I can't tell whether we are losing disk space because of this.
Is there some kind of cleanup task for this, or do I have to create new active/synthetic fulls to get rid of these large files?
Thanks for your help.
-
- Product Manager
- Posts: 6551
- Liked: 765 times
- Joined: May 19, 2015 1:46 pm
- Contact:
Re: full backup file size too big
Hi,
That is definitely something you should show to our support team; have you done that? Also, have you tried running the compact operation?
Thanks
-
- Veeam Software
- Posts: 21139
- Liked: 2141 times
- Joined: Jul 11, 2011 10:22 am
- Full Name: Alexander Fogelson
- Contact:
Re: full backup file size too big
When performing merges on a forever forward incremental chain to a DDBoost repository, Veeam B&R appends data to the file rather than reusing unused space inside it, for performance reasons. That's why periodic synthetic fulls are recommended in this case: they prevent the full backup from growing indefinitely.
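To make that mechanism concrete, here is a minimal Python sketch modeling a forever forward incremental chain under the two strategies. All block counts and the reuse_free_space switch are invented for illustration; this is not Veeam's actual on-disk format or merge algorithm.

```python
# Toy model: each nightly merge frees the superseded blocks inside the full
# backup file and has to write the same number of new blocks somewhere.

def simulate(days, blocks_per_vm=100, changed_per_day=10, reuse_free_space=False):
    file_blocks = blocks_per_vm   # blocks currently occupied by the .vbk
    free_blocks = 0               # blocks marked free inside the .vbk
    for _ in range(days):
        free_blocks += changed_per_day
        if reuse_free_space:
            # Generic repository behavior: fill the freed slots first.
            reused = min(free_blocks, changed_per_day)
            free_blocks -= reused
            file_blocks += changed_per_day - reused
        else:
            # Dedupe-appliance behavior: always append, so the file grows.
            file_blocks += changed_per_day
    return file_blocks

print(simulate(365))                          # append-only: 3750 blocks
print(simulate(365, reuse_free_space=True))   # with reuse: stays at 100
```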
-
- Enthusiast
- Posts: 51
- Liked: 5 times
- Joined: Oct 05, 2016 8:00 am
- Contact:
Re: full backup file size too big
foggy wrote: "When performing merges on a forever forward incremental chain to a DDBoost repository, Veeam B&R appends data to the file rather than reusing unused space inside it, for performance reasons. That's why periodic synthetic fulls are recommended in this case: they prevent the full backup from growing indefinitely."
So that is a known issue in the combination of DataDomain and Veeam?

PTide wrote: "That is definitely something you should show to our support team; have you done that? Also, have you tried running the compact operation?"
If it's a known issue, it makes no sense to open a support case, right?

I will try the compact operation with one backup job for testing. Thanks.
-
- Veeam Software
- Posts: 21139
- Liked: 2141 times
- Joined: Jul 11, 2011 10:22 am
- Full Name: Alexander Fogelson
- Contact:
Re: full backup file size too big
PKaufmann wrote: "So that is a known issue in the combination of DataDomain and Veeam?"
Not only DataDomain, but other dedupe appliances as well.
-
- Enthusiast
- Posts: 51
- Liked: 5 times
- Joined: Oct 05, 2016 8:00 am
- Contact:
Re: full backup file size too big
Compact is working.
-
- Influencer
- Posts: 12
- Liked: 1 time
- Joined: Sep 13, 2018 8:38 am
- Full Name: Matthew Oldham
- Contact:
[MERGED] Backup Copy and Forever Incremental maintenance
Hi
Now I know this area is probably not a new topic or point of discussion, but I am trying to get it clear in my head so we can better manage some secondary destination capacity.
We have a rather large Linux job which backs up around 200 VMs nightly, and we run an offsite backup copy job after it to a DataDomain device with 14 days' retention. What I have noticed over the past few months is that the rate of consumption has been incredibly high. Digging a bit deeper, I found a fairly large number of VMs with a huge "Backup Size" on disk; one example is a 2 TB RHEL VM whose backup size is listed as 78 TB under the copy job properties. That is the largest I could find, but many VMs are listed at between 15 TB and 30 TB for 500 GB on disk, which, pooled together, represents a very large proportion of the storage.
So I am wondering how other people are managing such growth as incrementals are continually merged.
Are people scripting a scheduled active full to refresh the full in the chain, or using full backup maintenance to remove deleted items and defragment? If so, how effective is that at keeping things at a manageable level?
The customer I am working with is growing fairly rapidly, so requirements will only increase in the near future, and I would like to get some maintenance practices in place ASAP.
Appreciate any thoughts.
-
- Veeam Software
- Posts: 21139
- Liked: 2141 times
- Joined: Jul 11, 2011 10:22 am
- Full Name: Alexander Fogelson
- Contact:
Re: full backup file size too big
Hi Matthew, please see above for the explanation. To avoid this behavior, it is recommended to enable the periodic compact operation or to have GFS retention in place (new fulls are created there as well, while old ones go away according to the GFS retention).
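For intuition on why a periodic compact (or a fresh GFS full) keeps the file bounded, here is a hedged Python sketch extending the toy model shown earlier in the thread. The interval and block counts are made up and do not reflect real job settings or Veeam internals.

```python
# Toy model: append-only merges grow the full backup, but a periodic
# compact copies only the used blocks into a new file, discarding free ones.

def simulate_with_compact(days, blocks_per_vm=100, changed_per_day=10,
                          compact_every=30):
    file_blocks = blocks_per_vm
    free_blocks = 0
    peak = file_blocks
    for day in range(1, days + 1):
        # Append-only merge, as on a dedupe appliance.
        free_blocks += changed_per_day
        file_blocks += changed_per_day
        peak = max(peak, file_blocks)   # file is largest just before compact
        if day % compact_every == 0:
            # Compact (or a new full) keeps only the live data.
            file_blocks -= free_blocks
            free_blocks = 0
    return file_blocks, peak

final, peak = simulate_with_compact(365)
print(f"final: {final} blocks, peak: {peak} blocks")  # final: 150, peak: 400
```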
-
- Influencer
- Posts: 12
- Liked: 1 time
- Joined: Sep 13, 2018 8:38 am
- Full Name: Matthew Oldham
- Contact:
Re: full backup file size too big
OK, thanks for that Alexander, so those are the options available to us... now to start up a cleanup operation and get one of those practices in place!
Cheers
-
- Influencer
- Posts: 15
- Liked: 2 times
- Joined: Jan 09, 2020 9:21 am
- Contact:
[MERGED] Backup Size of a VM is a lot bigger than data size
Hi there,
I've got a VM *130* which is configured with 100 GB. It's a Linux VM. Can anyone help me explain why the VBK file is that much bigger than the data?
At the "initial" backup job all is correct. This "problem" only occurs at the copy job to the StoreOnce.
-
- Veteran
- Posts: 1943
- Liked: 247 times
- Joined: Dec 01, 2016 3:49 pm
- Full Name: Dmitry Grinev
- Location: St.Petersburg
- Contact:
Re: Backup Size of a VM is a lot bigger than data size
Hi and welcome to the Veeam community!
The issue comes from the StoreOnce Catalyst storage, which doesn't support cleaning of removed data blocks. Thanks!
-
- Influencer
- Posts: 15
- Liked: 2 times
- Joined: Jan 09, 2020 9:21 am
- Contact:
Re: Backup Size of a VM is a lot bigger than data size
Thanks for your answer - I didn't realize I had received one. :S
Is there a solution for that?
Or should I test CIFS instead of the StoreOnce Catalyst protocol?
-
- Veeam Software
- Posts: 21139
- Liked: 2141 times
- Joined: Jul 11, 2011 10:22 am
- Full Name: Alexander Fogelson
- Contact:
Re: full backup file size too big
Since you already have GFS retention enabled, this prevents the latest full (which is part of the regular chain) from growing indefinitely: each time a GFS full backup is offloaded, the regular one is created from scratch, copying only those blocks that comprise the required VM state and skipping the unused ones. The GFS restore points, though, will still contain unused data blocks (you can see this in your picture), so there is some space overhead.
-
- Influencer
- Posts: 15
- Liked: 2 times
- Joined: Jan 09, 2020 9:21 am
- Contact:
Re: full backup file size too big
I'm not sure if I understand you correctly; to me it looks like the growth is more or less infinite.
-
- Product Manager
- Posts: 20415
- Liked: 2302 times
- Joined: Oct 26, 2012 3:28 pm
- Full Name: Vladimir Eremin
- Contact:
Re: full backup file size too big
But the screenshot proves foggy's assumption:
- GFS full backups, the ones marked with the M(onthly) and W(eekly) labels, are growing in size, since they contain unused data blocks
- The R(egular) full backup occupies much less space
Thanks!
-
- Veeam Software
- Posts: 21139
- Liked: 2141 times
- Joined: Jul 11, 2011 10:22 am
- Full Name: Alexander Fogelson
- Contact:
Re: full backup file size too big
And the overall growth of the full backup file over time is also expected, since the data inside the guest OS tends to grow.
-
- Influencer
- Posts: 15
- Liked: 2 times
- Joined: Jan 09, 2020 9:21 am
- Contact:
Re: full backup file size too big
So, in combination with StoreOnce, there's no possibility to "reset", "compact" or whatever?
The machine size itself in VMware is 100 GB.
-
- Veeam Software
- Posts: 21139
- Liked: 2141 times
- Joined: Jul 11, 2011 10:22 am
- Full Name: Alexander Fogelson
- Contact:
Re: full backup file size too big
Right, compact is not supported on StoreOnce. Could you please check the size of the regular full at the moment it is created, on the day the GFS full is scheduled (prior to any increment merges)? It should be less than the amount of original data due to compression.
-
- Influencer
- Posts: 15
- Liked: 2 times
- Joined: Jan 09, 2020 9:21 am
- Contact:
Re: full backup file size too big
For clarification:
You mean, if the "Full Backup Restore Point" is set up for Sunday, I should check the latest size of the full backup before Sunday?
-
- Product Manager
- Posts: 20415
- Liked: 2302 times
- Joined: Oct 26, 2012 3:28 pm
- Full Name: Vladimir Eremin
- Contact:
Re: full backup file size too big
Nope, the size of the regular full backup right after the GFS restore point is created on Sunday. Thanks!
-
- Influencer
- Posts: 15
- Liked: 2 times
- Joined: Jan 09, 2020 9:21 am
- Contact:
-
- Veeam Software
- Posts: 21139
- Liked: 2141 times
- Joined: Jul 11, 2011 10:22 am
- Full Name: Alexander Fogelson
- Contact:
Re: full backup file size too big
As I mentioned above, the overall growth of the full backup file over time is expected, depending on the data change pattern inside the guest OS.
-
- Service Provider
- Posts: 4
- Liked: never
- Joined: Oct 02, 2018 2:20 pm
- Full Name: Martin
Re: full backup file size too big
Sorry for necroing this thread, but I just realized we're facing the same issue with our StoreOnce, and I have a follow-up question.
After the merge is completed and the old data is removed, I presume what is left in the file is white space (zeros), yes/no?
If there really are white-space gaps in the files, how does the StoreOnce see this: would it be deduped, or will it consume space?
When looking at the Catalyst items from the StoreOnce perspective, all we can see is the data size; the actual file size after deduplication can't be seen. Any ideas how to acquire such info (actual or guesstimate)?
Thx!
-
- Veeam Software
- Posts: 21139
- Liked: 2141 times
- Joined: Jul 11, 2011 10:22 am
- Full Name: Alexander Fogelson
- Contact:
Re: full backup file size too big
Unused blocks inside backup files are marked as free and can be reused later, upon subsequent merges. From the storage perspective, though, I think those are seen as regular data blocks, since the storage doesn't have visibility into the files themselves.
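As an illustration of that distinction, the following Python sketch models a block being marked free: only a metadata flag changes, and the stale payload remains in the file (so a deduplicating store still sees and stores those bytes) until a later merge overwrites the slot. The BackupFile class is entirely hypothetical and is not Veeam's real block map.

```python
# Hypothetical model: "freeing" a block is a metadata operation inside the
# backup file; the underlying storage never learns the bytes are dead.

class BackupFile:
    def __init__(self):
        self.blocks = []   # raw payloads, exactly as the storage sees them
        self.free = set()  # indices marked free in the file's own metadata

    def write(self, payload: bytes) -> int:
        """Write a block, reusing a freed slot if one exists."""
        if self.free:
            idx = self.free.pop()
            self.blocks[idx] = payload    # stale data finally overwritten
        else:
            idx = len(self.blocks)
            self.blocks.append(payload)   # no free slot: append, file grows
        return idx

    def mark_free(self, idx: int) -> None:
        # Only the flag changes; the old payload stays on disk.
        self.free.add(idx)

f = BackupFile()
a = f.write(b"old block")
f.mark_free(a)
print(f.blocks)            # [b'old block'] -- stale bytes still present
b = f.write(b"new block")  # a later merge reuses the freed slot
print(f.blocks, a == b)    # [b'new block'] True
```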