Backup size larger than

Posted: May 25, 2013 12:18 pm
by siziegler

I am backing up a VM with 100 GB HD size using VEEAM 6.5. Inside the guest OS (Server 2012) I can see in Explorer that 27 GB of 100 GB is actually used. Interestingly an active full backup of this VM reports:

Size: 100 GB
Read: 59.9 GB
Transferred: 49.3 GB

How is it possible that there is more data transferred than what is actually shown in the guest OS? I have two more VMs that behave as expected, meaning less data is transferred than what is seen in the guest OS (also Server 2012).


Re: Backup size larger than

Posted: May 25, 2013 1:24 pm
by Vitaliy S.
Hi Silvio,

Most likely you already had some data written to this disk before; that is why Veeam has to read 59.9 GB of data. Please be aware that with NTFS, when you write data to the disk and then delete it, the data doesn't actually get removed from the disk, so the virtual disk blocks still contain the same dirty data. If you want to make your full backups smaller, you should run sdelete against your virtual disks. Please search these forums for existing topics describing how to use this utility.
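The effect can be sketched with a toy model (plain Python, nothing Veeam-specific): deleting a file only removes the filesystem's metadata pointer, so an image-level backup still reads and stores the stale blocks, whereas zeroed blocks compress to almost nothing.

```python
import os
import zlib

BLOCK = 4096
disk = bytearray(64 * BLOCK)               # toy 256 KiB "virtual disk"

# Write a 32-block "file" of incompressible data...
payload = os.urandom(32 * BLOCK)
disk[0:len(payload)] = payload

# ..."delete" it: only the filesystem metadata goes away; the blocks keep
# their old contents, so an image-level backup still reads and stores them.
dirty_image = len(zlib.compress(bytes(disk)))

# "sdelete -z" equivalent: overwrite the freed blocks with zeros.
disk[0:len(payload)] = bytes(len(payload))
zeroed_image = len(zlib.compress(bytes(disk)))

print(dirty_image, zeroed_image)           # the zeroed image is far smaller
```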


Re: Backup size larger than

Posted: May 25, 2013 4:01 pm
by siziegler
sdelete -z did the trick! Thanks for the quick reply.


Re: Backup size larger than

Posted: May 27, 2013 7:21 am
by v.Eremin
You’re welcome. Should any other questions arise, don’t hesitate to contact us. Thanks.

[MERGED] Confused about disk exclusions

Posted: Jun 02, 2013 11:30 am
by BogBeast
Hi, I am new to VEEAM, I am dabbling at home prior to getting the 'proper' test environment going at work.

I am targeting my virtual WHS 2011 (Server 2008 R2 really) for backup and using the Exclusions button to select only the first two disks for backup (the remaining third disk has data on it that I don't need to back up).


The disks are a thick 60 GB boot disk with 28 GB used, plus a thin 500 GB disk with 65 GB used.

Having done that and clicked Recalculate, I end up with a large figure that:

- Does not match the size of the two disks I want to back up.
- Does not match the size of the whole VM (Veeam reports 795 GB; my calculation is 642 GB for all the data on all the disks).

Ignoring that, I go ahead and complete a backup and end up with a file on the repository that is 213 GB and a log:


It seems to have detected the correct sizes of the two disks to be backed up (561 GB), but somehow it has read 285 GB from the 96 GB on the disks :(

If I go to restore the VM files, I can see the correct files are stored:


and if I do a file-level restore, I can see that only 96 GB is stored as files:


I must be doing something stupid to turn 96 GB into 213 GB in one backup (I have tried it a couple of times with different settings, both incremental and reverse incremental) - can someone point me in the right direction?

Many thanks...

Re: Backup size larger than

Posted: Jun 02, 2013 4:43 pm
by Gostev
Same cause... dirty virtual disk data blocks belonging to the deleted files.

Re: Backup size larger than

Posted: Jun 02, 2013 11:29 pm
by BogBeast
Thanks Gostev, I will try sdelete; however, the 500 GB disk was newly created, so I am not sure why it would have data remnants in it.

Re: Backup size larger than

Posted: Jun 03, 2013 9:30 am
by Dima P.

Please let us know about the results of the sdelete procedure!
Thank you

Re: Backup size larger than

Posted: Jun 03, 2013 11:09 am
by foggy
Please also keep in mind that running sdelete against a thin disk will inflate it to its maximum provisioned size.
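Why zeroing inflates a thin disk can be shown with a POSIX sparse file as a stand-in for a thin-provisioned VMDK (a sketch only; the hypervisor layer behaves analogously, and filesystems without sparse-file support will differ):

```python
import os
import tempfile

MiB = 1024 * 1024
SIZE = 32 * MiB

# A sparse ("thin") file: the logical size is recorded, but no blocks
# are actually allocated on disk yet.
fd, path = tempfile.mkstemp()
os.truncate(path, SIZE)
sparse_blocks = os.stat(path).st_blocks      # st_blocks is in 512-byte units

# Writing explicit zeros (what sdelete -z does) forces real allocation,
# growing the file to its full provisioned size.
with open(path, "r+b") as f:
    f.write(bytes(SIZE))
    f.flush()
    os.fsync(f.fileno())
full_blocks = os.stat(path).st_blocks

os.close(fd)
os.remove(path)
print(sparse_blocks * 512, full_blocks * 512)  # allocated bytes before/after
```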

Re: Backup size larger than

Posted: Jun 07, 2013 8:51 pm
by BogBeast
Thanks for the advice. I am aware of the inflation issue, so I decided to go with UberAlign ... uberalign/

Worked with no problem and seems to have done the trick on my backup sizes:


although the duration has gone up quite a lot!

Many thanks...

Re: Backup size larger than

Posted: Jun 10, 2013 8:35 am
by v.Eremin
although the duration has gone up quite a lot!

I'm wondering whether CBT was used in this case or not - you can check that in the job statistics window by looking for the "[CBT]" label. Also, there is an existing discussion regarding a similar issue that might be useful.

In addition, what about incremental runs – does VB&R still read the whole VM image or only the portion of data that is known to have changed since the last run?


[MERGED] Unusually Large Backup Job?

Posted: Jul 16, 2013 12:25 pm
by mrstorey

I'm new to Veeam and this forum so go easy on me! :)

I wondered if any of you knew why a recent successful full backup of a server has ended up larger than the apparent data stored on it?

ESXi 5.0 U1, vCenter 5.1, Veeam B&R 6.5, backing up to a Windows 2008 R2 repository using a reverse incremental job. This is the first, and therefore full backup.

Server in question is a 2008 R2 VM with 3 VMDK's / Disks:

C:\ - Size 40 GB, Used 16.4 GB
E:\ - Size 60 GB, Used 46.4 GB
F:\ - Size 836 GB, Used 195 GB

Total Used = 257.8 GB

But the backup report for the server gives me these figures:

Size - 937 GB (correct)
Read - 907.6 GB (hmm... why is this? Most of it is just empty space?)
Transferred - 553.7 GB (over double the used space)

My guess is that Veeam is 'seeing' some additional data which isn't showing in the OS - I've cleared the recycle bin, but can't see where Veeam is finding this extra data?

Any ideas? Happy to log a support call, but it seems most of the Veeam support guys live on this forum anyway! :)

Thanks in advance,

Re: Backup size larger than

Posted: Jul 16, 2013 12:55 pm
by foggy
Alex, please review the topic you've been merged into for the answer. If you still have any questions, feel free to ask here. Thanks.

Re: Backup size larger than

Posted: Jul 16, 2013 1:09 pm
by mrstorey
Aha - I didn't search the forums properly. Apologies!

Will give it a whirl now, thanks.

Re: Backup size larger than

Posted: Jul 16, 2013 2:10 pm
by mrstorey
OK - looks like sdelete will indeed do the job... but unless I'm missing something, it highlights a potential risk we'll have to think about managing. Here's the hypothetical scenario I'm thinking of:

- I have a server with a 500 GB disk, 100 GB used.
- I defrag it, run sdelete -z to zero it, and take a nice efficient full backup.
- Someone comes along, writes 395 GB of data temporarily, and immediately deletes it.
- The next incremental is +395 GB (although a bit less thanks to dedupe).
- Someone writes a different bunch of data and leaves it there.
- The next incremental is again +395 GB.

...and so on...
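Putting rough numbers on that scenario (a quick sketch that ignores dedupe and compression, which would shrink the increments somewhat):

```python
full_backup = 100        # GB stored by the initial full (after sdelete -z)
churn_per_cycle = 395    # GB of scratch data written and then deleted each cycle
cycles = 4

# Every churned block is a changed block, so each incremental captures it
# even though the files themselves were deleted before the backup ran.
repository_gb = full_backup + cycles * churn_per_cycle
print(repository_gb)     # 1680
```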

The only way I see to mitigate this is to ensure we don't use Veeam to back up VMs with large temp/scratch storage, or we risk filling our backup repositories with 'dead' data?

Have I understood this correctly?