by foggy » Tue Jul 16, 2013 4:00 pm
Yes, your understanding is generally correct. That said, this type of activity (copying and then immediately removing large amounts of data) is not very typical. To avoid space issues in such cases, you can schedule sdelete to zero out unused blocks prior to running the active full.
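A minimal sketch of what such a pre-backup step might look like, assuming Sysinternals SDelete is present on the guest; the drive letter, task name, and schedule below are placeholders, not anything prescribed in this thread:

```shell
# Zero free space on C: so blocks freed by deleted files compress away
# in the next active full (SDelete's -z switch zeroes free space).
sdelete.exe -z C:

# Optionally register it as a scheduled task shortly before the backup
# window (task name and timing are illustrative only):
schtasks /Create /TN "ZeroFreeSpace" /TR "sdelete.exe -z C:" /SC WEEKLY /D SAT /ST 20:00
```

Zeroing free space does not shrink the source disk by itself; it only makes the freed blocks compressible/dedupable in the subsequent full backup.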
by mrstorey » Wed Jul 17, 2013 9:32 am
Ok great - thanks. Unfortunately these servers were originally built with large (and mostly empty) VMDKs, so now that we're using Veeam I'm actually working to migrate the data to disks of more sensible sizes, and will just grow them where necessary.
Although the scenario I detailed was an edge case involving such a large amount of data, it still feels strange to me that Veeam has no way of differentiating between live and deleted blocks of data. I can't see a scenario where we'd ever want to back up deleted data.
Maybe this could be a feature request for upcoming versions? What do you think?
by foggy » Wed Jul 17, 2013 10:08 am
Yes, this comes down to NTFS design. Veeam B&R is a block-level, image-based solution that cares only about which blocks have changed, not about their contents. If a block has changed since the last backup (even if it was subsequently marked as free), it will be backed up.
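To illustrate why this happens, here is a small self-contained sketch (illustrative only, not Veeam's actual implementation) of block-level change detection: it compares block contents between two runs and flags every changed block, with no knowledge of the filesystem's free list, so blocks that once held a now-deleted file still count as changed:

```python
# Illustrative sketch of block-level change detection: a backup at this
# layer sees only block content, not filesystem allocation metadata.
import hashlib

BLOCK_SIZE = 4  # tiny blocks to keep the example readable

def changed_blocks(previous: bytes, current: bytes):
    """Return indices of blocks whose content differs since the last run."""
    changed = []
    for i in range(0, len(current), BLOCK_SIZE):
        old = previous[i:i + BLOCK_SIZE]
        new = current[i:i + BLOCK_SIZE]
        if hashlib.sha256(old).digest() != hashlib.sha256(new).digest():
            changed.append(i // BLOCK_SIZE)
    return changed

# A file is written into blocks 1-2, then "deleted": deletion only clears
# the metadata block (block 0 here), while the data blocks still contain
# the bytes that were written - so they remain changed vs. the old image.
empty_disk = b"\x00" * 16
after_delete = b"\x00" * 4 + b"DATA" + b"FILE" + b"\x00" * 4

print(changed_blocks(empty_disk, after_delete))  # → [1, 2]
```

The backup therefore copies blocks 1 and 2 even though, at the file level, nothing "exists" there anymore.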
by mrstorey » Wed Jul 17, 2013 10:41 am
Absolutely, I understand - I don't want you to think I was suggesting it's Veeam's 'fault'. I was just thinking about how we could overcome this operational drawback.
I guess this 'wouldn't it be nice?' feature request wouldn't be possible to implement, since I imagine there's no easy way to determine at the block level which blocks hold live data, and to back up only those.
...and I guess even if you could, it probably wouldn't produce a backup capable of restoring an entire disk or virtual machine..!
by mv43 » Wed Aug 28, 2013 3:38 pm
The first full backup is 303GB, yet the total disk size is 323GB and the used space is only 58GB!! All my guests are on local storage, and this same server also runs Veeam, which has an iSCSI connection/volume to my NAS. None of my other guests have this issue - any reason why this is happening?
by foggy » Wed Aug 28, 2013 3:47 pm
Michael, please see the explanations given in this topic. Basically, the reason is most likely the dirty virtual disk data blocks belonging to the deleted files. Feel free to ask additional questions if you need further clarification. Thanks.
by dellock6 » Mon Oct 07, 2013 12:51 pm
Veeam is an image-based backup solution, not a file-based one. So even if you delete files inside the VM, the backup will stay the same size. The only way to shrink the backups is to first run sdelete inside the VM, then move the VM to another datastore via Storage vMotion, choosing to convert the disk to thin (or to keep it thin if it already was), and finally run a new active full backup in Veeam.
by rct » Mon Oct 07, 2013 2:41 pm
Thanks for your answer. I had seen this kind of solution, but it will take rather long for our VM (3.4TB of data) to do it:
- zero all the free space (with dd here, since it's Linux)
- create multiple VMFS3 datastores
- Storage vMotion the disks individually with thin provisioning
- Storage vMotion back to the initial VMFS5 datastore
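For reference, a minimal sketch of the free-space zeroing step on a Linux guest (the mount point and filename are placeholders): dd writes a file of zeros until the filesystem fills up, and deleting the file afterwards leaves the freed blocks zeroed.

```shell
# Fill free space on the target filesystem with zeros, then remove
# the file. Run once per mounted filesystem. dd is expected to stop
# with "No space left on device" once free space is exhausted -
# that is the intent, hence the "|| true".
dd if=/dev/zero of=/mnt/data/zerofill bs=1M || true
sync
rm -f /mnt/data/zerofill
```

Be aware that this temporarily consumes all free space on the filesystem, so avoid running it while applications need headroom to write.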
by dellock6 » Mon Oct 07, 2013 2:55 pm
I should go and check to be sure, but as far as I remember, from a certain vSphere version onwards you no longer need to jump from VMFS5 to VMFS3 and back in order to shrink a thin disk, so it would be only 3 steps instead of 4.
by dellock6 » Mon Oct 07, 2013 4:11 pm
Thanks for the confirmation - I was not sure, as I said. Looking at the KB, you can also leverage vmkfstools without needing two datastores; the obvious con is that you need to power off the VM, whereas with Storage vMotion you can do it live. I really hope SE sparse disks will become more and more widespread...
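A minimal sketch of the vmkfstools approach mentioned above (the datastore and disk paths are placeholders): vmkfstools's `-K` ("punchzero") option deallocates zeroed blocks from a thin-provisioned VMDK in place, and requires the VM to be powered off.

```shell
# On the ESXi host, with the VM powered off: reclaim zeroed blocks
# from the thin disk in place ("punch zero"). Zero the guest's free
# space first (sdelete / dd as discussed earlier in the thread).
vmkfstools -K /vmfs/volumes/datastore1/myvm/myvm.vmdk
```

Since this rewrites the disk's allocation in place, no second datastore is needed, at the cost of the downtime the posters mention.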