vihag
Novice
Posts: 9
Liked: never
Joined: May 17, 2011 3:34 am
Contact:

Extremely Large Backup Size - Larger than Original Data

Post by vihag » May 17, 2011 8:48 pm

I'm using Veeam 5 to back up an Exchange 2003 SP2 guest that was recently P2V'd. I'm having an issue with the size of the .vbk file. The VM has two drives: one 80GB volume with 16GB in use, and one 300GB volume with 54GB in use. The datastore block size is 4MB. The .vbk file for this backup is 140GB, with the compression and deduplication options at their defaults.

I was using ShadowProtect to back this server up prior to the P2V, and a full backup was 47GB. In fact, a full ShadowProtect backup plus a month of incrementals did not add up to 140GB. Where is the other 70GB coming from?

Vitaliy S.
Product Manager
Posts: 22984
Liked: 1556 times
Joined: Mar 30, 2009 9:13 am
Full Name: Vitaliy Safarov
Contact:

Re: Extremely Large Backup Size - Larger than Original Data

Post by Vitaliy S. » May 17, 2011 9:34 pm

Not sure, but I would recommend giving defrag plus a free-space wipe (sdelete) a try, and then performing a full backup again to see whether it changes the VBK size.

vihag
Novice
Posts: 9
Liked: never
Joined: May 17, 2011 3:34 am
Contact:

Re: Extremely Large Backup Size - Larger than Original Data

Post by vihag » May 18, 2011 3:47 pm

I'm not sure what you mean by wipe. The drives don't have any unnecessary files on them. To defrag the database drive, I would have to dismount the database, which would require the server to be down for the duration of the defrag. I've never seen a defrag increase free space; that's not what it does. But perhaps when we are looking at a virtual file system on top of a VMFS datastore, things are different?

Vitaliy S.
Product Manager
Posts: 22984
Liked: 1556 times
Joined: Mar 30, 2009 9:13 am
Full Name: Vitaliy Safarov
Contact:

Re: Extremely Large Backup Size - Larger than Original Data

Post by Vitaliy S. » May 18, 2011 3:59 pm

Basically, what I was referring to is described in these threads; please have a look: V5 Reversed Incremental continues to grow and Size Backup.

By the way, what exact version/build of Veeam B&R are you using?

vihag
Novice
Posts: 9
Liked: never
Joined: May 17, 2011 3:34 am
Contact:

Re: Extremely Large Backup Size - Larger than Original Data

Post by vihag » May 18, 2011 4:28 pm

Thanks, I'll check those links.

I'm using Veeam 5.0.2.230 64-bit (trial) on Server 2008 R2.

Vitaliy S.
Product Manager
Posts: 22984
Liked: 1556 times
Joined: Mar 30, 2009 9:13 am
Full Name: Vitaliy Safarov
Contact:

Re: Extremely Large Backup Size - Larger than Original Data

Post by Vitaliy S. » May 18, 2011 4:46 pm

vihag wrote:I'm using Veeam 5.0.2.230 64 bit (trial) on Server 2008R2.
You should be good then. Try the suggestion posted above and let us know about the results. Thanks.

tsightler
VP, Product Management
Posts: 5418
Liked: 2240 times
Joined: Jun 05, 2009 12:57 pm
Full Name: Tom Sightler
Contact:

Re: Extremely Large Backup Size - Larger than Original Data

Post by tsightler » May 18, 2011 6:48 pm

vihag wrote:I'm not sure what you mean by wipe. The drives don't have any unnecessary files on them. To defrag the database drive, I would have to dismount the database, which would require the server to be down for the duration of the defrag. I've never seen a defrag increase free space; that's not what it does. But perhaps when we are looking at a virtual file system on top of a VMFS datastore, things are different?
Veeam is not a "file backup" solution; it is a VM backup solution. It looks at the VMDK file for used blocks, that is, blocks that contain any data, or really, any block to which data has ever been written. When you delete files, the data in those blocks is not erased, so the only way to "reclaim" the space is to wipe it with a tool like sdelete, which basically writes zeros to the "free" blocks of the filesystem.

Think about it this way: if I had a 10GB VMDK that was empty, then dropped a 9GB DVD image on the filesystem and deleted it, what would Veeam back up? Since it is looking at things from the VMDK layer, that 9GB of data is still present in the VMDK, and since it is Veeam's job to back up the VMDK exactly as it is, it will back up the 9GB of "deleted" data.
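To make the block-layer view concrete, here is a minimal Python sketch. It is purely illustrative (a toy model, not Veeam's actual engine, and the disk size is made up): it shows why an image-level backup of a disk full of "deleted" data barely compresses, while the same disk after a zero-fill wipe collapses to almost nothing.

```python
# Illustrative sketch only -- not Veeam code. Models a disk image at the
# block layer: deleting a file leaves its old bytes in place, so an
# image-level backup still carries them; zeroing the free blocks (which
# is what an sdelete-style wipe does) lets the image compress away.
import os
import zlib

BLOCK = 4096
NUM_BLOCKS = 64

# Disk that once held a big file: the file was "deleted", but its
# random-looking bytes are still sitting in the blocks.
disk_after_delete = os.urandom(NUM_BLOCKS * BLOCK)
backup_before_wipe = zlib.compress(disk_after_delete)

# After a zero-fill wipe, the free blocks are all zeros.
disk_after_wipe = bytes(NUM_BLOCKS * BLOCK)
backup_after_wipe = zlib.compress(disk_after_wipe)

print(f"backup before wipe: {len(backup_before_wipe)} bytes")
print(f"backup after wipe:  {len(backup_after_wipe)} bytes")
```

The random "deleted" data compresses to roughly its original size, while the zeroed disk compresses to a few hundred bytes, which is the same effect the posters above saw on their VBK files.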

vihag
Novice
Posts: 9
Liked: never
Joined: May 17, 2011 3:34 am
Contact:

Re: Extremely Large Backup Size - Larger than Original Data

Post by vihag » May 20, 2011 3:42 am

I understand how it works now. I was thinking in terms of the Windows filesystem, but Veeam is backing up at the VMDK layer. I was looking at it as if it were still a backup of a physical box. I also understand sdelete now; I ran a test on my laptop. What a cool utility, but what from Sysinternals isn't? 8)

I am going to schedule some downtime over the weekend and run sdelete and a defrag. I analyzed the drive and it's all red! 99% file fragmentation! Holy crap, I've never seen that before!

Gostev
SVP, Product Management
Posts: 24789
Liked: 3522 times
Joined: Jan 01, 2006 1:01 am
Location: Baar, Switzerland
Contact:

Re: Extremely Large Backup Size - Larger than Original Data

Post by Gostev » May 21, 2011 5:26 pm

First defrag, then sdelete. The order is important here :)

vihag
Novice
Posts: 9
Liked: never
Joined: May 17, 2011 3:34 am
Contact:

Re: Extremely Large Backup Size - Larger than Original Data

Post by vihag » May 21, 2011 6:09 pm

Well, sdelete made a huge difference: 140GB down to 46GB! Thanks for the help!

Is sdelete something that should be run occasionally, or is one time all that is needed after a P2V?

Vitaliy S.
Product Manager
Posts: 22984
Liked: 1556 times
Joined: Mar 30, 2009 9:13 am
Full Name: Vitaliy Safarov
Contact:

Re: Extremely Large Backup Size - Larger than Original Data

Post by Vitaliy S. » May 22, 2011 7:29 pm

I would do defrag+sdelete only before creating a full backup, as on an incremental run it will produce lots of changed blocks on the virtual disk, which is the last thing you want.

Nevertheless, defrag+sdelete is certainly recommended after any major disk modification (for example, a P2V) is performed.
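As a rough illustration of why incrementals suffer (a toy model of changed-block tracking, not Veeam internals): a defrag moves data between blocks and a wipe rewrites the free ones, so nearly every block on the virtual disk differs from the previous backup even though the files themselves did not change.

```python
# Toy model of changed-block tracking -- illustrative only, not Veeam's
# implementation. A defrag shuffles file data between disk blocks, so
# nearly every block differs from the state captured by the last backup
# and must be shipped in the next incremental.
BLOCK = 4096
NUM_BLOCKS = 64

# Disk as seen at the last backup: each block has distinct content.
before = [bytes([i]) * BLOCK for i in range(NUM_BLOCKS)]

# A defrag reorders the blocks; the files are logically unchanged, but
# the on-disk block contents are not.
after = list(reversed(before))

changed = sum(1 for a, b in zip(before, after) if a != b)
print(f"{changed} of {NUM_BLOCKS} blocks go into the next incremental")
```

In this toy shuffle every single block changes, which is why the advice above is to do defrag+sdelete immediately before a full backup rather than ahead of an incremental run.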

Daveyd
Expert
Posts: 283
Liked: 11 times
Joined: May 20, 2010 4:17 pm
Full Name: Dave DeLollis
Contact:

Re: Extremely Large Backup Size - Larger than Original Data

Post by Daveyd » May 22, 2011 11:45 pm

Vitaliy S. wrote:I would do defrag+sdelete only before creating a full backup, as on an incremental run it will produce lots of changed blocks on the virtual disk, which is the last thing you want.

Nevertheless, defrag+sdelete is certainly recommended after any major disk modification (for example, a P2V) is performed.
I assume sdelete is run inside each VM, on each of its volumes? Is it as simple as running sdelete -c -z C: ?

Vitaliy S.
Product Manager
Posts: 22984
Liked: 1556 times
Joined: Mar 30, 2009 9:13 am
Full Name: Vitaliy Safarov
Contact:

Re: Extremely Large Backup Size - Larger than Original Data

Post by Vitaliy S. » May 23, 2011 7:53 am

That's right.

tfleener
Influencer
Posts: 21
Liked: never
Joined: Jun 08, 2010 2:59 pm
Full Name: Tom Fleener
Contact:

Re: Extremely Large Backup Size - Larger than Original Data

Post by tfleener » Jun 04, 2011 6:16 pm

Is there any advantage to using the -z option with sdelete 1.51?

The -z option writes for DoD security ... and therefore writes 0x00FF patterns, whereas -c writes all 0x0000 patterns.

Is the following true?

If the -z option is used, all blocks would be copied, but probably compress very well?
If the -c option is used, blocks are not copied?

Gostev
SVP, Product Management
Posts: 24789
Liked: 3522 times
Joined: Jan 01, 2006 1:01 am
Location: Baar, Switzerland
Contact:

Re: Extremely Large Backup Size - Larger than Original Data

Post by Gostev » Jun 04, 2011 7:00 pm

The results should be absolutely identical in both cases because of the source-side dedupe that we do.
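For illustration only (a toy model of block-level source-side dedupe, not Veeam's engine): every wiped block is identical to every other wiped block, so only one copy is ever stored, regardless of which byte pattern the wipe tool wrote.

```python
# Toy model of source-side block dedupe -- illustrative only, not
# Veeam's implementation. Whatever pattern the wipe tool writes, all
# wiped blocks are identical, so dedupe stores exactly one of them.
import hashlib

BLOCK = 4096
NUM_BLOCKS = 64

def deduped_size(disk: bytes) -> int:
    """Bytes stored if each unique block is kept only once."""
    unique = {hashlib.sha256(disk[i:i + BLOCK]).digest()
              for i in range(0, len(disk), BLOCK)}
    return len(unique) * BLOCK

zero_fill = bytes(NUM_BLOCKS * BLOCK)      # all 0x00 (a zero-fill pass)
ff_fill = b"\xff" * (NUM_BLOCKS * BLOCK)   # all 0xFF (a pattern pass)

print(deduped_size(zero_fill), deduped_size(ff_fill))
```

Both fills dedupe down to a single stored block, so the resulting backup size is the same either way.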

larry
Expert
Posts: 387
Liked: 92 times
Joined: Mar 24, 2010 5:47 pm
Full Name: Larry Walker
Contact:

large backup file size

Post by larry » Jul 12, 2011 2:32 pm

[merged]

I am backing up a Windows 2003 server which used to have 1.5 TB of SQL databases on it. It now has 500 GB of used disk space, but my backups are 1 TB in size. I did a new full backup; still 1 TB. If I go to the server and check the drive properties, only 500 GB is in use. What can I be missing? My backups are normally much smaller than the size of the VM.

larry
Expert
Posts: 387
Liked: 92 times
Joined: Mar 24, 2010 5:47 pm
Full Name: Larry Walker
Contact:

Re: Extremely Large Backup Size - Larger than Original Data

Post by larry » Jul 14, 2011 1:57 pm

If you use sdelete, be sure Veeam doesn't start a job. I did this on an almost empty 1.5 TB drive (it was full, but I had deleted all the files); 3 hours into the sdelete run my Veeam job started, and 2 hours later I ran out of space in the datastore because the Veeam snapshot had grown to 400 GB in vCenter. Then you can't commit the snapshot because there is no space, and the VM crashes. Luckily I had spare SAN space and extended the datastore to bail me out, but it took me 7 hours to get the VM running again. I am still working on it, so I don't know if my backup size is smaller yet; there are two more 1.5 TB drives that I need more coffee for before I start an sdelete on them. Hope this saves someone else.

larry
Expert
Posts: 387
Liked: 92 times
Joined: Mar 24, 2010 5:47 pm
Full Name: Larry Walker
Contact:

Re: Extremely Large Backup Size - Larger than Original Data

Post by larry » Jul 20, 2011 7:59 pm

One more "oh no" with sdelete: a thin-provisioned VM disk will expand to full size when sdelete is done. Too many thin disks and you can run out of datastore space just deleting stuff.

larry
Expert
Posts: 387
Liked: 92 times
Joined: Mar 24, 2010 5:47 pm
Full Name: Larry Walker
Contact:

Re: Extremely Large Backup Size - Larger than Original Data

Post by larry » Jul 21, 2011 5:56 pm

Just an update: after doing the sdelete, my backups went from over 1 TB to 350 GB.
A couple of things I learned:
Put SQL logs on a thick ("fat") disk and run sdelete on that disk at night.
Give the SQL developers a temp drive for large temp databases; after they are done, just delete the drive (quicker than sdelete).
Create the SQL log files large enough for a couple of days, then clear them out after each backup. This way the logs reuse the same disk blocks every day.

sdelete: be sure no snapshots are active before running it (VMware or SAN), and never run sdelete on a thin disk unless you want it to expand to full size.

Filling up a VM datastore with a snapshot sucks; you can't go back or forward. The quick fix was to expand the datastore, remove the snapshot, then resize the datastore.

Vitaliy S.
Product Manager
Posts: 22984
Liked: 1556 times
Joined: Mar 30, 2009 9:13 am
Full Name: Vitaliy Safarov
Contact:

Re: Extremely Large Backup Size - Larger than Original Data

Post by Vitaliy S. » Jul 22, 2011 9:05 am

Larry, thanks for sharing your tips.
