-
- Novice
- Posts: 9
- Liked: never
- Joined: May 17, 2011 3:34 am
- Contact:
Extremely Large Backup Size - Larger than Original Data
I'm using Veeam 5 to back up an Exchange 2003 SP2 guest. This server was recently P2V'd. I'm having an issue with the size of the .vbk file. The data on the VM consists of two drives: one 80GB volume with 16GB in use, and one 300GB volume with 54GB in use. The datastore block size is 4MB. The .vbk file for this backup is 140GB, with the compression and deduplication options at their defaults.
I was using ShadowProtect to back this server up prior to the P2V, and a full backup was 47GB. In fact, a full SP backup plus a month of incrementals did not add up to 140GB. Where is the other 70GB coming from?
-
- VP, Product Management
- Posts: 27356
- Liked: 2788 times
- Joined: Mar 30, 2009 9:13 am
- Full Name: Vitaliy Safarov
- Contact:
Re: Extremely Large Backup Size - Larger than Original Data
Not sure, but I would recommend giving defrag+wipe (sdelete) a try, and then performing a full backup again to see whether it changes the VBK size.
-
- Novice
- Posts: 9
- Liked: never
- Joined: May 17, 2011 3:34 am
- Contact:
Re: Extremely Large Backup Size - Larger than Original Data
I'm not sure what you mean by wipe. The drives don't have any unnecessary files on them. To defrag the database drive, I would have to dismount the database, which would require that the server be down for the duration of the defrag. I've never seen a defrag increase free space (that's not what it does), but perhaps things are different when we are looking at a virtual file system on top of a VMFS datastore?
-
- VP, Product Management
- Posts: 27356
- Liked: 2788 times
- Joined: Mar 30, 2009 9:13 am
- Full Name: Vitaliy Safarov
- Contact:
Re: Extremely Large Backup Size - Larger than Original Data
Basically, what I was referring to is described in these threads; please have a look: V5 Reversed Incremental continues to grow and Size Backup.
By the way, what is the exact version/build of Veeam B&R you are using?
-
- Novice
- Posts: 9
- Liked: never
- Joined: May 17, 2011 3:34 am
- Contact:
Re: Extremely Large Backup Size - Larger than Original Data
Thanks, I'll check those links.
I'm using Veeam 5.0.2.230 64-bit (trial) on Server 2008 R2.
-
- VP, Product Management
- Posts: 27356
- Liked: 2788 times
- Joined: Mar 30, 2009 9:13 am
- Full Name: Vitaliy Safarov
- Contact:
Re: Extremely Large Backup Size - Larger than Original Data
vihag wrote: I'm using Veeam 5.0.2.230 64-bit (trial) on Server 2008 R2.
You should be good then. Try the suggestion posted above and let us know about the results. Thanks.
-
- VP, Product Management
- Posts: 6032
- Liked: 2859 times
- Joined: Jun 05, 2009 12:57 pm
- Full Name: Tom Sightler
- Contact:
Re: Extremely Large Backup Size - Larger than Original Data
vihag wrote: I'm not sure what you mean by wipe. The drives don't have any unnecessary files on them. To defrag the database drive, I would have to dismount the database, which would require that the server be down for the duration of the defrag. I've never seen a defrag increase free space (that's not what it does), but perhaps things are different when we are looking at a virtual file system on top of a VMFS datastore?
Veeam is not a "file backup" solution, it is a VM backup solution. It looks at a VMDK file for used blocks, i.e. blocks that contain any data, or really, any block to which any data has ever been written. When you delete files, the data in those blocks is not deleted, so the only way to "reclaim" the space is to "wipe" it with a tool like sdelete, which basically writes zeros to the "free" blocks on the filesystem.
Think about it this way: if I had a 10GB VMDK that was empty, then dropped a 9GB DVD image on the filesystem and deleted it, what would Veeam back up? Since Veeam looks at it from the VMDK layer, that 9GB of data is still present in the VMDK, and since it's Veeam's job to back up the VMDK exactly as it is, it will back up the 9GB of "deleted" data.
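If you want to see the effect for yourself, here is a quick PowerShell sketch you could run inside a test guest (the path and sizes are just examples): write a chunk of random data, delete it, and watch the next full backup stay large.
# write 1GB of random data to a scratch file (path is hypothetical)
$buf = New-Object byte[] (1MB)
$rng = New-Object System.Random
$fs  = [System.IO.File]::OpenWrite('C:\temp\junk.bin')
foreach ($i in 1..1024) { $rng.NextBytes($buf); $fs.Write($buf, 0, $buf.Length) }
$fs.Close()
# delete the file; NTFS marks the space free, but that 1GB of blocks
# stays "used" at the VMDK layer until something zeroes it again
Remove-Item 'C:\temp\junk.bin'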
-
- Novice
- Posts: 9
- Liked: never
- Joined: May 17, 2011 3:34 am
- Contact:
Re: Extremely Large Backup Size - Larger than Original Data
I understand how it works now. I was thinking in terms of the Windows filesystem, but Veeam is backing up the VMDK; I was still looking at it as a backup of a physical box. I also understand sdelete now, since I ran a test on my laptop. What a cool utility, but then, what from Sysinternals isn't?
I am going to schedule some downtime over the weekend and run sdelete and a defrag. I analyzed the drive and it's all red: 99% file fragmentation! Holy crap, I've never seen that before!
-
- Chief Product Officer
- Posts: 31766
- Liked: 7266 times
- Joined: Jan 01, 2006 1:01 am
- Location: Baar, Switzerland
- Contact:
Re: Extremely Large Backup Size - Larger than Original Data
First defrag, then sdelete; the order is important here.
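In other words, inside the guest it is something along these lines (just a sketch; the switch that zeroes free space has changed between sdelete releases, so check sdelete /? for your version first; current releases use -z to zero free space, the option recommended for virtual disk optimization):
# defragment first so files occupy contiguous blocks
defrag C:
# then zero out the freed space so the next full backup shrinks
sdelete -z C: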
-
- Novice
- Posts: 9
- Liked: never
- Joined: May 17, 2011 3:34 am
- Contact:
Re: Extremely Large Backup Size - Larger than Original Data
Well, the sdelete made a huge difference: 140GB down to 46GB! Thanks for the help!
Is sdelete something that should be run occasionally, or is one time all that is needed after a P2V?
-
- VP, Product Management
- Posts: 27356
- Liked: 2788 times
- Joined: Mar 30, 2009 9:13 am
- Full Name: Vitaliy Safarov
- Contact:
Re: Extremely Large Backup Size - Larger than Original Data
I would do defrag+sdelete only before creating a full backup, as on an incremental run it will produce lots of changed blocks on the virtual disk, which is the last thing you want.
Nevertheless, defrag+sdelete is certainly recommended after any major disk modifications (for example, a P2V).
-
- Veteran
- Posts: 283
- Liked: 11 times
- Joined: May 20, 2010 4:17 pm
- Full Name: Dave DeLollis
- Contact:
Re: Extremely Large Backup Size - Larger than Original Data
Vitaliy S. wrote: I would do defrag+sdelete only before creating a full backup, as on an incremental run it will produce lots of changed blocks on the virtual disk, which is the last thing you want. Nevertheless, defrag+sdelete is certainly recommended after any major disk modifications (for example, a P2V).
I assume sdelete is run on each VM's vmdk? Is it as simple as running sdelete -c -z C: ?
-
- VP, Product Management
- Posts: 27356
- Liked: 2788 times
- Joined: Mar 30, 2009 9:13 am
- Full Name: Vitaliy Safarov
- Contact:
-
- Influencer
- Posts: 21
- Liked: never
- Joined: Jun 08, 2010 2:59 pm
- Full Name: Tom Fleener
- Contact:
Re: Extremely Large Backup Size - Larger than Original Data
Is there any advantage to using the -z option for sdelete 1.51?
The -z option writes the DoD security pattern (0x00FF), whereas -c writes all 0x0000 patterns.
Is the following true?
If the -z option is used, all blocks would be copied, but would probably compress very well?
If the -c option is used, the blocks are not copied?
-
- Chief Product Officer
- Posts: 31766
- Liked: 7266 times
- Joined: Jan 01, 2006 1:01 am
- Location: Baar, Switzerland
- Contact:
Re: Extremely Large Backup Size - Larger than Original Data
Results should be absolutely identical in both cases because of the source-side dedupe that we do.
-
- Veteran
- Posts: 387
- Liked: 97 times
- Joined: Mar 24, 2010 5:47 pm
- Full Name: Larry Walker
- Contact:
large backup file size
[merged]
I am backing up a Windows 2003 server which had 1.5TB of SQL databases on it. It now has 500GB of used disk space, but my backups are 1TB in size. I did a new full backup, and it is still 1TB. If I go to the server and check the properties of the drive, only 500GB is in use. What can I be missing? My backups are normally way smaller than the size of the VM.
-
- Veteran
- Posts: 387
- Liked: 97 times
- Joined: Mar 24, 2010 5:47 pm
- Full Name: Larry Walker
- Contact:
Re: Extremely Large Backup Size - Larger than Original Data
If you use sdelete, be sure Veeam doesn't start a job. I ran it on an almost empty 1.5TB drive (it had been full, but I deleted all the files), and 3 hours into the sdelete my Veeam job started. Two hours later I ran out of space in the datastore because the Veeam snapshot had grown to 400GB in vCenter. Then you can't commit the snapshot because there is no space, and the VM crashes. Luckily I had spare SAN space and extended the datastore to bail me out, but it took me 7 hours to get the VM running again. I am still working on it, so I don't know if my backup size is smaller yet; there are two more 1.5TB drives, and I need more coffee before I start an sdelete on them. Hope I save someone else the trouble.
-
- Veteran
- Posts: 387
- Liked: 97 times
- Joined: Mar 24, 2010 5:47 pm
- Full Name: Larry Walker
- Contact:
Re: Extremely Large Backup Size - Larger than Original Data
One more gotcha with sdelete: a thin-provisioned VM disk will expand to full size when sdelete is done. Too many thin disks and you can run out of datastore space just by deleting stuff.
-
- Veteran
- Posts: 387
- Liked: 97 times
- Joined: Mar 24, 2010 5:47 pm
- Full Name: Larry Walker
- Contact:
Re: Extremely Large Backup Size - Larger than Original Data
Just an update: after doing the sdelete, my backups went from over 1TB to 350GB.
A couple of things I learned:
- Put SQL logs on a thick-provisioned disk and run sdelete on that disk at night.
- Give the SQL developers a temp drive for large temp databases. After they are done, just delete the drive (quicker than sdelete).
- Create the SQL log files large enough for a couple of days, then clear them out after each backup. This way the logs reuse the same disk blocks every day.
- Before running sdelete, be sure no snapshots are active (VMware or SAN); a quick check is sketched below. Never run sdelete on a thin disk unless you want it to expand to full size.
- Filling up a VM datastore with a snapshot is miserable: you can't go back or forward. My quick fix was to expand the datastore, remove the snapshot, then resize the datastore.
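For that snapshot check, a PowerCLI one-liner along these lines works (a rough sketch only; "SQL01" and the vCenter address are placeholders, and it assumes PowerCLI is installed):
# connect to vCenter, then list any open snapshots on the VM
Connect-VIServer -Server vcenter.example.local
Get-VM -Name 'SQL01' | Get-Snapshot | Select-Object VM, Name, Created
# only kick off sdelete if this returns nothing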
-
- VP, Product Management
- Posts: 27356
- Liked: 2788 times
- Joined: Mar 30, 2009 9:13 am
- Full Name: Vitaliy Safarov
- Contact:
Re: Extremely Large Backup Size - Larger than Original Data
Larry, thanks for sharing your tips.