-
- VP, Product Management
- Posts: 27377
- Liked: 2800 times
- Joined: Mar 30, 2009 9:13 am
- Full Name: Vitaliy Safarov
- Contact:
Re: Large VIB file on a small static server
Hi Chris, you're right, a defrag operation might be beneficial before the full run of the backup job, but not before incremental runs.
-
- Enthusiast
- Posts: 47
- Liked: 6 times
- Joined: Mar 21, 2011 12:04 pm
- Full Name: Chris Leader
- Contact:
Re: Large VIB file on a small static server
Yep, that's my point: as part of the installation guide, maybe just a point of advice to users to check that the VMs are defragged before starting the first full run (and then further advice not to run a defrag before the incrementals!)
-
- Chief Product Officer
- Posts: 31812
- Liked: 7302 times
- Joined: Jan 01, 2006 1:01 am
- Location: Baar, Switzerland
- Contact:
Re: Large VIB file on a small static server
Defrag needs to be done periodically of course (it's not just a one-time operation), and in fact it is scheduled to run automatically every week by default in the latest versions of Windows (check out your Task Scheduler). So, if you are running modern Windows versions, there is nothing you need to do before the first full run or periodically - all your VMs should already be defragmented.
-
- Enthusiast
- Posts: 47
- Liked: 6 times
- Joined: Mar 21, 2011 12:04 pm
- Full Name: Chris Leader
- Contact:
Re: Large VIB file on a small static server
Ah, thanks Anton, that's interesting to know. I think that also goes towards explaining why we were having problems with the larger rollbacks on just this one VM job in particular - a long-lived 2003 box that was P2V'd a while ago, but was probably never defragged in that time, and won't have had the regular schedule of the more recent 2008 servers. These later VMs have rarely if ever given us problems in Veeam.
-
- VP, Product Management
- Posts: 6035
- Liked: 2860 times
- Joined: Jun 05, 2009 12:57 pm
- Full Name: Tom Sightler
- Contact:
Re: Large VIB file on a small static server
P2V'd servers, and especially P2V'd 2003 servers, typically are not properly stripe-aligned, which can cause anywhere from 20-30% additional changed blocks.
-
- Enthusiast
- Posts: 30
- Liked: never
- Joined: Jan 01, 2006 1:01 am
- Full Name: Jody Popplewell
- Location: Yorkshire
- Contact:
Large Transferred Data - Work out what is causing it?
[merged]
Hello,
I have a customer which has around 31 VMs in one backup pool; they want to keep 31 days on disk for this particular pool.
The transferred data totals around 169GB each night, but the problem I have is that there are 5 VMs causing the issue with such a large amount of changed data. Just those 5 make up around 125GB of the total changes going to disk.
The problem I have is that when I point out to the customer the 5 VMs causing the problems, he can't understand why those five would be the culprits - their main file server, for example, isn't one of the 5, nor are their Exchange boxes.
There is one particular VM which is 170GB with 75GB of changed data and 41GB transferred to disk each night, and they are saying they would expect that VM to be only a couple of GB's worth of changes.
I have asked them to check they are not scheduling anything like disk defrag or making local backup copies of data, which we have had issues with in the past, but they have stopped all that apparently.
I need a way to identify what is going on with these particular VMs.
We also recently had a SQL Server which used to transfer about 70GB a night and was running Server 2003; they deployed a new VM recently on Server 2008 and moved the DB and workload over to this VM. They are convinced they are doing the exact same thing on the new 2008 server that they always did on 2003, but now we see around 5GB transferred instead of 70GB.
I am convinced Veeam isn't causing the issue, as it just sees blocks, compares them, and backs them up after using the hashing calcs etc. for the dedupe, compressing if possible. It doesn't care if it is 2003 / 2008 etc., so something must be happening in the OS.
Any ideas how I can identify this change ?
Cheers
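Jody's description of what an image-level backup does - read blocks, hash them for dedupe, and store only blocks with unseen hashes - can be sketched roughly like this. This is a toy illustration with a hypothetical 1 MB block size and SHA-256, not Veeam's actual on-disk format:

```python
import hashlib

BLOCK_SIZE = 1024 * 1024  # hypothetical 1 MB backup block

def incremental_blocks(disk_image: bytes, known_hashes: set) -> list:
    """Return (offset, block) pairs whose content hash is not yet stored."""
    changed = []
    for offset in range(0, len(disk_image), BLOCK_SIZE):
        block = disk_image[offset:offset + BLOCK_SIZE]
        digest = hashlib.sha256(block).hexdigest()
        if digest not in known_hashes:
            known_hashes.add(digest)
            changed.append((offset, block))
    return changed

# Three identical blocks dedupe to one stored block; a second pass over
# the same image stores nothing. The backup tool only sees block content -
# what the guest OS does to the blocks is what drives increment size.
store: set = set()
image = b"A" * (3 * BLOCK_SIZE)
assert len(incremental_blocks(image, store)) == 1
assert len(incremental_blocks(image, store)) == 0
```

The point of the sketch is Jody's: the backup engine is indifferent to the guest OS version, so a 70GB-to-5GB difference has to come from what the OS writes.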
-
- VP, Product Management
- Posts: 6035
- Liked: 2860 times
- Joined: Jun 05, 2009 12:57 pm
- Full Name: Tom Sightler
- Contact:
Re: Large VIB file on a small static server
It's pretty hard for me to believe that moving from 2003 to 2008 would make much difference in the amount of data being transferred, certainly not from 70GB to 5GB. There almost has to be some additional change. My guess is perhaps the maintenance plans are significantly different, but it's very difficult to guess. There can sometimes be some reduction when moving to a new server simply because Windows 2008 and later formats disks with proper alignment by default; however, I've never seen the difference approach anything near what you are reporting.
File fragmentation can also cause huge differences as many small changes spread across the disk cause many more blocks to be changed.
-
- Enthusiast
- Posts: 81
- Liked: 11 times
- Joined: Jun 17, 2012 1:28 am
- Full Name: Jeremy Harrison
- Contact:
High change rate File Servers
[merged]
I have 2 file servers, each around 500 GB, at 2 different locations with DFS running. These servers also have PeerLock installed on them. My issue is that I do daily backups, and each of these servers is very slow compared to other VM backups. The change rate on these servers is around 35-45 percent each backup. I have no idea why the change rate is so high and was wondering if anyone else has had this issue? I listed DFS and PeerLock to see if anyone has them and has noticed a high change rate. Other than those 2 apps/services, the servers are pretty normal software-wise. They are both 2003 R2. Thanks ahead of time for the help and feedback.
-
- Veteran
- Posts: 261
- Liked: 29 times
- Joined: May 03, 2011 12:51 pm
- Full Name: James Pearce
- Contact:
Re: High change rate File Servers
Try disabling the last access time stamp (it's disabled by default on 2k8). For example if there is some indexing service running daily touching every file, that would create a wealth of delta with the access time tracking enabled. Also disable any defrag jobs.
-
- VP, Product Management
- Posts: 27377
- Liked: 2800 times
- Joined: Mar 30, 2009 9:13 am
- Full Name: Vitaliy Safarov
- Contact:
Re: High change rate File Servers
Maybe peerlock also relocates/changes some blocks on your file system. If it is possible, try to disable it and then run 2 backup jobs and see whether the increment has changed or not. This would allow us to narrow down the scope of possible reasons for this high change rate.
-
- Enthusiast
- Posts: 81
- Liked: 11 times
- Joined: Jun 17, 2012 1:28 am
- Full Name: Jeremy Harrison
- Contact:
Re: Large VIB file on a small static server
Vitaliy, I will see if the business will allow me to disable this. J1mbo, your suggestion around the access date... does this affect the CBT rate? How do I go about changing this?
-
- VP, Product Management
- Posts: 27377
- Liked: 2800 times
- Joined: Mar 30, 2009 9:13 am
- Full Name: Vitaliy Safarov
- Contact:
Re: Large VIB file on a small static server
Changing the last access time stamp modifies virtual disk blocks. In other words, the more often this time stamp changes, the more blocks are reported as changed by VMware CBT.
Here are the instructions on how to disable it on Windows 2003; this should help you reduce the increment file size:
http://www.windowsreference.com/windows ... n-windows/
-
- Veteran
- Posts: 261
- Liked: 29 times
- Joined: May 03, 2011 12:51 pm
- Full Name: James Pearce
- Contact:
Re: Large VIB file on a small static server
It creates churn because the time stamps are stored in the 4KB descriptor for each file, so each file touched creates 4KB of change on the volume. But of course CBT isn't working in 4KB blocks (1MB IIRC?), so the CBT changes resulting from all the files that have been accessed in the day could be *much* bigger, depending on how the particular file descriptors were scattered.
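The amplification described above can be put into a worst-case back-of-the-envelope calculation: each touched file dirties one 4 KB record, but change tracking counts whole tracking blocks, so if every record sits in its own block, the reported churn is many times the bytes actually written. The 64 KB tracking-block size and the file count here are illustrative assumptions, not measured values:

```python
# Worst-case estimate of access-time churn: each touched file updates a
# 4 KB file record, but if every record sits in a different 64 KB tracking
# block, the whole block is reported as changed.
KB = 1024

def worst_case_churn(files_touched: int, cbt_block: int = 64 * KB) -> int:
    """Bytes reported changed if each touched file dirties its own block."""
    return files_touched * cbt_block

# An indexer touching 100,000 files writes ~390 MB of 4 KB records,
# but the tracked churn can be ~6.1 GB:
actual_writes = 100_000 * 4 * KB       # 409,600,000 bytes (~390 MB)
reported = worst_case_churn(100_000)   # 6,553,600,000 bytes (~6.1 GB)
assert reported // actual_writes == 16  # 64 KB / 4 KB amplification
```

With a larger tracking block the amplification only grows, which is why an indexing job that "touches every file" can balloon an increment.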
-
- Enthusiast
- Posts: 81
- Liked: 11 times
- Joined: Jun 17, 2012 1:28 am
- Full Name: Jeremy Harrison
- Contact:
Re: Large VIB file on a small static server
This makes total sense, thank you very much. I am trying to get approval to turn this off. Our only concern is how it may affect DFS.
-
- VeeaMVP
- Posts: 6166
- Liked: 1971 times
- Joined: Jul 26, 2009 3:39 pm
- Full Name: Luca Dell'Oca
- Location: Varese, Italy
- Contact:
Re: Large VIB file on a small static server
CBT has a sub-block scheme of 64KB, but that is still enough to create big incremental backups if many files are touched at the same time, also because they are likely dispersed around the file system.
Maybe, after correcting all the file attributes once and for all, you can also run sdelete (not on a thin-provisioned disk!) right before a full backup. Since it is going to save the whole VM anyway, it does not matter that all the CBT data is modified before it runs, and you have a clean situation before the following incremental backups.
Luca.
Luca Dell'Oca
Principal EMEA Cloud Architect @ Veeam Software
@dellock6
https://www.virtualtothecore.com/
vExpert 2011 -> 2022
Veeam VMCE #1
-
- Veteran
- Posts: 261
- Liked: 29 times
- Joined: May 03, 2011 12:51 pm
- Full Name: James Pearce
- Contact:
Re: Large VIB file on a small static server
That will certainly reduce the backup size too (once the oldest restore point reflects the cleaned space), but I'd note that Luca is suggesting using the 'zero free space' mode. As hinted, the process will inflate the VMDK to its maximum size if it's thin provisioned, and the underlying storage too if that is also thin provisioned. Also make sure there are no snapshots on the system before running sdelete.
Luca - thanks for the info on CBT block sizing. Did that change with v5?
-
- VeeaMVP
- Posts: 6166
- Liked: 1971 times
- Joined: Jul 26, 2009 3:39 pm
- Full Name: Luca Dell'Oca
- Location: Varese, Italy
- Contact:
Re: Large VIB file on a small static server
No, AFAIK this has always been the same. Remember, CBT has nothing to do with VMFS block size. If you have a VMFS5 datastore using a 1MB block size, this does not have any influence on CBT. The block size always starts at 64KB, and the bigger the VMDK becomes, the bigger the blocks become; I've never found any table showing how this value grows...
Luca Dell'Oca
Principal EMEA Cloud Architect @ Veeam Software
@dellock6
https://www.virtualtothecore.com/
vExpert 2011 -> 2022
Veeam VMCE #1
-
- Veeam ProPartner
- Posts: 71
- Liked: 12 times
- Joined: Jun 11, 2012 12:04 pm
- Contact:
[MERGED] Design of DR, couple of issues
Hello,
I am in the middle of a design for a customer of mine, and basically I want to DR the virtual machines to a cloud provider.
The issue of course is that if there are many changes, it will cost a lot because of bandwidth.
I want to be sure that I am not missing anything.
When I look at the Veeam backup reports and at the VIB files, it looks like there are 80GB of changes every day. That means every week there will be 0.5TB to replicate, which is a lot, I think.
The environment currently has 3 ESXi hosts with 15 virtual machines: Exchange, domain controller, file server, monitoring, etc.
There are 2 jobs, a 2003 job & a 2008 job.
Each job runs 2 times a day and has 60 restore points, so basically 1 month of backups.
I put the latest reports below and would like to hear your opinion on whether I am missing something, or whether it really does fill up the storage with something like 60-80 GB every day.
Thank you.
http://imageshack.us/photo/my-images/208/fsfiles.jpg
http://imageshack.us/photo/my-images/16/84928103.jpg
http://imageshack.us/photo/my-images/17/25332224.jpg
http://img687.imageshack.us/img687/7109/33445029.jpg
http://img833.imageshack.us/img833/7233/77005279.jpg
Veeam FAN
@shatztal
Blog: http://www.terasky.com
-
- VP, Product Management
- Posts: 27377
- Liked: 2800 times
- Joined: Mar 30, 2009 9:13 am
- Full Name: Vitaliy Safarov
- Contact:
Re: Large VIB file on a small static server
Hello skydok, please look through this topic for the most common reasons for large incremental files. Additionally, try setting the storage optimization setting to "WAN target", which should reduce the size of the VIB files.
-
- Influencer
- Posts: 12
- Liked: never
- Joined: Apr 19, 2011 3:55 pm
- Full Name: Andrew Bradley II
- Contact:
[MERGED] CBT Question
We are just getting started with Veeam and I am seeing some backup jobs with huge nightly CBT values.
About 10% of our VMs have multi-gigabyte nightly backups with CBT enabled. When I look at their statistics, it shows that CBT is working, but it is backing up about 2-9GB of changes each night.
Is there any way to determine what CBT is actually tracking, or any tools to look at it in detail?
What I think may be happening is that there are either log files or some caching being identified in the CBT log. Once I can identify what is causing these large changes, I then want to remove those files/folders from the backup job(s).
Anyone else see this in their environment and how did you deal with it?
Thanks,
Andy
abradley@reawire.com
-
- Influencer
- Posts: 17
- Liked: 1 time
- Joined: Feb 13, 2013 5:36 pm
- Full Name: Daniel Negru
- Contact:
[MERGED] incremental backups massive reads
Hi,
Maybe this has been covered in some other posts, I apologise.
I have 2 (new) guests with large disks, and I am puzzled why the backup or replication consistently reads ~30GB of data from each large disk. Always the same size. These are user files, not likely to have that many changes, not in an hour or 2. Even if launched one after another, it still reads a total of 100GB but somehow manages to do an amazing 100:1 compression.
Is anyone having this kind of problem?
I remember Backup Exec used to have 2-10 GB incrementals on the same data over a day's worth of it.
I am thinking the CBT may be acting up on these new installs and I may need to reset it. I will probably try it over the weekend, but I wonder if I can avoid it, or if it will be of any use...
Guests are win2k8r2, B&R is 6.5 against vCenter / ESXi 5.1
Thank you,
Daniel.
-
- VP, Product Management
- Posts: 27377
- Liked: 2800 times
- Joined: Mar 30, 2009 9:13 am
- Full Name: Vitaliy Safarov
- Contact:
Re: Large VIB file on a small static server
Hi Daniel,
Please review this topic for possible reasons of these large increments for your VMs. If you still have any questions, please let us know.
Thanks!
-
- Veeam ProPartner
- Posts: 252
- Liked: 26 times
- Joined: Apr 05, 2011 11:44 pm
- Contact:
Re: Large VIB file on a small static server
So I've read through this thread to see if there is some good info on decreasing backup size, and became quite confused.
I see people recommending to do a defrag on the servers being backed up, yet I'm failing to see how that is of benefit. Let's say a large fragmented file is defragged - the system moves blocks to be "sequential". The next backup is going to balloon because blocks have changed places and were picked up by CBT. The next day's backup will be the same as always, as only other normally changed data has changed, until the system is defragged again. So where does the reduction in backup size come from?
Furthermore, if defrag is run constantly, backups will be huge all the time. So it seems that most would want to disable defrag completely, not enable it on a regular basis.
What am I missing?
-
- Enthusiast
- Posts: 96
- Liked: 16 times
- Joined: Feb 17, 2012 6:02 am
- Full Name: Gav
- Contact:
Re: Large VIB file on a small static server
I think that the general understanding of the CBT and defrag issue is like this:
a defrag will help keep your backup sizes down because it will consolidate all of your files on disk and result in them using up the fewest possible number of 'blocks'.
Depending on your block size setup, this will have different results.
Say you had one file that was badly fragmented across your disk, broken up into 10,000 fragments, and as a result (as an example) was then spread over 10,000 blocks. If after a defrag that file was consolidated into 5,000 blocks, then you have halved the number of blocks that this one file is spread across.
If this one file is then changed completely... CBT will only detect that 5,000 blocks have changed and not 10,000... resulting in a smaller backup file.
Now apply that to your hundreds of thousands of files on disk, and you can see where the savings come into play.
I have been a part of this post and have seen improvements in my backup sizes by following regular defrags.
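The fragmentation argument above can be put into a tiny model: assume the worst case where every fragment of a rewritten file starts in its own backup block, so the blocks dirtied are the greater of the fragment count and the contiguous minimum. The 64 KB block size and the file sizes are illustrative, not Veeam's internals:

```python
# Worst-case backup blocks dirtied when a whole file is rewritten: one
# block per fragment, but never fewer than a contiguous file would need.
def ceil_div(a: int, b: int) -> int:
    return -(-a // b)

def changed_blocks(file_size: int, fragments: int, block_size: int = 64 * 1024) -> int:
    return max(ceil_div(file_size, block_size), fragments)

MB = 1024 * 1024
badly_fragmented = changed_blocks(100 * MB, fragments=10_000)  # 10,000 blocks
defragged = changed_blocks(100 * MB, fragments=1)              # 1,600 blocks
assert badly_fragmented > 6 * defragged
```

In this (hypothetical) scenario the defragged layout dirties roughly a sixth of the blocks for the same logical change, which is where the smaller increments come from.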
-
- VP, Product Management
- Posts: 6035
- Liked: 2860 times
- Joined: Jun 05, 2009 12:57 pm
- Full Name: Tom Sightler
- Contact:
Re: Large VIB file on a small static server
Certainly you don't want to run defrag all the time, but running defrag immediately prior to an active full backup can be quite useful, since it creates contiguous free space, which means that new data has a tendency to be placed into those contiguous blocks. It's important to understand that Veeam blocks are much larger than the filesystem blocks, so a fragmented filesystem means that smaller changed blocks will be spread throughout the filesystem, meaning more changed blocks. Also, good defrag tools will defrag the MFT as well, meaning that directory changes and metadata updates will be confined to a smaller number of contiguous blocks.
On the other hand, if you never run active full backups then running a defrag will create a huge incremental at some point and would generally not be suggested.
-
- Veeam ProPartner
- Posts: 252
- Liked: 26 times
- Joined: Apr 05, 2011 11:44 pm
- Contact:
Re: Large VIB file on a small static server
But for those running reverse incremental only (like us), doing a defrag on a 10TB file server with 4TB of data (and replicating/backing up over the WAN) would be suicide, right? So this recommendation only applies to those doing full backups periodically (and even then, only active fulls?)
-
- Enthusiast
- Posts: 96
- Liked: 16 times
- Joined: Feb 17, 2012 6:02 am
- Full Name: Gav
- Contact:
Re: Large VIB file on a small static server
That is right.
Like Tom said - you would only do the defrag right before a new FULL backup (i.e. not an incremental). We only run a defrag in that manner - right before the next FULL.
If you're using reverse incrementals - depending on how badly your files are fragmented, you're right, it could come close to being like pushing a FULL backup down your WAN pipe.
-
- Novice
- Posts: 4
- Liked: never
- Joined: Jun 28, 2011 11:27 pm
- Full Name: Jim D
- Contact:
Re: Large VIB file on a small static server
To follow up on a comment in this thread from almost a year ago, someone mentioned:
"P2V'd servers, and especially P2V'd 2003 servers typically are not properly stripe aligned, which can cause anywhere from 20-30% additional changed blocks."
I'm testing right now on moving from backing up a small part of my environment using Veeam to using Veeam for everything. I am seeing almost all servers work great, but 3 of them are not quite as great. My average server, on the second time it is backed up using a reverse incremental, shows something similar to: "Hard Disk 1 (60GB) 1.3 GB read" - as an example of average. But 3 of my servers are more like "Hard Disk 1 (65.5 GB) 28.2 GB read", or one server says "(150 GB) 149.8 GB read". I'm accounting for the changes that I expect the drives should be making based on their function, but all 3 seem high, all things considered. However, the thing that all 3 trouble children have in common is that they are all 2003 R2 servers and were all created by P2V (if my memory serves me correctly, they are the only ones like that in my environment - so this seems pretty significant). So in case this is my issue - what do I do to fix it? Can anyone point me to some resources to learn more about getting a proper stripe alignment on the VMs? I can certainly do a defrag, as they probably haven't had one in a long time. But to deal with the alignment, I'm wondering about simply migrating them to another datastore and back again... does the migration process align it as it moves (ESXi 5.1)? If not, is it possible to do something about it?
Thanks,
Jim
-
- Product Manager
- Posts: 20413
- Liked: 2301 times
- Joined: Oct 26, 2012 3:28 pm
- Full Name: Vladimir Eremin
- Contact:
Re: Large VIB file on a small static server
"(150 GB) 149.8 GB read" - this definitely illustrates that CBT doesn't seem to work for some reason, and Veeam B&R has to re-read the full VM image.
Does this machine have an obsolete snapshot that prevents CBT from being enabled? Or is it being used as a proxy server?
Thanks.
-
- Product Manager
- Posts: 20413
- Liked: 2301 times
- Joined: Oct 26, 2012 3:28 pm
- Full Name: Vladimir Eremin
- Contact:
Re: Large VIB file on a small static server
In fact, you can check whether your disks are aligned within windows OS:
1. Start-> Run -> msinfo32.exe (System Information).
2. Components -> Storage -> Disks.
3. Scroll to the Partition Starting Offset information.
4. Take this number and divide it by 4096.
5. If it's perfectly divisible, then there is nothing to worry about. Otherwise - for instance, with 32,256 (32,256 / 4096 = 7.875) - the file system is not correctly aligned.
Additionally, there is a decent free tool called UBERAlign that is likely to help you align your disks.
Hope this helps.
Thanks.
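The five manual steps above boil down to a single divisibility check, which can be expressed as a trivial sketch (4096 bytes is the boundary from the example; other boundaries are possible depending on the storage):

```python
# An NTFS partition counts as aligned here when its starting offset is a
# multiple of 4096 bytes, per the msinfo32 procedure described above.
def is_aligned(partition_starting_offset: int, boundary: int = 4096) -> bool:
    return partition_starting_offset % boundary == 0

assert not is_aligned(32_256)    # classic pre-2008 default: 63 * 512 bytes
assert is_aligned(1_048_576)     # Windows 2008+ default 1 MB offset
```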