We recently upgraded our VMware environment to 6.0. It didn't go... well
Anyway, the result was a new vCenter from scratch. I removed the VMs from the old vCenter, re-imported them into the same backup job from the new vCenter, and used the map-to-backup feature.
However, it didn't carry on the chain and started a new one for all the VMs. Not a massive problem at the moment, as we have spare space and can wait for the old chains to expire. However, our file servers backup has become 17 TB, leaving 8 TB free on the drive. I get this error:
[i][color=#000080]"Processing ***** Error: Not enough storage is available to process this command. Failed to write data to the file [F:\Backups\File Servers\File Servers2015-08-06T000248.vbk]. Failed to download disk. Shared memory connection was closed. Failed to upload disk. Agent failed to process method {DataTransfer.SyncDisk}.
Backup file [F:\Backups\File Servers\File Servers2015-08-06T000248.vbk] size exceeds 15 TB. Make sure backup repository's file system supports extra large files."
[/color][/i]
I read somewhere about StgOversizeGbLimit (DWORD), but I have no idea what to set the DWORD to. Is this indeed the fix?
06/08/2015 00:04:46 :: Queued for processing at 06/08/2015 00:04:46
06/08/2015 00:04:47 :: Required backup infrastructure resources have been assigned
06/08/2015 03:19:12 :: VM processing started at 06/08/2015 03:19:12
06/08/2015 03:19:12 :: VM size: 3.5 TB (2.5 TB used)
06/08/2015 03:19:13 :: Getting VM info from vSphere
06/08/2015 03:19:20 :: Inventorying guest system
06/08/2015 03:19:23 :: Preparing guest for hot backup
06/08/2015 03:19:53 :: Creating snapshot
06/08/2015 03:20:08 :: Releasing guest
06/08/2015 03:20:38 :: Indexing guest file system
06/08/2015 03:20:58 :: Saving [bc_p4k_sas03_vmfs03_ao] BC-FS-003_1/BC-FS-003.vmx
06/08/2015 03:21:09 :: Saving [bc_p4k_sas03_vmfs03_ao] BC-FS-003_1/BC-FS-003.vmxf
06/08/2015 03:21:13 :: Saving [bc_p4k_sas03_vmfs03_ao] BC-FS-003_1/BC-FS-003.nvram
06/08/2015 03:21:20 :: Using backup proxy VMware Backup Proxy for disk Hard disk 1 [san]
06/08/2015 03:21:28 :: Hard disk 1 (80.0 GB) 80.0 GB read at 76 MB/s [CBT]
06/08/2015 03:40:43 :: Using backup proxy VMware Backup Proxy for disk Hard disk 2 [san]
06/08/2015 03:40:58 :: Hard disk 2 (400.0 GB) 400.0 GB read at 161 MB/s [CBT]
06/08/2015 04:24:29 :: Using backup proxy VMware Backup Proxy for disk Hard disk 3 [san]
06/08/2015 04:24:44 :: Hard disk 3 (30.0 GB) 23.5 GB read at 37 MB/s [CBT]
06/08/2015 04:27:55 :: Using backup proxy VMware Backup Proxy for disk Hard disk 4 [san]
06/08/2015 04:28:18 :: Hard disk 4 (1.0 TB) 296.5 GB read at 34 MB/s [CBT]
06/08/2015 04:36:51 :: Using backup proxy VMware Backup Proxy for disk Hard disk 5 [san]
06/08/2015 04:37:09 :: Hard disk 5 (1.0 TB) 253.0 GB read at 31 MB/s [CBT]
06/08/2015 04:44:33 :: Using backup proxy VMware Backup Proxy for disk Hard disk 6 [san]
06/08/2015 04:44:47 :: Hard disk 6 (1.0 TB) 196.0 GB read at 25 MB/s [CBT]
06/08/2015 05:00:33 :: Getting list of guest file system local users
06/08/2015 06:57:29 :: Removing VM snapshot
06/08/2015 06:59:00 :: Error: Not enough storage is available to process this command.
Failed to write data to the file [F:\Backups\File Servers\File Servers2015-08-06T000248.vbk].
Failed to download disk.
Shared memory connection was closed.
Failed to upload disk.
Agent failed to process method {DataTransfer.SyncDisk}.
The NTFS file size limit is 16 TB. The only option would be to point that job to some other volume using another file system that allows files larger than 16 TB.
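For anyone wanting to check headroom before this bites, here is a minimal sketch of the arithmetic. It assumes the pre-2012 NTFS implementation limit of 16 TB minus 64 KB per file, and that Veeam's "TB" figures are 1024-based; the function name is just an illustration, not anything from Veeam.

```python
# Sketch: will a projected .vbk exceed the NTFS per-file ceiling?
# Assumes the classic NTFS implementation limit (16 TiB minus 64 KiB).
TiB = 1024 ** 4
NTFS_MAX_FILE = 16 * TiB - 64 * 1024

def fits_on_ntfs(projected_bytes: int) -> bool:
    """True if a single file of this size can still be written to NTFS."""
    return projected_bytes <= NTFS_MAX_FILE

print(fits_on_ntfs(17 * TiB))  # the 17 TB file-servers .vbk -> False
print(fits_on_ntfs(8 * TiB))   # plenty of room -> True
```

Later Windows versions raise this ceiling (up to 256 TB with large clusters), so the constant above is only valid for the older NTFS format discussed in this thread.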
I suppose I need to look at upgrading to Windows Server 2012; however, I presume all my local disks will need formatting again? That is obviously a problem with backup sets totalling 40 TB across my three repositories.
There is also an important thing to mention: when a VM gets moved to another vCenter, its moRef ID changes, so the backup server treats it as a new VM, and an incremental run takes as much space as a full. Did the job fail with the error mentioned right after mapping? If so, please try running an active full for that job and see whether it results in a smaller .vbk.
More info on migration to another vCenter can be found here.
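The moRef behaviour can be pictured with a toy model (purely illustrative; this is not Veeam's actual tracking code): backups are keyed by moRef ID, so the same VM re-added through a new vCenter looks like a brand-new object, and its first "incremental" has to store everything.

```python
# Toy model of moRef-keyed change tracking (illustrative only).
known_blocks = {}  # moRef -> set of block ids already in the backup file

def backup(moref, vm_blocks):
    """Return how many blocks this run must write for the given moRef."""
    seen = known_blocks.setdefault(moref, set())
    new = vm_blocks - seen
    seen |= vm_blocks
    return len(new)

vm_data = {f"block{i}" for i in range(1000)}

print(backup("vm-101", vm_data))  # first full via old vCenter: 1000 blocks
print(backup("vm-101", vm_data))  # incremental, nothing changed: 0 blocks
# Same VM re-added through the new vCenter arrives under a new moRef:
print(backup("vm-942", vm_data))  # "incremental" is effectively a full: 1000
```

The old "vm-101" entry lingers in the backup file until retention removes it, which is exactly why the chain balloons after a vCenter migration.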
Exactly what PTide said happened, which caused the increase in the backup file sizes. But I did remove all the VMs from the old vCentre and re-add them through the new vCentre.
Eventually, won't the old legacy VM ID VMs be deleted (actually I think it keeps a certain number of restore points, so maybe not)? I did use the map-to-job feature. I'll run an active full again, but I already run reverse incrementals; will this make a difference?
Thanks
Mike
As you can see, this is the third time I've had to do this and it's getting in a state. Will an active full do anything?
The old VMs should be deleted according to the deleted-VM retention option in the job settings. It is off by default; you specify a number of days, not restore points, after which the old chains are removed.
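As a quick illustration of that days-based logic (the dates and the 7-day figure below are examples only, not anything from this job):

```python
from datetime import date, timedelta

def chain_expired(last_seen: date, today: date, retention_days: int) -> bool:
    """Toy deleted-VM retention: drop a chain once the VM hasn't been
    seen in the job for more than retention_days days."""
    return today - last_seen > timedelta(days=retention_days)

# Old-moRef VMs last seen when the job was remapped on 1 Aug 2015:
print(chain_expired(date(2015, 8, 1), date(2015, 8, 6), 7))   # False, kept
print(chain_expired(date(2015, 8, 1), date(2015, 8, 12), 7))  # True, removed
```

The key point is that the clock runs per VM from its last appearance in the job, not per restore point.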
mdmd wrote:But I did remove all the VMs from the old vCentre and re-add them through the new vCentre
It's not about the vCentre. Your restore point already contains a backup of the "old" VM. When you move the VM and run an increment, the VM with the new ID is considered to contain brand-new data, so your "new" VM gets fully backed up.
mdmd wrote:Eventually, won't the old legacy VM ID VMs be deleted (actually I think it keeps a certain number of restore points, so maybe not)?
I did use the map-to-job feature. I'll run an active full again, but I already run reverse incrementals; will this make a difference?
With reverse incremental, your "old ID" VM's data will stay in your full forever, because it never gets changed, unless you set deleted-VM retention; please check this thread. If you run an active full now, it will back up only the data currently present (your "new" VM) and start a new chain.
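Why the old data sticks around can be sketched with a toy model (not Veeam's actual format): in reverse incremental, the .vbk always holds the most recent state of every object it has ever contained, and an object that stops appearing in the job is simply never rewritten or rolled out.

```python
# Toy reverse incremental: the full (.vbk) keeps the latest state of
# every VM it has ever seen; VMs absent from a run are left untouched.
full = {}  # vm id -> data

def reverse_incremental_run(present_vms):
    for vm_id, data in present_vms.items():
        full[vm_id] = data  # changed/new data is injected into the full
    # Absent VMs are NOT removed here -- that's deleted-VM retention's job.

reverse_incremental_run({"old-moref": "2.5 TB of file-server data"})
reverse_incremental_run({"new-moref": "2.5 TB of file-server data"})
print(sorted(full))  # ['new-moref', 'old-moref'] -> full holds both copies
```

That is the 17 TB .vbk in a nutshell: the same file-server data under two moRef IDs, with nothing to age the old copy out until deleted-VM retention is enabled.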
So I set the deleted-VM retention in the advanced settings to 7 days, and it has deleted them all now. I ran an active full of all servers; it took a long time (3 days) but completed.
The only problem now is that on the next run of backups, roughly 50% of my VMs are getting:
:: CBT data is invalid, failing over to legacy incremental backup. No action is required, next job run should start using CBT again. If CBT data remains invalid, follow KB1113 to perform CBT reset. Usual cause is power loss.
If I understood and read this correctly, I checked KB1113 and did the following:
1. Power the VM off
2. Change the CBT options in the VM's configuration parameters
3. Delete the CBT files
4. Power the VM on
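Step 3 above boils down to removing the `*-ctk.vmdk` change-tracking files from the VM's folder on the datastore while the VM is off. A small helper to pick them out of a directory listing (the file names below are examples modelled on this thread's VM; verify against KB1113 before deleting anything):

```python
def ctk_files(listing):
    """Return the CBT change-tracking files from a VM folder listing."""
    return [name for name in listing if name.endswith("-ctk.vmdk")]

folder = [
    "BC-FS-003.vmx",
    "BC-FS-003.vmdk",
    "BC-FS-003-flat.vmdk",
    "BC-FS-003-ctk.vmdk",    # CBT file for disk 1
    "BC-FS-003_1-ctk.vmdk",  # CBT file for disk 2
]
print(ctk_files(folder))  # ['BC-FS-003-ctk.vmdk', 'BC-FS-003_1-ctk.vmdk']
```

Note the descriptor (`.vmdk`) and data (`-flat.vmdk`) files are deliberately left alone; only the `-ctk` tracking files are recreated when CBT is re-enabled.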
I did this for all my servers and reran the job. Servers are still coming up with:
12/08/2015 16:45:07 :: CBT data is invalid, failing over to legacy incremental backup. No action is required, next job run should start using CBT again. If CBT data remains invalid, follow KB1113 to perform CBT reset. Usual cause is power loss.