mdmd
Enthusiast
Posts: 38
Liked: 2 times
Joined: Jan 06, 2014 10:29 am
Full Name: Mike
Contact:

15TB file size error

Post by mdmd »

We recently upgraded our VMware environment to 6.0. It didn't go... well

Anyway, the result was a new vCenter built from scratch. I removed the VMs from the old vCenter, re-added them to the same backup job from the new vCenter, and used the map-to-backup feature.

However, it didn't carry on the chain and started a new one for all the VMs. Not a massive problem at the moment, as we have spare space and can wait for the old chains to expire. However, our file servers backup has become 17TB, leaving 8TB free on the drive. I get this error:

Code: Select all

"Processing ***** Error: Not enough storage is available to process this command. Failed to write data to the file [F:\Backups\File Servers\File Servers2015-08-06T000248.vbk]. Failed to download disk. Shared memory connection was closed. Failed to upload disk. Agent failed to process method {DataTransfer.SyncDisk}.
Backup file [F:\Backups\File Servers\File Servers2015-08-06T000248.vbk] size exceeds 15 TB. Make sure backup repository's file system supports extra large files."
I read somewhere about the StgOversizeGbLimit (DWORD) registry value, but I have no idea what to set the DWORD to. Is this indeed the fix?

Thanks

Mike
PTide
Product Manager
Posts: 6408
Liked: 724 times
Joined: May 19, 2015 1:46 pm
Contact:

Re: 15TB file size error

Post by PTide »

Hi,

Please give us some details - what kind of repo do you use?

Thank you.

Re: 15TB file size error

Post by mdmd »

This repository is an HP StorageWorks direct-attached storage array. The size is 27TB with 8TB free.

Veeam B&R is on a Windows Server 2008 R2 server.

The drive is NTFS-formatted.

Code: Select all

NTFS Volume Serial Number :       0xe644836344833579
Version :                         3.1
Number Sectors :                  0x0000000da51837ff
Total Clusters :                  0x00000000da51837f
Free Clusters  :                  0x000000003f7a0776
Total Reserved :                  0x0000000000000000
Bytes Per Sector  :               512
Bytes Per Physical Sector :       <Not Supported>
Bytes Per Cluster :               8192
Bytes Per FileRecord Segment    : 1024
Clusters Per FileRecord Segment : 0
Mft Valid Data Length :           0x00000000003c0000
Mft Start Lcn  :                  0x0000000000060000
Mft2 Start Lcn :                  0x0000000000000001
Mft Zone Start :                  0x00000000000601e0
Mft Zone End   :                  0x0000000000066560
RM Identifier:        705DD152-6811-11E3-97E1-2C59E542E4E3
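Converted to bytes, the hex cluster counts in the fsutil output above confirm the sizes quoted (a quick Python sketch; values copied verbatim from the output, with the 8192-byte cluster size it reports):

```python
# Convert the fsutil ntfsinfo cluster counts above into volume sizes.
# Values copied verbatim from the fsutil output; cluster size is 8192 bytes.
BYTES_PER_CLUSTER = 8192
TIB = 1024 ** 4

total_clusters = 0x00000000da51837f
free_clusters = 0x000000003f7a0776

total_bytes = total_clusters * BYTES_PER_CLUSTER
free_bytes = free_clusters * BYTES_PER_CLUSTER

print(f"Total: {total_bytes / TIB:.2f} TiB")  # ~27.29 TiB -> the "27TB" volume
print(f"Free:  {free_bytes / TIB:.2f} TiB")   # ~7.93 TiB -> the "8TB free"
```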
Thanks

Re: 15TB file size error

Post by PTide »

mdmd wrote: This repository is an HP StorageWorks, direct-attached storage.
Could you specify model please?

Just to confirm - you're using physical backup server, correct? What backup mode do you use (NBD, direct SAN, virtual appliance)?

Thank you.

Re: 15TB file size error

Post by mdmd »

Yes, it's a physical backup server.

HP ProLiant DL380e G8 with 2 x HP Storageworks D2600 enclosures attached.

We also have the 10GbE module, which is carrying our backups.

The backup mode we have set is automatic; for that particular job:

Code: Select all

06/08/2015 00:04:46 :: Queued for processing at 06/08/2015 00:04:46 
06/08/2015 00:04:47 :: Required backup infrastructure resources have been assigned 
06/08/2015 03:19:12 :: VM processing started at 06/08/2015 03:19:12 
06/08/2015 03:19:12 :: VM size: 3.5 TB (2.5 TB used) 
06/08/2015 03:19:13 :: Getting VM info from vSphere 
06/08/2015 03:19:20 :: Inventorying guest system 
06/08/2015 03:19:23 :: Preparing guest for hot backup 
06/08/2015 03:19:53 :: Creating snapshot 
06/08/2015 03:20:08 :: Releasing guest 
06/08/2015 03:20:38 :: Indexing guest file system 
06/08/2015 03:20:58 :: Saving [bc_p4k_sas03_vmfs03_ao] BC-FS-003_1/BC-FS-003.vmx 
06/08/2015 03:21:09 :: Saving [bc_p4k_sas03_vmfs03_ao] BC-FS-003_1/BC-FS-003.vmxf 
06/08/2015 03:21:13 :: Saving [bc_p4k_sas03_vmfs03_ao] BC-FS-003_1/BC-FS-003.nvram 
06/08/2015 03:21:20 :: Using backup proxy VMware Backup Proxy for disk Hard disk 1 [san] 
06/08/2015 03:21:28 :: Hard disk 1 (80.0 GB) 80.0 GB read at 76 MB/s [CBT]
06/08/2015 03:40:43 :: Using backup proxy VMware Backup Proxy for disk Hard disk 2 [san] 
06/08/2015 03:40:58 :: Hard disk 2 (400.0 GB) 400.0 GB read at 161 MB/s [CBT]
06/08/2015 04:24:29 :: Using backup proxy VMware Backup Proxy for disk Hard disk 3 [san] 
06/08/2015 04:24:44 :: Hard disk 3 (30.0 GB) 23.5 GB read at 37 MB/s [CBT]
06/08/2015 04:27:55 :: Using backup proxy VMware Backup Proxy for disk Hard disk 4 [san] 
06/08/2015 04:28:18 :: Hard disk 4 (1.0 TB) 296.5 GB read at 34 MB/s [CBT]
06/08/2015 04:36:51 :: Using backup proxy VMware Backup Proxy for disk Hard disk 5 [san] 
06/08/2015 04:37:09 :: Hard disk 5 (1.0 TB) 253.0 GB read at 31 MB/s [CBT]
06/08/2015 04:44:33 :: Using backup proxy VMware Backup Proxy for disk Hard disk 6 [san] 
06/08/2015 04:44:47 :: Hard disk 6 (1.0 TB) 196.0 GB read at 25 MB/s [CBT]
06/08/2015 05:00:33 :: Getting list of guest file system local users 
06/08/2015 06:57:29 :: Removing VM snapshot 
06/08/2015 06:59:00 :: Error: Not enough storage is available to process this command.
Failed to write data to the file [F:\Backups\File Servers\File Servers2015-08-06T000248.vbk].
Failed to download disk.
Shared memory connection was closed.
Failed to upload disk.
Agent failed to process method {DataTransfer.SyncDisk}.
06/08/2015 06:59:00 :: Busy: Source 84% > Proxy 58% > Network 41% > Target 1%
06/08/2015 06:59:00 :: Primary bottleneck: Source
06/08/2015 06:59:00 :: Network traffic verification detected no corrupted blocks
06/08/2015 06:59:00 :: Processing finished with errors at 06/08/2015 06:59:00

Thanks

Re: 15TB file size error

Post by mdmd »

P.S. All the other VMs in the file servers job complete fine, even after this one failed.

Re: 15TB file size error

Post by PTide »

The NTFS file size limit is 16TB. The only option would be to point that job to some other volume using a file system that allows >16TB files.

P.S. You might want to check this thread.

Thank you.

Re: 15TB file size error

Post by mdmd »

Thank you.

I suppose I need to look at upgrading to Windows Server 2012; however, I presume all my local disks will need formatting again, which is obviously a problem with backup sets totalling 40TB over my 3 repositories.
Gostev
Chief Product Officer
Posts: 31460
Liked: 6648 times
Joined: Jan 01, 2006 1:01 am
Location: Baar, Switzerland
Contact:

Re: 15TB file size error

Post by Gostev »

Another option is re-formatting with a larger cluster size, but this is still re-formatting...
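To illustrate how the per-file ceiling scales with cluster size, here is a rough sketch assuming the 2^32 − 1 clusters-per-file implementation limit (which, as an assumption here, applies to NTFS on Server 2012 and later; 2008 R2 caps file size at roughly 16 TB regardless of cluster size, which is worth verifying against Microsoft's documentation):

```python
# Max NTFS file size under the assumed 2^32 - 1 clusters-per-file
# implementation limit (Server 2012+). On pre-2012 Windows the cap is
# ~16 TB regardless of cluster size, so larger clusters only help there.
MAX_CLUSTERS = 2**32 - 1
TIB = 1024 ** 4

for kib in (4, 8, 16, 32, 64):
    cluster_bytes = kib * 1024
    max_file = MAX_CLUSTERS * cluster_bytes
    print(f"{kib:>2} KB clusters -> max file ~{max_file / TIB:.0f} TiB")
```

Under that assumption, the 8 KB clusters on this volume would allow ~32 TiB files on a 2012+ host; 64 KB clusters would allow ~256 TiB.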

Re: 15TB file size error

Post by PTide » 1 person likes this post

There is also an important thing to mention: when a VM gets moved to another vCenter, its moRef ID changes, so the backup server treats the VM as a new one, and thus an incremental run will take as much space as a full. Did the job fail with the mentioned error right after mapping? If so, please try to run an active full for that job and see if it results in a smaller .vbk.

You can find more info on migration to another vCenter here.

Thank you.

Re: 15TB file size error

Post by Gostev »

Good point.

Re: 15TB file size error

Post by mdmd »

Exactly what PTide said happened, which caused the increase in the backup file sizes. But I did remove all the VMs from the old vCenter and re-add them through the new vCenter.

Eventually, won't the old legacy VM ID VMs be deleted (actually, I think it keeps a certain number of restore points, so maybe not)? I did use the map-to-job feature. I'll run an active full again, but I already run reverse incrementals; will this make a difference?

Thanks

Mike

As you can see, this is the third time I've had to do this and it's getting into a state. Will an active full do anything?

VladV
Expert
Posts: 224
Liked: 25 times
Joined: Apr 30, 2013 7:38 am
Full Name: Vlad Valeriu Velciu
Contact:

Re: 15TB file size error

Post by VladV »

The old VMs should be deleted according to the deleted VM retention option in the job settings. It is off by default and you can specify the number of days, not restore points, after which it will remove the old chains.

Re: 15TB file size error

Post by PTide »

mdmd wrote: But I did remove all the VMs from the old vCenter and re-add them through the new vCenter
It's not about the vCenter. Your restore point already contains a backup of the "old" VM. When you move a VM and run an increment, the VM with its new ID is considered to contain brand new data, so your "new" VM gets fully backed up.
mdmd wrote: Eventually, won't the old legacy VM ID VMs be deleted (actually, I think it keeps a certain number of restore points, so maybe not)?
As Vlad said, they will. Please check Storage - Advanced Settings - VM retention.
mdmd wrote: I did use the map-to-job feature. I'll run an active full again, but I already run reverse incrementals; will this make a difference?
In the case of reverse incremental, your "old ID" VM's data will stay in your full forever, because it never gets changed, unless you set a deleted VM retention; please check this thread. If you run an active full now, it will back up only the data which is currently present (your "new" VM) and start a new chain.

Re: 15TB file size error

Post by mdmd »

Hi all again,

So I set the advanced settings VM retention to 7 days, and it has now deleted them all. I ran an active full of all servers; it took a long time (3 days) but completed.

The only problem now is that, on the next run of backups, roughly 50% of my VMs are getting:

:: CBT data is invalid, failing over to legacy incremental backup. No action is required, next job run should start using CBT again. If CBT data remains invalid, follow KB1113 to perform CBT reset. Usual cause is power loss.

If I understood and read this correctly, I checked KB1113 and did the following:
1. Power the VM off
2. Change the options in the VM's configuration parameters
3. Delete the CBT files
4. Power the VM on

I did this for all my servers and re-ran the job. Servers are still coming up with:

12/08/2015 16:45:07 :: CBT data is invalid, failing over to legacy incremental backup. No action is required, next job run should start using CBT again. If CBT data remains invalid, follow KB1113 to perform CBT reset. Usual cause is power loss.

This is taking all my backups 2+ days to run.

Any advice?

Thanks

Re: 15TB file size error

Post by PTide »

Hi,

Did you check if any snapshots were present?

Also, did you wait for the next run after the first one, once you had cleared the CBT data?

Thank you.

Re: 15TB file size error

Post by VladV »

Did you patch up your ESXi hosts?

There was a bug in the first releases that affected CBT, so check http://kb.vmware.com/selfservice/micros ... Id=2116126

Re: 15TB file size error

Post by mdmd »

No snapshots are present on any VM; we are quite strict about snapshots.

Applying the patches now! It was on my to-do list!

I'll let you all know how I get on. Thanks for the continued help!

Mike