Comprehensive data protection for all workloads
Morgenstern72
Enthusiast
Posts: 76
Liked: 12 times
Joined: Jan 30, 2014 3:37 pm
Full Name: Joachim
Contact:

Big fileserver cannot be backed up - Case #00909955

Post by Morgenstern72 » May 13, 2015 7:48 am

Support is trying to help but has not found a solution so far, so I am asking the community for ideas.

VMware
ESX 5.5, Datastore on a HP 3PAR with 4x8GB FC (all paths active)

Backup Proxy
Windows 2008 R2 Server, 2 XEON CPUs, 64GB RAM
Datastores direct SAN attached with 2x4GB FC (all paths active)

Repository, 82TB (30TB free)
on Backup Proxy, 4x4GB attached (2 paths active)

We have a Windows 2008 R2 file server with 2x 13.5TB disks as a VM**. The job is set to forever incremental with a full backup every month, so backup files grow to >16TB.
The last successful backup produced a 16,976,198,116 kB file.
The backup worked for some time, but when we grew the disks we started to get problems in Veeam.

ERROR
Additional Details: 15.04.2015 15:49:11 :: Error: Not enough storage is available to process this command. Failed to write data to the file [V:\Backups\Fileserver\Fileserver2015-04-14T164737.vbk]. Failed to download disk. An existing connection was forcibly closed by the remote host Failed to upload disk. Agent failed to process method {DataTransfer.SyncDisk}.

Support
We were presented with some quite bogus solutions, such as hotfixes for Windows 2003, but no substantial help (first time with Veeam support so far).

Workaround 1
Nothing helped until I discovered on my own that at the end of the backup only ~1GB of RAM was free. So we upgraded to 64GB and it worked.

Error again
Well, it worked for about two weeks, then the same error appeared again, even though >20GB of RAM were free at the end of the backup. The server was online the whole time; not a single ping was lost.

Support
I opened a new case referring to the old one. No solution so far, but I am being asked all the same questions again.

Things that I tried on my own but did not work
https://support.microsoft.com/en-us/kb/967351
https://www.microsoft.com/en-us/downloa ... px?id=9258
Set the Cache Size with powershell: http://serverfault.com/questions/325277 ... 466#527466

Workaround 2
Since support did not come up with any solution and I need a backup of this VM, I split the backup into two parts: the OS plus data disk 1 as one backup, and data disk 2 as a second, chained backup. This does work, BUT:
Limitation 1: Guest file system indexing fails on the second data disk ("Failed to index guest file system. VSSControl: Index failed").
Limitation 2: The backup with both disks ran at >300MB/s; the single-disk backups run at only ~160MB/s.
Limitation 3: Restore: I have not tested this so far.

Solution?
It looks like Veeam cannot be used to back up big VMDKs to a 2008 R2 repository. The problem is how Veeam writes its backup files: one big file. That is bound to cause problems at the end of the day. If you think otherwise, please feel welcome to share your ideas. At the moment I am really unhappy with Veeam and their support, since day after day the only thing support does is request the same information over and over.

If Windows is the problem, then Veeam has to point out clearly that you cannot use Windows servers as a repository for big file sizes. But since you need them for direct SAN attach (that does not work with Linux servers, or am I wrong?) and you do not want to transfer your data over slow Ethernet, I do not see how else to handle this. It makes no sense to attach the datastores and the repository via FC and then use Ethernet to connect them.

It would be no problem at all if Veeam B&R simply split its backup files, since they already seem to know that Windows servers have problems with such big files.

** Before you tell me this is too big: this is our small document server. Our real data lives on a 900TB BeeGFS (Fraunhofer) file system backed up by TSM onto 400 LTO6 tapes. Welcome to the world of scientific research :)

Yuki
Veeam ProPartner
Posts: 252
Liked: 26 times
Joined: Apr 05, 2011 11:44 pm
Contact:

Re: Big fileserver cannot be backed up - Case #00909955

Post by Yuki » May 14, 2015 10:58 pm

I don't have a solution, unfortunately, but I also think it would be a good idea to split the backup files.

Not bad... Our largest managed datastore is a 200TB NetApp that is backed up onto 50 or so tapes. It would be great to use Veeam there as well, but they are actually using it as a filer and not just as VMDK storage. Also a scientific (biotech) research environment.

alanbolte
Expert
Posts: 635
Liked: 172 times
Joined: Jun 18, 2012 8:58 pm
Full Name: Alan Bolte
Contact:

Re: Big fileserver cannot be backed up - Case #00909955

Post by alanbolte » May 15, 2015 4:29 am

Would you be able to use a Windows 2012 machine as the proxy/repository instead of 2008 R2? That has a file size limit of 256TB when using 64K clusters, up from 16TB in previous versions of Windows. See MaximumFileSize in this article.
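To illustrate (a rough sketch, assuming NTFS can address at most 2^32 - 1 clusters per file, which is the basis for the documented 16TB and 256TB figures):

```python
# NTFS maximum file size as a function of cluster (allocation unit) size,
# assuming at most 2**32 - 1 addressable clusters per file.
MAX_CLUSTERS = 2**32 - 1

for cluster_kib in (4, 8, 16, 32, 64):
    max_tib = MAX_CLUSTERS * cluster_kib * 1024 / 2**40
    print(f"{cluster_kib:2d} KiB clusters -> max file ~{max_tib:7.2f} TiB")
```

As I understand it, on 2008 R2 the implementation caps out around 16TB regardless of cluster size, which is why moving to 2012 matters rather than just reformatting with 64K clusters.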

If you are not satisfied with the quality of support provided by your case owner, you can request that they escalate, or use the Talk to a Manager link in the customer portal.

Morgenstern72
Enthusiast
Posts: 76
Liked: 12 times
Joined: Jan 30, 2014 3:37 pm
Full Name: Joachim
Contact:

Re: Big fileserver cannot be backed up - Case #00909955

Post by Morgenstern72 » May 15, 2015 7:20 am

alanbolte wrote:Would you be able to use a Windows 2012 machine as the proxy/repository instead of 2008r2? That has a file size limit of 256 TB when using 64K clusters, up from 16TB in previous editions of Windows. See MaximumFileSize in this article.
Only today I got an answer from veeam telling me
"Maximum file size for NTFS with default block size is 16TB. During the backup job file could grow over the 16 Tb. Could you please check maximum file size supported by file system and change cluster size if it necessary:"

Before that, I was asked to format the partition with /L (http://www.veeam.com/kb1893) and a 64K cluster size, which I have done.

So all this information from Veeam was just wrong?
It's not easy to upgrade this server to 2012 R2 since it is quite old and not in the support matrix. But I find it quite disappointing that support let me send them logs after logs after logs and requested all the same information over and over again, and then it's just a file size limitation?
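If I do the math (a quick Python check; I'm assuming the job reports the size in KiB and that the relevant ceiling is the ~16 TiB NTFS maximum file size with default 4 KiB clusters), the last successful backup was already sitting right under the limit:

```python
# Rough check: was the last good backup already close to the NTFS
# default-cluster file-size ceiling? (Assumes the job log reports KiB.)
backup_kib = 16_976_198_116                    # last successful .vbk size
backup_bytes = backup_kib * 1024

max_file_bytes = (2**32 - 1) * 4096            # ~16 TiB with 4 KiB clusters

print(f"backup size : {backup_bytes / 2**40:6.2f} TiB")
print(f"NTFS ceiling: {max_file_bytes / 2**40:6.2f} TiB")
print(f"headroom    : {(max_file_bytes - backup_bytes) / 2**40:6.2f} TiB")
```

Only ~0.19 TiB of headroom, so the first full backup after growing the disks would have pushed the .vbk past the ceiling. That would explain why the job broke exactly when we enlarged the disks.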

Morgenstern72
Enthusiast
Posts: 76
Liked: 12 times
Joined: Jan 30, 2014 3:37 pm
Full Name: Joachim
Contact:

Re: Big fileserver cannot be backed up - Case #00909955

Post by Morgenstern72 » May 15, 2015 7:27 am

alanbolte wrote:If you are not satisfied with the quality of support provided by your case owner, you can request that they escalate, or use the Talk to a Manager link in the customer portal.
I normally don't do that, but we had to waste an enormous amount of time sending logs and implementing fixes, only to then get the answer from a user in a forum that what we are trying to achieve is not possible with Server 2008 R2 - and that is very, very disappointing :(

So: escalated.

