Comprehensive data protection for all workloads
bpayne
Enthusiast
Posts: 55
Liked: 12 times
Joined: Jan 20, 2015 2:07 pm
Full Name: Brandon Payne
Contact:

Full merge failing on Data Domain 2500

Post by bpayne »

Full merge is failing on almost all of my jobs after switching to the new "Use per-VM backup files" option on my repository. This setting has dramatically improved backup times, but now none of my backup jobs can finish a full merge.

Backup jobs: most jobs contain almost 200 VMs each (we have almost 2000 VMs in total).
Data Domain 2500, fully populated.
Restore points: 35
I have [ ] Limit maximum concurrent tasks UNCHECKED, because we are seeing amazing performance improvements with it off.

Error
--------------

Code:

3/7/2016 11:00:08 PM :: Error: Failed to connect to storage. Maximum number of opened connections cannot exceed 60. Storage: DD2, user: ddboost
Failed to restore file from local backup. VFS link: [summary.xml]. Target file: [MemFs://frontend::CDataTransferCommandSet::RestoreText_{fa37caf9-88c3-4d6d-835c-3b7ef69f430f}]. CHMOD mask: [0].
Agent failed to process method {DataTransfer.RestoreText}.

I opened SR 01719576. Support, of course, recommended setting the number below 60, as stated in the error message. I did this, and full merge still fails. Support then came back and said to set max concurrent tasks to 2. I tried that; it still fails. Just for giggles, I set it to 1, and it fails again.

Error message is now
-----------------------------

Code:

3/8/2016 8:39:01 AM :: Error: memory no longer available. Err: 5001
Failed to write data to the file '[DD2] thp0555:/SAH_35D_6PM/SAH_35D_6PM.vbm_3_tmp ( 78696451 )'. Offset: '31457280'.
--tr:Failed to call DoRpc. CmdName: [DDBoost FcWriteFile] inParam: [<InputArguments><Link value="ddboost://DD2:thp0555@/SAH_35D_6PM/SAH_35D_6PM.vbm_3_tmp" /><Flags value="258" /><Mode value="511" /><Offset value="31457280" /><BytesToWrite value="1048576" /></InputArguments>].
memory no longer available. Err: 5001
Failed to write data to the file  

All this aside, setting this to 2 is almost unacceptable for us because we have so many VMs. So there is some other underlying issue, and support is being a bit slow on this.

Looking for ideas/suggestions from the Veeam community while I wait.

HELP! My backup storage is filling up FAST!


EDIT - enabled notifications on this topic so I'll know when someone responds
tsightler
VP, Product Management
Posts: 6009
Liked: 2843 times
Joined: Jun 05, 2009 12:57 pm
Full Name: Tom Sightler
Contact:

Re: Full merge failing on Data Domain 2500

Post by tsightler » 3 people like this post

There is a known issue that matches exactly this description. Please ask your support engineer to review Bug 67525 as I'm quite sure there is a hotfix available to address this issue (>64 VMs in a job with per-VM using DDboost causes merge operations to fail with maximum opened connection count > 60).

You should not have to set your concurrent task count to 2, or anything that low. However, I would not recommend unlimited either, although it's probably safe overall since you will naturally be limited by the number of tasks your proxies can support. Normally Veeam recommends setting the number of tasks to 50% of the maximum total streams supported by your storage system, which you can find in the EMC documentation for DDBoost. For a DD2500 I believe the maximum supported is 180 streams, so a value of 90 is safe. Assuming the box is 100% dedicated to Veeam backups, you may be able to exceed this value up to the maximum for your storage system, but remember that streams are also needed for restores, replication, and other use cases, so your results may vary. 50% of the write streams is normally enough to provide excellent per-VM throughput while not bottlenecking the target if other operations run concurrently.
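The sizing rule above (task limit ≈ 50% of the storage system's maximum supported streams) can be sketched as a quick calculation. The 180-stream figure for the DD2500 comes from the post; the higher fraction for a fully dedicated box is an illustrative assumption, not an official Veeam or EMC number:

```python
def recommended_task_limit(max_streams: int, dedicated: bool = False) -> int:
    """Conservative concurrent-task limit for a DDBoost-backed repository.

    max_streams: maximum total concurrent streams the storage system
                 supports (check the EMC DDBoost docs for your model).
    dedicated:   if the box serves only Veeam backups, a higher fraction
                 may be tolerable (0.75 here is an assumed example; still
                 leave headroom for restores and replication streams).
    """
    fraction = 0.75 if dedicated else 0.5
    return int(max_streams * fraction)

print(recommended_task_limit(180))                  # DD2500, shared use -> 90
print(recommended_task_limit(180, dedicated=True))  # dedicated box -> 135
```

Whatever value you pick goes into the repository's "Limit maximum concurrent tasks" setting; the point is to cap it well below the appliance's stream ceiling rather than leaving it unlimited.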
bpayne
Enthusiast
Posts: 55
Liked: 12 times
Joined: Jan 20, 2015 2:07 pm
Full Name: Brandon Payne
Contact:

Re: Full merge failing on Data Domain 2500

Post by bpayne »

Thank you so much, that is a relief. I will update my SR and hopefully there is a hotfix I can get applied today. I will update this post later with my findings.
bpayne
Enthusiast
Posts: 55
Liked: 12 times
Joined: Jan 20, 2015 2:07 pm
Full Name: Brandon Payne
Contact:

Re: Full merge failing on Data Domain 2500

Post by bpayne »

I was able to get the hotfix you mentioned. I applied it and now my full merges are completing successfully. Thank you!
will4king
Lurker
Posts: 1
Liked: never
Joined: Nov 21, 2016 10:29 am
Full Name: Wilbert Bertolini
Contact:

Re: Full merge failing on Data Domain 2500

Post by will4king »

Hi bpayne,

Where can I find this hotfix?
foggy
Veeam Software
Posts: 21069
Liked: 2115 times
Joined: Jul 11, 2011 10:22 am
Full Name: Alexander Fogelson
Contact:

Re: Full merge failing on Data Domain 2500

Post by foggy »

Wilbert, please contact technical support to get it.
rntguy
Enthusiast
Posts: 82
Liked: 1 time
Joined: Jan 29, 2016 8:31 pm
Full Name: Rnt Guy
Contact:

Re: Full merge failing on Data Domain 2500

Post by rntguy »

Is this hotfix for the DD2500 only, or would DDvE 3.x need it too? Or is it a Veeam bug? I'm on 9.5 with the latest update package. The customer doing cloud backup copy jobs has only 8 VMs per job, but many jobs. The job this most recently happened on has only 1 VM in it.
nefes
Veeam Software
Posts: 643
Liked: 162 times
Joined: Dec 10, 2012 8:44 am
Full Name: Nikita Efes
Contact:

Re: Full merge failing on Data Domain 2500

Post by nefes »

The fix is already included in 9.5, so you are facing a different problem. It is worth working with support on it.
rntguy
Enthusiast
Posts: 82
Liked: 1 time
Joined: Jan 29, 2016 8:31 pm
Full Name: Rnt Guy
Contact:

Re: Full merge failing on Data Domain 2500

Post by rntguy »

Thanks.