Host-based backup of VMware vSphere VMs.
pufferdude
Expert
Posts: 223
Liked: 15 times
Joined: Jul 02, 2009 8:26 pm
Full Name: Jim
Contact:

Changing optimization breaks replication?

Post by pufferdude »

I've noticed something odd with v6... If I have a perfectly-functioning replication job but then change the optimization (say, from "local target" to "LAN target"), the next time the job runs it throws this error:

"Error: Client error: Cannot get the value of a digests array element, because element index is out of array's range. Element index: [40960]. Array size: [40960]. Failed to process [saveDiskDigestsCli]."

Switching back to the original optimization then allows the job to run normally and without error. Is it the intention that one should be able to change the optimization type AFTER a job has been created and run once? If so, then there seems to be a bug. However, if the intention is to NOT allow the optimization to be changed once a job has run, then maybe the UI should be modified to not allow this?
Gostev
Chief Product Officer
Posts: 31816
Liked: 7302 times
Joined: Jan 01, 2006 1:01 am
Location: Baar, Switzerland
Contact:

Re: Changing optimization breaks replication?

Post by Gostev »

Previously, you could change this setting, but it was ignored for existing replicas. It looks like the bug here is that it is no longer ignored: the job tries to apply the new setting and, obviously, fails. Thanks for letting us know, I will ask our QC to reproduce.
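[Editor's note: a back-of-the-envelope sketch, not Veeam's actual code, of why the digests index overflows. It assumes the commonly documented v6 block sizes of roughly 1 MB for "Local target" and 512 KB for "LAN target"; with those assumptions, a 40 GB disk produces exactly the [40960]/[40960] figures in the error above.]

```python
# Illustration only: the stored digests array is sized for the OLD block
# size, while after the change the job computes indices for the NEW one.
KB = 1024
MB = 1024 * KB
GB = 1024 * MB

def digest_count(disk_bytes, block_bytes):
    """Number of per-block digest entries for a disk (ceiling division)."""
    return (disk_bytes + block_bytes - 1) // block_bytes

disk = 40 * GB
old_entries = digest_count(disk, 1 * MB)    # array built at "Local target"
new_indices = digest_count(disk, 512 * KB)  # indices used at "LAN target"

print(old_entries)  # 40960 entries in the stored digests array
print(new_indices)  # 81920 indices the job now tries to read
# The first access past the old range is element index 40960 into an
# array of size 40960 -- matching the error reported in this thread.
```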

If you'd like to leverage the new block size, try creating a new replication job with these settings, and use the replica mapping functionality to map to the existing replicas to avoid a full resync.
jeffpatrick50
Lurker
Posts: 1
Liked: never
Joined: Aug 26, 2011 9:27 am
Full Name: Jeff Patrick
Contact:

Re: Changing optimization breaks replication?

Post by jeffpatrick50 »

Hi

I am receiving this error, but I don't think I have changed the replication mode. I have a call open with support, ref: 5162815. I've struggled the last few times to get through to support on the phone.

Is there anything else that can give this error?

Jeff
MattR
Influencer
Posts: 10
Liked: never
Joined: May 23, 2011 1:33 pm
Full Name: Matt
Contact:

Re: Changing optimization breaks replication?

Post by MattR »

I have a job that is also failing after changing the optimization. I will post back if it completes with better results. Then I will follow the suggestion to create a new job and map the replicas.

Thanks,
MattR
Influencer
Posts: 10
Liked: never
Joined: May 23, 2011 1:33 pm
Full Name: Matt
Contact:

Re: Changing optimization breaks replication?

Post by MattR »

Changing back the target optimization has allowed the job to complete successfully.
Vitaliy S.
VP, Product Management
Posts: 27377
Liked: 2800 times
Joined: Mar 30, 2009 9:13 am
Full Name: Vitaliy Safarov
Contact:

Re: Changing optimization breaks replication?

Post by Vitaliy S. »

Jeff, have you had a chance to work on this issue with our support engineers?

As Matt has correctly noted, to resolve this issue you should either change the block optimization back to its original value or recreate your replication jobs from scratch.
tsightler
VP, Product Management
Posts: 6035
Liked: 2860 times
Joined: Jun 05, 2009 12:57 pm
Full Name: Tom Sightler
Contact:

Re: Changing optimization breaks replication?

Post by tsightler »

You could also just remove the metadata from the repository, which will force it to be rebuilt with the new block size.
crichardson
Enthusiast
Posts: 39
Liked: never
Joined: Dec 09, 2010 1:25 pm
Full Name: Corey
Contact:

Re: Changing optimization breaks replication?

Post by crichardson »

tsightler wrote:You could also just remove the metadata from the repository which will force it to be rebuilt with the new block size.
How would one go about doing this? We have a 100 Mbps pipe between two offices approx. 20 km apart. I have one job which is optimized for LAN, and another for WAN. It seems (so far) that the WAN job is quicker. Of course these jobs have different VMs, which could be contributing to the time replication takes - but I'm still curious. So I tried changing the settings from LAN to WAN and ended up getting the same error.

Error: Client error: Cannot get the value of a digests array element, because element index is out of array's range. Element index: [102400]. Array size: [102400]. Failed to process [saveDiskDigestsCli].

I could use replica mapping, but I'm just curious how I can simply purge the metadata for future reference. And does purging the metadata take longer than replica mapping, or is it fairly quick to recreate?

Thanks,
Corey
tsightler
VP, Product Management
Posts: 6035
Liked: 2860 times
Joined: Jun 05, 2009 12:57 pm
Full Name: Tom Sightler
Contact:

Re: Changing optimization breaks replication?

Post by tsightler »

crichardson wrote: I could use replica mapping, but I'm just curious how I can simply purge the metadata for future reference. And does purging the metadata take longer than replica mapping or is it fairly quick to recreate?
When you set up replication you must choose a repository for metadata. If you look in this repository you will see a folder called "Replica" and then, within that, a folder for each VM that is being replicated (identified by VM-id). If you delete the VM-id folder, the metadata will be rebuilt on the next run.

BTW, do you have the latest patch? I thought this was addressed so that the metadata rebuild would happen automatically when the optimization option was changed.

You won't see any major difference between this and replica mapping, as it's basically the exact same process: scan the target disk to build a new digests file, then scan the source and send any changes; from that point it replicates as normal.
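[Editor's note: a minimal sketch of the manual cleanup described above — deleting a replica's per-VM metadata folder so it is rebuilt on the next run. The repository path and VM id are placeholders, not values from this thread; always verify the folder before deleting anything in a production repository.]

```python
import os
import shutil

def purge_replica_metadata(repo_root, vm_id):
    """Delete <repo_root>/Replica/<vm_id> if present.

    Returns True if the folder existed and was removed, False otherwise.
    """
    target = os.path.join(repo_root, "Replica", vm_id)
    if os.path.isdir(target):
        shutil.rmtree(target)  # recursively remove the per-VM metadata
        return True
    return False

# Example with placeholder values:
# purge_replica_metadata(r"D:\Backups\ReplicaRepo", "vm-1234")
```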
pcrebe
Enthusiast
Posts: 94
Liked: 1 time
Joined: Dec 06, 2010 10:41 pm
Full Name: CARLO
Contact:

Re: Changing optimization breaks replication?

Post by pcrebe »

tsightler wrote:
BTW, do you have the latest patch? I thought this was addressed so that the metadata rebuild would happen automatically when the optimization option was changed.
I have the latest patch but it still doesn't run. I deleted the Replica/VM-id folder and got a slow but successful job.

Thanks,
Carlo
moggyman
Influencer
Posts: 11
Liked: 1 time
Joined: May 11, 2012 10:57 am
Contact:

Weird error on replication job

Post by moggyman »

[merged]

I'm trialling B&R for a DR solution. I have a job replicating 2 Windows 2003 Server VMs to another ESXi server. The plan is to put that ESXi host offsite and push the replication data across the WAN.

I've been disappointed with the amount of data the job seems to think needs replicating, so I've been looking into ways of reducing it. I noticed that the job was set to optimise itself for a LAN, which is fair enough as it IS on the LAN at the moment. I thought I'd check out the alleged dedupe improvements etc. offered by the Optimise For WAN setting - even though the host isn't actually on the WAN yet.

The job seems to truck along nicely, but after a while I receive a very geeky error message and the job fails. The message doesn't really give the user much of a chance to find out what the issue actually is. I've pasted the error in below - can anyone tell me in plain English what B&R is moaning about? Thanks

25/05/2012 12:01:42 :: Error: Client error: Cannot get the value of a digests array element, because element index is out of array's range. Element index: [69438]. Array size: [69438].
Failed to process [saveDiskDigestsCli].
DanQcCa

Re: Changing optimization breaks replication?

Post by DanQcCa »

Problem solved with version 6.1.0.184 (64-bit), released 4 June 2012.
I had version 6.0.0.181 patch 3 with the same error message as you:
Error: Client error: Cannot get the value of a digests array element, because element index is out of array's range. Element index: [xxxxx]. Array size: [xxxxx].
Failed to process [saveDiskDigestsCli].

I just installed the new version, and I can now change the storage optimization from Local Target to LAN Target; my replication job completes without error.

A strange thing: on the download web page they say 6.1.0.181, but once installed it becomes 6.1.0.184! It works well, so...
Gostev
Chief Product Officer
Posts: 31816
Liked: 7302 times
Joined: Jan 01, 2006 1:01 am
Location: Baar, Switzerland
Contact:

Re: Changing optimization breaks replication?

Post by Gostev »

DanQcCa wrote:A strange thing, on the download web page they said 6.1.0.181 and once install it become 6.1.0.184! It work well so...
Strange! Where exactly do you see 6.1.0.184?
dellock6
VeeaMVP
Posts: 6166
Liked: 1971 times
Joined: Jul 26, 2009 3:39 pm
Full Name: Luca Dell'Oca
Location: Varese, Italy
Contact:

Re: Changing optimization breaks replication?

Post by dellock6 »

I got 6.0.0.184 when I received the OID fix from support, but after upgrading it became 6.1.0.181 anyway...
Luca Dell'Oca
Principal EMEA Cloud Architect @ Veeam Software

@dellock6
https://www.virtualtothecore.com/
vExpert 2011 -> 2022
Veeam VMCE #1
deduplicat3d
Expert
Posts: 119
Liked: 12 times
Joined: Nov 04, 2011 8:21 pm
Full Name: Corey
Contact:

Re: Changing optimization breaks replication?

Post by deduplicat3d »

I'm using v6.1, and I switched the mode from LAN to WAN. After I made the switch everything worked without error. Since I did not delete the replication metadata, does that mean I'm still using LAN and it's just ignoring the setting for existing VMs? Is there any way to verify which mode I'm using? The VMs seem to be replicating faster, but I'm not sure if it's just coincidence.
Vitaliy S.
VP, Product Management
Posts: 27377
Liked: 2800 times
Joined: Mar 30, 2009 9:13 am
Full Name: Vitaliy Safarov
Contact:

Re: Changing optimization breaks replication?

Post by Vitaliy S. »

Hi Corey,

In order to apply the new optimization settings you need to re-run your replication job from scratch (perform a new full). As for backup jobs, the new configuration will be applied automatically after a full job run.

Thanks!
tsightler
VP, Product Management
Posts: 6035
Liked: 2860 times
Joined: Jun 05, 2009 12:57 pm
Full Name: Tom Sightler
Contact:

Re: Changing optimization breaks replication?

Post by tsightler » 1 person likes this post

If you want to be "advanced" you can just remove the replica metadata from the repository. The metadata will be recreated on the next run using the new settings and you won't lose your existing replicas, but the next run will take extra time.
