AlainRussell
Enthusiast
Posts: 49
Liked: 1 time
Joined: Aug 27, 2011 12:04 am
Full Name: Alain Russell

WAN Accelerator Failures (00438145)

Post by AlainRussell »

Hi - I'm using a backup copy job to get backups offsite overnight. When this was originally set up I used WAN accelerators and had numerous failures (case 00438145). In the end I gave up trying to use the accelerators and just ran the backup copy directly, and everything seemed to be OK for a week or more.

Yesterday I re-added the source/target WAN accelerators to the backup copy job and saw the same failures we were originally seeing: "Failed to decompress LZ4 block: Bad cdc ...". I've since removed the WAN accelerators from this copy job again.
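For context on what that error class means: an LZ4 block fails to decompress whenever its bytes no longer match what the compressor wrote, so corruption on disk or in transit surfaces exactly this way. A minimal sketch using the python-lz4 package (purely illustrative - not Veeam's actual code path):

```python
import lz4.block  # pip install lz4

# Compress a sample payload, then flip one byte to simulate
# corruption on disk or in transit.
original = b"backup block payload " * 256
blob = bytearray(lz4.block.compress(original))
blob[len(blob) // 2] ^= 0xFF

try:
    restored = lz4.block.decompress(bytes(blob))
    # Corruption can also slip through decompression and only be
    # caught by a higher-level checksum, so verify the payload too.
    print("decompressed, payload intact:", restored == original)
except lz4.block.LZ4BlockError as exc:
    print("decompression failed:", exc)
```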

Today the backup copy has started and I'm now seeing more errors (on the same VMs that had the error above) - this time "All instances of the storage metadata are corrupted. Exception from server".

Is there a way to remove the incremental backups that caused errors in this chain? Unfortunately, each time I've contacted support the answer has been "restart the backup and see if it works", which isn't really an option for offsite backups of > 1 TB at a time. Also, why would a single day of failures corrupt a full backup chain? With GFS rotation this isn't ideal.

Thanks
Alain
veremin
Product Manager
Posts: 20284
Liked: 2258 times
Joined: Oct 26, 2012 3:28 pm
Full Name: Vladimir Eremin

Re: WAN Accelerator Failures (00438145)

Post by veremin »

"Failed to decompress LZ4 block: Bad cdc ..."
Could it be related to problems with the underlying storage? You might want to temporarily send data to a different device and see whether the issue shows up again. Also, what about the affected restore points - did you try restoring from them to see whether they are corrupted?
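One low-tech way to run that comparison yourself (a sketch, not a Veeam feature - the repository paths are hypothetical) is to copy a suspect backup file to the second device and compare checksums:

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    # Stream the file so multi-GB backup files never sit in memory.
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Hypothetical paths - substitute your own repositories.
primary = Path("/mnt/repo-a/job/vm-disk.vib")
secondary = Path("/mnt/repo-b/job/vm-disk.vib")

if sha256_of(primary) == sha256_of(secondary):
    print("Copies match - data was likely bad before it landed on disk.")
else:
    print("Copies differ - suspect the storage or the transfer path.")
```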
Unfortunately, each time I've contacted support the answer has been "restart the backup and see if it works", which isn't really an option for offsite backups of > 1 TB at a time.

In fact, after the initial synchronization a backup copy job is always incremental, so it won't try to push the full backup again.
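A back-of-the-envelope illustration of what that means for transfer sizes (the numbers below are assumptions, not measurements from any real environment):

```python
seed_gb = 1024        # initial full backup, shipped across the WAN once
daily_change = 0.05   # assumed: 5% of blocks change per day
wan_savings = 0.5     # assumed: WAN acceleration roughly halves the delta

per_sync_gb = seed_gb * daily_change * wan_savings
print(f"typical sync interval: ~{per_sync_gb:.0f} GB, not {seed_gb} GB")
```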
Also, why would a single day of failures corrupt a full backup chain? With GFS rotation this isn't ideal.
Actually, if you encounter corruption in the middle of a backup chain, the whole chain isn't affected - only the restore points from the "corrupted" one onward are.
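A simplified model of why (this is generic forward-incremental logic, not Veeam's actual metadata handling):

```python
# Index 0 is the full backup; each later entry is an increment
# that depends on everything before it.
chain = ["full.vbk", "mon.vib", "tue.vib", "wed.vib", "thu.vib"]
corrupted = {"wed.vib"}  # e.g. the increment from the failed session

def restorable(chain, corrupted):
    good = []
    for point in chain:
        if point in corrupted:
            break  # this point and all later ones lose their baseline
        good.append(point)
    return good

print(restorable(chain, corrupted))  # ['full.vbk', 'mon.vib', 'tue.vib']
```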

Thanks.
Gostev
Chief Product Officer
Posts: 31533
Liked: 6703 times
Joined: Jan 01, 2006 1:01 am
Location: Baar, Switzerland

Re: WAN Accelerator Failures (00438145)

Post by Gostev »

I don't want to blame your hardware, as that is the easiest thing to do, but I can confirm that the issues you are experiencing are pretty unique and not reported by other users. Even internally, we have been running backup copies with WAN acceleration across the Atlantic for a while now and have not seen similar issues.

Failures when transferring incremental backup data do not break the full backup chain.

Maybe a good next step would be escalating your support case to a higher support tier for a deeper investigation.
AlainRussell
Enthusiast
Posts: 49
Liked: 1 time
Joined: Aug 27, 2011 12:04 am
Full Name: Alain Russell

Re: WAN Accelerator Failures (00438145)

Post by AlainRussell »

I'm not ready to blame the storage yet either. I tried restores from a few of the backup points today and they all failed, so I've deleted them all from disk and am currently running a new full backup - I'll update this thread once it completes and has run for a few days. Thanks.
veremin
Product Manager
Posts: 20284
Liked: 2258 times
Joined: Oct 26, 2012 3:28 pm
Full Name: Vladimir Eremin

Re: WAN Accelerator Failures (00438145)

Post by veremin »

As mentioned, in order to completely exclude the storage device as a potential cause, it might be worth sending data to a different appliance and seeing whether the problem is reproducible. Thanks.