Anyone else seeing problems offloading to Wasabi US-East-2? It started around 10-14 days ago. Multiple deployments of ours offloading to US-East-2 are hitting timeout errors. Veeam offload jobs sit for hours transferring nothing while still occupying job slots. We see errors such as:
Code:
HTTP exception: Retrieving message chunk header, error code: 110
Exception from server: HTTP exception: Retrieving message chunk header, error code: 110
Checkpoint cleanup failed Details: HTTP exception: Retrieving message chunk header, error code: 110
Could not allocate processing resources within allotted timeout (14400 sec)
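For anyone digging into these, "error code: 110" in the chunk-header messages looks like the POSIX errno ETIMEDOUT (connection timed out), which would fit a stalled transfer rather than a rejected request. A minimal sketch of how you might confirm the mapping and probe the Wasabi endpoint independently of Veeam (the endpoint hostname and timeout value here are my assumptions, not anything Veeam support gave me):

```python
import errno
import socket
import time

def describe_error(code: int) -> str:
    """Map a numeric error code from the Veeam log lines to its errno name."""
    return errno.errorcode.get(code, "UNKNOWN")

def probe(host: str = "s3.us-east-2.wasabisys.com",  # assumed Wasabi endpoint
          port: int = 443,
          timeout: float = 10.0) -> float:
    """Time a raw TCP connect to the object-storage endpoint.

    Raises socket.timeout / OSError if the endpoint is unreachable,
    which is a quick way to separate a network problem from a Veeam one.
    """
    start = time.monotonic()
    with socket.create_connection((host, port), timeout=timeout):
        return time.monotonic() - start

print(describe_error(110))  # ETIMEDOUT on Linux
```

Running the probe in a loop during one of the stalled windows could at least show whether the endpoint itself is accepting connections while the offload job sits idle.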
It's just a mess, with offload jobs piling up and not completing. I have Veeam and Wasabi cases open, but so far neither is really going anywhere. Veeam says they are tracking an increasing number of issues with customers offloading to Wasabi US-East-2. Wasabi tells me to just run new active fulls for everything, which Veeam says not to do, and I don't want to do that either.

At the end of the day, we are left with very long-running Veeam jobs that sit and sit doing nothing, and the whole copy operation grinds to a halt. I have to run the performance extent of the SOBR at twice the concurrent jobs of the Wasabi capacity extent so inbound copy jobs to the SOBR still run instead of waiting for a job slot behind the never-ending Wasabi offloads. Jobs sit at 99% forever, or just stop sending anything at random other percentages of completion. I'll look at an offload job and see 4 VM backups sitting there at various percentages, not moving at all, no traffic sent in hours.

It seems that when Wasabi does have issues (and I do believe this is a Wasabi issue), Veeam doesn't handle the failures well, and the whole thing grinds to a halt.
Veeam Ticket: 05638326
Wasabi Ticket: 73029
Anyone else seeing any issues with Wasabi lately?