We have finally upgraded to v12, and I have now started the first backup chain upgrades. The first small backup job yesterday, with 5 VMs (850 GB), took 40 minutes. Today I started the upgrade of another small job with 19 VMs (8 TB); it has been running for 1.5 hours and has processed only 2 VMs so far. The capacity tier part seems to take a long time. No offloads are currently running, and during offloads I usually see good performance, with a peak throughput of ~500 MByte/s.
18.08.2023 09:22:00 In progress [xxxx] Upgrading backup chain parts located on Capacity Tier and Archive Tier extents... 1:12:48
We have other jobs with 80-120 VMs and 70 TB, and the copy chains of each job have to be upgraded as well. At the moment this looks like a cleanup issue in the capacity tier, and we will need to tune some parameters.
Thanks for the case number.
Which build did you have installed? In the case I can see that you provided build 11.0.1.1261 P20230227.
We had a known issue with long chain upgrades when a capacity tier is involved, but it is fixed in P20230412 or later ([V12] Top issues tracker - Issue 3).
Since Wednesday we have been on the latest v12 release. It seems that removing some offload limits from the registry and from the Linux agent settings solved this issue. We had previously slowed down offloads because we were getting AWS S3 "slow down" messages, and I was unable to delete some orphaned backups in the capacity tier. It looks like I will have to create a little script that changes the registry keys and agent settings depending on the use case.
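Just to illustrate the idea, here is a minimal sketch of what such a script could look like on the backup server, in Python via winreg. The registry path is the standard Veeam Backup & Replication key, but the value name and numbers are only placeholders for whatever limits support recommends, not official settings; the Linux agent settings would still need a separate step (e.g. pushing a config file over SSH).

```python
# Sketch: switch between a "throttled" and an "unthrottled" offload profile
# by writing DWORD values under the Veeam Backup & Replication registry key.
# The value name and numbers below are placeholders, NOT official settings.
import sys
import winreg

VEEAM_KEY = r"SOFTWARE\Veeam\Veeam Backup and Replication"

PROFILES = {
    "throttled":   {"OffloadTaskLimitPlaceholder": 4},    # placeholder value name
    "unthrottled": {"OffloadTaskLimitPlaceholder": 64},   # placeholder value name
}

def apply_profile(name: str) -> None:
    """Write the DWORD values of the chosen profile under the Veeam registry key."""
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, VEEAM_KEY, 0,
                        winreg.KEY_SET_VALUE) as key:
        for value_name, value in PROFILES[name].items():
            winreg.SetValueEx(key, value_name, 0, winreg.REG_DWORD, value)
            print(f"set {value_name} = {value}")

if __name__ == "__main__":
    apply_profile(sys.argv[1] if len(sys.argv) > 1 else "throttled")
```

The idea would be to run the "throttled" profile for normal offloads to avoid the S3 "slow down" responses, and switch to "unthrottled" before cleanup runs or chain upgrades.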
For us, the v12 chain upgrade takes a little over one minute per object. That is very bad, as we have ~1500 objects per backup copy job; at roughly 1500 minutes that is about 25 hours, so if we do this we have no copies for over a day.
I didn't have problems with the upgrade time anymore, but scheduling is broken afterwards.
As per the known issue: if a chain upgrade was terminated incorrectly, it may leave a partially upgraded backup set in the database table [backup.model.backups].
So double check whether your jobs really start as expected if you don't have any other checks in place. Not all schedules were broken, but some jobs just didn't start. Nice, three known issues one week after the update, and this is already the third patch for v12.
Interestingly, it now runs much faster, but we had to increase the SQL timeout significantly to get it to work correctly at all for some backups...
V12 is really not much fun
We've checked with the support folks and the investigation is in progress. Please stay tuned for updates from the support team!
Sorry to hear that you've faced problems after the upgrade, Markus! The issues you've faced have different root causes and are now being investigated by the support team and RnD folks. Let's await the investigation results!