In a SureBackup job you can set the maximum time allowed for the guest OS to boot.
Every now and again I have issues with this setting in two scenarios, and I wonder if you would consider a feature enhancement to adjust it live, within a running SureBackup session.
Let me explain.
I have my maximum time set to a decent length.
I am booting Linux guest VMs in the SureBackup job.
The SureBackup jobs succeed for a long period, then start to fail, so I investigate the failure.
I start the failed SureBackup job from its session detail in Troubleshoot mode to diagnose it. However, it still goes to a Failed state, because the maximum boot time is still enforced in Troubleshoot mode.
Opening the console of the VM, I see that it is performing an fsck on each filesystem, as it has been over 180 days since the last check - Ah!
On Linux, filesystems are checked at mount time if they have not had an fsck in the last 180 days (the default interval). BTW, I am running SUSE Linux Enterprise 11.
Once that interval has passed, the total time to complete the fsck on every filesystem can go way over the maximum time set in Veeam, and the SureBackup job closes in a Failed status. Up to 180 days the SureBackup job was a success.
fsck is of course a good way to validate that your filesystems are healthy, so we do not want to remove this check.
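For anyone hitting the same thing, here is a minimal sketch of how you can inspect the interval that triggers these boot-time checks with tune2fs. It assumes ext3 (the SLES 11 default) and e2fsprogs; /tmp/demo.img is just a throwaway image for illustration - on the real guest you would point tune2fs at the actual device node instead:

```shell
# Throwaway 8 MB ext3 image so we can demonstrate without touching a real disk
dd if=/dev/zero of=/tmp/demo.img bs=1M count=8 status=none
mkfs.ext3 -F -q /tmp/demo.img

# Show the thresholds that trigger a boot-time fsck
tune2fs -l /tmp/demo.img | grep -Ei 'mount count|check'

# Keep the check but pin the interval explicitly to 180 days
# (you could also stagger intervals per filesystem so they never
# all come due in the same SureBackup boot)
tune2fs -i 180d /tmp/demo.img
```

This does not remove the check - it just makes the schedule visible and adjustable, which at least tells you in advance when a SureBackup boot is going to run long.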
I would have expected the maximum boot time to be ignored in Troubleshoot mode. You are troubleshooting - right?
There is a similar scenario with Linux re-enumerating NICs when the VM boots in the SureBackup job. Yes, there are fixes in the OS, but sometimes, to prove or fix the boot of the SureBackup VM, we need extra time, and it would be nice to extend these maximum times on the fly - without having to edit the SureBackup job(s).
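For reference, the in-OS fix I am alluding to on SLES 11 is usually the udev persistent-net rules: the restored VM boots with a new MAC in the virtual lab, so the cached rules push the NIC to eth1, eth2, and so on. A rough sketch (run inside the guest console, assuming the stock udev rules path):

```shell
RULES=/etc/udev/rules.d/70-persistent-net.rules

# Inspect the cached MAC-to-name mappings; the original VM's MACs
# usually still sit here, shifting the lab NIC to a higher ethN
cat "$RULES"

# Truncate the file so udev regenerates it on the next boot and the
# lab NIC's MAC gets mapped back to eth0
: > "$RULES"
```

But proving that this is the cause, and verifying the fix, takes a reboot or two inside the lab VM - which is exactly where the fixed maximum time runs out.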
So in Troubleshoot mode, maybe SureBackup could offer a right-click option in the log area where messages are displayed that simply says "Extend wait time again".
This would extend the waiting time by the same configured value - but do it live.
Thanks for reading.