-
- Service Provider
- Posts: 398
- Liked: 57 times
- Joined: Apr 29, 2022 2:41 pm
- Full Name: Tim
- Contact:
Feature Request - Retry After Restart
Support has informed me that if a job is interrupted by a computer restart, it will not retry the failed job. I would like to request that the job retry after a restart that occurs while it is running.
-
- Product Manager
- Posts: 14322
- Liked: 2890 times
- Joined: Sep 01, 2014 11:46 am
- Full Name: Hannes Kasparick
- Location: Austria
- Contact:
Re: Feature Request - Retry After Restart
Hello,
Ah, it sounds like reboots are now confirmed as the reason for many of your other requests.
Question: do you use the backup cache?
Best regards,
Hannes
-
- Service Provider
- Posts: 398
- Liked: 57 times
- Joined: Apr 29, 2022 2:41 pm
- Full Name: Tim
- Contact:
Re: Feature Request - Retry After Restart
I am under the impression that reboots are the cause of a lot of the failures that aren't retrying now. I won't say I'm convinced it's 100% of them, given the number of different error messages I see, but perhaps a restart can produce that many different error messages depending on how clean the shutdown is and what part of the process the job is in when the shutdown occurs.
We do use the backup cache feature. However, the particular computer I was asking support about was disconnected from the internet but powered on long enough that the backup cache filled up, and now it's not online long enough to upload the contents of the cache.
My understanding is that when a normal incremental backup is performed, if only part of the new data (the "increment") can be transferred, the transferred portion is deleted when the next job starts, so that only completed increments are retained. I assume the same applies to the backup cache sync: if it can't transfer an entire increment out of the cache, it will never finish; it just keeps starting over until it eventually stays connected long enough to transfer the entire increment in one session.
Is that accurate?
Assuming that is accurate, I'm thinking the best configuration for this particular computer is actually to disable the cache, since cache syncing blocks new jobs from going directly to the repository until the syncing finishes. That way, when the computer is online long enough (which is rare), it will actually try backing up new data instead of being stuck attempting to sync weeks-old data that it never finishes.
Any thoughts on that, or is there a better option for that scenario? The customer is aware the computer doesn't back up often because it isn't online very often, but if possible I'd still like to improve on the current "once every few weeks" backup situation, hopefully without just reducing what is being backed up from the computer. My experience with other software (such as Acronis) is that partially transferred backups are still valid backups, as though only the files that were actually transferred had changed, so the next run doesn't need to start all over again. That allows very intermittently connected computers to eventually back up everything, provided files aren't changing faster than it can catch up; it just means not everything is necessarily backed up all at once.
-
- Product Manager
- Posts: 14322
- Liked: 2890 times
- Joined: Sep 01, 2014 11:46 am
- Full Name: Hannes Kasparick
- Location: Austria
- Contact:
Re: Feature Request - Retry After Restart
The reason I asked: the backup cache is only used when the connection fails completely.
I have limited internet bandwidth at home, and my backup to cloud object storage takes many hours. In that scenario, the software does not use the backup cache at all. So my first suggestion would be to change the product so that it always writes to the backup cache first and then uploads from there. A side effect of that should be that a reboot is no longer a problem.
-
- Service Provider
- Posts: 398
- Liked: 57 times
- Joined: Apr 29, 2022 2:41 pm
- Full Name: Tim
- Contact:
Re: Feature Request - Retry After Restart
That is the current behavior; however, the computer doesn't often stay online long enough to upload a complete restore point from the cache in one session. It starts the restore point over from the beginning each time and doesn't finish before going offline again, so it's not getting anywhere.
-
- Product Manager
- Posts: 14322
- Liked: 2890 times
- Joined: Sep 01, 2014 11:46 am
- Full Name: Hannes Kasparick
- Location: Austria
- Contact:
Re: Feature Request - Retry After Restart
Yes... we have that improvement on our list, but no time estimate yet.
-
- Service Provider
- Posts: 398
- Liked: 57 times
- Joined: Apr 29, 2022 2:41 pm
- Full Name: Tim
- Contact:
Re: Feature Request - Retry After Restart
Okay, as an alternative solution, is there any semi-supported or recommended method to automatically initiate a backup a few minutes after the computer boots up and connects to the internet?
My initial thought is a scheduled task that runs after the computer starts up and executes a script that checks internet connectivity every minute or so, then starts the backup job via the appropriate PowerShell command once a connection is established. But that sounds a little complicated, and I'm not fond of having a complicated, non-standard setup on one computer.
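For what it's worth, the watchdog loop described above can be sketched quite compactly. This is a hypothetical Python sketch, not a supported setup: the connectivity probe, the polling interval, and the placeholder backup command are all assumptions (the actual PowerShell command for starting the job is deliberately left unspecified, as in the post). It would be launched from a Task Scheduler "at startup" trigger.

```python
import socket
import subprocess
import time


def internet_up(host="8.8.8.8", port=53, timeout=3):
    """Return True if a TCP connection to a well-known endpoint succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


def wait_for_connectivity(check=internet_up, interval=60, max_checks=None):
    """Poll `check` every `interval` seconds until it returns True.

    Returns the number of checks performed when connectivity appears,
    or None if `max_checks` attempts are exhausted first.
    """
    checks = 0
    while max_checks is None or checks < max_checks:
        checks += 1
        if check():
            return checks
        time.sleep(interval)
    return None


if __name__ == "__main__":
    # Give up after ~2 hours (120 checks, one per minute).
    if wait_for_connectivity(interval=60, max_checks=120) is not None:
        # Placeholder: start the backup job with whatever PowerShell
        # command your agent version provides. Not a real command.
        subprocess.run(["powershell", "-Command", "<start-backup-job-here>"])
```

The `max_checks` cap keeps the task from polling forever on a machine that stays offline all day; the next startup trigger simply tries again.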
-
- Product Manager
- Posts: 14322
- Liked: 2890 times
- Joined: Sep 01, 2014 11:46 am
- Full Name: Hannes Kasparick
- Location: Austria
- Contact:
Re: Feature Request - Retry After Restart
What you describe is also what comes to my mind.
-
- Product Manager
- Posts: 14417
- Liked: 1576 times
- Joined: Feb 04, 2013 2:07 pm
- Full Name: Dmitry Popov
- Location: Prague
- Contact:
Re: Feature Request - Retry After Restart
Hello guys,
I've shared the feedback from this thread with our R&D folks; in the next major version, we will investigate the possibility of retrying the job when the machine is restarted (it feels like this should also address the resume issue caused by machine restarts reported in another thread). Thank you for your feedback!
-
- Service Provider
- Posts: 398
- Liked: 57 times
- Joined: Apr 29, 2022 2:41 pm
- Full Name: Tim
- Contact:
Re: Feature Request - Retry After Restart
That should also address the resume issue, based on our current troubleshooting and current understanding of the problem, anyway.