Hi, I need to get a good understanding of parallel processing, concurrent-task interactions, and the available levers to speed up backups.
We have 65 jobs backing up about 700 VMs per night.
I have observed that this high number of jobs increases the probability of scheduling collisions, so I often see jobs waiting for hours for available resources before they can perform backups.
I have two large proxies that stay below 25% CPU/RAM usage during backups. Concurrent tasks are set to 10 per proxy, so I guess I could raise the maximum number of concurrent tasks they support. On the other hand, I have a number of repositories, each set to 5 concurrent tasks.
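To build intuition on where hours-long waits can come from, here is a purely illustrative Python sketch, not Veeam's actual scheduler: it assumes each VM task needs a free proxy task slot and a free repository task slot at the same time, and the repository count, per-VM durations, and VM-to-repository mapping are all made-up assumptions for the simulation.

```python
import heapq
import random

# Illustrative model only (NOT Veeam's real allocation logic):
# a VM task starts only when a proxy slot and its repo's slot are both free.
PROXY_SLOTS = 2 * 10          # two proxies, 10 concurrent tasks each (from the post)
REPO_SLOTS = [5] * 8          # eight repositories at 5 tasks each (assumed count)
N_VMS = 700
random.seed(1)

# (duration_minutes, repo_index) per VM; durations are invented for the sketch
tasks = [(random.randint(5, 40), i % len(REPO_SLOTS)) for i in range(N_VMS)]

def simulate(proxy_slots, repo_slots, tasks):
    """Greedy slot scheduler; returns (total window minutes, worst queue wait)."""
    repo_free = list(repo_slots)
    proxy_free = proxy_slots
    running = []              # min-heap of (finish_time, repo_index)
    queue = list(tasks)       # all VMs are ready at t = 0
    t, max_wait = 0.0, 0.0
    while queue or running:
        # start every queued task whose proxy and repository slots are both free
        started = True
        while started:
            started = False
            for i, (dur, repo) in enumerate(queue):
                if proxy_free > 0 and repo_free[repo] > 0:
                    proxy_free -= 1
                    repo_free[repo] -= 1
                    heapq.heappush(running, (t + dur, repo))
                    max_wait = max(max_wait, t)   # waited from t = 0 until now
                    queue.pop(i)
                    started = True
                    break
        # advance time to the next task completion and release its slots
        if running:
            t, repo = heapq.heappop(running)
            proxy_free += 1
            repo_free[repo] += 1
    return t, max_wait

makespan, max_wait = simulate(PROXY_SLOTS, REPO_SLOTS, tasks)
print(f"backup window: {makespan:.0f} min, worst queue wait: {max_wait:.0f} min")
```

Playing with the constants shows the point of the question: with 40 total repository slots but only 20 proxy slots, the proxies are the binding limit in this toy model, and raising `PROXY_SLOTS` shortens both the window and the worst wait.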
Is there a way to monitor simultaneous sessions during the backup window, to determine which resources are busy enough to make my jobs wait for hours? Are there other settings that affect parallel processing and concurrent tasks?
I don't understand how resources are allocated: yesterday I started a job manually during the backup window, and it waited more than 4 hours before actually starting to back up. Meanwhile, other jobs scheduled to start later both started and finished before the job I had started manually.
Is there a way to prioritize one job over the others, or are we entirely dependent on automatic resource allocation?
One last question: I found that inside a single job I can move VMs up and down in the processing list, which I assume controls the order in which VMs are processed within that job. But what if I use VM folders as backup objects? Can I still define a specific VM processing order inside my job? I didn't get that part.
Thanks in advance for the answers, and I hope this will help others; I'm probably not the only one asking these questions.