mkretzer
Veeam Legend
Posts: 1145
Liked: 388 times
Joined: Dec 17, 2015 7:17 am

Resource allocation / tasks per repo

Post by mkretzer »

Hello,

we are currently in the middle of enabling ReFS and per-VM backup files. For that we have temporary storage, sadly formatted with a 4K block size. We have two repositories on that volume to separate two departments. The physical proxy, which is shared by both departments, has 16 cores and can therefore provide 16 concurrent streams.

Now we have tried two "resource" settings on our repositories, and each brought its own problems:

Limiting tasks per repo to 12:
Good:
- The backups from one department cannot completely stop the backups from the other, which is good because we have specific backup windows in which the backups MUST run
Bad:
- Creation of synthetic fulls takes up ALL 12 tasks. With 4K blocks and the slower storage, "fast" clone still takes 1-2 hours. We have 13 backup jobs, 3 of which need to create synthetics every day; that means that instead of finishing within about 4 hours, all backups stop for the synthetic creation and wait until it is finished (see the sketch after this list). The thing is, "fast clone" does not generate much storage load according to our storage system, so the storage could easily write backups during that time.
- Our backup copy jobs also queue behind the synthetics, and since we only have full bandwidth for Veeam at night (the line is shared), the copy jobs do not finish in time
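To illustrate the queuing effect, here is a rough Python sketch of a 12-slot repository with FIFO task scheduling. The durations and task counts are made up for illustration, not measured values:

# Toy FIFO scheduler for a repository with 12 task slots.
# Durations and counts below are invented for illustration only.
SLOTS = 12

# Per-VM synthetic fast-clone tasks fill all 12 slots first,
# then incremental and copy tasks queue behind them.
tasks = [(f"synthetic-vm{i:02d}", 90) for i in range(1, 13)]
tasks += [(f"incremental-vm{i:02d}", 15) for i in range(1, 21)]
tasks += [(f"copy-{i}", 30) for i in range(1, 6)]

free_at = [0] * SLOTS                     # minute at which each slot frees up
start_of = {}

for name, duration in tasks:              # strict FIFO over free slots
    slot = min(range(SLOTS), key=lambda s: free_at[s])
    start_of[name] = free_at[slot]
    free_at[slot] += duration

print(start_of["incremental-vm01"])       # 90 -> nothing else starts before
                                          # the synthetics release their slots
print(max(free_at))                       # total makespan in minutes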

Not limiting tasks per repo:
Good:
- Backups finish faster even while synthetics are running, because fast clone does not put much load on the storage.
- Copy jobs finish in time
Bad:
- If the backups of one department do not finish in time, for example because retention points are being deleted, the backups of the other department cannot start, because now the proxy connections are the limit (see the sketch after this list)
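Just to spell out where the bottleneck sits in each scenario, here is the simple arithmetic with our numbers from above:

# Where the bottleneck sits in each scenario (simple arithmetic only).
PROXY_SLOTS = 16      # 16-core proxy shared by both departments
REPO_LIMIT = 12       # per-repository task limit in the first scenario

# With the 12-task repo limit: even if department A keeps its repository
# fully busy, department B is still guaranteed some proxy slots.
print(PROXY_SLOTS - REPO_LIMIT)        # 4 slots left for the other department

# Without a repo limit: department A's long-running tasks (e.g. retention
# processing) can occupy every proxy slot, so department B may get nothing.
print(PROXY_SLOTS - PROXY_SLOTS)       # 0 slots left for the other department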

Is there any way to limit the number of tasks "used" by fast clone with per-VM backup chains? I guess the simplest solution would be to disable per-VM again, but my worry is that this would just cause other problems.

Markus
veremin
Product Manager
Posts: 20284
Liked: 2258 times
Joined: Oct 26, 2012 3:28 pm
Full Name: Vladimir Eremin

Re: Resource allocation / tasks per repo

Post by veremin »

What about creating separate folders on the ReFS volume (one per department, or one per department's job), assigning the repository role to each of them, setting the desired task limit on each repository, and pointing the jobs to the newly created repositories according to your needs? Thanks.
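For example, the 16 proxy task slots could be split across the new repositories roughly like this. The paths and weights below are hypothetical placeholders, purely for illustration:

# Illustrative only: one way to split the 16 proxy task slots across
# several smaller repositories (one folder per department or per job).
# Paths and weights are hypothetical placeholders.
PROXY_SLOTS = 16

repos = {
    r"R:\DeptA\Daily":     3,      # weight = relative share of slots
    r"R:\DeptA\Synthetic": 1,
    r"R:\DeptB\Daily":     3,
    r"R:\DeptB\Synthetic": 1,
}

total_weight = sum(repos.values())
for path, weight in repos.items():
    limit = max(1, PROXY_SLOTS * weight // total_weight)   # at least 1 task each
    print(f"{path}: task limit {limit}")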
mkretzer
Veeam Legend
Posts: 1145
Liked: 388 times
Joined: Dec 17, 2015 7:17 am

Re: Resource allocation / tasks per repo

Post by mkretzer »

So, a lot more repositories on that drive? We thought about that, but finding the "sweet spot" between not hitting the proxy limit on one side and not throttling the jobs / putting too much load on the ReFS file operations on the other (yes, I believe ReFS with 4K blocks might be more of a problem here than the actual physical storage) is quite difficult.

Basically, our problem is that we cannot define the limits granularly per type of operation, only per repository/proxy.

One question: right now we start each job 15 minutes after the previous one. This leads to a situation where all resources are available for the synthetic operation. What would happen if we started all the backups at the same time, so that only part of the resources is available when the synthetic operation starts: will the synthetic operation "take" the available resources/threads after it has started, or will it keep using the number of threads that was initially available?
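To make the question concrete, these are the two behaviours I am asking about, as a toy model only (not a claim about how Veeam actually schedules tasks):

# Toy model of the two possible behaviours -- NOT a statement about
# how Veeam actually schedules synthetic operations.
def synthetic_slots(total_slots, busy_at_start, policy):
    free_at_start = total_slots - busy_at_start
    if policy == "fixed-at-start":
        # behaviour A: the synthetic keeps only what was free when it started
        return free_at_start
    if policy == "grabs-freed-slots":
        # behaviour B: the synthetic also takes slots as other tasks finish
        return total_slots
    raise ValueError(policy)

print(synthetic_slots(16, 12, "fixed-at-start"))     # 4
print(synthetic_slots(16, 12, "grabs-freed-slots"))  # 16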