backupquestions
Expert
Posts: 186
Liked: 21 times
Joined: Mar 13, 2019 2:30 pm
Full Name: Alabaster McJenkins
Contact:

Per vm files question

Post by backupquestions »

I tried searching, because I know someone ran into an issue like this before, but I can't find the thread.

Suppose my physical Veeam server with 16 cores is set up for 16 "tasks", and I create a backup job with 100 VMs using per-VM backup files.

When it's merge time, or in my case the fast clone synthetic full on the weekend, up to 16 VMs will be fast cloning at a time, which fills up all the task slots on the server. That could mean my backup copy jobs, and whatever other tasks I have going on, won't start for quite a while, right?

Whereas if you don't use per-VM files, the entire job takes up only one task.

Is there a way around this? Maybe configure a task limit of 4 or 6 per repository, so that jobs using different repositories can all run at the same time?
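To make the trade-off concrete, here is a toy Python sketch of slot-based scheduling. It is purely illustrative: the one-task-slot-per-merge model and the 10-minute merge time are assumptions, not Veeam's actual scheduler.

Code: Select all

import heapq

def total_minutes(num_vms, merge_minutes, slots):
    """Wall-clock minutes to merge num_vms per-VM chains when at most
    `slots` merges run concurrently (one task slot per merge)."""
    free_at = [0.0] * slots              # next-free time of each task slot
    heapq.heapify(free_at)
    end = 0.0
    for _ in range(num_vms):
        start = heapq.heappop(free_at)   # earliest available slot
        end = start + merge_minutes
        heapq.heappush(free_at, end)
    return end

# 100 VMs at an assumed ~10 minutes each, with all 16 slots available:
print(total_minutes(100, 10, 16))  # 70.0  -> all 16 slots busy for ~70 min
# Capping the repository at 4 tasks frees 12 slots for other jobs:
print(total_minutes(100, 10, 4))   # 250.0 -> slower, but only 4 slots used

So a per-repository task limit does exactly what the question suggests: it trades a longer merge window for free task slots elsewhere.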
HannesK
Product Manager
Posts: 14287
Liked: 2877 times
Joined: Sep 01, 2014 11:46 am
Full Name: Hannes Kasparick
Location: Austria
Contact:

Re: Per vm files question

Post by HannesK »

Hello,
How long does the synthetic full take? For 100 VMs it should not take hours, right (depending on the amount of data and chain length)?

What makes you believe it is a good idea to run BCJs and merges at the same time? They just add more I/O to your server, and it seems it is already under high load.

Options:
1) Did you try allowing more repository tasks to make the merges faster? A little overbooking of cores is usually okay.
2) Don't use synthetic fulls. What is your reason for using them? There are some use cases with tape, but tape is not mentioned in your post.

Best regards,
Hannes
backupquestions
Expert
Posts: 186
Liked: 21 times
Joined: Mar 13, 2019 2:30 pm
Full Name: Alabaster McJenkins
Contact:

Re: Per vm files question

Post by backupquestions »

I thought that with ReFS it would be good to do synthetic fulls on the weekend, to benefit from fast clone and spaceless fulls. Isn't that what a lot of customers are doing? Are you thinking forever incremental would be better?
HannesK
Product Manager
Posts: 14287
Liked: 2877 times
Joined: Sep 01, 2014 11:46 am
Full Name: Hannes Kasparick
Location: Austria
Contact:

Re: Per vm files question

Post by HannesK » 1 person likes this post

Fast clone "links" to the same blocks. If a block breaks (physically), then every link to that block is broken as well (the same is true of all deduplication technologies). That's one of the reasons we push the 3-2-1 rule.
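A toy Python model of that failure mode (purely illustrative, not ReFS internals): several synthetic fulls fast-clone the same physical block, so one damaged block breaks all of them at once.

Code: Select all

# Purely illustrative model of shared-block corruption (not ReFS internals).
physical = {1: "data-A", 2: "data-B", 3: "data-C"}   # physical blocks on disk

# Three weekly synthetic fulls; each "clones" block 2 instead of copying it:
fulls = {
    "full_week1.vbk": [1, 2],
    "full_week2.vbk": [2, 3],
    "full_week3.vbk": [2],
}

physical[2] = None   # simulate physical damage to block 2

damaged = [name for name, blocks in fulls.items()
           if any(physical[b] is None for b in blocks)]
print(damaged)       # all three fulls are affected -> hence the 3-2-1 rule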

Yes, many people do synthetic fulls. Probably because they have always done it that way. Maybe because fulls were great in the dark ages of direct-to-tape backup. Or just to feel better :-)

Forever incremental is not better or worse (well, one could argue that fewer metadata operations might be better for ReFS). It seems you want your BCJs to start earlier, and forever incremental is simply one option for achieving that.
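For comparison, a minimal sketch of how a forever-forward-incremental chain behaves (file names and the retention count are made up): each night one increment is added, and once the chain exceeds retention the oldest increment is merged into the full, which on ReFS is again a fast clone operation.

Code: Select all

# Hypothetical file names; retention is counted in restore points.
chain = ["full.vbk"] + [f"inc_{d:02d}.vib" for d in range(1, 14)]  # 14 points

def run_nightly(chain, retention=14):
    chain.append(f"inc_{len(chain):02d}.vib")   # tonight's increment
    while len(chain) > retention:
        oldest = chain.pop(1)                   # increment right after the full
        # on ReFS this merge is mostly a metadata (fast clone) operation
        print(f"merging {oldest} into {chain[0]}")
    return chain

run_nightly(chain)   # merges inc_01.vib; the chain length stays constant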
vmJoe
VeeaMVP
Posts: 426
Liked: 103 times
Joined: Aug 02, 2011 1:06 pm
Full Name: Joe Gremillion
Location: Dallas, TX USA
Contact:

Re: Per vm files question

Post by vmJoe »

One thing to note is that the file copy for backup copy jobs and the merge/synthetic full creation are performed by the Veeam agent on the repository server, not the proxy server. A backup copy job won't start processing until the source backup job is complete. Veeam fast cloning on a ReFS repository should really speed up the merge and synthetic full process and help your backup copy jobs (BCJs) start sooner.

As Hannes mentioned above, the synthetic full process using fast clone does come with a potential issue, so forever forward incremental (FFI) can be of great use!
Joe Gremillion
NA Core Solutions Architect - Central region
backupquestions
Expert
Posts: 186
Liked: 21 times
Joined: Mar 13, 2019 2:30 pm
Full Name: Alabaster McJenkins
Contact:

Re: Per vm files question

Post by backupquestions »

The merge process in forever incremental uses fast clone too, though, doesn't it?

It sounds like you are saying it is dangerous to use weekly synthetic fulls with ReFS because of this issue.

My scenario is only 2 weeks of retention, but with a weekly synthetic full, and I would run this for the next 5 years. Are you saying corruption is likely unless I use forever incremental?

I will have Veeam Cloud Connect and also object storage in use, so I will satisfy the 3-2-1 rule.
HannesK
Product Manager
Posts: 14287
Liked: 2877 times
Joined: Sep 01, 2014 11:46 am
Full Name: Hannes Kasparick
Location: Austria
Contact:

Re: Per vm files question

Post by HannesK »

Well, it was (probably) an issue with older ReFS versions (problems on the Windows side) for some customers with not enough RAM... so today, with only 2 weeks of retention, I would not worry too much about it.
backupquestions
Expert
Posts: 186
Liked: 21 times
Joined: Mar 13, 2019 2:30 pm
Full Name: Alabaster McJenkins
Contact:

Re: Per vm files question

Post by backupquestions »

OK, thanks. Additionally, with per-VM files, if there were corruption it would probably affect only one VM rather than all of them, as with the old-style non-per-VM format, right?

It's only 2 weeks of retention, but remember this would be block cloning once per week with the synthetic full. So that's 52 block clones per year, over 5 years, all reliant upon the first full.

So it's a small chance made even smaller, and if it ever happens I can just take a new active full backup of that VM to clear it.
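One nuance worth noting (a back-of-envelope Python sketch, not ReFS internals): retention deletes old backup files, but a block that never changes is cloned rather than rewritten, so the physical block written in week 1 can still be the copy on disk five years later.

Code: Select all

# Toy model: weekly synthetic fulls, 2-week retention, 5 years.
live_files = []
block_age_weeks = 0                # age of one never-changing physical block
for week in range(52 * 5):
    live_files.append(f"full_w{week:03d}.vbk")  # new full fast-clones the block
    live_files = live_files[-2:]                # 2-week retention prunes files
    block_age_weeks += 1                        # the block itself is never rewritten
print(len(live_files), block_age_weeks)         # 2 260 -> files rotate, block persists

That is why an occasional active full, or a periodic backup file health check, is the usual safety valve: it rewrites or verifies blocks that fast clone alone would never touch.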
HannesK
Product Manager
Posts: 14287
Liked: 2877 times
Joined: Sep 01, 2014 11:46 am
Full Name: Hannes Kasparick
Location: Austria
Contact:

Re: Per vm files question

Post by HannesK »

"if there were corruption it would probably affect only one VM rather than all of them, as with the old-style non-per-VM format, right?"
Correct.