Comprehensive data protection for all workloads
backupquestions
Enthusiast
Posts: 98
Liked: 9 times
Joined: Mar 13, 2019 2:30 pm
Full Name: Alabaster McJenkins

Per vm files question

Post by backupquestions » Jun 25, 2019 12:13 am

I tried searching, as I know someone ran into an issue like this before, but I can't find the thread.

So say my physical Veeam server with 16 cores is set up for 16 "tasks"... and I make a backup job with 100 VMs using per-VM backup files...

When it is merge time, or in my case the fast clone synthetic full on the weekend, up to 16 VMs will be fast cloning at a time, and that fills up all the task slots on the server. This could mean my backup copy jobs, and whatever other tasks I have going on, won't be able to start for a long time, right?

Whereas if you don't use per-VM files, the entire job only takes up one task.

Is there a way around this problem? Maybe configure a limit of 4 or 6 tasks per repository, so that jobs using different repos can all run at the same time?
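To make the slot arithmetic concrete, here is a rough sketch (hypothetical numbers and a deliberately simplified model, not Veeam's actual scheduler) of how per-VM files consume repository task slots compared with a single-file chain:

```python
# Rough model of repository task slots (hypothetical; not Veeam's real scheduler).
SLOTS = 16

def tasks_needed(vm_count: int, per_vm_files: bool) -> int:
    """With per-VM backup files, each VM's merge/fast-clone is its own task;
    a single-file chain consumes one task for the whole job."""
    return vm_count if per_vm_files else 1

def waves(task_count: int, slots: int = SLOTS) -> int:
    """Sequential 'waves' of work the slots must chew through."""
    return -(-task_count // slots)  # ceiling division

per_vm = tasks_needed(100, per_vm_files=True)    # 100 tasks
single = tasks_needed(100, per_vm_files=False)   # 1 task
print(waves(per_vm))   # 7 waves that saturate all 16 slots
print(waves(single))   # 1 wave; the other 15 slots stay free for BCJs
```

Under this toy model, capping each repository at 4-6 tasks would indeed leave slots free for other jobs, at the cost of the synthetic fulls taking more waves to finish.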

HannesK
Veeam Software
Posts: 3698
Liked: 441 times
Joined: Sep 01, 2014 11:46 am
Location: Austria

Re: Per vm files question

Post by HannesK » Jun 25, 2019 5:58 am

Hello,
How long does the synthetic full take? For 100 VMs it should not be hours, right (depending on amount of data & chain length)?

What makes you believe that it is a good idea to run BCJs and merges at the same time? They just add more I/O to your server, and it seems it is already under high load.

Options:
1) Did you try more repository tasks to make the merges faster? A little overbooking of cores is usually okay.
2) Not using synthetic fulls. What is the reason you use synthetic fulls? There are some use cases with tape, but tape is not mentioned in your post.

Best regards,
Hannes

backupquestions
Enthusiast
Posts: 98
Liked: 9 times
Joined: Mar 13, 2019 2:30 pm
Full Name: Alabaster McJenkins

Re: Per vm files question

Post by backupquestions » Jun 25, 2019 12:24 pm

I thought with ReFS it would be good to do synthetic fulls on the weekend, to benefit from fast clone and spaceless fulls. I thought this is what a lot of customers are doing? Are you thinking incremental forever would be better?

HannesK
Veeam Software
Posts: 3698
Liked: 441 times
Joined: Sep 01, 2014 11:46 am
Location: Austria

Re: Per vm files question

Post by HannesK » Jun 25, 2019 12:59 pm

Fast clone "links" to the same blocks. If a block breaks (physically), then each link to that block is also broken (that's the same with all deduplication technologies). This is one of the reasons why we push the 3-2-1 rule.
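A toy illustration of that point (a hypothetical block-reference model, not ReFS internals): fast clone makes files share the same physical blocks, so damaging one shared block damages every file that references it:

```python
# Toy model of block cloning (hypothetical; real ReFS tracks clones in metadata).
blocks = {0: b"data-a", 1: b"data-b", 2: b"data-c"}

# A synthetic full "fast clones" existing blocks instead of copying them:
full_week1 = [0, 1, 2]   # references into `blocks`
full_week2 = [0, 1, 2]   # same physical blocks, no new copies written

# Simulate physical corruption of one shared block:
blocks[1] = b"CORRUPT"

def is_healthy(file_refs, good=(b"data-a", b"data-b", b"data-c")):
    """A file is healthy only if every block it references is intact."""
    return all(blocks[r] in good for r in file_refs)

print(is_healthy(full_week1))  # False -- every full sharing block 1 is hit
print(is_healthy(full_week2))  # False
```

With plain copies, each full would have its own block 1 and only one restore point would be lost; with cloning, one bad block fans out to every restore point that links to it.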

Yes, many people do synthetic fulls. Probably because they always did it in the past. Maybe because fulls were great in the dark ages of direct-to-tape-backup. Or just to feel better :-)

Incremental forever is not better or worse (well, one could argue that fewer metadata operations might be better for ReFS). It seems you want your BCJs to start earlier; it's just an option to fulfill that request.

vmJoe
Veeam Software
Posts: 333
Liked: 67 times
Joined: Aug 02, 2011 1:06 pm
Full Name: Joe Gremillion
Location: Dallas, TX USA

Re: Per vm files question

Post by vmJoe » Jun 26, 2019 1:00 am

One thing to note is that the file copy for backup copy jobs, as well as the merge/synthetic full formation process, is performed by the veeamagent on the repo server, not the proxy server. A backup copy job won't start processing until the backup job is complete. Veeam fast cloning on a ReFS repository should really speed up the merge and synthetic full process and help your backup copy jobs (BCJ) start faster.

As Hannes mentioned above, the synthetic full process using fast clone does come with a potential issue, so forever forward incremental (FFI) is of great use!
Joe Gremillion
NA Core Solutions Architect - Central region

backupquestions
Enthusiast
Posts: 98
Liked: 9 times
Joined: Mar 13, 2019 2:30 pm
Full Name: Alabaster McJenkins

Re: Per vm files question

Post by backupquestions » Jul 01, 2019 12:22 pm

The merge process on incremental forever uses fast clone too, though, doesn't it?

It sounds like you are saying it is dangerous to use weekly synthetic fulls with ReFS due to this issue.

My scenario is only 2 weeks of retention, but with a weekly synthetic full, and I would run this for the next 5 years. Are you saying corruption would be likely unless I use incremental forever?

I will have Veeam Cloud Connect and also object storage in use, so I will have the 3-2-1 rule covered.

HannesK
Veeam Software
Posts: 3698
Liked: 441 times
Joined: Sep 01, 2014 11:46 am
Location: Austria

Re: Per vm files question

Post by HannesK » Jul 01, 2019 3:51 pm

Well, it (probably) was an issue with older ReFS versions (issues on the Windows side) for some customers without enough RAM... so today, with only 2 weeks of retention, I would not worry too much about that.

backupquestions
Enthusiast
Posts: 98
Liked: 9 times
Joined: Mar 13, 2019 2:30 pm
Full Name: Alabaster McJenkins

Re: Per vm files question

Post by backupquestions » Jul 01, 2019 4:05 pm

Ok, thanks. Additionally, using per-VM files, if there was corruption it would probably affect only one VM, rather than all of them as with the old-style non-per-VM files, right?

It's only 2 weeks of retention, but remember this would be block cloning once per week with a synthetic full. So that's 52 block clones per year, over 5 years, all reliant upon the first full.

So it's a small chance made even smaller, and I can just take a new full backup of that VM to clear it, etc.
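For what it's worth, the count in the post above works out as plain arithmetic (nothing Veeam-specific in these numbers):

```python
# Back-of-the-envelope check of the weekly-synthetic-full count over 5 years.
weeks_per_year = 52
years = 5
synthetic_fulls = weeks_per_year * years  # one fast-clone synthetic full per week
print(synthetic_fulls)  # 260 block-clone operations over the 5-year run
```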

HannesK
Veeam Software
Posts: 3698
Liked: 441 times
Joined: Sep 01, 2014 11:46 am
Location: Austria

Re: Per vm files question

Post by HannesK » Jul 01, 2019 4:08 pm

if there was corruption it would probably affect only one vm rather than all with old style non per-vm, right?
Correct.
