-
- Novice
- Posts: 5
- Liked: 2 times
- Joined: Oct 05, 2016 6:06 am
- Full Name: Matthew Kent
- Contact:
Quick sanity check please...
Hi all,
I'm currently evaluating Veeam B&R Essentials Standard. I have 2 sites and around 30 VMs that I need to back up.
I'd like to back up and keep 2 weeks of data locally, so I've set up a backup job with 14 restore points and a weekly synthetic full.
This backup also needs to be available off site, so I've created a continuous backup copy job to send it off site.
I'd also like to keep GFS locally for historical restore options, so I've created a continuous backup copy job with 7 restore points, plus 4 weekly, 3 monthly, 4 quarterly and 5 yearly archive points to local storage.
I've also created another continuous backup copy job with the same options, but to the off-site storage.
Is the above sensible?
How should I split my 30 machines up into jobs? I was thinking of three tiers of importance with around 10 VMs in each. Is this sensible, or should I have more or fewer VMs in a job?
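Just to sanity check the numbers myself, here's the rough tally of what that retention adds up to (a quick Python sketch; the assumptions about run frequency and GFS fulls are mine, not from the Veeam docs):

```python
# Rough tally of the restore points the scheme above keeps.
# Assumptions (mine, not from Veeam docs): one backup run per day,
# and each GFS archive point is stored as a full backup.

local_points = 14                      # local backup job: 14 daily points
copy_points = 7                        # GFS backup copy job: simple chain
weekly, monthly, quarterly, yearly = 4, 3, 4, 5

gfs_points = weekly + monthly + quarterly + yearly
print(f"Local job:            {local_points} points (~2 weeks)")
print(f"Each GFS copy job:    {copy_points} simple + {gfs_points} archive points")
print(f"Oldest archive point: ~{yearly} years back")
```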
Many thanks,
Matt
-
- Veteran
- Posts: 7328
- Liked: 781 times
- Joined: May 21, 2014 11:03 am
- Full Name: Nikita Shestakov
- Location: Prague
- Contact:
Re: Quick sanity check please...
Hi Matt and welcome to the community!
Your backup scheme looks solid.
Regarding VM grouping, in general we suggest grouping VMs either by OS type, to get better deduplication, or by SLA, as you mentioned.
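To illustrate the two grouping options (a conceptual sketch only; the VM names, OS labels and tiers below are made up):

```python
# Conceptual sketch: the same inventory split into jobs two ways.
# The VM list is invented for illustration.
from collections import defaultdict

vms = [
    {"name": "dc01",   "os": "windows", "sla": "gold"},
    {"name": "sql01",  "os": "windows", "sla": "gold"},
    {"name": "web01",  "os": "linux",   "sla": "silver"},
    {"name": "web02",  "os": "linux",   "sla": "silver"},
    {"name": "test01", "os": "linux",   "sla": "bronze"},
]

def jobs_by(key):
    jobs = defaultdict(list)
    for vm in vms:
        jobs[vm[key]].append(vm["name"])
    return dict(jobs)

print(jobs_by("os"))   # one job per OS type -> similar blocks, better dedupe
print(jobs_by("sla"))  # one job per SLA tier -> retention matches importance
```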
What about your repositories, do they have hardware deduplication?
Thanks!
-
- Novice
- Posts: 5
- Liked: 2 times
- Joined: Oct 05, 2016 6:06 am
- Full Name: Matthew Kent
- Contact:
Re: Quick sanity check please...
Thanks - the repositories are just local disks on Windows boxes.
-
- Veteran
- Posts: 7328
- Liked: 781 times
- Joined: May 21, 2014 11:03 am
- Full Name: Nikita Shestakov
- Location: Prague
- Contact:
Re: Quick sanity check please...
So in-line compression and deduplication are recommended. Job sizing is also a matter of management usability; 3 jobs of 10 VMs each looks fine.
I would also recommend reviewing the best practices book.
-
- Novice
- Posts: 5
- Liked: 2 times
- Joined: Oct 05, 2016 6:06 am
- Full Name: Matthew Kent
- Contact:
Re: Quick sanity check please...
Many thanks. I have in-line dedupe and optimal compression set on the jobs; should I be using dedupe-friendly instead?
The best practice book looks great, guess I'll be reading tonight...
Many thanks!
-
- Veteran
- Posts: 7328
- Liked: 781 times
- Joined: May 21, 2014 11:03 am
- Full Name: Nikita Shestakov
- Location: Prague
- Contact:
Re: Quick sanity check please...
It depends on your infrastructure and goal.
Dedupe-friendly is an optimized compression level for very low CPU usage. You can select this compression level if you want to decrease the load on the backup proxy.
High compression level provides additional 10% compression ratio over the Optimal level at the cost of about 10x higher CPU usage.
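To put rough numbers on that trade-off (the figures for Dedupe-friendly and the Optimal baseline below are invented for illustration; only the "+10% ratio at ~10x CPU" relationship for High comes from the comparison above):

```python
# Rough illustration of the compression trade-off described above.
# Ratios and CPU costs are invented except the High-vs-Optimal relation.
levels = [
    # (name, assumed compression ratio, assumed relative proxy CPU cost)
    ("Dedupe-friendly", 1.5,        0.25),  # invented: light compression, low CPU
    ("Optimal",         2.0,        1.0),   # invented baseline (2:1)
    ("High",            2.0 * 1.10, 10.0),  # from the post: +10% ratio, ~10x CPU
]

source_gb = 1000  # assumed amount of source data
for name, ratio, cpu in levels:
    print(f"{name:15s}: {source_gb / ratio:6.0f} GB on disk, ~{cpu:g}x proxy CPU")
```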
-
- Veeam Software
- Posts: 21139
- Liked: 2141 times
- Joined: Jul 11, 2011 10:22 am
- Full Name: Alexander Fogelson
- Contact:
Re: Quick sanity check please...
Hi Matt, just to check, how many offsite backup copy jobs do you have in total, one or two? According to this description you seem to have an unnecessary backup copy job (the one in bold), since its activity can be performed by the second offsite backup copy job.

Matt@ wrote:
I'd like to back up and keep 2 weeks of data locally, so I've set up a backup job with 14 restore points and a weekly synthetic full.
This backup also needs to be available off site, so I've created a continuous backup copy job to send it off site.
I'd also like to keep GFS locally for historical restore options, so I've created a continuous backup copy job with 7 restore points, plus 4 weekly, 3 monthly, 4 quarterly and 5 yearly archive points to local storage.
I've also created another continuous backup copy job with the same options, but to the off-site storage.
-
- Novice
- Posts: 5
- Liked: 2 times
- Joined: Oct 05, 2016 6:06 am
- Full Name: Matthew Kent
- Contact:
Re: Quick sanity check please...
I had two, but reading your reply it now seems obvious that I only need one job to copy everything off site. I've removed the job in bold.
Thanks,
Matthew