-
- Expert
- Posts: 122
- Liked: 7 times
- Joined: Mar 27, 2012 10:13 pm
- Full Name: Chad Killion
- Contact:
How to effectively use RPO tags in a large environment?
OK, so today we back up a large environment with one job per datastore, and we are looking at moving to backing up by folders, since we are going 100% vSAN and there will only be one datastore per cluster. That takes us from 40+ datastores (and jobs) down to 4 datastores. I just read the Veeam paper about using tags to back up an environment dynamically, which sounds really cool; however, I am having a hard time seeing how this would work for a larger environment of, say, 700 VMs.
So let's say I have 700 VMs and the RPO tags 24-hour, 12-hour, 4-hour, and No Backup. Looking at this environment, 500 VMs would get the 24-hour RPO, 50 the 12-hour RPO, 50 the 4-hour RPO, and 100 No Backup. So when I create the corresponding jobs, the 24-hour RPO job will have to process 500 machines? That doesn't seem like a very efficient process; it wouldn't be much different than just backing up the entire 70 TB vSAN datastore, it seems. Can someone shed some light on how tag-based backup policies would be effective in a large environment like this? We have jobs now that run 12+ hours and only back up 20-30 VMs, so it seems like a 500-VM job would never finish.
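For reference, the kind of job I have in mind would look roughly like this with the Veeam PowerShell snap-in (the tag, repository and job names are made up, and cmdlet parameters can differ between product versions):

Add-PSSnapin VeeamPSSnapin

# Find the vSphere tag that defines the RPO class (hypothetical tag name)
$tag  = Find-VBRViEntity -Tags -Name "24-hour"
$repo = Get-VBRBackupRepository -Name "MainRepo"   # hypothetical repository name

# One job pointing at the tag; any VM tagged "24-hour" later is picked up
# automatically at the next run
Add-VBRViBackupJob -Name "RPO-24h" -Entity $tag -BackupRepository $repo | Out-Null

# Schedule it once a day to meet the 24-hour RPO
$job = Get-VBRJob -Name "RPO-24h"
Set-VBRJobSchedule -Job $job -Daily -At "22:00" | Out-Null
Enable-VBRJobSchedule -Job $job | Out-Null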
Chad
-
- Veeam Software
- Posts: 21139
- Liked: 2141 times
- Joined: Jul 11, 2011 10:22 am
- Full Name: Alexander Fogelson
- Contact:
Re: How to effectively use RPO tags in a large environment?
Chad, basically you will have to back up the same amount of data as you do now, right? Provided parallel processing does its job properly, it doesn't matter whether you have a single job with all VMs or several jobs, unless you need multiple jobs to saturate the target storage with multiple write streams (this, however, will be addressed with per-VM backup chains in v9, so even a single job will be able to create multiple streams).
-
- Enthusiast
- Posts: 58
- Liked: 13 times
- Joined: Sep 09, 2010 9:45 am
- Full Name: Anders Lorensen
- Contact:
Re: How to effectively use RPO tags in a large environment?
I have one customer with an environment of about the same size (110 TB of storage, 750 VMs). They use tags for all their backups.
The tags are deployed via Veeam ONE based on the name of the VM (all VMs follow a naming standard).
The tags are named <systemname>-<prd/test/dev/pp> (for example, exc-prd for Exchange production).
A PowerShell script created all the Veeam backup jobs and, depending on the tag name, sets retention and other options (for example, production jobs do weekly active fulls, test jobs do monthly fulls, etc.).
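A rough sketch of the idea with the Veeam PowerShell snap-in (not the actual script; the tag names, schedules and repository are just examples, and the exact parameters for retention and active fulls differ between versions, so they are only hinted at in comments):

Add-PSSnapin VeeamPSSnapin

$repo = Get-VBRBackupRepository -Name "MainRepo"   # hypothetical repository name

# One backup job per <system>-<environment> tag (example tag names)
foreach ($tagName in "exc-prd", "sql-prd", "web-tst", "app-dev") {

    $tag = Find-VBRViEntity -Tags -Name $tagName
    if (-not $tag) { continue }                    # skip tags that don't exist yet

    $jobName = "Backup-$tagName"
    Add-VBRViBackupJob -Name $jobName -Entity $tag -BackupRepository $repo | Out-Null
    $job = Get-VBRJob -Name $jobName

    # Derive the schedule from the environment suffix
    if ($tagName -like "*-prd") {
        Set-VBRJobSchedule -Job $job -Daily -At "20:00" | Out-Null
        # retention and weekly active fulls would be set here via the job options
        # (parameter names vary between Veeam versions, so they are not shown)
    }
    else {
        Set-VBRJobSchedule -Job $job -Daily -At "23:00" | Out-Null
    }
    Enable-VBRJobSchedule -Job $job | Out-Null
}

The nice part is that adding a new system only means tagging its VMs according to the naming standard; the matching job either exists already or can be created the next time the script is run.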
The result was 156 Veeam backup jobs and 156 tape jobs, which is quite a lot and makes the Veeam GUI a bit sluggish to work with, but very little work went into creating the actual jobs.
And the end result is actually very nice: easy to work with, and job sizes are quite reasonable. I can only recommend the solution.
You will find that the tag implementation in Veeam (both B&R and ONE) is still "version 1", though; but after a few weeks of getting used to the bugs and missing features, you'll be happy with it, I bet.
All this requires a good naming standard for VM names, though, and either manual application-awareness configuration or a very simple, automated configuration of it.
/Anders
-
- VP, Product Management
- Posts: 27377
- Liked: 2800 times
- Joined: Mar 30, 2009 9:13 am
- Full Name: Vitaliy Safarov
- Contact:
Re: How to effectively use RPO tags in a large environment?
Just a reference to the white paper Chad is talking about > https://www.veeam.com/wp-advanced-polic ... -tags.html
-
- VeeaMVP
- Posts: 6166
- Liked: 1971 times
- Joined: Jul 26, 2009 3:39 pm
- Full Name: Luca Dell'Oca
- Location: Varese, Italy
- Contact:
Re: How to effectively use RPO tags in a large environment?
As Alexander already said, the amount of data to back up is the same with or without tags, unless you also leverage tags to apply different RPOs in the environment by creating more frequent backups. Performance is not affected by the number of VMs per job, but by the speed of the different components and the number of available processing slots on the proxies.
Regarding size, in addition to parallel processing helping to spread the load across all the available proxies, I can also recommend per-VM backup chains (http://www.virtualtothecore.com/en/veea ... up-chains/). With them, no backup file will ever grow out of control regardless of the number of VMs in the job, since each VM is stored in a separate file. So, having 10 or 500 VMs in the same job will not matter.
Luca Dell'Oca
Principal EMEA Cloud Architect @ Veeam Software
@dellock6
https://www.virtualtothecore.com/
vExpert 2011 -> 2022
Veeam VMCE #1
-
- Enthusiast
- Posts: 58
- Liked: 13 times
- Joined: Sep 09, 2010 9:45 am
- Full Name: Anders Lorensen
- Contact:
Re: How to effectively use RPO tags in a large environment?
Backup job size matters a whole lot, both for backup and even more for restores. You lose RPO and RTO control and start to rely on randomness.
When it comes to restores, having many VMs in a job is not very smart. The same goes for SureBackup testing.
The ability to do active/synthetic full backups on different days for different jobs is also lost; you are stuck doing them at the same time for everything, which is not very practical.
Add copy jobs on top of that and it gets even messier, as they don't do parallel processing.
The next problem is that Veeam cannot handle large .vbm files (it basically refuses to work when they hit 1 GB in size), and huge jobs with application awareness create huge metadata files.
So having 10 or 500 VMs in a job matters, and a lot! (At least in versions 1-8; I haven't played with v9 yet.)
/Anders
-
- VeeaMVP
- Posts: 6166
- Liked: 1971 times
- Joined: Jul 26, 2009 3:39 pm
- Full Name: Luca Dell'Oca
- Location: Varese, Italy
- Contact:
Re: How to effectively use RPO tags in a large environment?
I was talking specifically about v9 and per-VM chains, and with this option there is also one .vbm file per chain, so one per VM.
The job may have 500 VMs, but once you enable per-VM chains each backup file will contain only one VM, with its own .vbm file.
Luca Dell'Oca
Principal EMEA Cloud Architect @ Veeam Software
@dellock6
https://www.virtualtothecore.com/
vExpert 2011 -> 2022
Veeam VMCE #1