SimonJGreen
Novice
Posts: 3
Liked: 1 time
Joined: Dec 30, 2012 9:29 pm
Full Name: Simon Green

Architecture and job planning help

Post by SimonJGreen »

Hi,

We run a large VMware infrastructure consisting of 3 clusters, 1 NFS SAN, and ~350 VMs.
  • There are 40 VMs in Cluster 1, 260 VMs in Cluster 2, and 50 VMs in Cluster 3.
  • We have built an Ubuntu/ZFS storage server for all Veeam jobs to use as their backup target.
  • We have a dedicated backup network with 4Gb connectivity from each of the VMware clusters to the storage target.
  • We have installed a proxy in a Windows VM on each of the clusters, so we have three proxies.
  • We have a single B&R server to manage all of this.
  • We would like to achieve a nightly backup of the entire infrastructure, with no retention beyond that.
My question is: how should we be configuring our jobs for optimum performance? Currently we have a single job for each cluster, with that cluster set as the source (to automatically pick up new VMs). All the jobs are configured the same:
  • Machines to back up: The target cluster
  • Exclusions: The Proxy for that cluster
  • Backup proxy: Automatic
  • Repository: The NAS described above.
  • Restore points to keep: 1
  • Mode: Reversed incremental
  • Enable inline data deduplication
  • Compression optimal
  • Optimize for LAN target
  • Use changed block tracking data
  • Enable CBT for all protected VMs automatically
  • Enable automatic backup integrity checks
  • Exclude swap file blocks
  • Schedule: Run automatically, daily at 22:00
  • Retry failed VMs 3 times, wait 1 minute
We seem to achieve about 30 MB/s on each job at the moment, which is not fast enough to fit the whole run into a single night.
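
For a sense of scale, here is a rough back-of-envelope in Python; the average VM size is purely an assumption (only the VM counts and the 30 MB/s rate are given above):

Code:

# Rough backup-window estimate for the three per-cluster jobs. The average
# VM size is an ASSUMPTION for illustration -- substitute real figures from
# the job statistics.
jobs = {"Cluster 1": 40, "Cluster 2": 260, "Cluster 3": 50}  # VMs per job
avg_vm_gb = 60          # ASSUMED average VM size in GB
throughput_mbs = 30     # observed per-job rate; VMs in a job run sequentially

for name, vm_count in jobs.items():
    total_mb = vm_count * avg_vm_gb * 1024
    hours = total_mb / throughput_mbs / 3600
    print(f"{name}: {vm_count} VMs, ~{hours:.0f} h at {throughput_mbs} MB/s")

Even with modest VM sizes, the 260-VM job dominates, which is why the replies below focus on splitting it up.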
dellock6
Veeam Software
Posts: 6137
Liked: 1928 times
Joined: Jul 26, 2009 3:39 pm
Full Name: Luca Dell'Oca
Location: Varese, Italy

Re: Architecture and job planning help

Post by dellock6 »

- All VMs in a single job are processed sequentially, so you can probably speed up the nightly run by splitting each job into smaller sets and allowing each proxy to execute 2 concurrent jobs (see the sketch after this list).
- Check that both the proxies and the repository are configured to allow enough concurrent jobs (2 per proxy, and 6 for the shared repository).
- Do not use automatic proxy selection; instead, assign the jobs for the VMs of a given cluster to the proxy installed in that cluster. Otherwise Veeam may also use proxies from other clusters, and those will fall back to network mode when the datastore is not shared among all the clusters.
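
As a minimal sketch of that split (Python, illustrative only; the VM names and the choice of four sets are assumptions, not real inventory):

Code:

# Illustrative only: round-robin one big per-cluster job into smaller sets
# so that two can run concurrently on that cluster's proxy.
def split_into_jobs(vms, n_jobs):
    """Distribute the VM list into n_jobs roughly equal sets."""
    jobs = [[] for _ in range(n_jobs)]
    for i, vm in enumerate(vms):
        jobs[i % n_jobs].append(vm)
    return jobs

cluster2 = [f"vm-{i:03d}" for i in range(260)]      # placeholder VM names
for n, job_vms in enumerate(split_into_jobs(cluster2, 4), start=1):
    print(f"Cluster2-Job{n}: {len(job_vms)} VMs")

One trade-off to keep in mind: listing VMs explicitly loses the automatic pick-up of new VMs that comes from using the cluster as the job source, so new machines would have to be assigned to one of the sets by hand.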

Luca.
Luca Dell'Oca
Principal EMEA Cloud Architect @ Veeam Software

@dellock6
https://www.virtualtothecore.com/
vExpert 2011 -> 2022
Veeam VMCE #1
SimonJGreen
Novice
Posts: 3
Liked: 1 time
Joined: Dec 30, 2012 9:29 pm
Full Name: Simon Green

Re: Architecture and job planning help

Post by SimonJGreen »

Thanks for this.

A few questions:
1. Is it possible to break our existing one-job-per-cluster setup out into multiple jobs per cluster without having to redo all the full backups?
2. I've heard people talk about Hot Add/Virtual Appliance mode on the proxies not working properly with NFS; can you comment on this?
3. Is reverse incremental the right mode for my scenario above?

Simon
dellock6
Veeam Software
Posts: 6137
Liked: 1928 times
Joined: Jul 26, 2009 3:39 pm
Full Name: Luca Dell'Oca
Location: Varese, Italy

Re: Architecture and job planning help

Post by dellock6 »

1. Only for the job that keeps the original VMs; all VMs moved to a new job will be backed up in full on its first run.
2. I'm not aware of that limitation; where did you read about it? NFS cannot be used in Direct SAN mode, but there is no problem with hot add.
3. Yes, since you need only a few restore points, and with reverse incremental the latest restore point is always a full backup (see the toy model below).
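
A toy model of the chain behaviour (Python, illustrative; "full" and "vrb" stand for the full backup and rollback files):

Code:

# Toy model of a reverse-incremental chain: the newest point is always the
# full backup; older points become rollback (vrb) files, and retention
# simply prunes the oldest rollbacks.
def run_nights(nights, restore_points):
    chain = []                                    # newest first
    for night in range(1, nights + 1):
        chain = [f"full@{night}"] + [p.replace("full", "vrb") for p in chain]
        del chain[restore_points:]                # prune beyond retention
        print(f"night {night}: {chain}")

run_nights(3, restore_points=1)  # retention 1: always just the latest full
# with restore_points=3 the chain would read: full@3, vrb@2, vrb@1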

Luca.
Luca Dell'Oca
Principal EMEA Cloud Architect @ Veeam Software

@dellock6
https://www.virtualtothecore.com/
vExpert 2011 -> 2022
Veeam VMCE #1
veremin
Product Manager
Posts: 20270
Liked: 2252 times
Joined: Oct 26, 2012 3:28 pm
Full Name: Vladimir Eremin

Re: Architecture and job planning help

Post by veremin »

SimonJGreen wrote: We seem to achieve about 30 MB/s on each job at the moment, which is not fast enough to fit the whole run into a single night.
Additionally, it might be worth taking a look at the bottleneck statistics the jobs report, to determine what can be done to improve backup performance.

Furthermore, when specifying the number of concurrent jobs per backup proxy, keep the following proxy resource requirements in mind:

RAM: 2 GB + 2 GB per concurrent job
(v)CPU: 2 per concurrent job
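
Transcribed into a quick sizing helper (a direct restatement of the rule above, in Python):

Code:

# Proxy sizing per the rule of thumb above.
def proxy_requirements(concurrent_jobs):
    ram_gb = 2 + 2 * concurrent_jobs   # 2 GB base + 2 GB per concurrent job
    vcpu = 2 * concurrent_jobs         # 2 vCPU per concurrent job
    return ram_gb, vcpu

for n in (1, 2, 4):
    ram_gb, vcpu = proxy_requirements(n)
    print(f"{n} concurrent job(s): {ram_gb} GB RAM, {vcpu} vCPU")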


Hope this helps.
Thanks.
chrisdearden
Veteran
Posts: 1531
Liked: 226 times
Joined: Jul 21, 2010 9:47 am
Full Name: Chris Dearden

Re: Architecture and job planning help

Post by chrisdearden »

What sort of spec is that storage box? Remember that we are doing a read/write mix with a reverse incremental; it should be about 30% random reads. How is your source storage split, or is it a single NFS mount per cluster?
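
A small model of where that read share comes from (the nightly change rate and block size below are assumed values, purely for illustration):

Code:

# Reverse incremental touches the repository three times per changed block:
# it reads the old block out of the full backup file, writes it into the
# rollback file, then writes the new block into the full.
changed_gb = 500        # ASSUMED nightly changed data across all jobs
block_mb = 1            # ASSUMED effective block size

blocks = changed_gb * 1024 / block_mb
reads, writes = blocks, 2 * blocks
print(f"{blocks:.0f} changed blocks -> {reads:.0f} reads, {writes:.0f} writes")
print(f"read share of repository I/O: {reads / (reads + writes):.0%}")  # ~33%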
SimonJGreen
Novice
Posts: 3
Liked: 1 time
Joined: Dec 30, 2012 9:29 pm
Full Name: Simon Green

Re: Architecture and job planning help

Post by SimonJGreen » 1 person likes this post

OK, I've broken things up into 5 jobs and we are now receiving 1.2 Gb/s on the NAS (154 MB/s), which is much better and close to what I expected :)

In answer to the question above, our storage target has 12 cores, 32 GB RAM, and 32 disks (a ZFS stripe of 4 × RAIDZ1 vdevs).
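
For a rough sense of what that layout can deliver (per-disk figures are assumptions for typical 7.2k-rpm spindles; a RAIDZ vdev delivers roughly one disk's worth of random IOPS):

Code:

# Back-of-envelope for the pool above: 32 disks as 4 x 8-disk RAIDZ1 vdevs.
vdevs, disks_per_vdev = 4, 8
disk_iops, disk_mbs = 150, 120  # ASSUMED per-disk random IOPS / streaming MB/s

data_disks = vdevs * (disks_per_vdev - 1)  # one parity disk per vdev
random_iops = vdevs * disk_iops            # RAIDZ random IOPS ~ one disk per vdev
stream_mbs = data_disks * disk_mbs         # sequential scales with data disks
print(f"~{random_iops} random IOPS, ~{stream_mbs} MB/s sequential")

Plenty of sequential headroom, but only a few hundred random IOPS, which is the part the reverse-incremental read/write mix mentioned above will stress.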

I'll see how things are looking tomorrow morning, but this looks promising!

Simon
dellock6
Veeam Software
Posts: 6137
Liked: 1928 times
Joined: Jul 26, 2009 3:39 pm
Full Name: Luca Dell'Oca
Location: Varese, Italy

Re: Architecture and job planning help

Post by dellock6 »

Nice machine, Simon; I do love ZFS :)
Glad the ideas we gave you are working as expected.

Luca.
Luca Dell'Oca
Principal EMEA Cloud Architect @ Veeam Software

@dellock6
https://www.virtualtothecore.com/
vExpert 2011 -> 2022
Veeam VMCE #1