Splitting Up Large Exchange VM Backups


by chjones » Wed Oct 07, 2015 7:25 pm — 1 person likes this post

Hi all,

We have finally completed our project to migrate from Exchange 2010 on physical servers to Exchange 2013, which is now 100% virtual. We have one mailbox server in each of our two main datacentres, plus a front-end CAS server in each datacentre. The mailbox servers are in a DAG and all mailboxes are replicated between the two mailbox server VMs, so we don't need to replicate the servers as Exchange takes care of this for us. However, we do need to back up the mailbox servers with Veeam and then write the weekly Active Full backup to tape.

We have both mailbox servers protected by their own Veeam jobs, which write data to an HP StoreOnce at their local site. The two mailbox servers take about 24 hours to complete their weekly Active Full backups each Friday night, and the resulting VBK is around 6TB. Whilst this data is on disk it's not so bad to restore, and the Veeam Exchange Explorer works great. However, we need to get the 6TB VBK off to tape each week at one of the datacentres (so we have an offsite archival copy), and this can take another 24-36 hours to write to an LTO6 SAS tape library. We have had a few instances recently where the writing to tape has dropped out after several TBs, forcing us to restart the entire tape copy again, which can result in a day or more of lost time.

With the new Catalyst Integration coming in v9 we are hoping to take advantage of this for faster backups and restores, and also make use of Synthetic Fulls (that new capability with dedupe appliances looks crazy awesome!) so a 24 hour Veeam Backup could be chopped in half at least (so we hope).

What I was thinking was, if possible, to split the Exchange backups into a few different Veeam jobs that result in smaller VBKs and less impact if one tape copy job fails. My thought was:

1. Split Mailbox Databases over several drives on the Mailbox Server (such as Mailboxes 1-4 on E:\, Logs for Mailboxes 1-4 on F:\, Mailboxes 5-8 on G:\, Logs for Mailboxes 5-8 on H:\, and so on)
2. Create Veeam Jobs to backup only the VM disks for the Mailboxes and their logs, resulting in a number of Veeam Jobs with smaller VBKs

With this type of setup the Exchange Explorer should still work, as we only need to mount the backup for that job and have access to both the EDB files and the logs. However, my concern is with Application-Aware Processing enabled (which we use and which works great): will Veeam instruct Exchange to truncate the logs ONLY for the databases on the disks that were backed up in each job, or will it instruct Exchange to truncate logs for all databases? Would my idea work, or am I just asking for trouble?

Re: Splitting Up Large Exchange VM Backups

by alanbolte » Wed Oct 07, 2015 7:37 pm

I wouldn't say the following is the final word on this, but here are my thoughts:
We don't have the ability to selectively truncate specific databases, and there's no logic for truncating (or not truncating) based on which databases were captured in the disk image backup. As such, you would want to set all jobs but one to 'copy only' in the AAIP settings, or else set all of them to 'copy only' and then manage truncation manually, or via some script you write that checks whether all jobs were successful.
If you split up the backup among several jobs, you'll want the logs for a particular database in the same job as that database, for ease of application-level restore.
If you split up a VM's data disks among multiple jobs, it's best to include the system disk in all jobs, because we depend on information on the system disk to do application-level restores (this can be worked around, but it's better not to have to bother).
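The "script that checks if all jobs were successful" idea above could be gated like this. This is only a minimal sketch of the decision logic: the job names are hypothetical, and in practice you would populate the results dictionary by querying Veeam yourself (for example via its PowerShell snap-in) rather than hard-coding them.

```python
# Sketch of a "truncate only if every split job succeeded" gate.
# The job names and the hard-coded results below are illustrative;
# a real script would fetch the last session result for each job
# from Veeam before deciding whether to kick off log truncation.

def all_jobs_succeeded(job_results):
    """Return True only if there is at least one job and every job
    reported 'Success'. An empty result set is treated as a failure,
    so truncation is never triggered by mistake."""
    return bool(job_results) and all(
        result == "Success" for result in job_results.values()
    )

if __name__ == "__main__":
    # Hypothetical split Exchange jobs; one has failed this week.
    results = {
        "Exchange-MBX-Disks-1": "Success",
        "Exchange-MBX-Disks-2": "Failed",
    }

    if all_jobs_succeeded(results):
        print("All split jobs succeeded - safe to trigger log truncation")
    else:
        print("At least one job failed - leaving Exchange logs untouched")
```

The key design point is that the gate fails closed: unless every split job reports success (and at least one result exists), the script leaves the logs alone, which is the safe default when the jobs are all set to 'copy only'.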
