Looking for some opinions on how to best satisfy our backup requirements.
Running VMware and Veeam 6.5
Main office
Two virtual backup proxies with 2 vCPUs each
One physical machine running the full Veeam install. This box has 72GB of RAM, 4 CPUs, and 40TB of total storage (2x 15TB and 1x 10TB volumes), running Windows 2012.
Offsite location
One physical machine with 72GB of RAM, 4 CPUs, and 40TB of total storage (2x 15TB and 1x 10TB volumes), running Windows 2012.
The two locations are connected via a 100Mb connection.
Backing up approx 7TB of VMs
The requirement is to keep a minimum of 14 days' worth of backups in the main office and weekly backups for 6 months at the offsite location.
A couple of the file servers and the Exchange server are very large, and the initial Veeam backup will take approx. 20 hours over a 10Gb connection in the main office to complete. I don't want to think about what this would be like over the 100Mb connection.
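For a sense of scale, here is a back-of-the-envelope transfer-time calculation for 7TB at raw wire speed. This ignores compression, protocol overhead, and storage throughput (the 20-hour figure on 10Gb suggests the backup is storage- or CPU-bound rather than network-bound), so treat it only as a lower bound per link:

```python
# Rough transfer-time arithmetic for the initial full backup.
# Assumption: 7 TB (decimal) of VM data moved at ideal sustained
# link speed, with no compression or protocol overhead.

TB = 10**12                # decimal terabyte, in bytes
data_bytes = 7 * TB

def hours_at(link_bits_per_sec):
    """Hours to move data_bytes at the given raw link speed."""
    bytes_per_sec = link_bits_per_sec / 8
    return data_bytes / bytes_per_sec / 3600

print(f"10 Gb/s : {hours_at(10 * 10**9):6.1f} h")   # ~1.6 h at wire speed
print(f"100 Mb/s: {hours_at(100 * 10**6):6.1f} h")  # ~156 h, i.e. ~6.5 days
```

The ~6.5 days for an uncompressed full over the 100Mb link is why seeding the first full locally (as discussed later in the thread) matters so much.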
Here is my thought:
In the main office, continue to use the two virtual proxies and the main Veeam installation to run the 14-day backups to the local machine. The backups are broken into categories rather than one gigantic job: the AD servers are their own job, Exchange is its own job, the SharePoint servers are their own job, and so on. Each job is set to Reversed Incremental because a full backup takes so long to run; with this setup I only have to do it once. On the Storage tab of each job I have set compression to Extreme and Optimize for WAN, even though it is local, simply to shrink the file size as much as possible.
On the remote site, I plan to install a proxy agent on that box, set up new backup jobs tagged Weekly (e.g., Active Directory-Weekly) in the Veeam server, select the remote machine as the desired proxy on the job, and set these also to Reversed Incremental with Extreme compression and Optimize for WAN. Retention is set to 26 restore points to meet my 6-month requirement.
Is this the best way to do this, or does someone have a better way to accomplish the same thing?
-
- Novice
- Posts: 5
- Liked: never
- Joined: Mar 11, 2013 1:20 pm
- Full Name: P Smith
- Contact:
Re: Suggestions on a Local/Offsite Backup Design
Doing some more reading, I believe that for the remote site I could switch to regular incremental and have synthetic fulls run on the weekend to keep a weekly full offsite.
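One thing to weigh before switching: if each weekly restore point is retained as an independent full backup file, the space cost grows very differently than a reversed incremental chain does. A quick comparison, using the same hypothetical sizes as before (3.5TB compressed full, 0.35TB compressed weekly change):

```python
# Comparing remote retention layouts for 26 weekly restore points.
# Reversed incremental keeps one full plus rollback files; keeping
# every weekly point as a synthetic full keeps 26 full files.
# Sizes below are hypothetical, not measured.

full_tb, weekly_inc_tb, points = 3.5, 0.35, 26

reversed_chain_tb = full_tb + (points - 1) * weekly_inc_tb
all_fulls_tb = points * full_tb    # every weekly point is a full file

print(f"reversed incremental  : ~{reversed_chain_tb:.1f} TB")
print(f"26 independent fulls  : ~{all_fulls_tb:.1f} TB")
```

With these numbers, 26 standalone fulls would blow well past the 40TB box, so the viability of the synthetic-full approach depends on how many of those points end up as fulls versus increments under the retention policy (and on any dedupe savings on the repository).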
-
- Veeam Software
- Posts: 21138
- Liked: 2141 times
- Joined: Jul 11, 2011 10:22 am
- Full Name: Alexander Fogelson
- Contact:
Re: Suggestions on a Local/Offsite Backup Design
millerjohnson wrote: A couple of the file servers and Exchange server are very large and the initial Veeam backup will take approx 20 hours over a 10Gb connection in the main office to complete. Don't want to think about what this would be like over the 100Mb connection.
What are the bottleneck stats for your jobs? I would assume it is mostly due to Extreme compression rather than the data transfer itself.
I'm not sure that selecting the Extreme compression level for local backups is justified, as it puts extremely high load on your proxy servers (which are only 2 vCPUs each) and will put even more on them once you add the remote jobs. It will save you only about 3% of space at the cost of a noticeable (roughly double) CPU load. You could use Windows 2012 deduplication to get that space saving on your repository instead.
millerjohnson wrote: On the remote site, I plan on installing a proxy agent on that box. Set up new backup jobs with the tag Weekly (eg, Active Directory-Weekly) in the Veeam server. Select the remote machine as the desired proxy on the job
Just to clarify: for optimal remote backups you need Veeam agents on both ends. For that, add the remote box to the console as a Windows-type backup repository, and have the proxy servers on the source side be responsible for retrieving VM data from the source storage.
Also, you could use seeding for your remote jobs to avoid having to send the whole backup files over the 100Mb link.
Hope this helps.
-
- Novice
- Posts: 5
- Liked: never
- Joined: Mar 11, 2013 1:20 pm
- Full Name: P Smith
- Contact:
Re: Suggestions on a Local/Offsite Backup Design
I'm playing around with the compression simply because I have only 40TB of available space and 7TB of backed-up data that I have to keep 26 restore points for. I haven't done anything with Windows dedupe yet; that is on my testing list, to see whether it buys me any space on the local and remote sides.
We will be seeding the system locally and transporting the seed to the remote site. I'm trying to make sure nothing in the setup causes more data to cross the wire than needed.
-
- Veeam ProPartner
- Posts: 252
- Liked: 26 times
- Joined: Apr 05, 2011 11:44 pm
- Contact:
Re: Suggestions on a Local/Offsite Backup Design
I'm backing up and replicating about 6-7TB of data over a 100Mbit link right now and keep 60 days of restore points.
It's doable, but you have to make sure the rate of data change on the systems is not too high. For some reason our backups of the 2012 server are much bigger than those of the 2008 file servers; something is changing a lot of blocks, but we haven't had a chance to investigate. Your bottleneck can be either the network or the target storage, depending on your disk system's performance and your backup type. We have set aggressive compression to minimize the data sent across the wire, but we have 8 vCPUs assigned.
We've seeded all data and shipped servers and storage after that to the off-site.