I am considering Veeam for our environment, and after using the demo for a few days I have a few questions.
Our environment:
Three sites:
Site A: 3 hosts, 25 VMs, ~6 TB of data
Site B: 3 hosts, 15 VMs, ~1.4 TB
Site C: 8 hosts, 117 VMs, ~7.5 TB
T3 between sites, Cisco WAAS devices (WAN accelerators).
OSes - mostly Windows, about 15 Linux boxes.
Based on a six-VM test (3 Linux, 3 Windows), two of which do not yet have Changed Block Tracking enabled, I am getting about 100 MB/s backup speeds using a single 64-bit Windows VM (4 vCPU, 4 GB RAM) as the backup server in VBA mode. CPU utilization is about 60%, using about 2 GB of RAM.
Goals:
* 1 day RTO and RPO
* Offsite the data, daily or weekly
* Dedupe to save on disk space
* Inexpensive backup disk storage (currently using an OpenSolaris box serving NFS to ESX hosts).
* Retention:
Daily for 31 days
Weekly for 8 weeks
Monthly for 13 months
Yearly for 3 years
My questions:
1. What is the best configuration of backup servers?
I am thinking one for sites A and B, and two for site C (or maybe just two parallel jobs?). Doing the math tells me a single appliance running a single job would back up site C in about 13 hours. We use EqualLogic arrays, and since I cannot figure out how to put the volumes into a read-only mode, I will probably stick with VBA mode rather than SAN mode.
2. What is the best way to get the data offsite? Should I back up across the WAN (site A's VBA backing up site B, etc., possibly with a local Linux box to do the synthetic processing), or use the backend storage to replicate the finished backup set using rsync or similar?
3. How do I guarantee consistent Linux backups? Windows has VSS, but for Linux it looks like crash-consistent backups only?
4. Any difference between backing up to an NFS mount and backing up to a VMDK image on NFS? I know, I should test this…
5. What is the best retention setup? I am thinking of three jobs: dailies kept for 56 days, covering both the daily and weekly retention in a single job, then separate jobs for the monthly and yearly backups, which I would either script or kick off manually.
6. What class of backend storage do I need? My current OpenSolaris target is pretty fast, but it is really just a dev box. Can I get away with an inexpensive NAS (QNAP, Synology, Iomega, etc.)?
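The tiered scheme in question 5 can be expressed as a small purge rule. A minimal sketch in Python - the Sunday/first-of-the-month anchor days and the day-count cutoffs are my assumptions, not anything Veeam does out of the box:

```python
from datetime import date

def keep(backup_day, today):
    """Return the retention tiers under which a restore point is kept:
    dailies for 31 days, weeklies (Sundays) for 8 weeks, monthlies
    (1st of the month) for ~13 months, yearlies (Jan 1) for 3 years.
    A point matching no tier is eligible for purging."""
    age = (today - backup_day).days
    tiers = []
    if age <= 31:
        tiers.append("daily")
    if backup_day.weekday() == 6 and age <= 8 * 7:       # Sunday
        tiers.append("weekly")
    if backup_day.day == 1 and age <= 13 * 31:           # ~13 months
        tiers.append("monthly")
    if backup_day.month == 1 and backup_day.day == 1 and age <= 3 * 365:
        tiers.append("yearly")
    return tiers
```

A recent Sunday point is held by both the daily and weekly tiers, so the 56-day daily job does cover the weekly requirement; only the monthly and yearly tiers need separate jobs.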
Thanks for help with any of the above, I really appreciate it!
Best,
Scott
- Scott Meilicke (Novice), joined May 24, 2010
- Andrew (Veteran), Adelaide, Australia, joined Sep 18, 2009
Re: Setup and design questions for multiple sites
Hi Scott,
Similar setup here, with two primary sites over a 2 Mb link (see my recent post on our Riverbed acceleration, which should be similar to the WAAS):
http://www.veeam.com/forums/viewtopic.php?f=2&t=2607
Our basic methodology is to back up locally to a (QNAP) NAS (allowing fast RTOs), then replicate to the other site as the off-site storage / DR site, so each site acts as the DR site for the other.
A VBR server sits at each end (running in VBA mode). Each VBR server runs one job for local backups, plus a replication job that replicates *from* the other site, which keeps the VBR database (and therefore the jobs) accessible at the DR site during an outage. (That replication job currently runs from the source site; we plan to move it to the VBR server at the remote site if performance remains acceptable.)
For the remainder of your questions:
2. The replication job through Riverbed is giving us 90% reduction, so we are currently getting 700 GB worth of VMs replicated across the wire in about 3 hours (30 GB of changed data -> 1-3 GB over the wire). That said, you may not require the failover capability of Veeam and may be happy to replicate your backups (rather than run replica jobs) - in which case you could use SAN replication, NAS replication (if available - the QNAPs can do this), DFS-R, or rsync.
3. I'll leave that to the Linux gurus.
4. This was painful for us and a drawback of Veeam compared to other products that do 'native' backups to NFS. An NFS mount ties you to an ESX host, so it will fail if that host goes down. A VMDK adds another level of encapsulation - in an outage we'd like direct access to the backup files rather than having to mount an image first. We ended up backing up to a CIFS share exposed on the NAS (e.g. \\<NAS-IP>\VBRBackups\). I found it quite surprising that Veeam didn't better support NFS targets.
http://www.veeam.com/forums/viewtopic.php?f=2&t=3306
5. We had the same requirements, and again it is not something Veeam supports out of the box, which our previous vendor did. There have been a few requests for better retention policy support (e.g. auto-purging based on tiered retention policies such as yours). At the moment you have to work around this with multiple jobs.
http://www.veeam.com/forums/viewtopic.php?f=2&t=3378
6. This probably comes down to feature sets. We wanted dual power, dual NICs, a reasonable RAID implementation and throughput, NFS/CIFS targets, replication functionality (for replicating between NASes at remote sites), and external backups to an attached USB HDD (we run this weekly in case the NAS crashes or the data is corrupted). The QNAP range ticked all the boxes and has worked quite well, although I'd stay away from their iSCSI implementation.
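The numbers in point 2 are easy to sanity-check with back-of-the-envelope arithmetic. A quick sketch, treating the link as fully available to replication (an assumption):

```python
def wan_hours(wire_gb, link_mbps):
    """Hours to push wire_gb gigabytes over a link of link_mbps
    megabits per second, assuming the whole link is available."""
    bytes_total = wire_gb * 1024 ** 3
    bytes_per_sec = link_mbps * 1_000_000 / 8.0
    return bytes_total / bytes_per_sec / 3600.0

# ~30 GB of nightly change shrinking to ~3 GB on the wire is a 90% reduction
reduction = 1 - 3.0 / 30.0
```

On a 2 Mb link, 3 GB works out to roughly 3.6 hours, which lines up with the ~3 hours reported above; on Scott's T3 (~45 Mb/s) the same 3 GB would move in under ten minutes, so replicating deduplicated deltas across the WAN looks very feasible.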
Hope this helps.
- Anton (Chief Product Officer), Baar, Switzerland
Re: Setup and design questions for multiple sites
Hi Scott,
3. Our customers use VMware pre-freeze/post-thaw scripts to quiesce transactional applications (you can search the forum for more details on this).
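To make those hooks concrete, here is a minimal pre-freeze/post-thaw sketch in Python. This is not Veeam's or VMware's code: the `mysql` service name, the blunt stop/start approach, and wiring it into the guest's pre-freeze/post-thaw script slots are all assumptions.

```python
#!/usr/bin/env python
"""Sketch of VMware Tools pre-freeze/post-thaw hooks for a Linux guest.

Bluntly stops the database before the snapshot and restarts it after,
making the snapshot application-consistent at the cost of a brief outage.
"""
import subprocess
import sys

def hook_commands(action):
    # The commands each hook would run, kept separate from execution so
    # the plan can be inspected without touching the system.
    if action == "freeze":
        return [["service", "mysql", "stop"], ["sync"]]
    if action == "thaw":
        return [["service", "mysql", "start"]]
    raise ValueError("expected 'freeze' or 'thaw', got %r" % action)

def run_hook(action):
    for cmd in hook_commands(action):
        subprocess.check_call(cmd)

if __name__ == "__main__" and len(sys.argv) > 1:
    run_hook(sys.argv[1])  # invoked as the pre-freeze or post-thaw script
```

Gentler quiescence (flushing and holding a lock from a long-lived session) avoids the outage, at the cost of more moving parts.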
Thanks!
- Scott Meilicke (Novice)
Re: Setup and design questions for multiple sites
Andrew and Anton, the next time you are in Seattle I owe you a beer. Great answers, thanks so much.
Andrew, good idea regarding replication to your DR site. I had not thought of that, but I like it better in some ways than replicating the backup blobs from Veeam, since the VM would be ready to go in a DR situation. Have you had any issues with the speed of the QNAP during the consistency check? Actually, I'm not sure you can see that separately from the backup within Veeam. VMware's VDR consistency checks took ages...
Thanks again, I really appreciate the assistance.
-Scott
- Anton (Chief Product Officer)
Re: Setup and design questions for multiple sites
smeilicke wrote: VDR from vmware consistency checks took ages...
I have heard that complaint about VDR quite a few times before... this is definitely not an issue with Veeam Backup, though.