Greg Lamb
Novice
Posts: 9
Liked: 5 times
Joined: Mar 04, 2015 9:57 pm
Full Name: Greg Lamb

SureBackup Virtual Lab Questions/Best Practices

Post by Greg Lamb »

Hi Veeam Community,

We are starting to experiment with Veeam SureBackup Virtual Lab technology as a new approach to development, quality assurance, and sandbox environments, aiming to solve a number of pain points associated with setting up and maintaining these environments by cloning or deploying net-new VMs.

We run:

A multi-host ESXi cluster with a dvS
Production Storage = EMC VNX
Backup Repository = EMC Data Domain w/ DD Boost
Veeam Backup & Replication = Virtual Machine
Veeam Backup Proxy = Physical Server
Transport Mode = Direct SAN (FC)
vLab Method = Advanced Multi-Host connected to a dvS (since we do not use standard switches with connected uplinks)

First off, a big thank you to the Veeam Support team for helping me troubleshoot a problem where vLab VMs were bleeding into the production network (SR# 00996602). We found it was a combination of the VLAN settings in the Virtual Lab's Isolated Networks properties (which require a non-routable VLAN) and the use of VMXNET3 NICs, which VMware documents as having a known issue when cloning... Microsoft has released a hotfix for it.

http://kb.vmware.com/selfservice/micros ... Id=1020078
https://support.microsoft.com/en-us/kb/ ... kb/2550978
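\
For anyone else chasing this, a minimal isolation smoke test along these lines (run from inside a lab VM) can help confirm that production is unreachable; the target IPs and ports below are placeholders, not our real addresses:

```python
#!/usr/bin/env python3
"""Smoke test run from inside a vLab VM: every production
address below should be UNREACHABLE if isolation is working."""
import socket

# Placeholder production endpoints -- substitute your own.
PRODUCTION_TARGETS = [
    ("10.0.10.5", 445),    # e.g. a production file server
    ("10.0.10.10", 3389),  # e.g. a production RDP host
]

def reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for host, port in PRODUCTION_TARGETS:
    status = "REACHABLE (isolation broken!)" if reachable(host, port) else "blocked (good)"
    print(f"{host}:{port} -> {status}")
```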

Due to a performance limitation when using a Data Domain as the backup repository (its inline deduplication means reads must be rehydrated, which is slow for running VMs), I created a LUN on our production storage. When creating a vLab, an ad-hoc backup is taken to that LUN and the SureBackup job targets these backups.

Questions:

Do other people use vLabs for this use case?

What is your experience, lessons learned…?

What rule of thumb do you use for how long to keep the vLab running? I envision up to a couple of weeks in most scenarios.

When the vLabs are running, how likely are we to run into trouble with redo log growth (dependent on guest OS usage)?

Is there a redo log datastore sizing methodology?
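\
For reference, my current back-of-envelope approach looks like the sketch below (Python; the per-VM change rates and the safety factor are assumptions I made up, not measured values):

```python
# Back-of-envelope redo log datastore sizing: every block a guest
# writes while running from backup lands in its redo log, so size
# the datastore for (daily change rate x lab lifetime) per VM,
# plus headroom.
DAILY_CHANGE_GB = {      # assumed per-VM daily write churn
    "app-server": 5,
    "sql-server": 20,
    "web-server": 2,
}
LAB_DAYS = 14            # planned vLab lifetime (a couple of weeks)
SAFETY_FACTOR = 1.5      # headroom for bursts / estimation error

required_gb = sum(DAILY_CHANGE_GB.values()) * LAB_DAYS * SAFETY_FACTOR
print(f"Suggested redo log datastore size: {required_gb:.0f} GB")
# -> Suggested redo log datastore size: 567 GB
```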

Does performance of the vLab VMs degrade as the redo logs grow in size?

Since the VMDKs are attached as independent non-persistent disks to preserve the original backup copy, is there any way to commit the redo logs or create a copy of the VMDK to shrink the size?

When defining the properties of a vLab, we have to target a specific host. Is there a way to target a cluster instead, since we utilize DRS and a dvS? My concern is that when running multiple vLabs concurrently we would need to manually assess host resource availability, and DRS may vMotion other production VMs onto those hosts while we are snapshotting VMs during production hours. In the meantime I am considering scripting the host choice before a lab starts; see the sketch below.
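\
A rough pyVmomi sketch of the idea (untested, and the vCenter address, credentials, and cluster name are placeholders):

```python
# Pick the cluster host with the most free memory, so the vLab can
# be pointed at it before the SureBackup job starts.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab-only: skip cert checks
si = SmartConnect(host="vcenter.example.com", user="svc-veeam",
                  pwd="***", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.ClusterComputeResource], True)
    cluster = next(c for c in view.view if c.name == "Prod-Cluster")

    def free_mem_mb(host):
        total_mb = host.summary.hardware.memorySize // (1024 * 1024)
        return total_mb - host.summary.quickStats.overallMemoryUsage

    best = max(cluster.host, key=free_mem_mb)
    print(f"Least loaded host: {best.name} ({free_mem_mb(best)} MB free)")
finally:
    Disconnect(si)
```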

If we needed a longer-term environment (for example, for the duration of a project, say 6 months), is there still a way to utilize the Veeam proxy appliance?

What kind of performance do you see in the vLab environment? Production systems have direct SAN access, whereas the vLab path is Backup Repository (2x8Gb FC) -> Veeam backup proxy (4x1Gb) -> Core Switch Stack (2x10Gb) -> ESXi cluster.
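\
I did some rough link math on that path (Python; this assumes ideal link aggregation and ignores protocol overhead, so real numbers will be lower):

```python
# Theoretical aggregate bandwidth at each hop of the restore path;
# the slowest hop caps vLab disk throughput.
hops_gbit = {
    "Data Domain FC (2 x 8 Gb)":     2 * 8,
    "Backup proxy NICs (4 x 1 Gb)":  4 * 1,
    "Core switch stack (2 x 10 Gb)": 2 * 10,
}

for name, gbit in hops_gbit.items():
    print(f"{name}: ~{gbit / 8:.1f} GB/s aggregate")

bottleneck = min(hops_gbit, key=hops_gbit.get)
print(f"Bottleneck: {bottleneck} "
      f"(~{hops_gbit[bottleneck] / 8 * 1000:.0f} MB/s best case)")
```

If that math holds, the proxy's 4x1Gb uplinks are the cap, at roughly 500 MB/s aggregate in the best case.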

When creating an ad-hoc backup job for this test-environment vLab use case, should we set the job compression level to None and turn off inline data deduplication to reduce the overhead of running the VMs (i.e., optimize for best performance)?
niels.engelen1
Lurker
Posts: 1
Liked: never
Joined: Aug 26, 2015 7:37 pm

Re: SureBackup Virtual Lab Questions/Best Practices

Post by niels.engelen1 »

A lot of our customers use vLabs for patching, mostly Exchange servers and in-house applications.

With regards to performance, it depends on the storage underneath. In your case it should be sufficient and able to handle the actions you want to perform.