dwoerz
Novice
Posts: 3
Liked: never
Joined: Jan 24, 2011 8:46 pm

Large VM Backup is slow?

Post by dwoerz »

Hopefully someone out there can help me figure this out... I have done some searches, but nothing seems to fit...

I have a file server VM with a 40 GB OS disk, a 250 GB data disk, and a ~2 TB data disk (VMDK) associated with it. I have Veeam B&R set up to back up this VM to a local RAID 5 disk system made up of six 2 TB SATA drives. The initial backup took days, and a week later it is trying to do a synthetic full backup; here we are over 60 hours later, and it says it is only 50 percent done with a processing speed of 8 MB/s. The backup status is listed below.

5 of 6 files processed

Total VM size: 2.26 TB
Processed size: 1.14 TB
Processing rate: 8 MB/s
Backup mode: SAN/NBD with changed block tracking
Start time: 1/22/2011 10:00:47 PM
Time remaining: 16:16:27

Backing up object "[VMFS-SAKFile] SAKFile/SAKFile_1.vmdk"
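
Just to sanity-check those numbers myself, here is a rough back-of-the-envelope calculation using the figures from the status above (I am assuming binary units, i.e. 1 TB = 1024 * 1024 MB):

```python
# Rough sanity check of the runtime implied by the status figures above.
# Assumes binary units (1 TB = 1024 * 1024 MB), which is an assumption
# about how the sizes are reported.

total_tb = 2.26        # "Total VM size"
processed_tb = 1.14    # "Processed size"
rate_mb_s = 8          # "Processing rate"

mb_per_tb = 1024 * 1024
hours_spent = processed_tb * mb_per_tb / rate_mb_s / 3600
hours_total = total_tb * mb_per_tb / rate_mb_s / 3600

print(f"reading 1.14 TB at 8 MB/s takes about {hours_spent:.0f} hours")
print(f"the whole 2.26 TB at 8 MB/s would take about {hours_total:.0f} hours")
```

At 8 MB/s the whole VM works out to roughly 80 hours of processing, so the multi-day runtime is consistent with that rate; the real question is why the rate is stuck at 8 MB/s.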


In the setup of the job I have inline dedupe turned off, compression is set to Optimal, and the target is set to WAN. The Veeam agent is only registering around 15 percent and the iSCSI NIC is only registering 1-3 percent utilization. Where do I look to further troubleshoot this? I can only assume that it is trying to dedupe, but there isn't a whole lot to dedupe given that the data is mostly MPG video files; however, I cannot seem to find a button to turn dedupe off.
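
For reference, a minimal sketch along these lines (Python with psutil on the backup server; the interface name "iSCSI1" is just a placeholder for whatever the NIC is actually called) will show the iSCSI NIC throughput in MB/s rather than a percentage:

```python
# Simple per-NIC throughput watcher using psutil.
# "iSCSI1" is a placeholder interface name; replace it with the real one.
import time
import psutil

NIC = "iSCSI1"
INTERVAL = 5  # seconds between samples

prev = psutil.net_io_counters(pernic=True)[NIC]
while True:
    time.sleep(INTERVAL)
    cur = psutil.net_io_counters(pernic=True)[NIC]
    rx = (cur.bytes_recv - prev.bytes_recv) / INTERVAL / 1024 / 1024
    tx = (cur.bytes_sent - prev.bytes_sent) / INTERVAL / 1024 / 1024
    print(f"rx {rx:6.1f} MB/s   tx {tx:6.1f} MB/s")
    prev = cur
```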

Any help is appreciated.


The SAN is a Dell EqualLogic PS4000X, the iSCSI switches are Dell 6224s, and the Veeam server is a Dell PowerVault NX3100.
Gostev
Chief Product Officer
Posts: 31561
Liked: 6724 times
Joined: Jan 01, 2006 1:01 am
Location: Baar, Switzerland

Re: Large VM Backup is slow?

Post by Gostev »

There is no issue backing up very large VMs (we have specifically tested this use case recently). If it is the only VM that is slow, that would point to an issue with source data retrieval speed (slow LUN, iSCSI NIC failed over to 100 Mb, MPIO-related performance issues). If all VMs are this slow, then the issue is with target storage speed (for example, the controller or its settings).
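
If you want a quick, Veeam-independent number for the target side, a plain sequential write/read test against the backup volume is usually enough to rule the RAID 5 array in or out. The sketch below is only an illustration (Python, with a placeholder path; use a test file larger than the server's RAM so the read pass is not served from cache):

```python
# Crude sequential write/read benchmark for the backup repository volume.
# The path is a placeholder; point it at the repository disk. Use a test
# file larger than the server's RAM so the read pass is not just cache.
import os
import time

TEST_FILE = r"D:\Backups\throughput_test.bin"  # placeholder path
BLOCK = 4 * 1024 * 1024                        # 4 MB per write
COUNT = 4096                                   # 4096 * 4 MB = 16 GB

buf = os.urandom(BLOCK)

start = time.time()
with open(TEST_FILE, "wb", buffering=0) as f:
    for _ in range(COUNT):
        f.write(buf)
    os.fsync(f.fileno())
write_mb_s = COUNT * BLOCK / (time.time() - start) / 1024 / 1024

start = time.time()
with open(TEST_FILE, "rb", buffering=0) as f:
    while f.read(BLOCK):
        pass
read_mb_s = COUNT * BLOCK / (time.time() - start) / 1024 / 1024

os.remove(TEST_FILE)
print(f"sequential write: {write_mb_s:.0f} MB/s, read: {read_mb_s:.0f} MB/s")
```

A healthy six-spindle SATA RAID 5 set should come back well above 8 MB/s for sequential I/O; if it does, the target is probably not your bottleneck.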

A source disk read speed test via the vStorage API can be performed by creating a test job with changed block tracking disabled in the advanced settings, and backing up the same powered-off test VM twice. The first pass will be a full backup, while the second pass will read the whole disk to determine changes at the maximum possible speed, but will not write anything to the target, since the VM was powered off and there are absolutely no changes since the last pass. Make sure the test VM disks are stuffed with data (no empty blocks), and disable both dedupe and compression for this test.

Lastly, I definitely would not recommend using "WAN" target optimization when backing up to a "Local" target ;) especially if you are on a quest for better backup performance. WAN optimization is specifically designed for remote backups, where the WAN link is the primary bottleneck, and as a result the significant slowdown coming from the additional processing overhead of the smaller block size does not really matter. Your processing speed already reaches 100 Mbps, which is a bit too fast for an average WAN link :D
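
For reference, converting the reported rate to line speed (using binary megabytes; decimal megabytes give 64 Mbit/s):

```python
# Converting the reported processing rate from MB/s to megabits per second.
rate_mb_s = 8                               # reported processing rate
bits_per_s = rate_mb_s * 1024 * 1024 * 8    # binary megabytes to bits
print(f"{rate_mb_s} MB/s is about {bits_per_s / 1e6:.0f} Mbit/s")
```

That is already more than many WAN links can carry, which is exactly why the WAN preset's smaller block size only costs processing overhead when the target is local.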

Thanks.