jzemaitis
Influencer
Posts: 17
Liked: never
Joined: Apr 27, 2009 11:08 pm
Full Name: Joe Zemaitis

Slow restores with great hardware!

Post by jzemaitis »

I'm currently restoring a 70 GB VM with about 60 GB of actual data. The backup took 37 minutes, but the restore is looking like it will take 2-2.5 hours. It backed up at 67 MB/s over SAN and is restoring at 11 MB/s. The Veeam backup server is not taxed in the least: network usage hardly goes above 8%, the CPUs are barely doing anything, and RAM usage is very light. The disk system is new and RAID 10.

A little more info...

Veeam backup server:
HP DL360 with dual quad-core E5345s at 2.33 GHz, 8 GB RAM, Windows Server 2008 R2 x64, 4 Gb HBA. New RAID card (not sure of the details) attached to an MSA 1000 via SCSI. I normally run 4 concurrent backup jobs and all of them run at 40-80 MB/s, so I don't see the backup server being the problem when running a single restore job (and no backups).

I'm restoring to an ESX 4.0 Update 1 box. It's connected to an HP EVA 4400 with VMFS LUNs. Dual quad-core at 3.0 GHz, 32 GB RAM, 4 Gb HBA.

Any ideas? I don't see a hardware limitation anywhere. I've heard VMFS is slow to restore to. Any idea what I can do to get this to an acceptable speed?
Vitaliy S.
VP, Product Management
Posts: 27368
Liked: 2797 times
Joined: Mar 30, 2009 9:13 am
Full Name: Vitaliy Safarov

Re: Slow restores with great hardware!

Post by Vitaliy S. »

Hello Joe,

Could you please try uploading a file of similar size to the same datastore using vSphere Client -> Datastore Browser? What transfer rates are you seeing?
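If scripting the test is easier than going through the GUI, something like this sketch should give a comparable number. It goes against the host's standard HTTPS file-access endpoint (/folder); the host name, datastore name, credentials, and test file below are all placeholders, not anything from your setup:

```python
# Rough sketch: time an upload to the datastore over the ESX host's
# HTTPS file-access endpoint instead of the Datastore Browser GUI.
# Host, datastore, credentials, and file are placeholders.
import os
import time
import requests

HOST = "esx01.example.com"
DATASTORE = "eva4400-vmfs01"
LOCAL_FILE = "testfile.bin"  # any reasonably large local file

size_mb = os.path.getsize(LOCAL_FILE) / (1024 * 1024)
url = (f"https://{HOST}/folder/{LOCAL_FILE}"
       f"?dcPath=ha-datacenter&dsName={DATASTORE}")

with open(LOCAL_FILE, "rb") as f:
    start = time.time()
    # verify=False because ESX hosts usually ship self-signed certs
    resp = requests.put(url, data=f, auth=("root", "password"),
                        verify=False)
elapsed = time.time() - start

resp.raise_for_status()
print(f"{size_mb:.0f} MB in {elapsed:.1f} s = {size_mb / elapsed:.1f} MB/s")
```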

Thanks!
Gostev
Chief Product Officer
Posts: 31789
Liked: 7290 times
Joined: Jan 01, 2006 1:01 am
Location: Baar, Switzerland

Re: Slow restores with great hardware!

Post by Gostev »

Joe, also check whether you have provided service console connection settings in the target ESX host's properties (right-click the host in the Veeam Backup Servers tree and go into Properties). The service console agent should really speed up the restore.
jzemaitis
Influencer
Posts: 17
Liked: never
Joined: Apr 27, 2009 11:08 pm
Full Name: Joe Zemaitis

Re: Slow restores with great hardware!

Post by jzemaitis »

Hm... I have "force agentless mode" selected. I swear I set it that way for a reason, though I'm not sure why at this point. So you think that would speed things up a lot? Sorry, I don't have a 70 GB file to upload as a test. Does the datastore upload show a rate in MB/s? This restore is still going... 20 minutes left. Once it's done I'll test a few things out with a much smaller VM to see if that improves things.
Vitaliy S.
VP, Product Management
Posts: 27368
Liked: 2797 times
Joined: Mar 30, 2009 9:13 am
Full Name: Vitaliy Safarov

Re: Slow restores with great hardware!

Post by Vitaliy S. »

Joe,

Yes, agentless mode is much slower than agent mode, so please follow the instructions provided by Anton; that should help!
tsightler
VP, Product Management
Posts: 6035
Liked: 2860 times
Joined: Jun 05, 2009 12:57 pm
Full Name: Tom Sightler

Re: Slow restores with great hardware!

Post by tsightler » 1 person likes this post

Hi Joe,

You should certainly enable agent mode; I'd expect you'll see an improvement, though my guess is it will only get you to 20-30 MB/s. I'll be interested to see how much difference it makes in your case. Keep in mind that agent mode makes a much bigger difference on systems with lots of empty or zeroed space, so restoring systems that are mostly "empty" can be misleading when you measure the performance improvement.
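One way to keep such a test honest is to benchmark with a fully allocated, incompressible file rather than a sparse or zeroed one; a minimal sketch (file name and size are made up):

```python
# Sketch: build a fully allocated, incompressible test file so the
# upload/restore benchmark isn't flattered by empty or zeroed space.
# File name and size are made up.
import os

PATH = "testfile.bin"
SIZE_MB = 1024
CHUNK = 1024 * 1024  # write 1 MB at a time

with open(PATH, "wb") as f:
    for _ in range(SIZE_MB):
        f.write(os.urandom(CHUNK))  # random bytes: nothing to skip or compress
```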

My lame analysis of the issue was that VMFS does not support writing to the filesystem from the console OS without taking some type of lock. The ability of the underlying storage system to handle all of the lock/unlock events efficiently is critical to writing to VMFS volumes with reasonable performance. Many storage environments appear to have significant latency for this operation, and it's this overhead that makes writes to VMFS filesystems from the console OS much slower than writes from within the VM.
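As a back-of-the-envelope illustration (every number in this model is an assumption, not a measurement), a few milliseconds of lock overhead per write is enough to drag a fast array down to the rates Joe is seeing:

```python
# Toy model: per-write lock overhead vs. effective throughput.
# Chunk size, raw array speed, and lock latencies are all assumed.
CHUNK_MB = 64 / 1024   # 64 KB per write
RAW_MB_S = 80.0        # what the array could stream with no locking

for lock_ms in (0.0, 1.0, 5.0):
    per_write_s = CHUNK_MB / RAW_MB_S + lock_ms / 1000.0
    print(f"lock {lock_ms:3.0f} ms -> {CHUNK_MB / per_write_s:5.1f} MB/s")
```

At a 5 ms lock round-trip per 64 KB write, the assumed 80 MB/s array delivers about 11 MB/s, right in the range reported above.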

Our current workaround is to set aside some space on our backup servers as a "quick restore area". In our current environment we back up to Linux servers that are cross-located between our datacenters, i.e. backups for DC1 go to a Linux server in DC2 and vice versa. On each Linux server in each datacenter, we set aside some space for an NFS share and present it to the ESX hosts as an NFS datastore. If we need to restore a large VM for DC1, rather than restoring directly from the Linux server in DC2 to the ESX host, we restore from the Linux server in DC2 to the Linux "restore area" in DC1. We then manually register the VM with vCenter, fire it up directly from the NFS datastore, and use Storage vMotion to move it to the SAN while it's running.
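If you wanted to script that last register-and-migrate step rather than click through vCenter, a rough sketch using the pyVmomi library could look like the following. That's my guess at an automation path, not something we actually run, and every name, path, and credential below is a placeholder:

```python
# Rough sketch (pyVmomi): register a restored VM from the NFS
# "quick restore area" datastore, then Storage vMotion it to the SAN.
# Every name, path, and credential here is a placeholder.
import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # vCenter often has a self-signed cert
si = SmartConnect(host="vcenter.example.com", user="administrator",
                  pwd="password", sslContext=ctx)
content = si.RetrieveContent()
dc = content.rootFolder.childEntity[0]  # assumes the first datacenter

def find(vimtype, name):
    """Look up a managed object by name via a container view."""
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vimtype], True)
    try:
        return next(o for o in view.view if o.name == name)
    finally:
        view.Destroy()

# 1. Register the VM straight off the NFS restore-area datastore.
pool = find(vim.ResourcePool, "Resources")        # default root pool
task = dc.vmFolder.RegisterVM_Task(
    path="[nfs-restore-area] bigvm/bigvm.vmx",
    name="bigvm", asTemplate=False, pool=pool)
# ...wait for the task to finish and power the VM on, then:

# 2. Storage vMotion it onto the SAN datastore while it runs.
vm = find(vim.VirtualMachine, "bigvm")
san = find(vim.Datastore, "eva4400-vmfs01")
vm.RelocateVM_Task(vim.vm.RelocateSpec(datastore=san))
```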

With this method we can easily hit 100 MB/s per restore, which gets systems restored and running again quickly.