Comprehensive data protection for all workloads
theflakes
Enthusiast
Posts: 33
Liked: never
Joined: Jun 09, 2010 11:16 pm
Full Name: Brian Kellogg
Contact:

Restore questions and suggestions

Post by theflakes » Jul 15, 2010 1:11 pm

I had to restore two VMs with Veeam last night; we've only been running Veeam for a week. Both restored successfully, so thank you very much, but a couple of items for follow-up came out of this.

The first VM was a Win2003 R2 box with two 20GB thick hard drives. It took about 35 minutes to restore. The second VM was the same except it had one 20GB thick hard drive and one 40GB thick hard drive; it took ~1 hour and 35 minutes to restore. I noticed that at some point there was no network activity from the NAS the backups were stored on, but there was network activity on the Veeam appliance for quite some time (~50 minutes). It looked like the Veeam appliance was writing empty space to the VM disks it was restoring. A full backup of each of these two VMs takes ~10 minutes, so I'm trying to determine why there is such a disparity between the full backup time and the restore time, especially for the second VM.
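Back-of-envelope, using the times above (a rough sketch assuming the restore rewrites both thick disks of the second VM in full; the 60GB figure and the math are my own, not Veeam's):

```python
GB = 1024 ** 3
MB = 1024 ** 2

disk_bytes = (20 + 40) * GB        # second VM: 20GB + 40GB thick disks
restore_secs = 95 * 60             # observed: ~1 hour 35 minutes

rate = disk_bytes / restore_secs / MB
print(round(rate, 1))              # ~10.8 MB/s effective write rate
```

At roughly 10.8 MB/s effective, the restore is far below what the SAN links can carry, which suggests the restore is writing much more data (including empty space) than the backup ever had to read.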

For the backups I restored from, I had switched from "Appliance hotadd mode" to NBD to see if there were any appreciable performance differences. If the backups had been done in appliance hotadd mode, would the restores have been faster? We have two Equallogic SANs with 500GB SATA disks clustered together; both use RAID 6 with jumbo frames, and each of the three ESXi Essentials Plus servers has three 1Gb links into the storage SAN using MPIO, with flow control enabled on the storage SAN switches. I ran several very successful performance tests with RAID 6 on the Equallogics before placing them in production, so I know the RAID 6 overhead is not to blame. The IOPS during the restore were nowhere close to the maximum I know they can hit with mixed reads and writes.

Also, I noticed that when restoring, Veeam doesn't automatically put each file back where it originally was. We separate our OS disks and data disks into different datastores. I sincerely hope that the new version of Veeam will, by default, suggest restoring to the original file locations if one chooses to overwrite the existing VM, with the ability to change the destination datastore on a per-file basis. In a disaster scenario where multiple VMs have to be restored, the current procedure would be very painful.

Sorry for the tome...


thanks,
Brian

Gostev
SVP, Product Management
Posts: 25147
Liked: 3700 times
Joined: Jan 01, 2006 1:01 am
Location: Baar, Switzerland
Contact:

Re: Restore questions and suggestions

Post by Gostev » Jul 15, 2010 2:21 pm

Hello Brian,

1. The disparity is due to the fact that our product does not have to actually read zeroed blocks during backup, but it has to write them during restore. Not zeroing empty blocks during restore causes issues with certain Linux file systems (the OS would complain about file system integrity problems) when restoring to a "dirty" VMFS LUN.
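A minimal sketch of this asymmetry (a toy model of my own, not Veeam's actual code): backup can cheaply skip all-zero blocks, but restore must write every block, zeros included, so stale data on a reused LUN cannot leak into the restored guest.

```python
BLOCK = 4096  # hypothetical block size

def backup(disk: bytes) -> dict[int, bytes]:
    """Store only non-zero blocks; zeroed blocks cost no read/transfer."""
    image = {}
    for i in range(0, len(disk), BLOCK):
        block = disk[i:i + BLOCK]
        if block.count(0) != len(block):   # skip all-zero blocks
            image[i] = block
    return image

def restore(image: dict[int, bytes], size: int) -> bytes:
    """Write every block, zero or not, so leftover data on a reused
    ('dirty') LUN cannot appear inside the restored file system."""
    disk = bytearray(size)                 # explicit zero-fill: this is
    for offset, block in image.items():    # the extra work restore pays
        disk[offset:offset + len(block)] = block
    return bytes(disk)

# A mostly-empty disk backs up fast (one block stored out of eight)
# but restores slowly (all eight blocks written).
disk = bytes(BLOCK) * 3 + b"data".ljust(BLOCK, b"\0") + bytes(BLOCK) * 4
image = backup(disk)
print(len(image))                          # 1
restored = restore(image, len(disk))
print(restored == disk)                    # True
```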

2. Switching the backup mode does not affect the way restores are performed.

3. We are looking to add this capability, thank you for your feedback!

Anton


Re: Restore questions and suggestions

Post by theflakes » Jul 15, 2010 2:51 pm

If having to write zeroed blocks is only a problem with Linux file systems, can there be an option to disable this for Windows VM restores, or have it automatically disabled for them? We use thin provisioning on our Equallogics, so I hope this doesn't interfere with that when restoring a VM with a lot of empty space.
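The thin-provisioning worry can be sketched with a toy model (hypothetical; real array firmware differs): a thin LUN typically allocates physical space for a block the moment anything is written to it, even if what is written is all zeros.

```python
class ThinLUN:
    """Toy model of a thin-provisioned LUN: a block consumes physical
    space once anything is written to it, regardless of content."""

    def __init__(self) -> None:
        self.allocated: set[int] = set()

    def write(self, block_no: int, data: bytes) -> None:
        self.allocated.add(block_no)  # allocation happens on write,
                                      # even for all-zero data

    def used_blocks(self) -> int:
        return len(self.allocated)

lun = ThinLUN()

# Restore that skips zeroed blocks: only real data allocates space.
for block_no, data in [(0, b"boot"), (7, b"files")]:
    lun.write(block_no, data)
print(lun.used_blocks())          # 2

# Restore that also writes zeros for the empty space: every block it
# touches becomes allocated, inflating the thin LUN to full size.
for block_no in range(1, 7):
    lun.write(block_no, b"\x00" * 4)
print(lun.used_blocks())          # 8
```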

I have a file server with over 1TB of empty space. How would you suggest restoring it, given the slow restore time caused by having to write zeroed blocks? Restore the OS drive and then do a file-level restore for the file shares, since that seems to be much faster? Not trying to be a jerk; just trying to get an idea of how to update our disaster recovery procedures and policies.


Re: Restore questions and suggestions

Post by Gostev » Jul 15, 2010 3:10 pm

Brian, this would not be impossible to add; however, we find that since the vSphere release most customers have switched to thin-provisioned disks, so I am not sure enhancing the thick disk restore process makes much sense. Anyhow, let's come back to this discussion once you see the full VM restore enhancements we are adding in v5, as I think they may render this feature unneeded.

Here is the thread discussing some ways to improve the restore speed:
Veeam Restore Speed (1TB in ~ 1 hour)


Re: Restore questions and suggestions

Post by theflakes » Jul 15, 2010 3:47 pm

Thanks

So I can expect restore speeds to increase dramatically with v5?

Also, I did verify that writing the zeroed blocks interfered with the Equallogic thin provisioning. This is disappointing, as when we set up our environment, VMware and our reseller recommended using thin provisioning on the Equallogics and not using thin disks in VMware. I suppose in the future, if I ever had to restore a VM again, I could restore it with thin-provisioned disks instead of the original thick disks without causing an issue with the restored VM, correct? If this is the case, then I have nothing more to complain about. :)


Re: Restore questions and suggestions

Post by Gostev » Jul 15, 2010 5:49 pm

theflakes wrote:So I can expect restore speeds to increase dramatically with v5?
Yes, "dramatically" is a good word to describe the increase: v5 provides an option to restore your VM instantly. That is as much as I can say for now, but stay tuned for updates from Veeam in the coming weeks :) possibly even a video of this functionality :wink:
theflakes wrote:Also I did verify that writing the zeroed blocks did mess with the Equallogic thin provisioning. This is disappointing as when we set our environment up VMware and our reseller recommended using thin provisioning on the Equallogics and not using thin disks in VMware.
Makes perfect sense, if that recommendation was made before the vSphere release.
theflakes wrote:I suppose in the future if I ever had to restore a VM again I could restore it with thin provisioned disks instead of the original thick disks without causing an issue with the restored VM, correct? If this is the case than I have nothing more to complain about. :)
Yes, you can do this today, with the current version (we allow you to pick the disk type during full VM restore).


Re: Restore questions and suggestions

Post by theflakes » Jul 15, 2010 5:54 pm

Thank you again for your excellent help and quick responses. I support a lot of diverse products and systems, and I can say that Veeam runs one of the most productive and responsive forums I've ever used.

Yeah, the recommendations I received were for vSphere. We brought our vSphere system online last September.
