Host-based backup of VMware vSphere VMs.
Shestakov
Veteran
Posts: 7328
Liked: 781 times
Joined: May 21, 2014 11:03 am
Full Name: Nikita Shestakov
Location: Prague
Contact:

Re: Slow Restore Speed - 27MB/s - Tips/Ideas?

Post by Shestakov »

Christian,
Christian33 wrote: Must we create a VM as a backup proxy on the ESX host?
I would give it a try. Thanks!
agrob
Veteran
Posts: 380
Liked: 48 times
Joined: Sep 05, 2011 1:31 pm
Full Name: Andre
Contact:

Re: Slow Restore Speed - 27MB/s - Tips/Ideas?

Post by agrob »

Is there any news on this topic? I've tested several restore configurations. I get the best performance if I create a new datastore and restore to it. Then I get this performance:

Restoring Disk 2 (60.0 GB): 59.6 GB restored at 106 MB/s
Restoring Disk 1 (60.0 GB): 33.6 GB restored at 92 MB/s

If I create a second test restore job with exactly the same settings (hotadd mode, same ESX host, same datastore, etc.), then I get this performance data:

Restoring Disk 1 (60.0 GB): 8.0 GB restored at 12 MB/s (aborted because the performance was that bad)

I talked to a colleague from another company and he is facing the same issues.

It would be interesting to figure out what exactly the problem is, because restore speeds of 10, 20 or 30 MB/s are not really cool ;)
Thanks
agrob
Veteran
Posts: 380
Liked: 48 times
Joined: Sep 05, 2011 1:31 pm
Full Name: Andre
Contact:

Re: Slow Restore Speed - 27MB/s - Tips/Ideas?

Post by agrob »

Just a short note after some more testing.
It has to do (at least in my environment, but I think it is a general problem) with datastores that are located on a thin-provisioned volume on the storage system (as also mentioned by someone earlier in this thread).
If I create a thick VV on the 3PAR system, I can restore at the same speed as when restoring to the ESXi host's internal disk system, which is around 80 MB/s. That's not very fast, but better than 10-20 MB/s.
If we have a NEW thin-provisioned VV on the 3PAR, the first restore is about the same speed as on the local disks, but only if I have a NEW, empty datastore. If I delete the restored VM from this datastore and restore it again, it slows down to about 11 MB/s. If I instead leave this VM on the datastore and do another restore, it is fast again, because the data is written to an area of the datastore (VV) that was not "allocated" before. If I then delete both VMs and do a new restore to this now "empty" datastore, I again get about 11 MB/s.
On a thick-provisioned volume I can restore, delete, and restore again, always with the same performance of around 80 MB/s.
It may have to do with space reclamation... I'll test that as well.
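For reference, on ESXi 5.5 and later the space reclamation suspected above can be triggered manually with esxcli; this is only a sketch, and the datastore label "RestoreDS01" is an example, not a name from this thread:

```shell
# Run on the ESXi host over SSH (ESXi 5.5+ only).
# Reclaims unused (dead) blocks on the thin-provisioned volume,
# working through the free space 200 VMFS blocks per pass.
# "RestoreDS01" is a placeholder datastore label -- substitute your own.
esxcli storage vmfs unmap -l RestoreDS01 -n 200

# Check that the array actually supports the UNMAP primitive
# ("Delete Status" should read "supported" for the device):
esxcli storage core device vaai status get
```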
chjones
Expert
Posts: 117
Liked: 31 times
Joined: Oct 30, 2012 7:53 pm
Full Name: Chris Jones
Contact:

Re: Slow Restore Speed - 27MB/s - Tips/Ideas?

Post by chjones »

Yes, this was exactly our finding, confirmed via a support case with Veeam that was also opened with VMware. It has to do with calls that need to be made to the ESXi host before each block of the VMDK is written, to check which block to write to next.

We confirmed that running a space reclaim on the datastore before a restore does allow a fast restore, but it needs to be run prior to every subsequent restore. We have multiple 16 TB datastores and the reclaim can take many hours to complete.

Our quick fix for now is to present a new, smaller datastore to restore to, restore fast to that, then Storage vMotion to one of the larger datastores. The Storage vMotion is ridiculously fast!

Or, just use hotadd or network restore mode; they don't suffer from the issue. We have 10 GbE, which is faster than our 8 Gb Fibre Channel storage, so the network is never a bottleneck for us.
agrob
Veteran
Posts: 380
Liked: 48 times
Joined: Sep 05, 2011 1:31 pm
Full Name: Andre
Contact:

Re: Slow Restore Speed - 27MB/s - Tips/Ideas?

Post by agrob »

Thanks chjones. The tests I've done were with hotadd mode, and I have those issues there as well... Anyway, if I create a new, empty, thick-provisioned datastore, I can do three restores in a row (restore, delete, restore, delete, restore) and I always get good performance. If I do the same on a thin-provisioned volume, only the first restore is fast; the others slow down. After a space reclaim on the thin-provisioned volume, the restore is fast again. I think I'll do the same as you: present a new, empty datastore, restore to it, then Storage vMotion. As the restored VMDK files are thick provisioned lazy zeroed, we have to do a "conversion" to thick provisioned eager zeroed anyway, which can be done as part of the move to the production datastore...

Not very nice, but at least we now know where the problem is.

If I understand things correctly, the ESX host does not tell the storage system which blocks are deleted... we have to do this manually... something sales never tells you ;-))))

Anyway, thanks for clarifying this!
dellock6
Veeam Software
Posts: 6137
Liked: 1928 times
Joined: Jul 26, 2009 3:39 pm
Full Name: Luca Dell'Oca
Location: Varese, Italy
Contact:

Re: Slow Restore Speed - 27MB/s - Tips/Ideas?

Post by dellock6 »

Another suggestion I've also posted elsewhere in these forums: you can use NFS storage for restores. It doesn't suffer from the VMDK metadata update issue. From there, again, you can do a Storage vMotion.

Luca
Luca Dell'Oca
Principal EMEA Cloud Architect @ Veeam Software

@dellock6
https://www.virtualtothecore.com/
vExpert 2011 -> 2022
Veeam VMCE #1
Christian33
Influencer
Posts: 10
Liked: never
Joined: Dec 29, 2014 8:01 am
Contact:

Re: Slow Restore Speed - 27MB/s - Tips/Ideas?

Post by Christian33 »

Shestakov wrote: Christian, I would give it a try. Thanks!
Hello everyone,

I have tested the restore to an empty datastore with a local proxy on the ESX host. It is the same issue. We have a 14 MB/s restore rate.

Do you have another idea?
agrob
Veteran
Posts: 380
Liked: 48 times
Joined: Sep 05, 2011 1:31 pm
Full Name: Andre
Contact:

Re: Slow Restore Speed - 27MB/s - Tips/Ideas?

Post by agrob »

Christian33 wrote: I have tested the restore to an empty datastore with a local proxy on the ESX host. It is the same issue. We have a 14 MB/s restore rate. Do you have another idea?
Have you created a new datastore?
Please try this on the empty datastore:
Log in to an ESX host with SSH (e.g. PuTTY)
Browse to the datastore directory (/vmfs/volumes/<Volumename>)
Execute "vmkfstools -y 99" (please be careful when doing this on a non-empty datastore!!)
After that, try the restore again

More info about the command above (works for ESXi 5.1) can be found here:
http://blogs.vmware.com/vsphere/2012/04 ... ction.html
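The steps above as a single SSH session (the volume name is the same placeholder as in the steps; note that on ESXi 5.0 U1/5.1, vmkfstools -y temporarily fills the datastore with a balloon file, so nothing else should need that free space while it runs):

```shell
# SSH session on the ESXi host (ESXi 5.0 U1 / 5.1).
# <Volumename> is a placeholder for your datastore's directory name.
cd /vmfs/volumes/<Volumename>

# Reclaim up to 99% of the free space. Internally this creates a
# temporary balloon file covering that space, issues UNMAP for its
# blocks, and then deletes it -- hence the warning about running it
# on a datastore that is not empty.
vmkfstools -y 99
```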
aaron@ARB
Expert
Posts: 138
Liked: 14 times
Joined: Feb 21, 2014 3:12 am
Full Name: ARBCorporationPtyLtd
Contact:

[MERGED] How to speed up full VM restore?

Post by aaron@ARB »

Veeam,

What is the best way to restore large VMs back into the original VM environment with different names and datastores? I have 2 VMs totalling about 3 TB to restore, but the restore I am doing at the moment is running at 77 MB/s over the 1 Gb NBD link. I understand this is because I have a combination of thin- and thick-provisioned disks, which means the thin-provisioned disks are always going to go via NBD whereas the thick disks will use SAN mode (I have the LUNs directly mapped to the host, as this is how I do the backup). Since the restores run one disk at a time (why is that, anyway? why can't you restore multiple disks at once?), this restore is going to take a very long time. Is there a quicker way to do restores, rather than the simple full VM restore, so that I can at least use the 10 Gb SAN links that I have?

I tried to restore the individual VMDKs, but they want to restore to the local backup proxy/server rather than the VM environment itself. What's the quickest way to re-inject these VMs into the environment?

cheers and thanks :)
aaron@ARB
Expert
Posts: 138
Liked: 14 times
Joined: Feb 21, 2014 3:12 am
Full Name: ARBCorporationPtyLtd
Contact:

Re: How to speed up full VM restore?

Post by aaron@ARB »

Reading the other current(ish) thread about restoring, I tried an Instant VM Recovery and then vMotion (we have the Enterprise Plus license, if it matters). I'm not sure how you actually tell what speed you are getting (the backup server/proxy is a 2008 R2 physical server), but it's about 2-3% of the 10 Gb connection as reported by Task Manager, and Resource Monitor doesn't seem to single out the 10 Gb connection, so I can't get an accurate figure on the throughput. Suffice to say that I normally see Task Manager reporting about 30% utilisation when backing up, which roughly equates to the 200 MB/s reported by Veeam, so one way or another I suspect the restore is not running that fast.

The disk storage I am writing back to is a new Compellent SC4020 with write-intensive SSDs, and the backup server is a Dell R720xd with 32 GB RAM and an MD1220 array filled with 24 spindles configured in RAID 10.

I just installed a virtual proxy on a VM within the environment. I see about 90 MB/s when using this method, but it is using the NBD transport, which is a single 1 Gb connection as opposed to the 10 Gb SAN backbone; I am guessing that is what would be expected?

Backing up data quickly from VMware seems to be the easy part; restoring it back at the same speed seems to be a big headache for a lot of people. You would think VMware would spend a little more time making it simple in both directions: if you ever had to restore data in a real DR/emergency situation, you would not have the time to mess about with different restore methods and tinkering to find the quickest one.
aaron@ARB
Expert
Posts: 138
Liked: 14 times
Joined: Feb 21, 2014 3:12 am
Full Name: ARBCorporationPtyLtd
Contact:

Re: Slow Restore Speed - 27MB/s - Tips/Ideas?

Post by aaron@ARB »

I started a new thread when really I should have used this one (I didn't see it prior to posting and I can't see how to delete my post), but I am having the same sort of issue (I think). I can get 90 MB/s from my 1 Gb NBD restore connection, but I am trying to get it up to the 200-odd MB/s I get when backing up. I have tried hotadd (it used NBD anyway, it would seem), and Instant VM Recovery then migrate, which is also very slow. My main VM proxy/backup server is a well-specced physical machine (8-core Xeon, 32 GB RAM, MD1220 with 24 spindles in RAID 10, etc.) with a 10 Gb iSCSI (fibre) connection back to the dedicated SAN network. The Compellent supports hardware acceleration, if that helps/matters. I have also tried the restore with VAAI disabled to see if that helped (it did not).

The next thing will be to create another 10 Gb LAN with additional VMkernel ports and try to speed it up that way if I am forced to use NBD, but it would be good to be able to load the restored data back into the SAN (Compellent SC4020 with write-intensive SSDs) via SAN mode at a speed that is actually good. At the moment I am getting 72 MB/s, but that is over the 1 Gb front-end interface. If we had a real disaster, I dare say I would not want to restore the data this way. At least this is what DR tests are for.

What is the generally agreed quickest way to restore data back into a VM environment (with different names, i.e. with _restored appended)?