The fix does work for Azure to on-prem; we get 200 MB/s+. However, restore from Azure to Azure doesn't go past 18 MB/s. According to the engineer, this is what his lab also shows and what to expect. Is this really the case? That would make it unusable, as the restore speeds are much too slow, and the throughput gets divided across each active restore session.
The R&D team is already researching the case. If you don't mind, please keep the case open for now.
I will keep you updated on the results of our findings.
Since both restore scenarios use the same data movers, I would primarily suspect some throttling on the Azure side. For example, AFAIK certain instance types come with disk I/O limitations, and there may be similar limits around Azure networking. However, a Veeam bug impacting the data transfer chain should not be excluded from the possibilities either.
I have conducted speed tests. An Azure proxy on the A1 profile gets no more than 170 Mb/s with the azcopy benchmark, while on an F1 profile we get 3000 Mb/s.
So the Azure VM profile does have a significant impact on upload and download speed to Azure Blob, but the profile upgrade did not make Veeam restore to Azure Blob any faster. Even funnier: while conducting a restore to a storage account with Veeam, we ran the azcopy test in parallel against the same storage account. Veeam got 11 MB/s, and azcopy got 3000 Mb/s. Yes, that is MB vs. Mb, but azcopy was still about 30 times faster to the same target IP and storage account. Support is currently investigating my findings.
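For reference, here is roughly how such a parallel azcopy test can be scripted. This is a minimal sketch under assumptions, not the exact invocation we used: the storage account, container, SAS token, and the file count/size parameters are all placeholders to substitute with your own.

```python
# Minimal sketch: drive an azcopy benchmark against a blob container.
# The URL and SAS token below are placeholders, not real values.
import subprocess

target = "https://mystorageaccount.blob.core.windows.net/bench?<SAS>"

# 'azcopy bench' uploads auto-generated test data to the target and
# reports the achieved throughput; running it while a Veeam restore is
# active gives the side-by-side comparison described above.
subprocess.run(
    ["azcopy", "bench", target, "--file-count", "32", "--size-per-file", "256M"],
    check=True,
)
```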
======================================================
Veeam ProPartner, Service Provider and a proud Veeam Legend
So it looks like we have a bad default value for the number of threads per restored disk. QC were able to get a 200 MB/s restore speed with just a registry key change. They will play a bit more with this, and let support know good combinations of Azure proxy instance type and this registry value.
Here's what our QC suggests based on the initial testing:
1. Create the AzureDiskProcessingThreadCount = 64 (DWORD) registry value on the backup server (default is 8 threads).
2. Use the Standard_F4s_v2 proxy VM size, as the default A2 is throughput-limited. Standard_F4s should also work well.
Using a higher thread count or more powerful proxy VMs did not yield additional performance benefits in their testing.
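If you'd rather script step 1 than click through regedit, below is a minimal sketch using Python's standard winreg module; run it elevated on the backup server with a 64-bit Python so the value lands in the same key Veeam reads (the key path and value name are exactly as stated above).

```python
# Minimal sketch: set the thread-count registry value on the VBR server.
# Run elevated; use 64-bit Python to avoid WOW64 registry redirection.
import winreg

KEY_PATH = r"SOFTWARE\Veeam\Veeam Backup and Replication"

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0,
                    winreg.KEY_SET_VALUE) as key:
    # DWORD, decimal 64 (the default is 8 threads per restored disk)
    winreg.SetValueEx(key, "AzureDiskProcessingThreadCount", 0,
                      winreg.REG_DWORD, 64)
```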
I've set:
HKEY_LOCAL_MACHINE\SOFTWARE\Veeam\Veeam Backup and Replication\AzureDiskProcessingThreadCount = 64 (DWORD / Decimal) on the VBR server, then started a new restore using an F4 VM size. It did not change the thread count and did not improve the speed. Do we need a patch, or did we miss something?
======================================================
Veeam ProPartner, Service Provider and a proud Veeam Legend
Yeah, that's how I wrote it; sorry for the confusion, but we did use an Azure proxy with the F4 VM size. When we run azcopy on the same Veeam proxy to the same Azure Blob in parallel with a Veeam restore, we get 3000 Mb/s. We do see the 64-thread setting coming in at the Azure VM, but the number of TCP connections initiated by VeeamAgent.exe stays around 10. Support is busy analyzing the logs again.
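For anyone who wants to verify the same thing on their proxy, below is a minimal sketch that counts the established TCP connections opened by VeeamAgent.exe during an active restore. It assumes the third-party psutil package (pip install psutil); on psutil older than 6.0, use proc.connections() instead of proc.net_connections().

```python
# Minimal sketch: count established TCP connections from VeeamAgent.exe.
# Run on the Azure proxy while a restore is in progress.
import psutil

total = 0
for proc in psutil.process_iter(["name"]):
    if (proc.info["name"] or "").lower() == "veeamagent.exe":
        try:
            # psutil >= 6.0; older versions call this proc.connections()
            conns = proc.net_connections(kind="tcp")
        except (psutil.AccessDenied, psutil.NoSuchProcess):
            continue  # skip processes we cannot inspect
        total += sum(1 for c in conns if c.status == psutil.CONN_ESTABLISHED)

print(f"Established TCP connections from VeeamAgent.exe: {total}")
```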
======================================================
Veeam ProPartner, Service Provider and a proud Veeam Legend
I would not worry much about the azcopy numbers, as they are not entirely fair: azcopy counts zeroed data blocks in its numbers as if they were transferred, without actually transferring them. Azcopy also uses a much larger block size, making it an apples-to-oranges comparison in any case.
But you should definitely be getting close to 200 MB/s VM restore speed from blob storage backups now, as this is what our QC is seeing with the settings above, using the very same Azure. So let's see if support is able to find an issue with your config. By the way, are you sure you're restoring from the restore point in blob storage, and not from your local repository?
We got it: you must apply the hotfix in order for the AzureDiskProcessingThreadCount registry setting to work. We are now getting 150 MB/s, which is much more acceptable than the 30 MB/s we had before.
I suggested to support that they try to auto-tune the thread count to get the best restore speed. I'm testing myself to find the optimum number.
See the prerequisites for fast Azure restore below (a quick sanity-check sketch follows the list):
- Apply the hotfix to the Azure proxy (request it from support)
- Use an F-series or higher VM type with at least 4 cores
- Apply the registry key on the VBR server with a thread count of 64 decimal
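And here is a minimal verification sketch, assuming Python is available on the machines in question: it reads the thread count back from the VBR server's registry and asks the Azure Instance Metadata Service for the proxy's VM size (run the IMDS part on the proxy itself).

```python
# Minimal sketch: verify the registry value and the proxy VM size.
import winreg
import urllib.request

# 1) Read the thread count back (run this on the VBR server).
with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE,
                    r"SOFTWARE\Veeam\Veeam Backup and Replication") as key:
    threads, _ = winreg.QueryValueEx(key, "AzureDiskProcessingThreadCount")
print(f"AzureDiskProcessingThreadCount = {threads}")  # expect 64

# 2) Query the Azure Instance Metadata Service for the VM size
#    (run this part on the Azure proxy itself).
req = urllib.request.Request(
    "http://169.254.169.254/metadata/instance/compute/vmSize"
    "?api-version=2021-02-01&format=text",
    headers={"Metadata": "true"},
)
print("Proxy VM size:", urllib.request.urlopen(req).read().decode())
```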
======================================================
Veeam ProPartner, Service Provider and a proud Veeam Legend
Great to hear! I think we reached 200 MB/s because it was late-night testing, with Azure less busy.
Did you mean the same hotfix from earlier in this thread?
We didn't see much further improvement beyond 64 threads, so we will likely just make it the new default value. There doesn't seem to be any reason to auto-tune it, at least based on our results.
I just implemented a solution at a customer with a VBR server in Azure that connects to Azure Blob storage, which is filled by the on-premises VBR (a disaster-recovery-in-Azure solution).
We just tested the first direct restore to Azure from the Azure VBR server (not using an Azure proxy). It worked very nicely, but the performance from the Blob storage was around 12 MB/s.
Is this a realistic speed, or am I missing some performance tuning options?
1. You do want to use an Azure proxy if you want restore performance.
2. Veeam Backup & Replication 10a has all of the above-mentioned fixes included. This release is currently in the Early Availability stage; you can open a support case and request the bits.
Yes, as my blog and the video showing restore scenarios demonstrate, the quickest restore scenario was using a VBR server running in Microsoft Azure.
I will update with the hotfix and 10a patch results to show how this is now greatly improved with the Azure proxy and the Veeam Backup & Replication server running in Azure.
Regards,
Michael Cade
Global Technologist
Veeam Software
Email: Michael.Cade@Veeam.com
Twitter: @MichaelCade1
Short summary of the setup: on-prem server - storage account in Azure - 1 Gbps internet connection.
Copy to the performance tier:
Now it is 60 MB/s in Veeam and 300-700 Mb/s on the network card; before, with just the fix, it was around 26 MB/s, just like it was initially.
Restore directly from the storage account (performance tier in maintenance mode):
Now it is 110 MB/s in Veeam and 1 Gb/s on the network card; before, with just the fix, it was 45 MB/s in Veeam, and initially it was 10-15 MB/s.
I'm happy with the performance at this customer now, and also more confident about using object storage at other customers in the future.
That is a super impressive update on restore performance; granted, the machine is small, but in some places this is halving the restore time compared to v10!
The best restore speed I have seen is 18 MB/s in my lab network.
I will update again when I have the Azure results.
Regards,
Michael Cade
Global Technologist
Veeam Software
Email: Michael.Cade@Veeam.com
Twitter: @MichaelCade1
For those who are interested, I have completed the testing in comparison to the v10 release.
I also went through this Azure-based VBR and proxy testing here - https://www.youtube.com/watch?v=nBxSrA0zxXg - whilst also showing the deployment through the Microsoft Azure Marketplace.
Regards,
Michael Cade
Global Technologist
Veeam Software
Email: Michael.Cade@Veeam.com
Twitter: @MichaelCade1