[HKEY_LOCAL_MACHINE\SOFTWARE\Veeam\Veeam Backup and Replication]
"VddkConfigPath"="c:\temp\vddkkeys.txt"
[HKEY_LOCAL_MACHINE\SOFTWARE\Veeam\Veeam Backup and Replication]
"VddkExtraFlags"=dword:00000001
No, these are new settings; 12.1.2 already creates them by default. You need to REMOVE them to roll back to the pre-12.1.2 NBD configuration and see if that helps to restore performance.
As I understand the discussion, the suspicion we're trying to check here is that these new default settings in 12.1.2 may reduce performance in certain environments for unknown reasons, instead of doubling NBD performance as we saw with all beta customers.
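If anyone wants to script that rollback, here's a rough Python sketch of removing the two values (purely illustrative and untested against your setup - it assumes the default key path shown above on a 64-bit install, an elevated session, and that you export the key first):

import winreg

# Registry key from the snippet above (assumed default Veeam B&R install)
KEY_PATH = r"SOFTWARE\Veeam\Veeam Backup and Replication"

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0,
                    winreg.KEY_SET_VALUE | winreg.KEY_WOW64_64KEY) as key:
    for name in ("VddkConfigPath", "VddkExtraFlags"):
        try:
            winreg.DeleteValue(key, name)  # remove to fall back to pre-12.1.2 NBD behavior
            print(f"Removed {name}")
        except FileNotFoundError:
            print(f"{name} not present, nothing to remove")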
[HKEY_LOCAL_MACHINE\SOFTWARE\Veeam\Veeam Backup and Replication]
VMwareNBDUnbufferedMode = 0 (REG_DWORD)
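For completeness, creating that value could look roughly like this (same assumptions as the sketch above - default key path, elevated session; just an illustration, not an official tool):

import winreg

KEY_PATH = r"SOFTWARE\Veeam\Veeam Backup and Replication"

# Create/overwrite the DWORD described above: VMwareNBDUnbufferedMode = 0
with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0,
                    winreg.KEY_SET_VALUE | winreg.KEY_WOW64_64KEY) as key:
    winreg.SetValueEx(key, "VMwareNBDUnbufferedMode", 0, winreg.REG_DWORD, 0)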
We are still investigating why the 12.1.2 settings are slower for some customers. Rolling them back for everyone would make things slower for most customers, which would be bad.
We are having the same issue - Case #07270614. Replication performance absolutely cratered after updating. We have tried everything escalation offered, and they are now indicating the need for a packet capture to see whether some network equipment is dropping packets - something about a known issue where Palo Altos using signature-based blocking see Veeam traffic after the update as changed enough to treat it as suspicious.
I have my doubts about this, because the VDDK test showed exactly what we'd hope to see on reads/writes, and if I manually remove a replica and then run the affected jobs to create a completely new replica, it runs at normal, expected speeds. If the same machine is processed by the same job as a normal differential replica, then the 2 to 3 hours of processing becomes 12 to 20 hours.
We created new replica jobs, created entirely new full replicas, cleared Veeam's local cache data and rebuilt it, etc. No change.
Escalation has been working hard on this, but it seems like they have exhausted their options and their tools are not revealing the cause. Are R&D and app dev support engaged on these issues?
Yes, but we're unable to reproduce any replication performance degradation between 12.1.2 and previous versions in any of our labs; in fact, we see the opposite... Besides, we have very few cases on this issue relative to the number of 12.1.2 downloads. So all signs point to this being something environment-specific - and these sorts of issues always take a while to understand, requiring lots of painful/annoying troubleshooting directly in the affected environments.
Same problem here. We finally did the v11 -> v12 upgrade; not sure if old replication tasks from v11 could be the cause. Backups are still running exactly the same, but replications are very slow - about 1/4 of the previous speed.
Just an FYI - we have been seeing the same issue since upgrading yesterday. We have 2 separate companies - BOTH systems appear impacted (replications take 2.5x as long). I'll do a bit more digging and try recreating the replication jobs to compare results, but the update is the only change I'm aware of. Replication is from VMware ESXi 7 on HPE Primera storage, using a fibre-attached proxy (Direct SAN) to a destination ESXi 8.0.2, build 2238047.
May I ask you to reach out to our support team as well so that they can check logs and find the "bottleneck"? Also, please share a support case ID over here for our reference.
Bejaminlee wrote: ↑Jun 18, 2024 7:28 pm
We have tried everything escalation offered, and they are now indicating the need for a packet capture to see whether some network equipment is dropping packets - something about a known issue where Palo Altos using signature-based blocking see Veeam traffic after the update as changed enough to treat it as suspicious.
I've seen signature-based firewalls cause weird issues with both replicas and offsite backups before. They're often frustrating to track down as well, because they don't always trigger alerts on the firewalls.
I'd be curious what firewall people are using in these cases.
Gostev wrote: ↑Jun 18, 2024 12:19 pm
No, these are new settings; 12.1.2 already creates them by default. You need to REMOVE them to roll back to the pre-12.1.2 NBD configuration and see if that helps to restore performance.
As I understand the discussion, the suspicion we're trying to check here is that these new default settings in 12.1.2 may reduce performance in certain environments for unknown reasons, instead of doubling NBD performance as we saw with all beta customers.
Hi,
I was searching for those settings and can only find the INI file with the described content, but no registry keys like those mentioned in nicolas.pro's post.
I will now try to implement the settings as shown by HannesK.
The config file/registry changes worked to restore our previous performance (woo!).
Will we need to re-enable these changes after future Veeam updates - assuming the updates don't fix the issue? Or will these be persistent through upgrades?
This only needs to be done on the VBR server since that's the one doing the replication in NBD mode.
This should cause the config.ini file settings to be ignored.
Change 2:
To be extra sure, you can remove and save the file to a temp location:
C:\Program Files (x86)\Veeam\Backup Transport\x64\vddk_7_0\config.ini
Do not leave it in the folder with a new name suffix, in case some odd code picks it up under a different name. This file is new in V12.1.2 and is not there in 12.1.1.56. All it contains is the two lines controlling the buffer size and count.
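If you want to script "Change 2", here is a minimal Python sketch of moving the file out to a temp location (the source path is the one from this post; the backup destination is just an example I made up - stop the jobs first and treat this as illustrative only):

import shutil
from pathlib import Path

src = Path(r"C:\Program Files (x86)\Veeam\Backup Transport\x64\vddk_7_0\config.ini")
dst = Path(r"C:\Temp\vddk_7_0_config.ini.bak")  # example backup location, not an official path

if src.exists():
    dst.parent.mkdir(parents=True, exist_ok=True)
    shutil.move(str(src), str(dst))  # move it out entirely rather than renaming it in place
    print(f"Moved {src} -> {dst}")
else:
    print("config.ini not found, nothing to move")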
Thanks for the confirmation. We will see how best to approach this because the new settings you disabled really improve NBD backup and restore performance on all ESXi versions, only reducing incremental replication performance on ESXi 8... I hope the devs will find a way to use different settings for different operations in 12.2.
Hello,
We are planning to update our Cloud Connect environment from 12.0 to 12.1.2. Replication via NBD will also be performed here. We use an ESXi 7 environment.
@Gostev, do I understand correctly that the replication problem discussed here only occurs from ESXi 8 onwards?
Case #07297057
Since upgrading to Veeam 12.1.2, our replication jobs from ESXi 7.0.3 to Veeam Cloud Connect 12.1.2 (also ESXi 7.0.3) have been experiencing poor performance.
The source proxy is configured for direct SAN access, while the destination proxy is set to NBD.
Additionally, we have noticed unusually high IOPS on the destination storage (HPE Primera replica LUNs).
Bejaminlee wrote: ↑Jun 21, 2024 12:18 pm
The config file/registry changes worked to restore our previous performance (woo!).
Will we need to re-enable these changes after future Veeam updates - assuming the updates don't fix the issue? Or will these be persistent through upgrades?
This only needs to be done on the VBR server since that's the one doing the replication in NBD mode.
This should cause the config.ini file settings to be ignored.
Change 2:
To be extra sure, you can remove and save the file to a temp location:
C:\Program Files (x86)\Veeam\Backup Transport\x64\vddk_7_0\config.ini
Do not leave it in the folder with a new name suffix, in case some odd code picks it up under a different name. This file is new in V12.1.2 and is not there in 12.1.1.56. All it contains is the two lines controlling the buffer size and count.
That's exactly the way I went on Friday. Since those changes have been active, all replication jobs are running as fast as before the upgrade.
We are running ESXi 7.0.3 with iSCSI storage behind it and NBD configured on the destination proxy.
Hello,
I had the same performance issues on my site. Support's only solution was to set up VMs as proxy servers. This worked for me, because it increased the speed from 5-8 MB/s to 400-500 MB/s. I hope support finds a solution so I can use our hardware proxies for replication again.
(replication from SAN storage to SAN storage, 160 GB/s bandwidth in between)
Best regards
Volker
(vCenter 8.0.2 Build 23504390, 6 x ESXi 7.0.3, Build 23794027)
One Veeam physical host is connected with 2 x 10 Gb/s and the other with 2 x 25 Gb/s.
96 GB RAM and 2 x 8-core Xeon CPUs.
The issue discussed in this topic is with vSphere VM replication only, and only if the target backup proxy uses NBD transport.
@Mods, please clean this thread up at your earliest convenience and remove all off-topic posts, like the earlier mentions of Backup Copy jobs, File Share backup jobs etc., as I see they are starting to cause some real confusion.
hayliz wrote: ↑Jun 24, 2024 5:24 am
@madbana, did you try to delete the config file and change the registry key?
After deleting the config file content and the registry key, the replication speed is back to normal (fast) and IOPS on the destination storage are back to normal (low).
I can confirm that both the registry setting and the config.ini change were needed for us in our Cloud Connect environment. Prior to the change, we were getting below 10 MB/s max on any one disk. After the change, I was seeing rates as high as 70 MB/s.
Also, if anyone is running Linux proxies, the config.ini file is located at /opt/veeam/transport/vddk_7_0/config.ini. I copied the config.ini before making the change on the proxies so that, when a fix is released, I can put it back to its original state.
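For the Linux proxies, a tiny Python sketch of the same backup idea (the path is the one from the post above; the backup filename/location is just an example):

import shutil

# Keep a copy outside the transport folder so the original can be restored once a fix ships
shutil.copy2("/opt/veeam/transport/vddk_7_0/config.ini",
             "/root/config.ini.vddk_7_0.bak")  # example backup location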
I just wanted to chime in and say we are experiencing the same replication issue, but we are on ESXi 7.0.3q and not 8. I have not implemented the workaround yet. Will these changes require a reboot?
I was under the impression that the changes only needed to be made on the backup server? Do I need to make changes on the proxies as well, and if so, which ones?
Just want to +1 this issue. We were seeing the same thing on replication jobs with NBD and the latest version. After making the changes provided here and rebooting the proxies, the speed is back to where it was before (a 10x or more improvement).