- Novice
- Posts: 4
- Liked: never
- Joined: Feb 04, 2025 8:51 am
- Full Name: Guillaume NCT
Poor backup performance compared to the repository's theoretical capacity.
Hello everyone,
I'm trying to identify a problem or check if I might have missed something in my VEEAM server configuration.
- I'm running Veeam Backup & Replication 12.2.0.334 on a Windows Server 2019 VM, hosted on our vSAN cluster.
- The repository disk on the VM is mounted via iSCSI from a Synology NAS, formatted in ReFS 64k on Windows. The NAS storage consists of five 20TB Seagate drives configured in SHR.
- The NAS is connected via 10GbE, but we're investigating a potential faulty SFP module, as the link speed isn't optimal.
- That said, we do get at least 2-3Gbps of bandwidth.
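For reference, both of those details can be sanity-checked from the Windows side with built-in tools; this is only a generic sketch, with U: standing in for the repository volume.

Code: Select all

# Report ReFS volume details; look for "Bytes Per Cluster" = 65536 to confirm the 64K format
fsutil fsinfo refsinfo U:

# Show the iSCSI session(s) to the NAS and whether they are connected
Get-IscsiSession | Format-List TargetNodeAddress, IsConnected, NumberOfConnections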
The issue:
- When transferring files manually from a local folder on the VM to the iSCSI-mounted disk, speeds are quite good, peaking above 400MB/s.
- However, during a backup job, transfer speed seems capped at ~90MB/s, with the bottleneck showing as "Target".
Would anyone be able to suggest where I should look for a potential configuration bottleneck?
Thanks in advance, y'all have a great day.
I am Root.
- Veeam Software
- Posts: 2706
- Liked: 626 times
- Joined: Jun 28, 2016 12:12 pm
Re: Poor backup performance compared to the repository's theoretical capacity.
Hi Guillaume,
First, just a tip: don't use Windows copy to test any sort of data transfer performance. Windows (most OSes, in fact) does a lot of caching and buffering behind the scenes that makes simple file copies look faster than the storage can actually sustain. There's an older Windows blog post I reference for this, and it more or less confirms the same.
I would start with a simple diskspd test: https://www.veeam.com/kb2014
Use the full backup and synthetic backup tests, though I recommend testing with a larger file size (50 GiB minimum in my experience, but as close to the backup file size as you can manage is best for accurate results).
The main idea is to establish a baseline first and see what proper testing shows for performance.
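To give an idea of what such a run can look like (the KB above documents the exact command lines; this is only a generic sketch, with the U:\ path, file size and duration as placeholders, and -Sh used so OS/hardware caching doesn't skew the result):

Code: Select all

# Sequential-write test, roughly approximating an active full backup (100% writes, 512 KB blocks, 10 minutes)
diskspd.exe -c50G -b512K -w100 -Sh -d600 -t1 -o8 -L U:\testfile.dat

# Mixed random read/write test, roughly approximating a synthetic full (50% writes, 4 threads)
diskspd.exe -c50G -b512K -w50 -r -Sh -d600 -t4 -o8 -L U:\testfile.dat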
Similarly, it would be best to open a Support case and let Support review the situation. They may be able to spot interruptions or other potentially related items in the logs, or at least help rule out any Veeam misconfiguration which may be at play. (Though I'm doubtful based on the description that this is on the Veeam side). Remember to include logs for Support to review. (Use the 1st radio button and select a single job targeting this repository affected by the behavior)
Please share the case number once created. Thanks!
David Domask | Product Management: Principal Analyst
- Veeam Legend
- Posts: 522
- Liked: 145 times
- Joined: Apr 22, 2022 12:14 pm
- Full Name: Danny de Heer
Re: Poor backup performance compared to the repository's theoretical capacity.
Can you check the load of the VBR server while running a backup? I'm guessing you have mounted the Synology LUN on the VBR server itself and are using that as the repository.
So the server is being used as a VBR job manager, Backup proxy, and backup Repository.
Can you also share the compute specs of this VM?
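As a generic illustration (not the poster's actual procedure), load during a job can be sampled with the built-in performance counters; U: stands in for the repository volume here.

Code: Select all

# Sample CPU, free memory and repository write latency every 5 seconds for one minute while the backup runs
Get-Counter -Counter '\Processor(_Total)\% Processor Time', '\Memory\Available MBytes', '\LogicalDisk(U:)\Avg. Disk sec/Write' -SampleInterval 5 -MaxSamples 12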
VMCE / Veeam Legend 2*
- VP, Product Management
- Posts: 7235
- Liked: 1551 times
- Joined: May 04, 2011 8:36 am
- Full Name: Andreas Neufert
- Location: Germany
Re: Poor backup performance compared to the repository's theoretical capacity.
Also check https://www.veeam.com/kb1999 for antivirus settings, and deactivate the antivirus's network scanning as well.
- Novice
- Posts: 4
- Liked: never
- Joined: Feb 04, 2025 8:51 am
- Full Name: Guillaume NCT
Re: Poor backup performance compared to the repository's theoretical capacity.
Hello everyone, thank you for your feedback, and please excuse me for only getting back to you now; other priorities forced me to neglect this topic a bit.
david.domask wrote: ↑Feb 04, 2025 10:57 am First, just a tip don't use Windows copy to test any sort of data transfer performance.
I primarily use iperf3 to test the speed between the VEEAM VM and the NAS; I mentioned this transfer example to add another oddity to the list. However, I'll definitely note diskspd to see what it shows from a system perspective, thank you. I also have a test job with just a copy of 2 test VMs to test under real conditions when I have doubts.
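(For context, a typical iperf3 baseline looks roughly like the sketch below; the host name is just a placeholder, not the actual setup here.)

Code: Select all

# On the NAS (or any endpoint on the storage network), start the listener
iperf3 -s

# On the Veeam VM, push 4 parallel streams toward the NAS for 30 seconds
iperf3 -c nas.example.local -P 4 -t 30

# A healthy 10GbE path should report roughly 9 Gbit/s aggregate; 2-3 Gbit/s is consistent with the suspected SFP issue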
david.domask wrote: ↑Feb 04, 2025 10:57 am I would start with a simple diskspd test: https://www.veeam.com/kb2014
Use the full backup and Synthetic backup tests, though I recommend testing with a larger file size (50 GiB minimum in my experience, but as close to the backup file size as you can manage is best for accurate results)
Here's an initial result. I'm surprised that diskspd measures so little.
david.domask wrote: ↑Feb 04, 2025 10:57 am Main idea is to identify a baseline first and see what other proper testing shows for performance.
Code: Select all
Results for timespan 1:
*******************************************************************************
actual test time: 240.00s
thread count: 4
CPU | Usage | User | Kernel | Idle
----------------------------------------
0| 6.11%| 2.67%| 3.44%| 93.89%
1| 9.54%| 5.82%| 3.72%| 90.46%
2| 6.74%| 3.22%| 3.52%| 93.26%
3| 4.93%| 2.42%| 2.51%| 95.07%
4| 5.44%| 3.36%| 2.08%| 94.56%
5| 6.58%| 2.48%| 4.10%| 93.42%
6| 5.57%| 2.55%| 3.02%| 94.43%
7| 7.51%| 3.46%| 4.04%| 92.49%
----------------------------------------
avg.| 6.55%| 3.25%| 3.30%| 93.45%
Total IO
thread | bytes | I/Os | MiB/s | I/O per s | AvgLat | LatStdDev | file
-----------------------------------------------------------------------------------------------------
0 | 550191104 | 67162 | 2.19 | 279.84 | 14.295 | 31.748 | U:\testfile.dat (50GiB)
1 | 543899648 | 66394 | 2.16 | 276.64 | 14.459 | 31.182 | U:\testfile.dat (50GiB)
2 | 552312832 | 67421 | 2.19 | 280.92 | 14.239 | 31.031 | U:\testfile.dat (50GiB)
3 | 556072960 | 67880 | 2.21 | 282.83 | 14.143 | 30.967 | U:\testfile.dat (50GiB)
-----------------------------------------------------------------------------------------------------
total: 2202476544 | 268857 | 8.75 | 1120.22 | 14.283 | 31.233
Read IO
thread | bytes | I/Os | MiB/s | I/O per s | AvgLat | LatStdDev | file
-----------------------------------------------------------------------------------------------------
0 | 412377088 | 50339 | 1.64 | 209.74 | 19.065 | 35.411 | U:\testfile.dat (50GiB)
1 | 408944640 | 49920 | 1.62 | 208.00 | 19.222 | 34.666 | U:\testfile.dat (50GiB)
2 | 414064640 | 50545 | 1.65 | 210.60 | 18.985 | 34.561 | U:\testfile.dat (50GiB)
3 | 418168832 | 51046 | 1.66 | 212.69 | 18.799 | 34.464 | U:\testfile.dat (50GiB)
-----------------------------------------------------------------------------------------------------
total: 1653555200 | 201850 | 6.57 | 841.03 | 19.016 | 34.777
Write IO
thread | bytes | I/Os | MiB/s | I/O per s | AvgLat | LatStdDev | file
-----------------------------------------------------------------------------------------------------
0 | 137814016 | 16823 | 0.55 | 70.09 | 0.024 | 0.011 | U:\testfile.dat (50GiB)
1 | 134955008 | 16474 | 0.54 | 68.64 | 0.024 | 0.015 | U:\testfile.dat (50GiB)
2 | 138248192 | 16876 | 0.55 | 70.32 | 0.026 | 0.059 | U:\testfile.dat (50GiB)
3 | 137904128 | 16834 | 0.55 | 70.14 | 0.026 | 0.053 | U:\testfile.dat (50GiB)
-----------------------------------------------------------------------------------------------------
total: 548921344 | 67007 | 2.18 | 279.19 | 0.025 | 0.041
Total latency distribution:
%-ile | Read (ms) | Write (ms) | Total (ms)
----------------------------------------------
min | 0.005 | 0.006 | 0.005
25th | 7.270 | 0.017 | 0.043
50th | 11.304 | 0.023 | 8.636
75th | 18.906 | 0.030 | 15.548
90th | 34.617 | 0.039 | 28.556
95th | 55.124 | 0.045 | 45.351
99th | 137.893 | 0.058 | 120.186
3-nines | 524.720 | 0.080 | 481.134
4-nines | 683.269 | 0.512 | 671.659
5-nines | 829.437 | 6.800 | 829.437
6-nines | 858.281 | 6.800 | 858.281
7-nines | 858.281 | 6.800 | 858.281
8-nines | 858.281 | 6.800 | 858.281
9-nines | 858.281 | 6.800 | 858.281
max | 858.281 | 6.800 | 858.281
"Processing rate 202MB/s" (btw never fully understood that metric, is that just an average ?)
And timeline "throughput" shows read speeds up to 260MB/s and transfert speed up to 96MB/s
I will eventually, but I want to try here first in case someone thinks of something very dumb that I could have missed.
david.domask wrote: ↑Feb 04, 2025 10:57 am Similarly, it would be best to open a Support case and let Support review the situation. They may be able to spot interruptions or other potentially related items in the logs, or at least help rule out any Veeam misconfiguration which may be at play. (Though I'm doubtful based on the description that this is on the Veeam side). Remember to include logs for Support to review. (Use the 1st radio button and select a single job targeting this repository affected by the behavior)
---
Yes indeed, the server handles all the roles, but based on my research it should be within the specifications for what we're asking of it. Plus, as already mentioned, there was a time when it did have the expected performance:
mjr.epicfail wrote: ↑Feb 04, 2025 12:39 pm Can you check the load of the VBR server while running a backup? I'm guessing you have mounted the Synology LUN on the VBR server itself and are using that as the repository.
So the server is being used as a VBR job manager, Backup proxy, and backup Repository.
Can you also share the compute specs of this VM?
8 cores (Xeon Gold 6248)
16 GB of RAM
System storage (vSAN) on Gen 3 NVMe drives
Loads never hit the ceiling.
---
And lastly, no antivirus on this server.
Thank you all very much for your time and ideas.
Have a great day
I am Root.
- Veeam Software
- Posts: 2706
- Liked: 626 times
- Joined: Jun 28, 2016 12:12 pm
Re: Poor backup performance compared to the repository's theoretical capacity.
Hi guitom,
No worries, everyone has tasks to attend to, so delays are expected.
As for the results: indeed, it looks like the actual read/write performance is quite bad. Since your test includes both reads and writes, I'm guessing you used the Synthetic Full test. Notice the read latency as the test proceeds -- you hit almost 860 ms per read at some point -- so I don't think the network is the issue here; it's the storage behind the iSCSI mount, which is why iperf looked fine but diskspd did not.
I would review the diskspd results with the storage vendor and point out how the read latency seems to plateau and never drop. I think that's likely why you're seeing bad performance on the Veeam operations, and once that is solved you'll likely see better speeds.
You may also want to check the System/Application event logs on the server the iSCSI mount is connected to; maybe the connection is silently dropping or unstable.
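As a generic illustration, the iSCSI initiator events in the System log can be pulled from PowerShell like this (iScsiPrt is the standard Windows iSCSI initiator event source):

Code: Select all

# List the most recent iSCSI initiator events (connection drops, retries, timeouts)
Get-WinEvent -LogName System | Where-Object { $_.ProviderName -eq 'iScsiPrt' } | Select-Object -First 20 TimeCreated, Id, LevelDisplayName, Message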
David Domask | Product Management: Principal Analyst
- Novice
- Posts: 4
- Liked: never
- Joined: Feb 04, 2025 8:51 am
- Full Name: Guillaume NCT
Re: Poor backup performance compared to the repository's theoretical capacity.
Thank you for this feedback and initial analysis; the lead seems very promising. Indeed, having looked at the problem from a few other angles in the meantime, my attention is also turning toward the target Synology. I will investigate where this latency problem might be coming from. Thank you again, and I'll be sure to post the solution here if I eventually figure it out.
I am Root.