Hello everyone,
I already have a case open, but we can't find a solution at the moment.
Scenario
We have a physical backup server with Windows Server 2025, 2 x Intel Xeon Gold 5515+ CPUs, 128 GB RAM, and 12 x NVMe SSDs in RAID 6 as the primary backup repository. The server has 4 x 25 GbE uplinks (2 x SAN, 2 x Mgmt), the MPIO drivers for the SAN are installed, and multipathing is working perfectly. The SAN consists of 2 x HPE Alletra 6010 NVMe all-flash arrays with synchronous replication.
Our production environment is a VMware environment (vSphere 8 Standard) with one vCenter and four ESXi hosts. Both VMware and the host firmware are up to date.
Backup performance is very good: with an Active Full in DirectSAN transport mode we achieve effective data rates of up to 30 Gbit/s (Proxy & Repository maximum concurrent tasks: …).

Now for the problem: when restoring, regardless of the transport mode, we only achieve very poor performance: approximately 100 MB/s in DirectSAN, about 150 MB/s in NBD, and about 150 MB/s in HotAdd mode.
I ran various tests together with Veeam Support, and with vixdisklib-rs.exe we were able to confirm these values when writing with the tool's default block size.
I then ran further tests with vixdisklib-rs.exe using different block sizes in DirectSAN transport mode and got the following results (a quick sanity check of the averages follows the list):
Default block size: Total statistics: processed 10,296 MiB in 90.408 s (113.884 MiB/s on average)
Block size 64 KiB: Total statistics: processed 1,833.1875 MiB in 9.614 s (190.679 MiB/s on average)
Block size 128 KiB: Total statistics: processed 1,905.875 MiB in 9.100 s (209.436 MiB/s on average)
Block size 256 KiB: Total statistics: processed 1,402.25 MiB in 4.414 s (317.652 MiB/s on average)
Block size 512 KiB: Total statistics: processed 1,487 MiB in 3.130 s (475.148 MiB/s on average)
Block size 1,024 KiB: Total statistics: processed 5,894 MiB in 11.300 s (521.574 MiB/s on average)
Block size 2,048 KiB: Total statistics: processed 5,816 MiB in 11.658 s (498.872 MiB/s on average)
Block size 4,096 KiB: Total statistics: processed 4,528 MiB in 8.610 s (525.916 MiB/s on average)
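For what it's worth, the averages line up with simple throughput arithmetic (processed MiB divided by elapsed seconds). A minimal check in Python, with the (block size, MiB, seconds) values copied from the list above; nothing here is measured, it only recomputes the averages:

# Recompute average throughput = processed MiB / elapsed seconds.
# Small differences vs. the quoted averages come from rounding in the quoted values.
runs = [
    ("default",  10296.0,   90.408),
    ("64 KiB",   1833.1875,  9.614),
    ("128 KiB",  1905.875,   9.100),
    ("256 KiB",  1402.25,    4.414),
    ("512 KiB",  1487.0,     3.130),
    ("1024 KiB", 5894.0,    11.300),
    ("2048 KiB", 5816.0,    11.658),
    ("4096 KiB", 4528.0,     8.610),
]
for block_size, mib, seconds in runs:
    print(f"{block_size:>8}: {mib / seconds:8.3f} MiB/s")

So the write path to the SAN scales from roughly 114 MiB/s with the default block size up to around 500 MiB/s with 512 KiB and larger blocks.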
I then ran another test: I created a new LUN and mounted it on the backup server under Windows. When copying a file of approximately 80 GB from the backup repository to this LUN, I measured real throughput of approximately 800 MB/s (I checked the metrics on the storage array while the copy was running). That is roughly the performance I would also expect during a restore.
Even when copying a VM within vCenter, I see write performance of approximately 600-700 MB/s on the storage.
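If anyone wants to reproduce that raw copy measurement outside of Explorer, here is a minimal sketch of a chunked copy with throughput reporting. The paths and the 4 MiB chunk size are placeholders, not values from our environment:

import time

# Chunked file copy with throughput reporting (illustrative only).
# Use a source file much larger than RAM (here ~80 GB) so the Windows cache does not skew the result.
SRC = r"D:\Backup\Test\testfile.bin"   # file on the backup repository (hypothetical path)
DST = r"E:\restore-test\testfile.bin"  # mount point of the new test LUN (hypothetical path)
CHUNK = 4 * 1024 * 1024                # 4 MiB chunks, in the block-size range that performed well above

copied = 0
start = time.monotonic()
with open(SRC, "rb") as src, open(DST, "wb", buffering=0) as dst:
    while True:
        buf = src.read(CHUNK)
        if not buf:
            break
        dst.write(buf)
        copied += len(buf)
elapsed = time.monotonic() - start
print(f"copied {copied / 2**20:.0f} MiB in {elapsed:.1f} s "
      f"({copied / 2**20 / elapsed:.1f} MiB/s)")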
It's definitely not the backup server's local backup repository; that's more than fast enough:
Read IO
thread | bytes | I/Os | MiB/s | I/O per s | file
------------------------------------------------------------------------------
0 | 2224753213440 | 4243380 | 3536.15 | 7072.30 | D:\Backup\Test\xxx.vbk (18.40GiB)
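Side note: the read line above also tells you the average I/O size of that local test, which is easy to back out from the bytes and I/O count:

# Average I/O size of the local repository read test, derived from the line above.
bytes_read = 2224753213440
io_count = 4243380
print(f"average I/O size: {bytes_read / io_count / 1024:.0f} KiB")  # -> 512 KiB

That works out to exactly 512 KiB per I/O, i.e. the same block-size range where vixdisklib-rs.exe gets fast in the tests above, whereas the slow restore figures were measured with the much smaller default block size.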
Does anyone have any ideas on how we can speed up restores with Veeam? We specifically purchased a high-performance all-flash backup server precisely to get this kind of throughput. It delivers on backups, but not on restores.
We also want to work a lot with SureBackup/Virtual Labs.
Best regards,
Mirko