Host-based backup of VMware vSphere VMs.
MirkoKiel123
Novice
Posts: 4
Liked: never
Joined: Jun 18, 2025 7:11 am
Full Name: Mirko Kobylanski
Contact:

Poor restore performance when restoring in AllFlash environment

Post by MirkoKiel123 »

Case #07702759

Hello everyone,

I already have a case open, but we can't find a solution at the moment.

Scenario
We have a physical backup server with Windows Server 2025, 2 x Intel Xeon Gold 5515+ CPUs, 128 GB RAM, and 12 x NVMe SSDs in RAID 6 as the primary backup repository. The server has 4 x 25 GbE uplinks (2 x SAN, 2 x Mgmt), MPIO drivers for the SAN are installed, and multipathing is working perfectly. The SAN consists of 2 x HPE Alletra 6010 NVMe all-flash arrays with synchronous replication.

Our production environment is a VMware environment (vSphere 8 Standard) with one vCenter Server and four ESXi hosts. Both VMware and the host firmware are up to date.

Backup performance is very good, and with an Active Full in DirectSAN transport mode, we achieve effective data rates of up to 30 Gbit/s (Proxy & Repository: maximum concurrent tasks: 8).

Now for the problem: when restoring, regardless of the transport mode, we only achieve very poor performance: approximately 100 MB/s in DirectSAN, 150 MB/s in NBD, and approximately 150 MB/s in HotAdd mode.

I conducted various tests with Veeam Support, and using vixdisklib-rs.exe we were able to validate these values when writing with its default block size.

I then ran further tests with vixdisklib-rs.exe using modified block sizes in DirectSAN transport mode and obtained the following results:

Default block size: Total statistics: processed 10296 MiB in 90408s (113884 MiB/s on average)
Block size 64 KiB: Total statistics: processed 18331875 MiB in 9614s (190679 MiB/s on average)
Block size 128 KiB: Total statistics: processed 1905875 MiB in 9100s (209436 MiB/s on average)
Block size 256 KiB: Total statistics: processed 140225 MiB in 4414s (317652 MiB/s on average)
Block size 512 KiB: Total statistics: processed 1487 MiB in 3.130s (475.148 MiB/s on average)
Block size 1024 KiB: Total statistics: processed 5894 MiB in 11.300s (521.574 MiB/s on average)
Block size 2048 KiB: Total statistics: processed 5816 MiB in 11.658s (498.872 MiB/s on average)
Block size 4096 KiB: Total statistics: processed 4528 MiB in 8.610s (525.916 MiB/s on average)

Then I ran another test: I created a new LUN and mounted it on the backup server under Windows. When copying a file of approximately 80 GB from the backup repository to this LUN, I achieved a real throughput (I checked the metrics on the storage while the copy was running) of approximately 800 MB/s. That is also the performance I would expect during a restore.
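To take Windows copy caching out of the equation for this LUN test, an unbuffered synthetic write test would be the cleaner comparison. A possible diskspd run for that, where the drive letter, test file name, size, and queue depth are just placeholder assumptions for my setup:

diskspd.exe -c80G -b1M -w100 -d60 -t4 -o8 -Sh E:\lun-write-test.dat

-Sh bypasses the Windows cache and hardware write caching, -w100 makes it a pure write workload, and -b1M is in the same range as the larger vixdisklib-rs block sizes.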

Even when copying a VM within vCenter, I see a write performance of approximately 600-700 MB/s on the storage.

It's definitely not the backup server's local backup repository; that's more than fast enough:
Read IO

thread | bytes | I/Os | MiB/s | I/O per s | file
------------------------------------------------------------------------------
0 | 2224753213440 | 4243380 | 3536.15 | 7072.30 | D:\Backup\Test\xxx.vbk (18.40GiB)
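For context: 2224753213440 bytes over 4243380 I/Os works out to exactly 512 KiB per I/O, so this is roughly 3.5 GiB/s of sequential reads at 512 KiB blocks from a single thread, far above anything we see on the restore path.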

Does anyone have ideas on how we can speed up restores with Veeam? We specifically purchased a very fast all-flash backup server to achieve high performance, and that works out great for backups, but not for restores.

We also want to work a lot with SureBackup/Virtual Labs.

Best regards,
Mirko
david.domask
Veeam Software
Posts: 2744
Liked: 630 times
Joined: Jun 28, 2016 12:12 pm
Contact:

Re: Poor restore performance when restoring in AllFlash environment

Post by david.domask »

Hi MirkoKiel123, welcome to the forums.

Thank you for sharing the case number and the detailed summary of what's been done so far. I can see the case is with our Advanced Support Team and was escalated very recently; please do continue with Support. It looks like Advanced Support initially commented on the Windows copy test, and in general I agree it's often a misleading test, but based on your screenshot it looks like any OS caching benefit dropped off pretty fast and the "real" transfer speed was shown.

Reading the case quickly though, I don't see the DirectSAN vixdisklib-rs test results you posted above; the previous test maxed out at ~525 MB/s, but it looks like you achieved higher results in the test above?

It would be best to share the tests from your post here with Support if that hasn't happened already (I may have missed them when checking the case) and continue the investigation based on those results, though I must admit some of the numbers for the lower block sizes look very unusual to me.
David Domask | Product Management: Principal Analyst
karsten123
Service Provider
Posts: 599
Liked: 150 times
Joined: Apr 03, 2019 6:53 am
Full Name: Karsten Meja
Contact:

Re: Poor restore performance when restoring in AllFlash environment

Post by karsten123 »

av excludes set -> KB1999?
NWT installed?
do you use jumbo frames? did you verify them end to end? (quick check below)
do you use ReFS for your repository?
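a quick end-to-end jumbo frame check from the backup server (the target IP is a placeholder; 8972 bytes payload + 28 bytes ICMP/IP header = 9000 MTU):

ping -f -l 8972 <esxi-or-storage-ip>

if that fragments or times out while a normal ping works, there is an MTU mismatch somewhere on the path.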
MirkoKiel123
Novice
Posts: 4
Liked: never
Joined: Jun 18, 2025 7:11 am
Full Name: Mirko Kobylanski
Contact:

Re: Poor restore performance when restoring in AllFlash environment

Post by MirkoKiel123 »

@David: I added the results of the test with the different block sizes to the support case on June 4 at 12:08 PM. Or do you mean a different test?
@Karsten: yes, yes, yes & yes
MirkoKiel123
Novice
Posts: 4
Liked: never
Joined: Jun 18, 2025 7:11 am
Full Name: Mirko Kobylanski
Contact:

Re: Poor restore performance when restoring in AllFlash environment

Post by MirkoKiel123 »

Update: a few decimal points "disappeared" during copy & paste in my first post. Here are the corrected results:
Default Blocksize: Total statistics: processed 10296 MiB in 90.408s (113.884 MiB/s on average)
Blocksize 64 KiB: Total statistics: processed 1833.1875 MiB in 9.614s (190.679 MiB/s on average)
Blocksize 128 KiB: Total statistics: processed 1905.875 MiB in 9.100s (209.436 MiB/s on average)
Blocksize 256 KiB: Total statistics: processed 1402.25 MiB in 4.414s (317.652 MiB/s on average)
Blocksize 512 KiB: Total statistics: processed 1487 MiB in 3.130s (475.148 MiB/s on average)
Blocksize 1024 KiB: Total statistics: processed 5894 MiB in 11.300s (521.574 MiB/s on average)
Blocksize 2048 KiB: Total statistics: processed 5816 MiB in 11.658s (498.872 MiB/s on average)
Blocksize 4096 KiB: Total statistics: processed 4528 MiB in 8.610s (525.916 MiB/s on average)
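My interpretation of these corrected numbers, treating each write as a synchronous round trip: the per-I/O time can be estimated as runtime / (processed MiB / block size):

64 KiB: 9.614 s / (1833.1875 MiB / 0.0625 MiB) = 9.614 s / 29331 I/Os ≈ 0.33 ms per I/O
1024 KiB: 11.300 s / (5894 MiB / 1 MiB) = 11.300 s / 5894 I/Os ≈ 1.92 ms per I/O

Throughput scales almost linearly with block size and flattens out around 500-525 MiB/s, which looks more like a serialized, latency-bound write path than a storage or repository limit.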
david.domask
Veeam Software
Posts: 2744
Liked: 630 times
Joined: Jun 28, 2016 12:12 pm
Contact:

Re: Poor restore performance when restoring in AllFlash environment

Post by david.domask »

MirkoKiel123 wrote (Jun 18, 2025 10:09 am): "I added the results of the test with the different block sizes to the support case on June 4 at 12:08 PM. Or do you mean a different test?"
Aha, I saw that one, but I meant the test in your opening post here and also the updated test; I did not see those reported in the case when I checked it, though it's possible I missed them somehow. I'm just not sure Support is aware of these results, and I think having them documented in the case will help with the review.
David Domask | Product Management: Principal Analyst
MirkoKiel123
Novice
Posts: 4
Liked: never
Joined: Jun 18, 2025 7:11 am
Full Name: Mirko Kobylanski
Contact:

Re: Poor restore performance when restoring in AllFlash environment

Post by MirkoKiel123 »

I'm not sure which test you mean, exactly. The test with the different block sizes in my opening post contained an error; I posted the correction at 12:27. The correct values are documented in the support case.