Host-based backup of VMware vSphere VMs.
MirkoKiel123
Novice
Posts: 5
Liked: never
Joined: Jun 18, 2025 7:11 am
Full Name: Mirko Kobylanski
Contact:

Poor restore performance when restoring in AllFlash environment

Post by MirkoKiel123 »

Case #07702759

Hello everyone,

I already have a case open, but we can't find a solution at the moment.

Scenario
We have a physical backup server running Windows Server 2025 with 2 x Intel Xeon Gold 5515+ CPUs, 128 GB RAM, and 12 x NVMe SSDs in RAID 6 as the primary backup repository. The server has 4 x 25 GbE uplinks (2 x SAN, 2 x Mgmt); MPIO drivers for the SAN are installed, and multipathing is working perfectly. The SAN consists of 2 x HPE Alletra 6010 NVMe all-flash arrays with synchronous replication.

Our production environment is a VMware environment (vSphere 8 Standard) with one vCenter and four ESXi hosts. Both VMware and the host firmware are up to date.

Backup performance is very good: with an Active Full in DirectSAN transport mode, we achieve effective data rates of up to 30 Gbit/s (proxy & repository maximum concurrent tasks: 8).

Now for the problem: when restoring, regardless of the transport mode, we only achieve very poor performance of approximately 100 MB/s in DirectSAN, 150 MB/s in NBD, and 150 MB/s in HotAdd mode.

I conducted various tests with Veeam Support, and using vixdisklib-rs.exe we were able to validate these values when writing with the tool's default block size.

I then ran further tests with vixdisklib-rs.exe using modified block sizes in DirectSAN transport mode, with the following results:

Default block size: Total statistics: processed 10296 MiB in 90408s (113884 MiB/s on average)
Block size 64 KiB: Total statistics: processed 18331875 MiB in 9614s (190679 MiB/s on average)
Block size 128 KiB: Total statistics: processed 1905875 MiB in 9100s (209436 MiB/s on average)
Block size 256 KiB: Total statistics: processed 140225 MiB in 4414s (317652 MiB/s on average)
Block size 512 KiB: Total statistics: processed 1487 MiB in 3,130 seconds (475,148 MiB/s on average)
Block size 1,024 KiB: Total statistics: processed 5,894 MiB in 11,300 seconds (521,574 MiB/s on average)
Block size 2,048 KiB: Total statistics: processed 5,816 MiB in 11,658 seconds (498,872 MiB/s on average)
Block size 4,096 KiB: Total statistics: processed 4,528 MiB in 8,610 seconds (525,916 MiB/s on average)

As a further test, I created a new LUN and mounted it on the backup server under Windows. When copying a file of approximately 80 GB from the backup repository to this LUN, I achieved a real performance of approximately 800 MB/s (I checked the metrics on the storage while the copy was running). That is also the performance I would expect during a restore.
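For a comparison that is independent of OS cache effects, a DiskSpd write test against that LUN would look something like this (just a sketch; E: stands in for the LUN's drive letter, and the parameters would need tuning):

diskspd.exe -w100 -b512K -o8 -t4 -d30 -Sh -c20G E:\iotest.dat
# -w100: 100% writes, -b512K: 512 KiB blocks, -o8: 8 outstanding I/Os per thread,
# -t4: 4 threads, -d30: run for 30 s, -Sh: bypass caching, -c20G: create a 20 GB test file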

Even when copying a VM within vCenter, I see a write performance of approximately 600-700 MB/s on the storage.

It's definitely not the backup server's local backup repository; that's more than fast enough:
Read IO

thread | bytes | I/Os | MiB/s | I/O per s | file
------------------------------------------------------------------------------
0 | 2224753213440 | 4243380 | 3536.15 | 7072.30 | D:\Backup\Test\xxx.vbk (18.40GiB)

Does anyone have any ideas how we can speed up restores with Veeam? We specifically purchased an all-flash backup server to achieve high performance; it is great for backups, but not for restores.

We also want to work a lot with SureBackup/Virtual Labs.

Best regards,
Mirko
david.domask
Veeam Software
Posts: 2764
Liked: 633 times
Joined: Jun 28, 2016 12:12 pm
Contact:

Re: Poor restore performance when restoring in AllFlash environment

Post by david.domask »

Hi MirkoKiel123, welcome to the forums.

Thank you for sharing the case number and the detailed summary of what's been done so far. I can see the case is with our Advanced Support Team and was escalated very recently. Please do continue with Support. It looks like Advanced Support initially commented on the Windows copy test, which I agree is often a misleading test, but based on your screenshot any OS caching benefit dropped off fairly quickly and the "real" transfer speed was shown.

Reading the case quickly though, I don't see the DirectSAN vixdisklib-rs test results you posted above; the previous test maxed out at ~525 MB/s, but it looks like you achieved higher results in the test above?

It would be best to share those results in your post there with Support if you haven't already (I may have missed it checking the case) and continue the investigation based on those results/tests, though I must admit some of the numbers for the lower block sizes look very unusual to me.
David Domask | Product Management: Principal Analyst
karsten123
Service Provider
Posts: 599
Liked: 151 times
Joined: Apr 03, 2019 6:53 am
Full Name: Karsten Meja
Contact:

Re: Poor restore performance when restoring in AllFlash environment

Post by karsten123 » 1 person likes this post

AV exclusions set -> KB1999?
NWT installed?
Do you use jumbo frames? Did you verify them end-to-end (e.g., as below)?
Do you use ReFS for your repository?
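A quick way to verify jumbo frames from the backup server (a sketch; 8972 bytes of payload assumes a 9000-byte MTU, and <SAN-IP> is a placeholder):

ping -f -l 8972 <SAN-IP>                                        # -f: don't fragment, -l: payload size
Get-NetAdapterAdvancedProperty -RegistryKeyword '*JumboPacket'  # NIC jumbo packet setting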
MirkoKiel123
Novice
Posts: 5
Liked: never
Joined: Jun 18, 2025 7:11 am
Full Name: Mirko Kobylanski
Contact:

Re: Poor restore performance when restoring in AllFlash environment

Post by MirkoKiel123 »

@David: I added the results of the test with the different block sizes to the support case on June 4 at 12:08 PM. Or do you mean a different test?
@Karsten: yes, yes, yes & yes
MirkoKiel123
Novice
Posts: 5
Liked: never
Joined: Jun 18, 2025 7:11 am
Full Name: Mirko Kobylanski
Contact:

Re: Poor restore performance when restoring in AllFlash environment

Post by MirkoKiel123 »

Update: a few decimal points "disappeared" during the copy & paste in my first post. The corrected values are:
Default Blocksize: Total statistics: processed 10296 MiB in 90.408s (113.884 MiB/s on average)
Blocksize 64 KiB: Total statistics: processed 1833.1875 MiB in 9.614s (190.679 MiB/s on average)
Blocksize 128 KiB: Total statistics: processed 1905.875 MiB in 9.100s (209.436 MiB/s on average)
Blocksize 256 KiB: Total statistics: processed 1402.25 MiB in 4.414s (317.652 MiB/s on average)
Blocksize 512 KiB: Total statistics: processed 1487 MiB in 3.130s (475.148 MiB/s on average)
Blocksize 1024 KiB: Total statistics: processed 5894 MiB in 11.300s (521.574 MiB/s on average)
Blocksize 2048 KiB: Total statistics: processed 5816 MiB in 11.658s (498.872 MiB/s on average)
Blocksize 4096 KiB: Total statistics: processed 4528 MiB in 8.610s (525.916 MiB/s on average)
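For what it's worth, the way throughput scales with block size suggests a fixed per-I/O round trip that dominates small writes, assuming vixdisklib-rs issues one synchronous write at a time (my assumption, not confirmed). A quick PowerShell sketch computing the implied per-write service time from the table above:

$tests = @(
    @{ KiB = 64;   MiBps = 190.679 },
    @{ KiB = 256;  MiBps = 317.652 },
    @{ KiB = 512;  MiBps = 475.148 },
    @{ KiB = 1024; MiBps = 521.574 }
)
foreach ($t in $tests) {
    $ms = ($t.KiB / 1024) / $t.MiBps * 1000   # per-write service time in ms
    '{0,5} KiB -> {1:N2} ms per write' -f $t.KiB, $ms
}
# 64 KiB -> ~0.33 ms, 512 KiB -> ~1.05 ms, 1024 KiB -> ~1.92 ms: the fixed
# per-I/O cost dominates small blocks, which is why single-stream throughput
# grows with block size.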
david.domask
Veeam Software
Posts: 2764
Liked: 633 times
Joined: Jun 28, 2016 12:12 pm
Contact:

Re: Poor restore performance when restoring in AllFlash environment

Post by david.domask »

MirkoKiel123 wrote: Jun 18, 2025 10:09 am I added the results of the test with the different block sizes to the support case on June 4 at 12:08 PM. Or do you mean a different test?
Aha, I saw that one, but I meant the test in your opening post here and also the updated test; I did not see those reported on the case when I checked it, but it's possible I missed them somehow. I'm just not sure Support is aware of these results, and I think having them documented in the case will help with the review.
David Domask | Product Management: Principal Analyst
MirkoKiel123
Novice
Posts: 5
Liked: never
Joined: Jun 18, 2025 7:11 am
Full Name: Mirko Kobylanski
Contact:

Re: Poor restore performance when restoring in AllFlash environment

Post by MirkoKiel123 »

I'm not sure which test you mean exactly? In the opening post there was an error in the test with the different block sizes; I posted the correction at 12:27. The correct values are documented in the support case.
UBX_Cloud_Steve
Service Provider
Posts: 39
Liked: 9 times
Joined: Nov 22, 2015 5:15 am
Full Name: UBX_Cloud_Steve
Contact:

Re: Poor restore performance when restoring in AllFlash environment

Post by UBX_Cloud_Steve » 1 person likes this post

Similar setup here, using a Pure Storage SAN with 2 x Cisco 100 Gbps links, jumbo frames, and all the niceties enabled on the network side.

The result was a similar issue: ingest rates (writes) to the backup repository were very fast at 16+ GB/s, but restores (reads) were abnormally slow at around 1 GB/s.

The solution was to reformat the block storage volume so that the allocation unit alignment was 1 MB.

We also disabled file integrity streams in ReFS, disabled Defender cloud protection, and specified exclusions.

This improved reads significantly, but ultimately moving away from Windows to Linux with XFS and reflink support was the best outcome.
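For reference, the Windows-side changes were along these lines (a sketch with a placeholder drive letter and paths, not our exact commands):

# Large allocation unit on the repository volume; note that ReFS itself only
# offers 4K/64K clusters, so the 1 MB AU applies to an NTFS-formatted volume.
Format-Volume -DriveLetter R -FileSystem NTFS -AllocationUnitSize 1MB
# On an ReFS volume, disable integrity streams at the root:
Set-FileIntegrity -FileName 'R:\' -Enable $false
# Defender: exclude the repository path and turn off cloud-delivered protection
Add-MpPreference -ExclusionPath 'R:\Backup'
Set-MpPreference -MAPSReporting Disabled
# The Linux repository was XFS created with reflink support:
#   mkfs.xfs -m reflink=1 /dev/sdX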
________
Steven Panovski
UBX Cloud
emachabert
Veeam Vanguard
Posts: 396
Liked: 169 times
Joined: Nov 17, 2010 11:42 am
Full Name: Eric Machabert
Location: France
Contact:

Re: Poor restore performance when restoring in AllFlash environment

Post by emachabert »

Hi,

I have no experience with the Alletra 6K, but I have many Alletra 9K and Primera 650/670 systems in production, so I can give you some tips:
- You should always use eager zeroed thick provisioned VMs.
- You should have dedicated restore volumes for "standard restores" that are not replicated (you restore to one, then Storage vMotion if needed; this also makes zeroing and space reclamation easier post-restore). See the sketch after this list.
- When you want to achieve the best restore speed (I mean something around 1 GB/s per VMDK), you need to disable any compression/deduplication on the restore volume.
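A minimal PowerCLI sketch of that restore-then-move flow (the server, VM, and datastore names are placeholders):

Connect-VIServer -Server 'vcenter.example.local'
# After restoring to the dedicated, non-replicated restore datastore,
# move the VM back to production storage, converting to eager zeroed thick:
Move-VM -VM (Get-VM -Name 'RestoredVM') `
        -Datastore (Get-Datastore -Name 'Prod-DS') `
        -DiskStorageFormat EagerZeroedThick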


You said you are using a Peer Persistence setup.
Do you restore to a replicated volume or not? If so, please test with a dedicated non-replicated volume to see how synchronous write replication affects the overall write performance.

When testing such a use case, to better isolate the issue and get comparable metrics, use the same VM with enough used space in it, and use a full backup as the source.
Veeamizing your IT since 2009/ Veeam Vanguard 2015 - 2025
NoDramas
Novice
Posts: 3
Liked: never
Joined: Mar 26, 2025 7:53 pm
Full Name: Tim West
Contact:

Re: Poor restore performance when restoring in AllFlash environment

Post by NoDramas »

Are your VMs thick or thin provisioned?

“If disks are thin-provisioned, Veeam Backup & Replication will write VM data in the Network or Virtual appliance mode”.
REF: Direct SAN Access - User Guide for VMware vSphere
https://helpcenter.veeam.com/docs/backu ... ml?ver=120
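A quick way to check per-disk provisioning (PowerCLI sketch; the VM name is a placeholder):

Get-VM -Name 'MyVM' | Get-HardDisk | Select-Object Name, StorageFormat
# StorageFormat shows Thin, Thick, or EagerZeroedThick for each disk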
MirkoKiel123
Novice
Posts: 5
Liked: never
Joined: Jun 18, 2025 7:11 am
Full Name: Mirko Kobylanski
Contact:

Re: Poor restore performance when restoring in AllFlash environment

Post by MirkoKiel123 »

@UBX_Cloud_Steve:
Thanks for your feedback. The interesting thing in my case is that the slow read performance only occurs during restores; otherwise there are no problems with read speed. The test with the DiskSpd tool provided by Veeam Support showed a read speed of approximately 3500 MB/s.

@emachabert:
- Eager zeroed thick provisioning makes no difference.
- I ran a test with a dedicated, non-replicated recovery volume -> no improvement.
- Compression/deduplication is disabled on the recovery volume.

@NoDramas
We tested this with thick-provisioned disks in SAN mode, of course. Apart from that, I would expect better performance in our environment in NBD or HotAdd mode as well.

And just to clarify: I've been working as a systems engineer for almost 10 years and have probably implemented hundreds of Veeam instances, so I'm familiar with the common mistakes. I've also passed the VMCE certification (I know, that doesn't mean anything :-) ). In similar environments (all-flash SAN, all-flash backup repository, etc.), I've never seen such poor restore performance.