-
- Product Manager
- Posts: 15339
- Liked: 3321 times
- Joined: Sep 01, 2014 11:46 am
- Full Name: Hannes Kasparick
- Location: Austria
- Contact:
Re: Poor restore performance when restoring in AllFlash environment
Yes, for reasons unknown to me, internal RAID controllers seem to have better "per stream" performance: SAN storage often delivers lower per-stream throughput than much cheaper internal RAID controllers.
There are different technologies for splitting VMDKs:
- Split workloads across different volumes (e.g. one database per volume on regular storage, without a volume manager): supported by all sides
- Linux with LVM: supported by all sides (a small sketch of this follows below the list)
- Windows with dynamic disks: Google tells me it's supported (yes, from the Veeam side), but it's a deprecated feature
- Windows Storage Spaces: Google tells me it's unsupported, but I know a bank doing it because they asked Veeam for support (file-level recovery is not supported from the Veeam side; Instant Recovery would work, though)
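For the LVM option, here is a minimal sketch of what the in-guest setup could look like when a large workload is split across several smaller VMDKs. The device names, the VG/LV names and the 64 KB stripe size are assumptions for illustration only, not a recommendation:

import subprocess

# The extra VMDKs presented to the Linux guest (placeholder device names).
DISKS = ["/dev/sdb", "/dev/sdc", "/dev/sdd", "/dev/sde"]

def run(cmd):
    """Print and execute an LVM command, aborting on any error."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

run(["pvcreate", *DISKS])                              # initialize the disks for LVM
run(["vgcreate", "datavg", *DISKS])                    # one volume group spanning all VMDKs
run(["lvcreate", "-i", str(len(DISKS)), "-I", "64",    # stripe across all disks, 64 KB stripe size
     "-l", "100%FREE", "-n", "datalv", "datavg"])
run(["mkfs.xfs", "/dev/datavg/datalv"])                # filesystem on top of the striped volume

The point is simply that one filesystem ends up spread across several virtual disks, so backup and restore can process them as parallel streams instead of one big stream.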
-
- Veeam Vanguard
- Posts: 407
- Liked: 171 times
- Joined: Nov 17, 2010 11:42 am
- Full Name: Eric Machabert
- Location: France
- Contact:
Re: Poor restore performance when restoring in AllFlash environment
I have been working with Direct SAN from the beginning, and it is where we get the best performance, both at backup with BfSS and at restore.
Here you can see a screenshot from June 2022 of a single-stream restore from an Apollo 4200 to an all-flash Primera 670. I dug it out of my email, where I had shared the performance figures with the HPE team.

I will look today at the restore speed we get on the Alletra 9080, as it is tested and measured once a month.
Veeamizing your IT since 2009/ Veeam Vanguard 2015 - 2025
-
- Influencer
- Posts: 20
- Liked: 2 times
- Joined: Jun 18, 2025 7:11 am
- Full Name: Mirko Kobylanski
- Location: Germany
- Contact:
Re: Poor restore performance when restoring in AllFlash environment
Hi Eric,
Thanks again for checking. The Primera and the 9xxx Alletras are obviously a bit more performant, but the 60xx models aren't slow either. They are the "successors" to Nimble Storage and, in their all-flash variant, also offer sufficient performance for business-critical workloads.
So how is it possible that you achieve more than 900 MB/s with a single stream, while we can't even reach 100 MB/s with the default settings? The performance difference between our storage systems isn't that big, and our storage is bored during a default restore. With different settings I do get better performance, up to about 500 MB/s with vixdisklib in SAN mode (see my initial post in this topic), but unfortunately only when testing with the tool. In an actual restore, a little more is only possible with different settings (4 MB backup block size, thick eager zeroed), and we then reach about 250 MB/s with a single-stream SAN restore. On the other hand, a LUN attached directly to the backup server can be written to at a real 800+ MB/s under Windows, so the SAN is fast enough for writes.
That's exactly what I don't understand. There must be a hidden bottleneck somewhere. Could it be the vSphere stack? What I haven't mentioned yet: we have an HPE dHCI solution.
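For anyone who wants to reproduce the raw-LUN comparison, a minimal single-stream sequential write test could look like the sketch below. The target path, block size and total size are assumptions; point it at an empty test volume, never at a LUN holding production data:

import os
import time

TARGET = r"E:\perftest\testfile.bin"   # hypothetical mount point of the test LUN
BLOCK = 1024 * 1024                    # 1 MB per write, roughly one backup block
TOTAL = 10 * 1024 ** 3                 # write 10 GB in total

buf = os.urandom(BLOCK)                # incompressible data, so inline compression has real work to do
written = 0
start = time.perf_counter()
with open(TARGET, "wb", buffering=0) as f:
    while written < TOTAL:
        f.write(buf)
        written += BLOCK
    os.fsync(f.fileno())               # make sure everything has actually reached the array
elapsed = time.perf_counter() - start
print(f"{written / elapsed / 1024 ** 2:.0f} MB/s single-stream sequential write")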
-
- Veeam Vanguard
- Posts: 407
- Liked: 171 times
- Joined: Nov 17, 2010 11:42 am
- Full Name: Eric Machabert
- Location: France
- Contact:
Re: Poor restore performance when restoring in AllFlash environment
So I asked one of the guys on the ops team to test multiple modes with the latest version of Veeam.
For a single VMDK of 311 GB (300 GB used):
- Direct SAN to Alletra 9080 NVMe: 300 MB/s
- Direct SAN to Primera 670 SSD: 450 MB/s
- Hot-add to Alletra 9080: 1 GB/s
It looks like Direct SAN is now slower for a single stream than it used to be, and the ops team confirms it. They tell me that, depending on the number of disks to restore, they choose either hot-add (few disks) or Direct SAN (many disks) to get the best overall throughput.
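A rough way to see why the disk count drives that choice: overall throughput is per-stream speed times the number of streams that actually run in parallel, and hot-add tends to be limited in how many disks it can process at once on a single proxy. The per-stream rates below come from the numbers above, while the parallel stream counts are made-up assumptions, only to illustrate the trade-off:

def aggregate_mbps(per_stream_mbps, parallel_streams):
    """Idealized total restore throughput if every stream runs at full speed."""
    return per_stream_mbps * parallel_streams

print(aggregate_mbps(1000, 2))   # hot-add, assuming only 2 disks mounted in parallel -> ~2000 MB/s
print(aggregate_mbps(300, 10))   # Direct SAN, 10 disks restored in parallel          -> ~3000 MB/s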
Veeamizing your IT since 2009/ Veeam Vanguard 2015 - 2025
-
- Influencer
- Posts: 20
- Liked: 2 times
- Joined: Jun 18, 2025 7:11 am
- Full Name: Mirko Kobylanski
- Location: Germany
- Contact:
Re: Poor restore performance when restoring in AllFlash environment
I find it very interesting that your performance has also dropped significantly. Are you also using the latest ESXi (HPE Custom Image) and Veeam?
-
- Veeam Vanguard
- Posts: 407
- Liked: 171 times
- Joined: Nov 17, 2010 11:42 am
- Full Name: Eric Machabert
- Location: France
- Contact:
Re: Poor restore performance when restoring in AllFlash environment
In fact, I dug deeper into the subject yesterday because it was bothering me, and I found something I need to check. The restore volumes used yesterday all have data reduction enabled (inline dedup and compression), while the volumes used in 2022 had no data reduction enabled at all.
Since Hannes pointed out that Direct SAN restores use synchronous writes, I think the performance impact of the inline data reduction is lowering the available IOPS per single stream, which is not the case when using hot-add on the same datastore.
I will ask the team to test, but this won't be done quickly, as it involves change management and it won't be prioritized over normal work.
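To make the synchronous-write point concrete: with only one outstanding write at a time, throughput is simply block size divided by per-write round-trip latency, so even a little extra latency for inline data reduction cuts a single stream noticeably. A back-of-the-envelope sketch (the 1 MB block and the latency values are assumptions, not measurements from either array):

BLOCK_SIZE_MB = 1.0   # assumed size of each synchronous write

def single_stream_mbps(latency_ms):
    """Throughput of one stream that waits for every write to complete."""
    return BLOCK_SIZE_MB / (latency_ms / 1000.0)

print(single_stream_mbps(1.0))   # 1.0 ms per write                          -> 1000 MB/s
print(single_stream_mbps(2.5))   # +1.5 ms spent on inline data reduction    ->  400 MB/s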
Veeamizing your IT since 2009/ Veeam Vanguard 2015 - 2025
-
- Veeam Vanguard
- Posts: 407
- Liked: 171 times
- Joined: Nov 17, 2010 11:42 am
- Full Name: Eric Machabert
- Location: France
- Contact:
Re: Poor restore performance when restoring in AllFlash environment
We also performed a new test using hot-add on a production host with 2x 50 Gb/s for the management network, and we can reach 2 GB/s on the Primera 670.


Veeamizing your IT since 2009/ Veeam Vanguard 2015 - 2025
-
- Influencer
- Posts: 20
- Liked: 2 times
- Joined: Jun 18, 2025 7:11 am
- Full Name: Mirko Kobylanski
- Location: Germany
- Contact:
Re: Poor restore performance when restoring in AllFlash environment
Since I found the compression angle interesting, I created and tested a new vSphere LUN without deduplication and compression. My other vSphere LUNs only have compression enabled, not deduplication. The results are interesting: the restore is still slow with the default (lazy zeroed) restore, but significantly faster with an eager zeroed restore. Block size no longer seems to play a role with eager zeroed restores.
Block size 1 MB:
- Lazy zeroed: Restoring Hard disk 1 (100 GB): 28.2 GB restored at 119 MB/s [san] (for comparison, restore to the standard LUN: 98 MB/s)
- Eager zeroed: Restoring Hard disk 1 (100 GB): 28.2 GB restored at 443 MB/s [san] (for comparison, restore to the standard LUN: 152 MB/s)
Block size 4 MB:
- Lazy zeroed: Restoring Hard disk 1 (100 GB): 28.8 GB restored at 238 MB/s [san] (for comparison, restore to the standard LUN: 260 MB/s)
- Eager zeroed: Restoring Hard disk 1 (100 GB): 28.8 GB restored at 457 MB/s [san] (for comparison, restore to the standard LUN: 254 MB/s)
Regarding deduplication and compression, I'd also like to mention that the LUN mounted directly on the backup server for testing has both deduplication and compression enabled. I'll try the above test again with hot-add.
-
- Veeam Vanguard
- Posts: 407
- Liked: 171 times
- Joined: Nov 17, 2010 11:42 am
- Full Name: Eric Machabert
- Location: France
- Contact:
Re: Poor restore performance when restoring in AllFlash environment
If you look back at my first answer, I told you that the best performance comes with compression and deduplication disabled. I thought you had already done that.
So at 443 MB/s over Direct SAN you are getting good performance. The test with hot-add will be better for sure.
Veeamizing your IT since 2009/ Veeam Vanguard 2015 - 2025
-
- Influencer
- Posts: 20
- Liked: 2 times
- Joined: Jun 18, 2025 7:11 am
- Full Name: Mirko Kobylanski
- Location: Germany
- Contact:
Re: Poor restore performance when restoring in AllFlash environment
Good morning,
I've now run the tests with hot-add to the LUN without deduplication and compression. The results are in the range I expect. But why does compression slow things down so much? The CPU load and disk accesses on the storage are very low in the other scenarios.
Block size 1 MB:
- Lazy zeroed: Restoring Hard disk 1 (100 GB): 28.2 GB restored at 899 MB/s [hotadd], 32 seconds
- Eager zeroed: Restoring Hard disk 1 (100 GB): 28.2 GB restored at 1 GB/s [hotadd], 25 seconds
Block size 4 MB:
- Lazy zeroed: Restoring Hard disk 1 (100 GB): 28.8 GB restored at 1 GB/s [hotadd], 25 seconds
- Eager zeroed: Restoring Hard disk 1 (100 GB): 28.8 GB restored at 1 GB/s [hotadd], 23 seconds
-
- Veeam Vanguard
- Posts: 407
- Liked: 171 times
- Joined: Nov 17, 2010 11:42 am
- Full Name: Eric Machabert
- Location: France
- Contact:
Re: Poor restore performance when restoring in AllFlash environment
My understanding, as Hannes said earlier, is that Direct SAN uses synchronous writes, and that the small amount of extra time spent on compression per write lowers the overall throughput.
Veeamizing your IT since 2009/ Veeam Vanguard 2015 - 2025
-
- Influencer
- Posts: 20
- Liked: 2 times
- Joined: Jun 18, 2025 7:11 am
- Full Name: Mirko Kobylanski
- Location: Germany
- Contact:
Re: Poor restore performance when restoring in AllFlash environment
Yes, I understand that. But even with asynchronous writes in hot-add mode, I achieve less than 400 MB/s (eager zeroed, 1 MB block size) on a LUN with compression enabled. So we see a speedup of about 2.5x when compression is disabled. According to HPE, enabling compression shouldn't have any impact on the performance of the storage systems; only deduplication can cost 5-10% of the overall performance, and we have that disabled on our production LUNs.
What do you think, is it the storage or the vSphere stack, or does it have something to do with Veeam?
-
- Veeam Vanguard
- Posts: 407
- Liked: 171 times
- Joined: Nov 17, 2010 11:42 am
- Full Name: Eric Machabert
- Location: France
- Contact:
Re: Poor restore performance when restoring in AllFlash environment
I have always seen a performance decrease on storage arrays doing inline data reduction, whether HPE or another vendor. The impact is usually acceptable or invisible for normal workloads, but it becomes a bottleneck during restores.
But even with data reduction enabled on the 90x0 or Primera 6x0, we still hit 1 to 2 GB/s at restore on a non-replicated LUN, as I showed you.
At this point I would engage HPE with the metrics you have and ask for an explanation of why compression has such an impact. Perhaps the firmware you are running on the array has an issue.
Your Veeam platform is doing well.
Veeamizing your IT since 2009/ Veeam Vanguard 2015 - 2025
-
- Influencer
- Posts: 20
- Liked: 2 times
- Joined: Jun 18, 2025 7:11 am
- Full Name: Mirko Kobylanski
- Location: Germany
- Contact:
Re: Poor restore performance when restoring in AllFlash environment
I opened a case with HPE on Friday afternoon. I'll report back here as soon as I have the results.