I need some help understanding why I am seeing a performance hit on my SureBackups.
My backup files are stored on a 19 TB volume with data deduplication enabled.

Everything older than 3 days is deduplicated.
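For reference, that policy can be double-checked directly on the volume; here is a minimal Python sketch (the drive letter D: is just a placeholder for my backup volume) that shells out to the Get-DedupVolume cmdlet:

```python
import subprocess

# Minimal sketch: ask Windows for the dedup policy on the backup volume so the
# 3-day minimum file age can be confirmed rather than assumed.
# "D:" is a placeholder for the actual 19 TB backup volume.
cmd = [
    "powershell.exe", "-NoProfile", "-Command",
    "Get-DedupVolume -Volume D: | Select-Object Volume, MinimumFileAgeDays | Format-List",
]
result = subprocess.run(cmd, capture_output=True, text=True)
print(result.stdout)
```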

I use reverse incremental as my backup method and create a real full once a month.
My assumption was that, since I effectively create a new (synthetic) full each day, my latest .VBK file never gets older than the 3-day threshold and should therefore never be deduplicated.
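To test that assumption, a quick check like the sketch below (the path is just a placeholder for my latest VBK) should show whether the file has actually been dedup-optimized, since Windows replaces optimized files with reparse points into the chunk store:

```python
import os
import stat

def looks_dedup_optimized(path):
    # Windows Data Deduplication replaces an optimized file with a reparse
    # point whose data lives in the chunk store, so the reparse attribute is
    # a quick hint that the file would need rehydration when read.
    attrs = os.stat(path).st_file_attributes
    return bool(attrs & stat.FILE_ATTRIBUTE_REPARSE_POINT)

# Placeholder path for the latest VBK of one of my jobs.
print(looks_dedup_optimized(r"D:\Backups\Job1\Job1.vbk"))
```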
However, when I use SureBackup to test my latest backup I see quite a performance hit, which makes the SureBackup job extremely slow.
In Performance Monitor I see a lot of disk access on the "chunk store".
Normal disk throughput on this volume is about 350 MB/s for a non-deduplicated file, while for a deduplicated file it drops to approximately 40-80 MB/s.
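Those figures can be roughly reproduced with a sequential-read comparison like the sketch below (paths are placeholders, and OS caching can skew the numbers):

```python
import time

def sequential_read_mbps(path, block_size=4 * 1024 * 1024):
    # Read the file front to back in large blocks and report throughput in
    # MB/s, roughly what the disk counters in Performance Monitor show.
    total = 0
    start = time.perf_counter()
    with open(path, "rb", buffering=0) as f:
        while True:
            chunk = f.read(block_size)
            if not chunk:
                break
            total += len(chunk)
    return total / (1024 * 1024) / (time.perf_counter() - start)

# Placeholder paths: a fresh (non-optimized) VBK versus an older,
# already dedup-optimized restore point on the same volume.
print(sequential_read_mbps(r"D:\Backups\Job1\Job1.vbk"))
print(sequential_read_mbps(r"D:\Backups\Job1\Job1_2015-01-01.vrb"))
```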
The hardware should be more than sufficient for this purpose.
The Virtual Lab runs on the local Veeam backup server, which has the Hyper-V role installed. The storage volume consists of local disks in a RAID-5 setup.
The backup server itself is an HP ProLiant DL380 with 80 GB of memory.
Can someone help me understand why my assumption is incorrect? Why is Windows trying to rehydrate the file? Is there anything I can do to improve performance?
Thanks for any thoughts you might have.
Remko