I just did a test restore from Veeam v8 and hit the ongoing restore speed issue when using SAN restore mode.
I then did the exact same restore using the RTM build of Veeam v9, also in SAN mode, and found that a restore which maxed out at maybe 70 MB/sec in v8 ran at over 200 MB/sec in v9. I need to do some more testing, but this is a great sign. I also hadn't run the vmkfstools command to pre-zero the VMware datastore, which is what typically made the first restore in v8 go fast, so this is even better. Here are some points I did notice:
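For reference, the pre-zeroing trick I'd used in v8 can be done with vmkfstools. This is a hedged sketch, not the exact command I ran: it assumes the approach of creating a large eager-zeroed VMDK on the target datastore (which forces ESXi to zero those blocks) and then deleting it. The datastore path and size are placeholders.

```shell
# Run in the ESXi host shell. Creating an eagerzeroedthick disk zeroes
# every block it occupies up front; deleting it afterwards returns the
# now-zeroed space to the datastore's free pool.
vmkfstools -c 500G -d eagerzeroedthick /vmfs/volumes/Datastore1/zerofill.vmdk

# Remove the temporary disk once creation completes.
vmkfstools -U /vmfs/volumes/Datastore1/zerofill.vmdk
```

With v9 restoring straight to an eager-zeroed VM, this manual step no longer seems necessary.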
1. The Veeam proxy mounts the SAN volumes as Offline in Disk Management, and the restore would then fail over to Network mode. I had to manually bring the disks Online (but not initialize them) for the restore to use SAN mode. I keep reading that this shouldn't have to be done manually, but it has always been the case in my experience with every version of Veeam.
2. The source VM is Thick Eager Zeroed (best practice for us, as we use HP 3PAR storage). In v8 the restore would come back Thick Lazy Zeroed. In v9 you still don't get the option to select Thick Eager as a disk type in the restore wizard, but keeping "Same as source" selected resulted in the restored VM being Thick Eager Zeroed.
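If you want to verify the provisioning of the restored disks without digging through the vSphere client, something like this should work. It's a hedged example assuming PowerShell with the VMware.PowerCLI module installed and an existing vCenter connection; the VM name is a placeholder.

```shell
# Get-HardDisk's StorageFormat property reports Thin, Thick, or
# EagerZeroedThick for each virtual disk on the VM.
pwsh -Command "Get-HardDisk -VM 'RestoredVM' | Select-Object Name, StorageFormat"
```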
3. I am speculating, but I believe the Thick Eager restore is the reason for the increased speed. When you restore to a new VM, Veeam first creates the new VM, takes a snapshot, and then restores over the top of the VMDKs. Since the newly created VM is Thick Eager Zeroed, all of its blocks are zeroed out by ESXi at creation time, so the restore doesn't have to zero each block before writing to it. That per-block zeroing on first write is what was causing the slow restores in v8.
So far so good with the testing of Veeam v9 and SAN Restore Speeds.