The storage can easily still be the bottleneck, depending on your setup. Veeam measures bottlenecks at four points: source, proxy, network, and target. In your setup there's no network traffic (the proxy and source are on the same server, so data moves through shared memory, which is still way faster than flash), and the target is likely writing roughly 50% less data than is being read from the source since the data is compressed, so the target is unlikely to be the bottleneck. Proxy is a measure of CPU time spent on the proxy, which is unlikely to be very high if your total throughput is only 146 MB/s.
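To make the idea concrete, here's a minimal sketch of how that kind of bottleneck readout works: the bottleneck is simply whichever stage was busy the largest fraction of the job's run time. The percentages below are made up for illustration, not taken from your job.

```python
# Hypothetical busy-time percentages for the four stages a backup job
# reports (source, proxy, network, target). Values are invented.
stats = {"source": 83, "proxy": 21, "network": 2, "target": 38}

# The reported bottleneck is the stage with the highest busy percentage.
bottleneck = max(stats, key=stats.get)
print(bottleneck)  # -> source
```

In a layout like yours, you'd expect "network" to sit near zero and "source" to dominate if the read path is the limit.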
You still have to read the uncompressed data from the array over whatever interconnect you're using, and that interconnect is the most likely candidate. I'd agree that 146 MB/s seems somewhat slow for an all-flash array.
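A quick back-of-the-envelope comparison against common interconnect ceilings shows why the number is suspicious. These ceilings are approximations (line rate minus encoding/protocol overhead), not measurements from your environment:

```python
# Rough usable-throughput ceilings for common interconnects, in MB/s.
# These are approximations: line rate reduced for encoding/protocol
# overhead (e.g. 8Gb FC uses 8b/10b encoding, giving ~800 MB/s usable).
ceilings_mb_s = {
    "1GbE iSCSI": 1000 / 8 * 0.9,    # ~112 MB/s usable
    "8Gb FC": 800,                    # ~800 MB/s usable
    "10GbE iSCSI": 10000 / 8 * 0.9,   # ~1125 MB/s usable
}

observed = 146  # MB/s reported by the job
for link, ceiling in ceilings_mb_s.items():
    print(f"{link}: {observed / ceiling:.0%} of ~{ceiling:.0f} MB/s")
```

146 MB/s is above what a single 1GbE link can deliver but well under what 8Gb FC or 10GbE should sustain, so knowing the interconnect narrows things down a lot.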
Unfortunately, I don't have enough information to make even an educated guess, so for now I'll just ask a lot of questions:
What type of initiators are you using?
Do you have dedicated links for ingest (reads from the source array) and egress (writes to the target array)?
Is your proxy tuned to minimize response time by disabling things like interrupt mitigation on the HBA/network adapters?
Are you running many VMs in parallel, and is 146 MB/s the aggregate speed?
Are full backups just as slow? (Incremental backups are notoriously hard to judge for speed because they sometimes read so little data it's difficult to calculate a true throughput.)
How much total change was in the job that reported 146 MB/s, i.e. read vs. transferred?
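That last question matters because the reported rate is typically based on data read, not data transferred, so the two paths see very different loads. A hypothetical sketch (all figures invented, back-solved so the read rate comes out at 146 MB/s):

```python
# Hypothetical job figures: read vs. transferred after compression.
read_gb = 120          # data read from the source array
transferred_gb = 60    # data written to the target (~50% compression)

# Back-solve the job duration so the source read rate is 146 MB/s.
duration_s = read_gb * 1024 / 146

read_rate = read_gb * 1024 / duration_s          # MB/s on the read path
write_rate = transferred_gb * 1024 / duration_s  # MB/s on the write path
print(f"source read: {read_rate:.0f} MB/s, target write: {write_rate:.0f} MB/s")
# -> source read: 146 MB/s, target write: 73 MB/s
```

With numbers like these, the target only has to sustain half the throughput of the source, which is why the read path (and its interconnect) is usually the place to look first.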