We have a 2019 cluster using CSVs, connected via MPIO 8Gb FC to a PureStorage all-NVMe ("DirectFlash") array, which puts the read bottleneck at roughly 8 Gbps (2x4 Gbps with MPIO). However, our jobs report 'Source' as the bottleneck and don't even hit 1 GB/s.
I'm positive the PureStorage array and the cluster can read faster than that, so I've started to wonder whether it's something about how Veeam interacts with Hyper-V in general that slows things down. But if it were things like compression, dedupe, etc., wouldn't 'Proxy' show as the bottleneck rather than 'Source'?
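For reference, the back-of-the-envelope ceiling for that fabric (a sketch, assuming two 4 Gb/s paths aggregated by MPIO and ignoring FC encoding/protocol overhead, so this is an upper bound):

```python
# Rough throughput ceiling for the MPIO FC fabric described above.
# Assumption: two 4 Gb/s paths, no encoding/protocol overhead counted.
paths = 2
gbps_per_path = 4
aggregate_gbps = paths * gbps_per_path   # 8 Gb/s aggregate
ceiling_gbs = aggregate_gbps / 8         # bits -> bytes: ~1.0 GB/s
print(f"Aggregate: {aggregate_gbps} Gb/s ~= {ceiling_gbs:.1f} GB/s")
```

So "not even 1 GB/s" means the jobs are running below the theoretical maximum of the fabric itself, before any array or host limits come into play.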
Additionally, we've started to deploy ROBO locations with SAS-SSD RAID arrays that have absolutely stunning on-disk performance.
On every one of our hosts, CPU and memory are underutilized with no egregious CPU wait times, and all run at 97%+ relative memory bandwidth (meaning we populate our DIMMs for maximum memory throughput for the CPU generation in question, whether 4- or 6-channel).
Our repositories never show as the bottleneck, and all of them either have an SSD tier or are all-SSD, so they can sustain sequential write speeds of >900 MB/s; at that point the limiting factor is that most have 2x10 Gbps networking between them.
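Even a single 10 Gb/s link comfortably exceeds what the repositories can sustain, so the network shouldn't be the binding constraint for a single stream either (a sketch of the arithmetic, assuming raw line rate with no TCP/protocol overhead; real numbers will be somewhat lower):

```python
# Per-link network ceiling vs. sustained repository write speed.
# Assumption: raw 10 Gb/s line rate, overhead ignored.
link_gbps = 10
link_mbs = link_gbps * 1000 / 8          # ~1250 MB/s per 10 Gb link
repo_write_mbs = 900                     # sustained repo write from above
print(f"One 10Gb link: ~{link_mbs:.0f} MB/s vs repo at {repo_write_mbs} MB/s")
```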
The biggest impact shows up in replication jobs or small backups that shouldn't take long but still run 5-10 minutes, simply because Veeam takes literally 2+ minutes just to 'start' the process on the hosts. Once it gets to reading the CBT data things move pretty quickly, but even then I feel it 'should' be faster given the hardware in question.
As an example, a RAID-5 SAS-SSD (mixed-use) host that I know can sustain sequential read speeds of >3 GB/s:
Hard disk 1 (450 GB) 118.9 GB read at 135 MB/s [CBT]
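Putting numbers on that line: at 135 MB/s the 118.9 GB read takes roughly a quarter of an hour, while the same read at the ~3 GB/s the host can sustain would finish in well under a minute (a sketch; decimal units assumed, 1 GB = 1000 MB, matching the job stats):

```python
# Elapsed time for the observed read vs. what the hardware could do.
# Assumption: decimal units (1 GB = 1000 MB); 3 GB/s from the host spec above.
read_gb = 118.9
observed_mbs = 135
capable_mbs = 3000
observed_min = read_gb * 1000 / observed_mbs / 60   # ~14.7 minutes
capable_min = read_gb * 1000 / capable_mbs / 60     # ~0.7 minutes
print(f"Observed: {observed_min:.1f} min, hardware-capable: {capable_min:.1f} min")
```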
On the cluster, here are two examples that really make me scratch my head:
PureStorage SAN volume:
Hard disk 4 (10 TB) 1.3 TB read at 391 MB/s [CBT]
Hard disk 4 (17 TB) 14 TB read at 444 MB/s [CBT]
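Even the faster of those two reads uses well under half of the fabric's theoretical limit (a sketch, assuming the ~1000 MB/s aggregate ceiling of 2x4 Gb/s MPIO with overhead ignored):

```python
# Fraction of the (assumed) ~1000 MB/s FC ceiling the cluster reads hit.
ceiling_mbs = 1000   # ~2x4 Gb/s MPIO aggregate, overhead ignored
reads = [("Hard disk 4 (10 TB)", 391), ("Hard disk 4 (17 TB)", 444)]
for name, mbs in reads:
    print(f"{name}: {mbs} MB/s = {mbs / ceiling_mbs:.0%} of fabric ceiling")
```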
So in theory the PureStorage has the fastest disk backend, followed by the 3PAR, with the SAS-SSD RAID about equal, yet the speeds are drastically different.
So is it just that Veeam isn't as 'fast' with Hyper-V? VMware didn't seem to have these sorts of delays in getting things going.