Hope you are well.
I'm here to share one of my experiences with VAL (Veeam Agent for Linux).
I have a virtual infrastructure where Veeam is doing a very good job,
but I also have a huge 60 TB physical Linux server (RHEL 6).
The first full was painful (more than 12 hours for a 3 TB FS) => an average "reading" speed of 60 MB/s (and writing is no better).
We are using "volume level" LVM backups, but "file level" or "entire computer" modes are no better.
- first steps of the investigation
- destination storage performance: the backup repository is a ReFS pool, which is able to sustain much more
- connection speed: connectivity between the Veeam proxy and the Linux server is 2x10 GbE
- source storage: the Linux server is connected to an (old) SAN array, but its average read/write speed is roughly 10 times faster than 60 MB/s
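In case it helps anyone reproduce those numbers: a rough way to sanity-check raw sequential throughput on the Linux side is a plain dd run. This is only a sketch; the test file path and sizes are placeholders, point it at a file on the SAN-backed filesystem being backed up.

```shell
#!/bin/sh
# Rough sequential throughput check. TESTFILE is a placeholder --
# set it to a path on the SAN-backed filesystem under test.
TESTFILE=${TESTFILE:-/tmp/throughput_test.bin}

# Write test: 256 MiB, forcing data to disk at the end (conv=fdatasync)
# so the page cache doesn't inflate the reported MB/s.
dd if=/dev/zero of="$TESTFILE" bs=1M count=256 conv=fdatasync

# To make the read test hit the disk instead of RAM, drop caches first
# (needs root):  echo 3 > /proc/sys/vm/drop_caches

# Read test: read the file back and discard the data.
dd if="$TESTFILE" of=/dev/null bs=1M
```

If both dd runs report several times more than 60 MB/s here, the bottleneck is more likely somewhere in the snapshot/transport layer than in the raw storage path.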
- next steps
- changing the aggressiveness of the backup process on Linux => no improvement
- we even went as far as checking partition alignment with the storage
I'm not very sure how effective that test was, because in my case the storage array doesn't publish "best practices" for block size.
We finished that investigation with no improvement at all.
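For anyone wanting to repeat the alignment check without vendor guidance: a partition is aligned when its starting byte offset is a multiple of the array's stripe size. A minimal sketch, assuming a 64 KiB stripe (a placeholder, since my array doesn't document one) and the usual 512-byte sectors:

```shell
#!/bin/sh
# Partition alignment check sketch. The values below are placeholders;
# on a real box read them from sysfs, e.g.:
#   start_sector=$(cat /sys/block/sdb/sdb1/start)
#   opt_io=$(cat /sys/block/sdb/queue/optimal_io_size)  # bytes; 0 if unreported
start_sector=2048      # first sector of the partition (512-byte sectors)
stripe_bytes=65536     # assumed 64 KiB array stripe size (placeholder)

offset_bytes=$((start_sector * 512))
if [ $((offset_bytes % stripe_bytes)) -eq 0 ]; then
    echo "aligned: offset ${offset_bytes} B is a multiple of ${stripe_bytes} B"
else
    echo "MISALIGNED: offset ${offset_bytes} B vs stripe ${stripe_bytes} B"
fi
```

Older fdisk defaults started the first partition at sector 63, which is misaligned for nearly any stripe size; a 1 MiB start (sector 2048) is aligned for all the common ones.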
BUT that was only the "first season" of my issues with VAL: we figured out that after the first "very painful" full, the incrementals were running smoothly.
Unfortunately, this week, for other reasons, we decided to patch to Veeam 10a, and with that update we also upgraded VAL to version 4.0.1.
Now it's even worse: the incrementals that used to take 40 minutes (reading only the changed blocks) are now taking 8 hours (and not only the first incremental after the upgrade). So we are back to our performance issue, now with incremental backups as well (not only fulls).
If anyone has good advice, I'd appreciate it.
PS: Case ID #02028022