We have an environment that we manage that is running low on disk space. The backup repositories live on NetApp FAS storage. There is about 150 TB of data that needs to be protected, and one drawback of that repo storage is NetApp's 64 TB volume size limit. When the environment was built, a number of 60 TB volumes were created and added to a Scale-Out Backup Repository. Default Veeam backup settings were used: inline dedupe enabled and compression level set to Optimal. Dedupe was also enabled on the NetApp volumes on a daily schedule. We initially expected much better dedupe ratios on the NetApp side, but are seeing negligible savings. We suspect it's a combination of a) compression being enabled in the Veeam backup jobs, which leaves the array little duplicate data to find, and b) the Scale-Out repo spreading backups across extents while NetApp dedupe runs per volume rather than spanning all volumes.
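For what it's worth, both suspected effects can be sketched with a toy block-level dedupe model. This is purely illustrative — the block size, volume count, and round-robin placement are assumptions, not how Veeam or ONTAP actually lay data out — but it shows why compressing before array-side dedupe, and deduping each volume independently, both erode savings:

```python
import random
import zlib

BLOCK = 4096  # hypothetical dedupe block size; real engines vary

def unique_blocks(data: bytes) -> int:
    """Count distinct fixed-size blocks (toy model of block-level dedupe)."""
    return len({data[i:i + BLOCK] for i in range(0, len(data), BLOCK)})

# Two synthetic "full backups" that are identical except 16 bytes in one
# block, standing in for successive backups of mostly unchanged data.
random.seed(0)
base = random.randbytes(64 * BLOCK)
changed = base[:BLOCK] + bytes(16) + base[BLOCK + 16:]

# (a) Compression before dedupe: the raw copies dedupe well, but the
# compressed copies barely dedupe at all -- compressed output diverges
# after the first change and block boundaries no longer line up.
raw = base + changed
comp = zlib.compress(base) + zlib.compress(changed)
raw_ratio = (len(raw) / BLOCK) / unique_blocks(raw)
comp_ratio = (len(comp) / BLOCK) / unique_blocks(comp)
print(f"raw dedupe ratio:        {raw_ratio:.2f}")   # roughly 2:1
print(f"compressed dedupe ratio: {comp_ratio:.2f}")  # roughly 1:1

# (b) Per-volume vs. spanned dedupe: scatter the same raw blocks
# round-robin across three "volumes" and dedupe each one independently;
# duplicate pairs land on different volumes and are never found.
blocks = [raw[i:i + BLOCK] for i in range(0, len(raw), BLOCK)]
volumes = [blocks[v::3] for v in range(3)]
per_volume = sum(len(set(v)) for v in volumes)
spanned = len(set(blocks))
print(f"blocks kept, per-volume dedupe: {per_volume}")
print(f"blocks kept, spanned dedupe:    {spanned}")
```

In this toy run, per-volume dedupe keeps every block while spanned dedupe roughly halves them — which matches the kind of gap we're seeing between expectation and reality.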
Does that make sense? In this environment, would it be better and more space-efficient to turn compression off completely (and possibly Veeam inline dedupe as well), and, rather than using a Scale-Out repo, to manage multiple standalone repositories and manually assign backup jobs to each based on dedupe efficiency?