Thanks, foggy. I forgot to reply back and say so earlier.
In the meantime we have another question. It might warrant a new thread, but we figured here would be the best stop first.
We're deploying a Data Domain (DD) in our cloud host as the backup copy job destination for some of our customers. There are two goals:
1. As small as possible – they pay per GB backed up and budget is tight.
1a. This means we want the DD to store the data as efficiently as possible.
2. Within the backup window (compute resources aren’t the bottleneck; bandwidth is).
So as small as possible and fast as possible. Easy, right?
That said, from our Veeam experience we have a few questions:
1. In the on-prem backup job there’s a compression setting and a storage optimization setting.
2. In the backup copy job there’s a compression setting.
3. If we set compression to None on the backup job and Extreme on the copy job (while turning on the "decompress before storing" option on the DD repository), will that be the best fit for the DD’s own compression and deduplication algorithms?
4. Or, if we turn on compression for the on-prem backup job and also for the copy job (which gets decompressed on arrival, as above), is that ideal? Our concern: when the repository decompresses the data on arrival, does it only undo the copy job’s compression, leaving it as it was when the on-prem job finished, or does it decompress it fully, as if no compression had ever been used in either job?
5. How does the storage optimization setting factor in? Our understanding is that it’s some sort of chunk/block size setting. Should this be extreme or Local target IF the top goal is the highest compression and deduplication by the DD?
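To illustrate the concern behind #3 and #4, here’s a toy sketch we put together showing why compressed streams tend to dedupe poorly. It uses fixed-size chunking and zlib purely as stand-ins – the DD actually uses variable-length segmenting and its own compression, and the chunk size here is made up – but the effect is the same: compressing before chunking scrambles the byte patterns, so near-identical backups stop sharing chunks.

```python
import hashlib
import zlib

CHUNK = 128 * 1024  # arbitrary fixed chunk size; a real dedup appliance uses variable-length segments

def chunk_hashes(buf: bytes) -> set:
    """Fingerprint fixed-size chunks, the way a dedup store identifies duplicates."""
    return {hashlib.sha256(buf[i:i + CHUNK]).digest()
            for i in range(0, len(buf), CHUNK)}

def shared_fraction(a: bytes, b: bytes) -> float:
    """Fraction of the second backup's chunks the store already holds from the first."""
    ha, hb = chunk_hashes(a), chunk_hashes(b)
    return len(ha & hb) / len(hb)

# Two "backups" of the same dataset, with one byte changed between runs.
base = b"".join(b"record %08d: some payload text\n" % i for i in range(50000))
edited = bytearray(base)
edited[len(base) // 2] ^= 0xFF
edited = bytes(edited)

# Uncompressed: only the one chunk containing the edit changes.
raw_frac = shared_fraction(base, edited)

# Pre-compressed: the zlib streams diverge from the edit point onward,
# so downstream chunks no longer line up and dedup collapses.
comp_frac = shared_fraction(zlib.compress(base), zlib.compress(edited))

print(f"raw chunks shared:        {raw_frac:.0%}")
print(f"compressed chunks shared: {comp_frac:.0%}")
```

In this toy run nearly all of the uncompressed chunks are shared between the two backups, while the compressed versions share almost none – which is why our gut says the data should land on the DD uncompressed, whatever compression is used in transit.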