I've been looking for information on this everywhere but haven't found much that is relevant. I have read all the posts here on StorSimple, as well as the integration material from Microsoft and Veeam, but still can't find many details.
We have a customer for whom we are implementing a reasonably sized Veeam solution: about 200TB of replicated backup storage for local backup copies, stored on a SAN. They want us to use their existing StorSimple for long-term retention in the cloud via backup copy jobs. We have used StorSimple before for small installs, but are concerned about using it at this scale of backup. Does anyone have experience implementing StorSimple with large backup sets?
Our concerns are many, but the top ones are:
With GFS rotations we get into the territory of hitting StorSimple's 500TB logical (pre-dedupe) limit. We have looked at whether we could keep the retention points as incrementals instead, but that seems basically too hard. Has anyone solved this?
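For context on why the limit worries us, here is a back-of-the-envelope sketch. The retention counts below are made up purely for illustration (they are not our actual policy), but they show how quickly full backup copies under GFS blow past 500TB of logical, pre-dedupe data:

```python
# Rough, illustrative estimate of logical (pre-dedupe) consumption on
# StorSimple under a GFS backup copy scheme. All retention counts here
# are assumptions for the example, not real policy or vendor figures.

full_size_tb = 200   # size of one full backup copy (our source set)
weeklies = 4         # hypothetical GFS retention points per tier
monthlies = 12
yearlies = 3

retained_fulls = weeklies + monthlies + yearlies
logical_tb = full_size_tb * retained_fulls

print(f"Retained full copies: {retained_fulls}")
print(f"Logical pre-dedupe footprint: {logical_tb} TB")
print(f"Exceeds 500 TB logical limit: {logical_tb > 500}")
```

Even with far more conservative retention than this, keeping every point as a synthetic or active full means the logical footprint is the full size multiplied by the number of retained points, which is why we were hoping incrementals would work.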
With large data sets we are worried about how StorSimple handles dedupe: will it pull data back from Azure to compare blocks bit-for-bit, or does it rely purely on checksum comparisons? If it's the former, the data pull-backs from Azure would be huge if we're not careful.
Any info or experience with something like this would be greatly appreciated.