1) Nimble SFA is advertising 8:1 dedup/compression ratios. Are you seeing those rates?
Having worked on deduplicated storage in one form or another for years now, I don't find their dedupe/compression any better or worse than others'. Understand, they're just running LZ4 compression, which is pretty standard, plus variable-block dedupe. It's all pretty standard stuff. All that to say, it really depends what you're putting on the array. I have seen dedupe ratios (not on the SFA, but dedupe in general) of 50-60x on some workloads, especially mountains of full database backups that are mostly the same. On the one SFA, I'm seeing 3.6x combined compression and dedupe with about two weeks' worth of data. On one volume of entirely PDFs I saw about 5x just in the initial replica, which was pretty surprising. (We create a ton of PDFs for faxing from a limited number of templates.) I imagine at a month or more I'll see better dedupe for our more standard VM/database data.
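For anyone curious what "variable block dedupe" means in practice, here's a rough sketch of the idea (my own illustration, not Nimble's implementation; the function names and the simple rolling-sum boundary test are stand-ins for the real rolling hashes arrays use). Chunk boundaries are chosen by the data's content rather than at fixed offsets, so identical runs of bytes produce identical chunks even when they shift position, which is why repeated backups and templated PDFs dedupe so well:

```python
import hashlib

def chunk_boundaries(data: bytes, mask: int = 0x3F, window: int = 16):
    """Split data at content-defined boundaries. A boundary is declared
    when a simple rolling window sum matches the mask, so identical
    content yields identical chunks regardless of byte offset. Real
    systems use proper rolling hashes plus min/max chunk sizes; this
    is just the core idea."""
    chunks, start = [], 0
    for i in range(window, len(data)):
        if sum(data[i - window:i]) & mask == mask:
            chunks.append(data[start:i])
            start = i
    chunks.append(data[start:])
    return chunks

def dedupe(data: bytes):
    """Store each unique chunk once, keyed by its SHA-256 fingerprint.
    The recipe (ordered list of fingerprints) reconstructs the data."""
    store, recipe = {}, []
    for c in chunk_boundaries(data):
        h = hashlib.sha256(c).hexdigest()
        store.setdefault(h, c)
        recipe.append(h)
    return store, recipe
```

Feed it data containing repeats and the unique chunks stored come out well under the logical size, while joining the chunks named in the recipe reproduces the original bytes exactly.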
2) Does the dedup/compression occur as the data is being written to disk (i.e. inline)? Or do you have to wait for the Nimble to catch up on dedup/compression after the data has been written?
100% inline. There is no offline or after-the-fact processing like Exagrid does.
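The inline-versus-post-process distinction boils down to where in the write path the fingerprint check happens. A minimal sketch (my own illustration with made-up names, not Nimble's write path): inline designs fingerprint each block before it touches disk, so duplicates are never written at all, whereas post-process designs like Exagrid's land the raw data first and reclaim duplicates later:

```python
import hashlib

class InlineDedupeStore:
    """Toy inline-dedupe write path: fingerprint first, write only
    if the fingerprint is new. Duplicate blocks cost an index lookup
    and a map entry, never a disk write."""
    def __init__(self):
        self.blocks = {}        # fingerprint -> block data (the "disk")
        self.volume = []        # logical block map: ordered fingerprints
        self.bytes_written = 0  # physical bytes that actually hit disk

    def write(self, block: bytes):
        fp = hashlib.sha256(block).hexdigest()
        if fp not in self.blocks:   # only never-seen data is stored
            self.blocks[fp] = block
            self.bytes_written += len(block)
        self.volume.append(fp)      # logical map always grows

    def read(self, index: int) -> bytes:
        return self.blocks[self.volume[index]]
```

Write the same 4 KB block three times and `bytes_written` only reflects one physical copy, while the logical volume still reports three blocks and reads back correctly.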
3) Is your backup data sitting on VMDKs served off the Nimble?
Veeam snaps the primary Nimble and replicates the data to the SFA. We keep a few days of snaps on the primary and multiple days on the SFA. Then, for second-media and off-site purposes, we have copy jobs copying from the SFA to Exagrids daily. So, to answer your question, yes?