It's very important to understand that Windows 2012 dedupe is not really designed for high data ingest rates. In general, the Microsoft recommendation is 100GB/hr, which means that, assuming you use the default 8-hour dedupe window, you can only process about 800GB a day. You can tweak the default job schedule to run longer, but even at 24 hours that's only 2.4TB/day. How big are your full backups?
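If you do want to stretch the optimization window, the Deduplication PowerShell module is the place to do it. Just to sketch the idea (the schedule name, start time and duration below are placeholders, not recommendations -- check Get-DedupSchedule first to see what's already defined on your server):

```powershell
# See the dedupe schedules currently defined on this server
Get-DedupSchedule

# Example only: add a nightly optimization window that runs for 12 hours
# starting at 18:00 every day. Adjust the name, days, start time and
# duration to fit around your backup window.
New-DedupSchedule -Name "NightlyOptimization" `
                  -Type Optimization `
                  -Days Monday,Tuesday,Wednesday,Thursday,Friday,Saturday,Sunday `
                  -Start (Get-Date "18:00") `
                  -DurationHours 12
```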
Windows dedupe can scale past 100GB/hr by using multiple volumes and running dedupe jobs concurrently (each dedupe job will only use a single core), but of course each volume is then a separate dedupe pool.
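Kicking the jobs off in parallel is straightforward; as a rough sketch (the drive letters are just examples):

```powershell
# Start an optimization job on each backup volume. Each job is
# single-threaded, so one job per volume is how you get more than
# one core working at the same time.
Start-DedupJob -Volume "D:" -Type Optimization
Start-DedupJob -Volume "E:" -Type Optimization

# Watch the progress of the running jobs
Get-DedupJob
```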
In other words, comments that it "works great" don't really take into account the impact of scaling beyond smallish repositories (say 10TB or less). With repositories of 30TB and 60TB, I'd have to assume you're ingesting a significant amount of data, and based on your savings rate I doubt you've completed a full pass at this point.
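You can check where a volume actually stands from PowerShell -- the saved space and the time/result of the last optimization pass will tell you whether a full pass has ever finished:

```powershell
# Saved space and the last optimization pass per volume
Get-DedupStatus | Format-List Volume, SavedSpace, OptimizedFilesCount, LastOptimizationTime, LastOptimizationResult

# Overall savings rate per volume
Get-DedupVolume | Format-List Volume, SavedSpace, SavingsRate
```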
I'm working on a whitepaper with some guidelines, but it probably won't be ready for a few more weeks since it takes time to test the various setups. For data of your size, the approach would likely involve splitting the volumes into smaller chunks (perhaps 16TB each) and running a dedupe job on each. Note that this might not save significant space compared to Veeam compression with reverse incrementals, since you always have to keep enough free space to store at least one pass of uncompressed full backups. The primary use case is long-term archival (months), in which case Windows dedupe can be a huge win.
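For that kind of archive layout, enabling dedupe per volume might look something like this -- purely illustrative, the drive letters and the 3-day minimum file age are placeholders you'd tune to your retention:

```powershell
# Hypothetical layout: several ~16TB archive volumes instead of one big one.
$archiveVolumes = "F:", "G:", "H:", "I:"

foreach ($vol in $archiveVolumes) {
    # Turn on deduplication for the volume
    Enable-DedupVolume -Volume $vol

    # Only optimize files once they've aged past the active backup window,
    # since the newest fulls have to sit on disk uncompressed anyway.
    Set-DedupVolume -Volume $vol -MinimumFileAgeDays 3
}
```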