antipolis wrote:
ok I'm really confused here... I've read a lot of things on Veeam w/ ReFS, watched the videos from Gostev, and I might have missed it, but I don't recall seeing anywhere that the new best practice is to just stop doing synthetic backups on ReFS...

anyway, synthetics or not doesn't explain the performance issues many are seeing; disabling synthetics on regular backups won't solve the issue for merge operations or Backup Copy job GFS

I'll try to explain it from my perspective. Gostev is part of product development (specifically, I believe his title is currently Senior VP of Product Management), while Luca and I work in the field with customers and their deployments (I'm a Principal Solutions Architect in NA, and I believe Luca's title is Cloud and Hosting Architect in EMEA). We all work together closely, but perhaps the best way to put it is that Gostev, and likely all of us in the early days, provide practices that are best in theory, because there is not enough field practice to provide anything else. Over time, though, the field proves which practices are actually best, and sometimes those differ because of technological issues that were unforeseen.
The Veeam best practices document is created by Veeam architects who work in the field, like Luca and myself (and many others), because we observe what works for customers and what causes problems. Thus the recommendations/suggestions you see here may actually be based on changes we are planning to make in the best practices going forward, or on a specific situation that exists now (like the current issues with ReFS) that we hope will go away in the future, but we want customers to have the best experience now. This is not to say that Gostev and product development do not also see field issues; of course they do, through support cases and other avenues (including speaking with field resources like myself, Luca, and others). But from that perspective their focus is mostly on fixing the issues so that, hopefully, theory and practice get closer together, while we, as field resources, work to provide recommendations that will be successful even with any current technical limitations.
And that's exactly where this specific recommendation comes from. Sure, in theory, using synthetic fulls on ReFS via block clone should not cause any problems. However, in practice, we've seen that having lots of cloned files, with many files referencing the same blocks, has far more negative impacts than expected, one of the more common being when those files have to be deleted. Microsoft continues to work on these issues, so hopefully, at some point in the future, theory will get closer to practice. But for now, those of us in the field are saying that, based on field experience, customers who run synthetics have significantly more performance and stability issues than those who do not. So, if you are running synthetics in cases where they offer minimal benefit, it's probably best not to use them.
The reason for this is actually pretty obvious if you think about it. Most of the performance and stability issues around ReFS seem to be with how ReFS accounts for cloned blocks. If I have 45 days of retention with synthetic fulls, that means I have 6 VBK files whose blocks carry anywhere from 1 to 6 references each. If my full backups are 20TB, that's 20TB worth of blocks that have to have their metadata updated. On the other hand, if I just have 45 days of forever forward backups, the only time there are ever any duplicate blocks is during the merge, and, as soon as the merge is complete, the file with the duplicate blocks (the oldest incremental) is deleted. Since this is not a full backup, it's much smaller, so far fewer blocks need their metadata updated during the delete, and the update operation is overall simpler, since the maximum number of times a block will be referenced is 2, and then immediately back down to 1 again.
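A toy sketch of the bookkeeping difference described above (the scaled-down sizes, block granularity, and simple refcount model are my own assumptions for illustration, not actual ReFS or Veeam internals):

```python
from collections import Counter

def delete_file(refcounts, file_blocks):
    """Drop one reference per block the deleted file used; every
    decrement stands in for one piece of metadata ReFS must update."""
    updates = 0
    for b in file_blocks:
        refcounts[b] -= 1
        updates += 1
        if refcounts[b] == 0:
            del refcounts[b]  # last reference gone, block freed
    return updates

# Scaled-down numbers (assumed): 20,000 blocks stand in for a 20TB
# full, 1,000 blocks for a much smaller daily incremental.
N_FULL, N_INCR = 20_000, 1_000

# Synthetic-full chain: 6 VBKs all block-cloned from the same blocks,
# so each block is referenced up to 6 times; deleting the oldest full
# touches the metadata of every block in the full.
refs_synth = Counter({b: 6 for b in range(N_FULL)})
synthetic_updates = delete_file(refs_synth, range(N_FULL))

# Forever forward: only the merge briefly creates a second reference
# (refcount 2 -> 1), and only across the small oldest incremental.
refs_ff = Counter({b: 2 for b in range(N_INCR)})
forward_updates = delete_file(refs_ff, range(N_INCR))

print(synthetic_updates, forward_updates)  # 20000 vs 1000
```

With these assumed sizes, a routine delete in the synthetic-full chain forces roughly 20x the reference-count updates of a forever-forward merge, which is the accounting gap the paragraph above is pointing at.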
In other words, when running synthetics on ReFS, there's a massive increase in the amount of accounting work that ReFS has to do compared to running forever incremental and, in practice, it does impact both the performance and stability of the solution, even if theory says it should not.
Of course you are correct to point out that there are cases, like Backup Copy with GFS, where synthetics are the only option; that's certainly true. We're hopeful that Microsoft eventually gets to the bottom of those specific issues, in which case it probably won't matter anymore. But for now, we're simply saying that, based on the results of hundreds of customer deployments, and the current state of ReFS, running synthetic fulls in cases where they are not needed and offer little to no benefit is likely to cause more problems than using a forever mode.