I have a follow-up question about your response below.
It does just that - reads the entire full backup data from the source repository instead of synthesizing it from files stored on the target one (which requires random I/O + data re-hydration).
Just to make sure I understood correctly: isn't synthesizing from files stored on the target (which requires random I/O + data re-hydration) the same as doing a synthetic full transformation?
Also, reading from the source will do the same thing, right? It still creates a synthetic full, which also requires random I/O + data re-hydration.
Is one of the prime reasons for enabling "reads the entire full backup data from the source repository instead of synthesizing" that reading from the source would be faster, since the source has a shorter backup chain, compared to GFS, which will usually have a longer backup chain?
As per my understanding, both perform the same synthetic full operation; the only question is which repository it reads from, either the source or the target.
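To make my mental model concrete, here is a rough sketch of the two paths as I understand them. This is purely illustrative and not Veeam's actual code; the function names `read_full_from_source` and `synthesize_from_target` and the toy block data are made up.

```python
# Hypothetical sketch of the two ways a backup copy job could
# produce a full restore point. NOT Veeam's implementation.

def read_full_from_source(source_chain):
    """Active-full style: read the latest full restore point from the
    source repository in one sequential pass, no rehydration needed."""
    return dict(source_chain["full"])

def synthesize_from_target(full, increments):
    """Synthetic-full style: start from an older full on the target and
    replay each increment. Each block lookup is effectively random I/O,
    and compressed/deduplicated blocks must be rehydrated first."""
    state = dict(full)
    for inc in increments:          # longer chain => more increments to replay
        state.update(inc)           # apply the changed blocks
    return state

# Toy data: block-id -> block content.
source_chain = {"full": {0: "A2", 1: "B2", 2: "C0"}}            # short chain on source
target_full = {0: "A0", 1: "B0", 2: "C0"}                       # older full on target
target_increments = [{0: "A1"}, {1: "B1"}, {0: "A2", 1: "B2"}]  # longer GFS-style chain

# Both paths end up with the same full restore point;
# they differ only in where and how the data is read.
assert read_full_from_source(source_chain) == synthesize_from_target(
    target_full, target_increments
)
```

If that sketch is right, the only difference between the two options is the I/O pattern and the repository being read, which is exactly what I am asking about.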
BTW, I did read the link in your post, but it still did not answer this.