Thanks for the question. In addition to Dima's answer, let me add that in this scenario the data has to be rehydrated by the Quantum dedup engine during restore, so you need to factor that overhead into your restore-time expectations.
Best practice is to write primary backups to fast non-dedup storage (perhaps keeping just a few restore points there) and then use backup copy jobs to write to both Quantum DXi appliances.
If you keep the setup as is, I recommend enabling compression in the backup copy job and activating the decompress option at the repository level ("Decompress backup data blocks before storing"). That way the data travels compressed over the network but lands uncompressed on the appliance, which optimizes network usage without hurting the DXi's deduplication ratio.