Hi,
I would like to know how the data travels when we run a backup copy job from one DXi to another DXi with the Veeam Data Mover Service installed on both Quantum appliances.
Does the data travel directly from one VDMS to the other VDMS, or does it have to pass through a Veeam proxy server in the middle (Quantum VDMS --> Proxy --> Quantum VDMS)?
Thank you
Dmitry Popov, Product Manager
Re: Quantum DXi with VDMS
Hello andre.simard,
Backup copy jobs go from the source repository data mover to the target repository data mover. These run on the repository itself or, in the case of a shared folder, on the "gateway server" selected in the backup repository settings.
Cheers!
Andreas Neufert, VP, Product Management
Re: Quantum DXi with VDMS
Hi Andre,
Thanks for the question. In addition to Dima's answer, let me add that in this scenario the data is rehydrated by the Quantum dedup engine, so you need to factor that overhead into your planning.
Best practice is to write primary backups to fast non-deduplicating storage (perhaps keeping just a few restore points) and then use backup copy jobs to write to both Quantum DXis.
If you keep the setup as is, I recommend enabling compression in the backup copy job and activating the decompress option at the repository level. This will help optimize network usage.
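To illustrate the idea behind that last recommendation, here is a minimal Python sketch (not Veeam code, just the concept): the source side compresses blocks so less data crosses the network, and the target side decompresses them before writing, so the dedup appliance sees raw, dedup-friendly data.

```python
import zlib

def source_data_mover(block: bytes) -> bytes:
    """Compress a backup block before it crosses the network."""
    return zlib.compress(block, level=6)

def target_data_mover(wire_data: bytes) -> bytes:
    """Decompress on arrival so the dedup storage receives raw blocks."""
    return zlib.decompress(wire_data)

block = b"backup data " * 4096          # highly repetitive sample block
on_wire = source_data_mover(block)      # what travels over the network
stored = target_data_mover(on_wire)     # what lands on the DXi

assert stored == block                  # lossless round trip
print(f"raw: {len(block)} bytes, on wire: {len(on_wire)} bytes")
```

The point is that compression helps only in transit here: writing compressed blocks to a deduplicating appliance would hurt its dedup ratio, which is why the decompress option belongs at the repository level.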