-
- Enthusiast
- Posts: 46
- Liked: never
- Joined: Jan 05, 2020 6:14 am
- Contact:
Why does the "Calculating digests" process take many hours during a replication job?
When I run a replication job between two sites, the following step takes more than 3 hours:
Calculating digests for Hard disk 1 (100.0 GB) 27% completed
I am using seeding and mapping. My source repository is a Data Domain that holds all the backups, and I have placed my replica metadata repository at the destination site.
1. What is this process (Calculating digests for Hard disk)?
2. Is it necessary to run?
3. How can I resolve this issue?
-
- Product Manager
- Posts: 14844
- Liked: 3086 times
- Joined: Sep 01, 2014 11:46 am
- Full Name: Hannes Kasparick
- Location: Austria
- Contact:
Re: Why does the "Calculating digests" process take many hours during a replication job?
Hello,
Depending on your hardware and bandwidth, this looks expected. I just read Data Domain as the source... inline deduplication appliances are known to be slow on read (backup is okay, restore is not).
https://helpcenter.veeam.com/docs/backu ... ml?ver=100 - just to be 100% sure: the Data Domain is the "Backup Repository in DR site" in your case?
You mention that you set the replica metadata to the destination side. The wizard asks you to choose it on the source side https://helpcenter.veeam.com/docs/backu ... ml?ver=100 (text in the header).
As you probably will not replace your Data Domain, I would start by setting the metadata repository to the source side and then just wait.
Yes, that process is required.
Best regards,
Hannes
-
- Enthusiast
- Posts: 46
- Liked: never
- Joined: Jan 05, 2020 6:14 am
- Contact:
Re: Why does the "Calculating digests" process take many hours during a replication job?
1. As I understand it, I just had to change the repository for the replica metadata from the destination site to the source site. Since my repository at the source site is a Data Domain, I cannot save the replica metadata on the Data Domain and have to select another source-side host, such as the Veeam server.
2. The other problem: I copied a VBK file from site A to site B (onto server2, which resides at site B) as a seed, then added server2 to the Veeam server as a repository, and in the replication job I selected this repository as the seed. But when the job runs, it shows: Task failed. Error: VM 'AM' not found in backup for initial sync.
Also, after adding the repository I clicked rescan, and the output was as in the following picture. There are more than 10 VBK files in this repository, but the rescan shows all counters as 0.
Is that okay?
https://pasteboard.co/JePAtve.jpg
-
- Product Manager
- Posts: 14844
- Liked: 3086 times
- Joined: Sep 01, 2014 11:46 am
- Full Name: Hannes Kasparick
- Location: Austria
- Contact:
Re: Why does the "Calculating digests" process take many hours during a replication job?
1. Ah okay, so you have a normal (Windows) server at the DR site? That's good! Yes, the Veeam server at the source is also okay as the metadata repository.
2. Sorry, I don't understand what you are doing / asking. I suggest following the user guide.
Best regards,
Hannes
-
- Enthusiast
- Posts: 46
- Liked: never
- Joined: Jan 05, 2020 6:14 am
- Contact:
Re: Why does the "Calculating digests" process take many hours during a replication job?
About the second question:
I want to use seeding for replication between the two sites. So I took backups of specific VMs at site1 and copied just the VBK files to site2. I then added the server holding those VBK files to the Veeam server as a repository and clicked rescan; the following picture shows the result. Is that okay?
According to the picture, all counters are 0 (0 added, 0 updated, 0 removed, ...), whereas there are more than 5 VBK files in this repository. Is that okay?
https://pasteboard.co/JePAtve.jpg
Now in the replication job I selected that repository at the DR site as the seed, but when the job starts it shows the following error:
https://pasteboard.co/JeQnzqi.jpg
-
- Veeam Software
- Posts: 21139
- Liked: 2141 times
- Joined: Jul 11, 2011 10:22 am
- Full Name: Alexander Fogelson
- Contact:
Re: Why does the "Calculating digests" process take many hours during a replication job?
You need to copy the VBM files along with the VBK files. Rescan is not possible without the VBM, which stores the backup metadata.
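If you want to double-check the seed folder before rescanning, a quick script along these lines will flag any folder that holds VBK files without a VBM next to them (just a sketch; SEED_ROOT is a placeholder for your shared folder, not a real default):

import os

SEED_ROOT = r"D:\SeedRepository"  # placeholder: path to the seed folder at the DR site

for dirpath, _, filenames in os.walk(SEED_ROOT):
    vbk = [f for f in filenames if f.lower().endswith(".vbk")]
    vbm = [f for f in filenames if f.lower().endswith(".vbm")]
    if vbk and not vbm:
        # Rescan cannot import backups from a folder with no VBM metadata file
        print(f"Missing VBM next to {len(vbk)} VBK file(s) in: {dirpath}")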
-
- Enthusiast
- Posts: 46
- Liked: never
- Joined: Jan 05, 2020 6:14 am
- Contact:
Re: Why does the "Calculating digests" process take many hours during a replication job?
Thanks. But I have now copied the two VBM files as well (the VBK files belong to two different backup jobs), and the rescan still fails with "cannot detect backups", as in the following picture. Actually, I have 11 VBK files and 2 VBM files.
https://pasteboard.co/JeQKsiV.jpg
-
- Veeam Software
- Posts: 21139
- Liked: 2141 times
- Joined: Jul 11, 2011 10:22 am
- Full Name: Alexander Fogelson
- Contact:
Re: Why does the "Calculating digests" process take many hours during a replication job?
Are they located in the same folder as the corresponding backups?
-
- Enthusiast
- Posts: 46
- Liked: never
- Joined: Jan 05, 2020 6:14 am
- Contact:
Re: Why does the "Calculating digests" process take many hours during a replication job?
Please see the following picture. All VBK and VBM files are in the same folder on the server at site B.
https://pasteboard.co/JeR1zMs.jpg
-
- Veeam Software
- Posts: 21139
- Liked: 2141 times
- Joined: Jul 11, 2011 10:22 am
- Full Name: Alexander Fogelson
- Contact:
Re: Why does the "Calculating digests" process take many hours during a replication job?
You're using per-VM backup chains in your original repository, right? In that case, each VM has its own subfolder where its backup chain is stored; please try to preserve the same structure on the target repository.
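For example, copying the whole job folder with its directory tree intact, rather than file by file (a sketch; both paths are placeholders for your environment):

import shutil

SRC = r"\\source-site\backups\SeedJob"  # placeholder: per-VM backup folder at the main site
DST = r"D:\SeedRepository\SeedJob"      # placeholder: seed folder at the DR site

# copytree preserves each VM's subfolder, which is the layout rescan expects
shutil.copytree(SRC, DST, dirs_exist_ok=True)  # dirs_exist_ok requires Python 3.8+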
-
- Enthusiast
- Posts: 46
- Liked: never
- Joined: Jan 05, 2020 6:14 am
- Contact:
Re: Why does the "Calculating digests" process take many hours during a replication job?
Yes. In the original repository the following options are selected:
- Decompress backup data blocks before storing
- Use per-VM backup files
Both of these options are also selected on my destination repository at the DR site, and yet another rescan of the repository shows the previous error.
-
- Veeam Software
- Posts: 21139
- Liked: 2141 times
- Joined: Jul 11, 2011 10:22 am
- Full Name: Alexander Fogelson
- Contact:
Re: Why does the "Calculating digests" process take many hours during a replication job?
If the backups and VBM files preserve the original folder structure and the rescan still fails, feel free to contact support for remote assistance - that will be more effective than forum ping-pong.
-
- Enthusiast
- Posts: 46
- Liked: never
- Joined: Jan 05, 2020 6:14 am
- Contact:
Re: Why does the "Calculating digests" process take many hours during a replication job?
I will open a case, but before that:
My repository at the main site is a Data Domain, and the following options are selected on it:
- Decompress backup data blocks before storing
- Use per-VM backup files
My repository at the DR site is the Veeam backup server itself; I just shared a folder on it and copied all the VBK and VBM files needed for seeding the replication job.
Is this correct, or do I have to do anything else?
-
- Veeam Software
- Posts: 21139
- Liked: 2141 times
- Joined: Jul 11, 2011 10:22 am
- Full Name: Alexander Fogelson
- Contact:
Re: Why does the "Calculating digests" process take many hours during a replication job?
Yes, basically it should work as outlined here.
-
- Enthusiast
- Posts: 46
- Liked: never
- Joined: Jan 05, 2020 6:14 am
- Contact:
Re: Why does the "Calculating digests" process take many hours during a replication job?
Does the order matter? Should I first add the DR repository to Veeam at the main site and then copy the VBK and VBM files, or first create the repository, copy the VBK and VBM files, and only then add it to Veeam at the main site?
-
- Veeam Software
- Posts: 21139
- Liked: 2141 times
- Joined: Jul 11, 2011 10:22 am
- Full Name: Alexander Fogelson
- Contact:
Re: Why does the "Calculating digests" process take many hours during a replication job?
The order doesn't matter here.
-
- Enthusiast
- Posts: 46
- Liked: never
- Joined: Jan 05, 2020 6:14 am
- Contact:
Re: Why does the "Calculating digests" process take many hours during a replication job?
This is so strange: I had an extra folder inside the repository folder. I just removed that folder and ran another rescan of the repository. As the following picture shows, it no longer reports any error, just "2 skipped".
https://pasteboard.co/JeSLm51.jpg
Is this OK?
-
- Veeam Software
- Posts: 21139
- Liked: 2141 times
- Joined: Jul 11, 2011 10:22 am
- Full Name: Alexander Fogelson
- Contact:
Re: Why does the "Calculating digests" process take many hours during a replication job?
You can try to seed the replica - chances are the backups are already imported. Please contact technical support for further assistance.
-
- Novice
- Posts: 5
- Liked: 2 times
- Joined: Apr 25, 2018 11:10 am
- Full Name: Jannie Hanekom
- Contact:
Re: Why does the "Calculating digests" process take many hours during a replication job?
I'm going to stick to the digests question; I'm not a Veeam expert and that's the only one I can share any experience on.
Just in case the real question is "why does the digest calculation need to run *every time* the job runs?": it shouldn't need to.
My particular story involved a WAN accelerator, which may mean it's not applicable to you, but I'll share it anyway. I had set up replica (and copy) jobs and was very confused by the "calculating digests" process that ran (almost) every time the replica (or copy) job ran.
Turns out it was as simple as me underestimating the size of the digests: the disk where digests are stored (the C: drive by default, IIRC) filled up, the oldest digests were deleted, and the process started again from scratch at the next replica interval. If there is sufficient disk space, the oldest digests don't get deleted, and the next replica interval doesn't have to calculate them again. With the WAN accelerator, digests used 5% of the total volume of data to be replicated. What I missed is that replicas and copies keep separate sets of digests. As I was replicating about 20 TB, I needed 2 TB (10%) for digests, and I was only making provision for 1 TB (5%). Again, this may not be appropriate for your particular environment.
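To make that arithmetic concrete (the ~5% per digest set is just what I observed in my environment, not an official ratio):

# Back-of-the-envelope digest sizing, using my observed ~5% per digest set.
replicated_tb = 20.0   # total data being replicated
digest_ratio = 0.05    # observed fraction per digest set (assumption, not official)
digest_sets = 2        # one set for replicas, one for backup copies

required_tb = replicated_tb * digest_ratio * digest_sets
print(f"Digest space needed: {required_tb:.1f} TB")  # 2.0 TB, twice the 1 TB I provisioned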
With that said, here's my suggestion: check the size of the digests folder and make sure you have enough free disk space available. If you're not keen on making the C: drive bigger, consider moving the digests folder to a different drive or partition with more space. (For the WAN accelerator, I had to move the Global Cache location.)
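A quick way to do that check (a sketch; the digests path below is a placeholder, so point it at wherever your install actually keeps them):

import os
import shutil

DIGESTS = r"C:\VeeamDigests"  # placeholder: substitute your actual digests folder

# Total size of everything under the digests folder
used = sum(
    os.path.getsize(os.path.join(dirpath, f))
    for dirpath, _, files in os.walk(DIGESTS)
    for f in files
)
free = shutil.disk_usage(DIGESTS).free
print(f"Digests use {used / 1e9:.1f} GB; {free / 1e9:.1f} GB free on that volume")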
(For what it's worth, in the end we ended up doing daily backup copy jobs, and set up daily replica jobs using the already-copied backups as the source, to avoid sending data over the wire twice.)