-
- Expert
- Posts: 122
- Liked: 29 times
- Joined: Jan 06, 2015 10:03 am
- Full Name: Karl Widmer
- Location: Switzerland
- Contact:
Large Fileserver - Replication recommendations?
Hello everyone,
At a customer site we have about 30 VMs on four hosts and shared NetApp NFS storage. We run a daily backup to disk (reverse incremental mode) on the production system and an overnight replication to a second site (a standalone ESXi server with its own NetApp NFS storage) over a synchronous 50 Mbit link. We also take hourly storage snapshots on the production system to lower the RTPO in case of a VM or user failure. (Backup server: Windows Server 2012 R2.)
The backup works fine, as expected. The replication also works fine, but with one big issue: large disks (VMDKs).
Take this customer's file server, for example. The VM has two disks (VMDKs): a small disk for the operating system (Windows Server 2008 R2) and a big data disk of about 1.2 TB.
Every time we have to increase the size of this data disk, the replication takes several days to complete because of the digest and fingerprint calculation.
I thought I might split the data across new, smaller disks. For the end user it makes no difference; a network share is a network share, no matter which disk it resides on.
Then, if a disk has to be increased, the digest and fingerprint calculation would be much faster and the replication wouldn't need days to complete. At least that's my thinking.
What do you think? How do you replicate "big" servers?
Thanks for your tips!
Karl Widmer
IT System Engineer
vExpert 2017-2024
VMware VCP-DCV 2023 / VCA6-DCV / VCA5-DCV / VCA5-Cloud / VMUG Leader
Former Veeam Vanguard / VMCE v9 / VMTSP v9 / VMSP v9
Personal blog: https://www.driftar.ch
Twitter: @widmerkarl
-
- Veeam Software
- Posts: 21138
- Liked: 2141 times
- Joined: Jul 11, 2011 10:22 am
- Full Name: Alexander Fogelson
- Contact:
Re: Large Fileserver - Replication recommendations?
Karl, I'm not sure the digest calculation will involve the resized disk only. I need to confirm, but my understanding was that the entire VM is recalculated. How about simply adding a new disk instead of resizing the existing one? This would also preserve all existing replica restore points, btw.
-
- Expert
- Posts: 122
- Liked: 29 times
- Joined: Jan 06, 2015 10:03 am
- Full Name: Karl Widmer
- Location: Switzerland
- Contact:
Re: Large Fileserver - Replication recommendations?
Hi foggy,
Yes, that is one idea: adding additional disks alongside the existing one. Or, as I mentioned, migrating all the data and splitting it across a bunch of smaller disks. I don't see another way to improve the replication.
That's correct: if any change is made to a disk's size, the whole disk is recalculated. I've seen that so many times, I can clearly confirm it.
Karl Widmer
IT System Engineer
vExpert 2017-2024
VMware VCP-DCV 2023 / VCA6-DCV / VCA5-DCV / VCA5-Cloud / VMUG Leader
Former Veeam Vanguard / VMCE v9 / VMTSP v9 / VMSP v9
Personal blog: https://www.driftar.ch
Twitter: @widmerkarl
-
- Veeam Software
- Posts: 21138
- Liked: 2141 times
- Joined: Jul 11, 2011 10:22 am
- Full Name: Alexander Fogelson
- Contact:
Re: Large Fileserver - Replication recommendations?
I meant that it should re-calculate digests for the entire VM, not just the resized disk.
-
- Expert
- Posts: 122
- Liked: 29 times
- Joined: Jan 06, 2015 10:03 am
- Full Name: Karl Widmer
- Location: Switzerland
- Contact:
Re: Large Fileserver - Replication recommendations?
Yes, that's correct. I checked the logs, and the history shows that every disk (so the whole VM) is recalculated.
Karl Widmer
IT System Engineer
vExpert 2017-2024
VMware VCP-DCV 2023 / VCA6-DCV / VCA5-DCV / VCA5-Cloud / VMUG Leader
Former Veeam Vanguard / VMCE v9 / VMTSP v9 / VMSP v9
Personal blog: https://www.driftar.ch
Twitter: @widmerkarl
-
- Veeam Software
- Posts: 21138
- Liked: 2141 times
- Joined: Jul 11, 2011 10:22 am
- Full Name: Alexander Fogelson
- Contact:
Re: Large Fileserver - Replication recommendations?
Then splitting into smaller disks will not help.
-
- Veteran
- Posts: 941
- Liked: 53 times
- Joined: Nov 05, 2009 12:24 pm
- Location: Sydney, NSW
- Contact:
Re: Large Fileserver - Replication recommendations?
Hi Karl,
So how did you manage to replicate the big server VM?
I'm curious to know.
--
/* Veeam software enthusiast user & supporter ! */
-
- Influencer
- Posts: 16
- Liked: 2 times
- Joined: Jul 15, 2010 9:26 pm
- Contact:
Re: Large Fileserver - Replication recommendations?
It would be interesting to have some more details from Veeam on how this works.
We have two 3 TB file servers that we are trying to replicate to a remote site over a 20 Mb/s WAN.
The seeding procedure was not done correctly and the digests had to be recalculated... it took 6 days per server. One of them is now OK, and replication now takes about 4 hours. For the second one, I'm struggling... The seeding backup is now old, and I have restarted and changed settings multiple times hoping to get better results.
I'm now trying to get a local backup, and I will use a backup copy job to sync the seeding backup at the destination... Still struggling hard.
It would be great to have a best practice that takes the whole chain into account: local backup - remote backup - remote replication - local backup archiving - remote backup archiving...
Because ultimately, that's what the ideal architecture should look like; it should contain all of these elements. And if you don't understand perfectly how Veeam works, it makes a hell of a difference to performance.
As an example, if your primary backup storage is a dedup box, well, good luck. All post-processing tasks will be very slow... And if you want to make backup copies, it will read from the dedup box, rehydrating data, etc.
So far, to understand all this, I have had to read many best practice documents and other publications. It's a shame; it doesn't make the job easy.
-
- Veeam Software
- Posts: 21138
- Liked: 2141 times
- Joined: Jul 11, 2011 10:22 am
- Full Name: Alexander Fogelson
- Contact:
Re: Large Fileserver - Replication recommendations?
With such a slow link, it is typically recommended to perform seeding by physically moving the locally created backup to the remote location, importing it there, and mapping the remote job to it. And, of course, using WAN acceleration is a must.
-
- Novice
- Posts: 3
- Liked: 1 time
- Joined: Nov 09, 2011 7:14 pm
- Full Name: Mike Chmiel
- Contact:
Re: Large Fileserver - Replication recommendations?
This is why I use DFS for file server replication. IMO, using native technologies such as DFS, SQL Always On, Exchange DAG, etc. is the way to go for large servers.
-
- Influencer
- Posts: 16
- Liked: 2 times
- Joined: Jul 15, 2010 9:26 pm
- Contact:
Re: Large Fileserver - Replication recommendations?
foggy wrote: With such a slow link, it is typically recommended to perform seeding by physically moving the locally created backup to the remote location, importing it there, and mapping the remote job to it. And, of course, using WAN acceleration is a must.
This has been done, obviously. Imagine 6 TB of data through a 20 Mb/s WAN...
No, the slowness is due to calculating digests, and I don't get why. The job says the target WAN is the bottleneck, but when I check the target WAN accelerator performance stats, they are OK. It's hard to tell what the bottleneck really is here.
Perhaps changing the block size on the cache partition would help? Any advice?
-
- Enthusiast
- Posts: 73
- Liked: 9 times
- Joined: Oct 26, 2016 9:17 am
- Contact:
Re: Large Fileserver - Replication recommendations?
Superkikim wrote: No, the slowness is due to calculating digests, and I don't get why.
From what I understand, Veeam needs to checksum the blocks to figure out which data is already present at the destination, but yes, it is a painfully slow process.
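To illustrate what that digest step amounts to conceptually, here is a minimal sketch in plain Python (purely illustrative, not Veeam's actual code; the 1 MiB block size and the file paths are assumptions): each side is hashed block by block, and only blocks whose digests differ would need to cross the WAN. If that is indeed how it works, the cost is dominated by reading and hashing every block on both ends rather than by how much data actually changed, which would explain the painfully slow behavior described above.

import hashlib

BLOCK_SIZE = 1024 * 1024  # assumed 1 MiB block size, for illustration only


def compute_digests(path, block_size=BLOCK_SIZE):
    """Hash a disk image block by block and return the list of per-block digests."""
    digests = []
    with open(path, "rb") as f:
        while True:
            block = f.read(block_size)
            if not block:
                break
            digests.append(hashlib.sha1(block).hexdigest())
    return digests


def blocks_to_transfer(source_digests, target_digests):
    """Return the indices of blocks that differ or exist only on the source."""
    return [
        i for i, src in enumerate(source_digests)
        if i >= len(target_digests) or target_digests[i] != src
    ]


# Hypothetical usage: both images have to be read in full just to build the
# digests, which is why this step is slow even when little data has changed.
src = compute_digests("source_disk-flat.vmdk")   # hypothetical path
dst = compute_digests("replica_disk-flat.vmdk")  # hypothetical path
print(f"{len(blocks_to_transfer(src, dst))} of {len(src)} blocks differ")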
-
- Influencer
- Posts: 15
- Liked: 5 times
- Joined: May 08, 2015 12:16 am
- Full Name: Paul S
- Contact:
Re: Large Fileserver - Replication recommendations?
I too struggle in this area. I would love to see better Veeam procedures on best practices for replicating a large VM that has its disks resized from time to time. Based on my forum searching, this seems to be an area where many users struggle.
-
- Enthusiast
- Posts: 73
- Liked: 9 times
- Joined: Oct 26, 2016 9:17 am
- Contact:
Re: Large Fileserver - Replication recommendations?
Here is what I did in the end: I created a big 10 TB VMDK, and for capacity extensions I plan on adding more VMDKs and extending the existing volume with the Windows equivalent of LVM (a spanned dynamic volume).
From my testing, this setup has the advantage of not triggering a checksum recalculation.
-
- Veteran
- Posts: 941
- Liked: 53 times
- Joined: Nov 05, 2009 12:24 pm
- Location: Sydney, NSW
- Contact:
Re: Large Fileserver - Replication recommendations?
Nice idea, Antipolis.
So this works because you have already allocated the big 10 TB VMDK by itself.
Are you provisioning it as thin or thick?
--
/* Veeam software enthusiast user & supporter ! */
-
- Enthusiast
- Posts: 73
- Liked: 9 times
- Joined: Oct 26, 2016 9:17 am
- Contact:
Re: Large Fileserver - Replication recommendations?
Actually, in the meantime we had our link between production and DR upgraded to 1 Gbps. With bandwidth no longer an issue, and to keep the configuration simple, I just resized the 10 TB VMDK to 20 TB and recreated my replica from scratch (I figured it would be faster than doing the checksum).
Production is thin provisioned, but I created the replica thick just for the purpose of using direct SAN mode on the first replication (I quickly realized that doing it over NBD would take something like three times longer).
In the end it was still a painfully slow operation, so next time I have to extend the volume I guess I will go with my first idea...
-
- Expert
- Posts: 158
- Liked: 8 times
- Joined: Jul 23, 2011 12:35 am
[MERGED] Resize one disk, why does Veeam replication scan them all?
I have a Windows file server with three disks:
- C: drive, which contains the OS (Windows Server 2012 R2), 60 GB
- E: drive, which holds home directories, 2,600 GB
- F: drive, which holds file shares, 800 GB
The next time my replication job ran, it had to calculate digests for all three disks even though only one changed size. It then had to read the contents of all three disks even though only one changed size.
During the backup job, the only disk it had to read was the E: drive, since it was the one that changed. Why does a replication job have to calculate digests and read the contents of disks that did not change size, while a backup job does not?
-
- Chief Product Officer
- Posts: 31806
- Liked: 7300 times
- Joined: Jan 01, 2006 1:01 am
- Location: Baar, Switzerland
- Contact:
Re: Resize one disk, why does Veeam replication scan them al
Are you using the latest product version?
-
- Expert
- Posts: 158
- Liked: 8 times
- Joined: Jul 23, 2011 12:35 am
Re: Resize one disk, why does Veeam replication scan them al
Yes, we are using Veeam 9.5 Update 3.
Our hosts are ESXi 6.0 Update 3.
-
- Enthusiast
- Posts: 48
- Liked: 3 times
- Joined: Mar 18, 2011 7:36 pm
- Full Name: Sean Conley
Re: Resize one disk, why does Veeam replication scan them al
I experienced this recently as well, on the same product versions. Is this due to the fact that it deletes all restore points during the job run? The following was in the log for the run in question: "VM disk size changed since last sync, deleting all restore points."
-
- Enthusiast
- Posts: 65
- Liked: 1 time
- Joined: Apr 28, 2012 9:51 pm
- Full Name: Ori Besser
- Contact:
Re: Resize one disk, why does Veeam replication scan them al
This happened to me as well with 9.5 U3. Is there a fix for that?
-
- Lurker
- Posts: 1
- Liked: never
- Joined: May 21, 2018 3:47 pm
- Full Name: Jake Ruddy
- Contact:
Re: Resize one disk, why does Veeam replication scan them all?
Was there ever an answer to this? I'm having the exact same issue. I just resized the C: drives on Exchange and our file server, adding about 50 GB to each. Those being the types of servers they are, they also have other disks that are several TBs in size. Now the replication job is crawling, as every single disk is being recalculated as opposed to just the one that was changed. What gives? This makes resizing a serious pain.
-
- Veeam Software
- Posts: 21138
- Liked: 2141 times
- Joined: Jul 11, 2011 10:22 am
- Full Name: Alexander Fogelson
- Contact:
Re: Large Fileserver - Replication recommendations?
The behavior is expected: when a single source disk is resized, digests need to be recalculated for the entire VM.
-
- VP, Product Management
- Posts: 27375
- Liked: 2799 times
- Joined: Mar 30, 2009 9:13 am
- Full Name: Vitaliy Safarov
- Contact:
Re: Large Fileserver - Replication recommendations?
Just to expand on this a bit more: whenever a disk is resized, the target VM snapshot is no longer valid and needs to be removed. As soon as all snapshots are removed, recalculation for the entire VM is required in order to create a new VM snapshot. So Sconley is spot on with his assumption above.
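To put that into a toy model: the sketch below (plain Python, purely illustrative, not Veeam's code; all names and the decision structure are simplifications of what is described in this thread) shows why resizing a single disk cascades into a full-VM digest recalculation, while simply adding a new disk does not.

from dataclasses import dataclass


@dataclass
class Disk:
    name: str
    size_gb: int


def plan_replication_pass(source_disks, last_synced_disks):
    """Toy model of the behavior described above: any resize invalidates the
    replica snapshot, so all restore points are dropped and digests are
    rebuilt for every disk, not only the one that grew. A newly added disk
    has no previous size and therefore does not trigger this."""
    old_sizes = {d.name: d.size_gb for d in last_synced_disks}
    resized = any(
        d.name in old_sizes and old_sizes[d.name] != d.size_gb
        for d in source_disks
    )

    if resized:
        # Replica snapshot is no longer valid -> drop restore points and
        # re-read every disk to rebuild digests before the next sync.
        return {"delete_restore_points": True,
                "recalculate_digests_for": [d.name for d in source_disks]}

    # Normal incremental pass: only changed blocks are transferred.
    return {"delete_restore_points": False, "recalculate_digests_for": []}


# Example: only E: grew, yet all three disks land in the recalculation list.
old = [Disk("C:", 60), Disk("E:", 2600), Disk("F:", 800)]
new = [Disk("C:", 60), Disk("E:", 3000), Disk("F:", 800)]
print(plan_replication_pass(new, old))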
-
- Veteran
- Posts: 528
- Liked: 104 times
- Joined: Sep 17, 2017 3:20 am
- Full Name: Franc
- Contact:
[MERGED] Smarter digest calculation?
Hi,
Why does Veeam calculate the digests in a replication job for all disks when the size of only one disk was modified? I increased the size of the boot drive from 80 to 100 GB, but now it is also calculating the digests for the largest disk, which is 2.5 TB in size, and this will take many hours. Can't this be made a bit smarter, so that it skips the other disks since they aren't modified?
Franc.
-
- Veeam Software
- Posts: 21138
- Liked: 2141 times
- Joined: Jul 11, 2011 10:22 am
- Full Name: Alexander Fogelson
- Contact:
Re: Large Fileserver - Replication recommendations?
Hi Franc, please see above for the explanation of this behavior. Thanks!