Andrew Singleton (Service Provider)
WAN Cache
Hi,
I have deployed a WAN accelerator at each end of a link for backup copy jobs and replication. I have a couple of queries around how I can improve the process and understand the cache usage. The link speed is 50Mbps between the live and DR sites.
The source WAN accelerator is assigned 150GB of cache on SSD (360GB total). I am transferring around 7-8TB in total, so I assumed the remaining ~200GB would be sufficient for digests. I have just checked and the digests are very close to filling the disk. What happens when the disk runs out of space? Does it remove old or inactive digests? Should I reduce the 150GB assigned to the cache?
I am using a single-server deployment at each end. The servers are a decent spec (2 x 8-core CPUs, 128GB RAM, SSD, direct fibre attached, etc.), writing to a dedicated MD3820 FC SAN. I have been informed in another thread that the WAN accelerator only processes one VM at a time, so the obvious thing would be to deploy more WAN accelerators, but the client just doesn't have capacity for this at the moment.
The backup copy jobs using WAN acceleration seem pretty unlikely to all finish within the RPO (24 hours), so the replica jobs are using direct mode for now, which works OK except when the backup copy jobs are running - this slows everything down. The rate of change is quite high - 300-450GB per day across the 8 jobs. I have already excluded a lot of local volumes used for SQL backups or third-party apps, which brought it down from an initial 600GB per day.
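As a rough sanity check of what the 50Mbps link can actually move in a day (a back-of-the-envelope Python sketch; protocol overhead and overlapping jobs are ignored):

LINK_MBPS = 50                                # site-to-site link speed
bytes_per_day = LINK_MBPS / 8 * 1e6 * 86_400  # bits/s -> bytes/day (~540 GB)

for change_gb in (300, 450):                  # daily change rate range above
    hours = change_gb * 1e9 / (LINK_MBPS / 8 * 1e6) / 3600
    print(f"{change_gb} GB/day -> {hours:.1f} h at line rate "
          f"({change_gb * 1e9 / bytes_per_day:.0%} of daily link capacity)")

So even at full line rate the raw change would take 13-20 hours a day, which is why I went for WAN acceleration in the first place.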
Any suggestions on how I can get things under control?
Should I try without WAN acceleration on all jobs, or limit it to just the largest rate-of-change jobs (mainly SQL - 250GB)?
Appreciate any help you can give me.
Singy
Nikita Shestakov (Veteran)
Re: WAN Cache
Hi Andrew,
singy2002 wrote: What happens when the disk runs out of space? Does it remove old or inactive digests? Should I reduce the 150GB assigned to the cache?
You are correct - older digests will be overwritten by the most relevant ones, so there is no need to reduce the assigned 150GB.
singy2002 wrote: Any suggestions on how I can get things under control? Should I try without WAN acceleration on all jobs, or limit it to just the largest rate-of-change jobs (mainly SQL - 250GB)?
So the problem is that you have too many jobs for one WAN accelerator to handle on any schedule? If so, there are two ways, as you mentioned: either add another accelerator pair or use direct mode for the least critical jobs.
By the way, if you are a Cloud Provider, you can join the dedicated Veeam Cloud Providers subforum. Please apply to the corresponding group in the User Control Panel to ensure you can follow it.
Thanks!
Andrew Singleton (Service Provider)
Re: WAN Cache
Thanks for the info on the digests.
A couple of follow-up questions:
For the number of restore points in a backup copy job - does this need to be the number of recent restore points plus the number of historical (GFS) points, or just the number of recent backups?
Example:
10 restore points (10 last backups + 4 weeklies + 12 monthlies + 2 yearlies)
= 28
OR
10 restore points
= 10
On restores from backup copies - how much of the file is read? How feasible is it to restore over the WAN? I have large backup files (around 1TB) and was worried about the restore actually having to mount them.
My proxy/repository at DR - can/should I install Veeam Backup & Replication on this side so I can import backups locally, or is there an adverse effect to this?
Singy
Nikita Shestakov (Veteran)
Re: WAN Cache
"Restore points to keep" number counts only recent backup, no GFS retention ones. So it equals 10 in your example.
singy2002 wrote: On restores from backup copies - how much of the file is read? How feasible is it to restore over the WAN?
Restore over the WAN is feasible, but keep in mind that WAN acceleration doesn't work for restores; only the chosen files/VMs are read.
Deploying VBR onsite will not change restore speed. I'd suggest restoring from local repositories instead. Thanks!
Alexander Fogelson (Veeam Software)
Re: WAN Cache
Andrew, what are the bottleneck stats for your copy jobs?
Andrew Singleton (Service Provider)
Re: WAN Cache
Shestakov wrote: Deploying VBR onsite will not change restore speed. I'd suggest restoring from local repositories instead.
Installing a full copy of VBR on the DR site will mean I can import the backup files and mount them "locally", right?
foggy wrote: Andrew, what are the bottleneck stats for your copy jobs?
A couple of examples:
05/05/2015 11:14:12 :: Load: Source 46% > Source WAN 50% > Network 3% > Target WAN 71% > Target 10%
This was a 35-hour run on a large backup copy job.
05/05/2015 11:14:15 :: Load: Source 45% > Source WAN 51% > Network 15% > Target WAN 80% > Target 15%
This is another long job (43 hours); the last 7 hours were "waiting for new interval".
Edit: the amount of data not being copied over the network is undoubtedly great - 1.1TB processed / 421GB read and ONLY 6.1GB transferred - however the cost in time is enormous.
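(For anyone reading along, a small Python sketch of how I'm reading those stats - the "Load:" line format is just copied from the job log above, nothing official:)

import re

line = "Load: Source 46% > Source WAN 50% > Network 3% > Target WAN 71% > Target 10%"
stages = {name.strip(): int(pct)
          for name, pct in re.findall(r"([A-Za-z ]+?)\s*(\d+)%", line)}
print(max(stages, key=stages.get))           # Target WAN -> the busiest stage

read_gb, sent_gb = 421, 6.1                  # figures from the run above
print(f"{read_gb / sent_gb:.0f}x less data over the wire")   # ~69x reduction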
Alexander Fogelson (Veeam Software)
Re: WAN Cache
singy2002 wrote: Installing a full copy of VBR on the DR site will mean I can import the backup files and mount them "locally", right?
Right, you can set up a separate Veeam B&R instance just for restores. However, I assume Nikita was recommending restores from local repositories instead of restores from backups copied over the WAN to the remote location.
singy2002 wrote: The amount of data not being copied over the network is undoubtedly great - 1.1TB processed / 421GB read and only 6.1GB transferred - however the cost in time is enormous.
This is what WAN accelerators were designed for - to trade disk I/O for less WAN bandwidth usage. I recommend equipping the target WAN accelerator with SSD disks, since it is your main bottleneck.
Andrew Singleton (Service Provider)
Re: WAN Cache
The WAN accelerator disks are already SSD; unfortunately it's just not getting through, due to the amount of change this customer has.
Andrew Singleton (Service Provider)
Re: WAN Cache
My digest disk just filled up, so the backup copy job is now stalled...
Is there a way to clear the digests manually or kick off a clean-up on the server?
Drive info:
350GB SSD disk
150GB assigned to the cache
200GB for digests (now full)
Total amount of replicated data is around 7TB.
Alexander Fogelson (Veeam Software)
Re: WAN Cache
Digests require 20 GB of free space per 1 TB of source VM data, so check whether 200 GB is enough in your case. You can delete them manually; however, expect them to be recalculated during the next cycle.
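Applied to the numbers in this thread (a quick sketch; the 7 TB source figure and the 350/150 GB split come from the posts above):

DIGEST_GB_PER_SOURCE_TB = 20                 # rule of thumb stated above

required_gb = 7 * DIGEST_GB_PER_SOURCE_TB    # ~7 TB source -> 140 GB of digests
available_gb = 350 - 150                     # 350 GB SSD minus 150 GB cache
print(required_gb, available_gb)             # 140 needed vs 200 available

On paper 140 GB fits within 200 GB, so if the disk still fills up, digests left over from old or recreated jobs would be a likely culprit.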
Andrew Singleton (Service Provider)
Re: WAN Cache
Thanks foggy. I thought it was enough - perhaps it is not, although some jobs have been recreated, so potentially there are duplicates or old jobs wasting digest space.
Are the digests high-access files? If not, would it be an option to put the digests on other storage than the expensive SSD?
Alexander Fogelson (Veeam Software)
Re: WAN Cache
Actually, the only point of using SSD disks on the source is for the digests (the source WAN accelerator does not use the cache). Keeping the digests on SSD should give some performance gain, though.