My knowledge of Veeam is old (Version 5)
We are involved in a design that will be:
40TB of source Data (typical general office environment)
15 ESX Servers (all 5.0)
160 VMs (80% are W2K8 R2)
Daily rates of change are 'small'
eg Backup Exec would typically see 2TB of file change
But at a block level, the rates of change are much smaller (eg 600GB)
Most of the 40TB of data is 'old and cold'
Backup Window is a Full 12 hours per day - and we have all of the weekend for backup as well
WAN Backup would be 30Mbps during the day, and 55Mbps during 12 hours of the evening and wide open on the weekend.
Maximum of 60 Mbps of WAN bandwidth for backup copy to a remote/second site.
Requirements, in order, are:
1) 60-90 days of retention of data at the remote/second site
2) 28 days of retention at the primary site
We have no requirement to DeDupe data at the local site (and of course there is no DeDupe at the remote site)
We have 80TB of physical storage for use at both the local and remote site.
What we do have a critical need for is 'incredible':
* WAN acceleration
* Optimised WAN caching
We would like to avoid the 'expense' of a Dedicated DeDupe box (ie we do not need dedupe for 'space', only "dedupe" for WAN optimisation).
We have a good budget for physical servers - and can reuse up to 6 servers with Dual CPUs (some have 32GB RAM, some have 64GB) to run Veeam at the source site and remote site (left over from our P2V)
What advice/guidance could be offered around V7 to potentially meet my needs?
(I fully understand there is not enough information here - thus the guidance would be taken as guidance to us attempting to test this environment)
eg
1) Thoughts around Global Cache
2) Speed of that cache, eg do I need very fast SAS disks - what sort of IOPS are suggested?
3) Would there be benefits in this cache being SLC SSD (we have numerous Intel X25e's left over from an old project)
4) Can the cache be 'too big' - ie are there diminishing returns with a huge (multi-TB) cache?
5) How long would it take to process / create / transmit the WAN cache etc
Assume the first initial seed has been completed - and assume we have had our first sync
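As a rough sanity check on those numbers, here is a minimal back-of-envelope sketch of the raw link budget, assuming the ~600GB block-level daily delta and the 55 Mbps evening window described above (figures are illustrative only):

```python
# Back-of-envelope link budget for the evening window (figures from the post above).
delta_gb = 600        # estimated daily block-level change, GB
link_mbps = 55        # evening WAN bandwidth, Mbps
window_hours = 12     # evening backup window, hours

gb_per_hour = link_mbps / 8 / 1000 * 3600          # Mbps -> GB per hour (decimal GB)
raw_capacity_gb = gb_per_hour * window_hours

print(f"Raw capacity of the 12h window: ~{raw_capacity_gb:.0f} GB")   # ~297 GB
print(f"Shortfall vs daily delta:       ~{delta_gb - raw_capacity_gb:.0f} GB")
# ~600 GB of change will not fit over the link uncompressed in the window,
# which is what makes WAN acceleration / traffic reduction interesting here.
```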
Gostev (Chief Product Officer):
Re: WAN Acceleration
Hello
1. We recommend at least 100 GB, but of course the more, the better.
2. Depends on your WAN link speed. If we are talking 50-100 Mbps, then go for the fastest you can get; if 10-20 Mbps, then it does not really matter (a rough sizing sketch follows after this reply).
3. SSD is the ultimate storage for the global cache; I definitely recommend using that... the target WAN accelerator's cache is typically the primary bottleneck.
4. There are diminishing returns on adding more cache beyond about 50-100GB.
5. Depends on the cache size, and total size of seeded backups in the target site... can take hours.
Thanks!
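One way to put rough numbers behind points 2-4 above: at ~60 Mbps with something like 10x traffic reduction, the target side has to reconstruct roughly ten times the wire rate from its global cache and existing backups, which is what drives the random-read load on the cache disks. A minimal sketch of that arithmetic, where the 10x reduction and the 256 KB block size are assumptions rather than product figures:

```python
# Rough random-read load on the target global cache (assumed figures, not measured).
link_mbps = 60      # WAN bandwidth available for backup copy
reduction = 10      # assumed traffic reduction ratio from WAN acceleration
block_kb = 256      # assumed average cached block size (illustrative)

rebuild_mb_s = link_mbps / 8 * reduction       # data reconstructed at the target, MB/s
cache_iops = rebuild_mb_s * 1024 / block_kb    # random reads per second against the cache

print(f"Rebuild rate at target:    ~{rebuild_mb_s:.0f} MB/s")   # ~75 MB/s
print(f"Random reads on the cache: ~{cache_iops:.0f} IOPS")     # ~300 IOPS
# A few hundred sustained random reads/s is marginal for a small SAS mirror
# but trivial for SSD, which is why SSD is suggested for the global cache.
```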
CDDH (Service Provider):
Re: WAN Acceleration
Gostev
Thanks for the response
* In terms of an SSD: could we get away with consumer-grade MLC in a RAID 1 configuration, with say 5,000 write-cycle lifespans (typically about 80TB of overall writes), rather than the cheap 1,000 write-cycle TLC?
*** That is: beyond the first population of say a 100GB remote cache, is the remote cache being written to 'massively' after this?
**** eg I see it being more of a reference library with the need for constant reads and occasional writes
If critical, we could certainly use the SLC drives we have (with 100,000 write cycles capable of Petabytes of total write)
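One way to frame the endurance question is as a simple TBW budget. A minimal sketch, where the 10% daily cache churn is purely an assumed figure for illustration (nothing in this thread states the actual write rate):

```python
# Rough wear-out estimate for a consumer MLC cache drive (assumed churn figure).
rated_tbw_tb = 80        # vendor-rated total bytes written, TB (figure quoted above)
cache_gb = 100           # global cache size, GB
daily_churn_pct = 10     # assumed share of the cache rewritten per day (illustrative)

daily_writes_gb = cache_gb * daily_churn_pct / 100
years_to_wear_out = rated_tbw_tb * 1000 / daily_writes_gb / 365

print(f"Assumed cache writes per day: ~{daily_writes_gb:.0f} GB")
print(f"Time to exhaust rated TBW:    ~{years_to_wear_out:.0f} years")   # ~22 years
# Even if the churn assumption is off by a factor of ten, wear-out is still
# measured in years, consistent with the 'constant read, occasional write' picture.
```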
Another question - and a very rough/gut feel:
How long would it take to process a nightly WAN Accelerated copy of say 600GB of daily delta change - assuming we used SSDs at both the local and remote WAN caches.
Assume latency of say 20-30 msec across the WAN.
Would I be right in assuming the 'time to process' is not specifically a function of WAN speed, but of the constant and repetitive lookup of backed-up 'blocks' against both the local and the remote WAN Accelerator caches?
If it is performing a remote WAN cache lookup, does it do this block by block? That is, for each and every remote block compare, does it have to make a 30msec across-the-WAN trip (per WAN Accelerator)?
Does it perform any clever aggregation, so that it is not comparing a single 'lookup' at a time, but potentially aggregating multiple lookups into a single 'across the WAN 30msec' trip?
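For a sense of why the aggregation question matters, here is a quick illustration; the block size and strictly serial lookups are assumptions for arithmetic only, not a description of how the product actually behaves:

```python
# Why strictly serial per-block lookups at WAN latency could not work (assumed figures).
delta_gb = 600      # nightly block-level delta, GB (from the original post)
block_kb = 256      # assumed average block size (illustrative)
rtt_ms = 30         # assumed WAN round-trip time, ms

blocks = delta_gb * 1024 * 1024 / block_kb        # ~2.5 million blocks
serial_hours = blocks * rtt_ms / 1000 / 3600      # one full round trip per block

print(f"Blocks to compare:       ~{blocks / 1e6:.1f} million")
print(f"Strictly serial lookups: ~{serial_hours:.0f} hours")   # ~20 hours
# One round trip per block clearly cannot fit a 12-hour window, so lookups
# would have to be batched or pipelined in some form for the numbers to work.
```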
Gostev (Chief Product Officer):
Re: WAN Acceleration
Sure, some blocks will be coming and going from the cache, but I don't expect massive writes. Nevertheless, IMO the last thing you want to try to save on is the reliability of your backups. Thus, I personally would go with the SLC drives, because bit rot on global cache drives is bad... it will cause resyncs of uncompressed data blocks as part of the self-healing process, which adds to both bandwidth consumption and job run time.
The time to sync will depend solely on your WAN bandwidth. That is, assuming your WAN is under 100 Mbps, you will be using SSD for the global cache, and your backup storage has reasonable performance.
Ballpark: divide your daily incremental backup size by 10, then calculate how long it takes to transfer that amount of data over your available bandwidth. People often get much better traffic reduction, but it is usually at least 10 times.
Thanks!
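Applying that ballpark to the figures earlier in the thread (the 10x factor is the rule of thumb from the reply above, not a measured result):

```python
# Gostev's ballpark applied to the numbers from the original post.
daily_incremental_gb = 600   # block-level daily delta
reduction = 10               # rule-of-thumb traffic reduction (often better in practice)
link_mbps = 55               # evening WAN bandwidth

on_the_wire_gb = daily_incremental_gb / reduction                  # ~60 GB
transfer_hours = on_the_wire_gb * 8 * 1000 / link_mbps / 3600      # GB -> Gb -> hours

print(f"Estimated data on the wire: ~{on_the_wire_gb:.0f} GB")
print(f"Estimated transfer time:    ~{transfer_hours:.1f} hours")   # ~2.4 hours
# Comfortably inside the 12-hour evening window, with headroom if the actual
# reduction ratio turns out lower than 10x on a given night.
```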