-
- Enthusiast
- Posts: 45
- Liked: 6 times
- Joined: Dec 27, 2012 12:25 pm
- Contact:
WAN acceleration with heavy transactional data (AD/EXCH/SQL)
The observed 50x reduction in data transferred with WAN acceleration is quite an impressive number, but I'm wondering if any testing has been done specifically with heavy transactional data changes, like those from AD, Exchange, and SQL. What kind of reduction are you seeing with this type of data? It seems like there would be minimal data reduction out of these types of systems.
-
- Veeam Software
- Posts: 21138
- Liked: 2141 times
- Joined: Jul 11, 2011 10:22 am
- Full Name: Alexander Fogelson
- Contact:
Re: WAN acceleration with heavy transactional data (AD/EXCH/
Hello, your thoughts regarding highly transactional application data are generally correct: reduction for those will not be that impressive. However, the WAN acceleration algorithm implemented in v7 will still provide a significant improvement for them as well. The fact is that without this feature, Veeam B&R uses large blocks to transfer data (1MB by default, which can be decreased to 512KB or 256KB in the job settings), while SQL servers generally use small blocks and perform very small unique changes across the whole VM disk. With the current Veeam B&R v6.5, the whole 1MB block is transferred even if only, say, 4KB have changed within it. The WAN acceleration mechanism will "extract" and transfer only the bytes that actually changed, thus introducing a significant reduction in the size of transferred increments for these applications as well.
P.S. Btw, AD could hardly be considered a highly transactional application.
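The block-size point above can be sketched with a bit of arithmetic. This is a toy model, not Veeam internals: it just assumes a single 4KB page changes inside each dirty block, and uses the job-level block sizes mentioned in the post.

```python
KB = 1024
MB = 1024 * KB

changed_bytes = 4 * KB  # the only unique change inside a dirty block (assumed)

for block_size in (1 * MB, 512 * KB, 256 * KB):
    # Without WAN acceleration, the whole containing block goes over the wire.
    overhead = block_size // changed_bytes
    print(f"{block_size // KB}KB blocks send {overhead}x more than the 4KB that changed")
```

Even at the smallest 256KB setting, the fixed-block transfer is 64x larger than the actual change, which is the gap the byte-level extraction closes.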
-
- VP, Product Management
- Posts: 6035
- Liked: 2860 times
- Joined: Jun 05, 2009 12:57 pm
- Full Name: Tom Sightler
- Contact:
Re: WAN acceleration with heavy transactional data (AD/EXCH/
Yes, I actually think transactional data is one of the cases where WAN acceleration will play a huge role, though you probably won't see the 50x numbers from those cases. However, just as Alexander pointed out, the great thing about the WAN acceleration feature is that it allows only the very smallest changed data to be sent. For example, even if you configure Veeam to use the smallest block size (WAN Optimization - 256K), pretty much all databases operate on much smaller blocks (8K, 32K and 64K are common), and even then they may change only a very small amount of the data in a block (for example, if a row is updated or a mail is marked as read).
Not only that, but these loads generally have a lot of duplicate data as well. Rows in databases tend to have very similar content, and in Exchange 2010 and later there's no longer single instance store, so there can be huge savings there. In my testing, SQL databases are seeing around 10-20x savings over the normal Veeam change rate. For example, a SQL server that creates a 4GB incremental after compression and dedupe, which without WAN acceleration would normally need to send that 4GB across the wire, only needed 210MB to send those changes. That's a pretty impressive reduction (19.5x), so while perhaps not 50x, it's certainly still quite good and well worth it.
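The 19.5x figure follows directly from the numbers quoted in the example (4GB incremental, 210MB actually sent):

```python
GB = 1024**3
MB = 1024**2

incremental = 4 * GB   # incremental size after compression and dedupe
sent = 210 * MB        # actually transferred with WAN acceleration

ratio = incremental / sent
print(f"reduction: {ratio:.1f}x")  # prints "reduction: 19.5x"
```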
I'm personally expecting this will be one of the biggest improvements for the technology because, even if the ratios are not 50x, there's typically far more to be saved on transactional workloads since they tend to be the bulk of the change. For example, if I get 50x savings on a file server that generates 5GB of change, that's great, I saved 4.9GB, a good savings; however, if I get 10x savings on an Exchange server that generates 30GB of daily changes, I'll save 27GB of data, which means I'll save more data on the Exchange server than the file server transferred altogether. I'll be surprised if there are any scenarios where the savings are not at least 10x.
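Worked out explicitly, the absolute-savings comparison looks like this (figures taken from the example above; the helper function is just for illustration):

```python
def saved_gb(daily_change_gb, ratio):
    """GB kept off the wire for a given change set and reduction ratio."""
    return daily_change_gb - daily_change_gb / ratio

file_server = saved_gb(5, 50)   # 50x on 5GB of change
exchange = saved_gb(30, 10)     # 10x on 30GB of change
print(f"file server saves {file_server:.1f}GB, Exchange saves {exchange:.1f}GB")
# prints "file server saves 4.9GB, Exchange saves 27.0GB"
```

The lower ratio on the bigger change set still wins on absolute bytes saved, which is the point of the post.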
-
- Chief Product Officer
- Posts: 31804
- Liked: 7298 times
- Joined: Jan 01, 2006 1:01 am
- Location: Baar, Switzerland
- Contact:
Re: WAN acceleration with heavy transactional data (AD/EXCH/
Regarding the 50x claim, this was estimated with the expectation that an average Veeam customer will be copying at least 10-20 VMs as part of a Backup Copy job, and using a reasonably sized global cache. Of course, you are unlikely to see 50x while testing Backup Copy on just a few VMs; however, the more VMs you are copying, the bigger the traffic savings you will see. Large customers copying many VMs will observe even better traffic savings, so 50x is actually a conservative estimate for a typical workload.
That said, one should never assume any particular ratio, because certain workloads can be pretty hopeless for WAN acceleration (for example, hospital file servers holding X-Ray JPG images, which are already compressed and largely unique). The best way to plan is still to run a test job on a subset of your workload and approximate the bandwidth requirements from the results.