-
- Influencer
- Posts: 15
- Liked: never
- Joined: Mar 10, 2010 1:01 am
- Full Name: Dan
- Contact:
Best replication compression ESX 3.5 > ESX 4 environment
Hi all,
I'm after a bit of advice. We are running a client's replication from an ESX 3.5 environment (via VirtualCenter) to an ESX 4 host. The replication is split into two jobs, with one job containing a large VM (around 600GB in size). The major current issue is the amount of data being replicated on subsequent incremental runs (set to occur every evening): looking at our network monitor, it appears to be replicating around 80GB per day! That is OK while we are doing local replication, but once we move to a remote target it will be an issue.
The options we have explored are as follows:
- replication using the vStorage API (network mode with encryption) - SAN mode is out as the host isn't on the fabric - this was the largest in terms of data volume replicated in incremental jobs
- replication using the network (service console) with best compression turned on - this is better, but incremental replication jobs are still very large
The obvious answer at the source end is to upgrade to ESX 4 and convert the VMs to hardware version 7, which I assume will greatly reduce the incremental replication size. In the interim, however, is there a better way to do it than the two options above?
thanks
Dan
-
- Chief Product Officer
- Posts: 31766
- Liked: 7266 times
- Joined: Jan 01, 2006 1:01 am
- Location: Baar, Switzerland
- Contact:
Re: Best replication compression ESX 3.5 > ESX 4 environment
Hi Dan, what application does this VM run? It sounds like it has a lot of virtual disk changes every day, which is why the replication traffic is so heavy.
-
- Influencer
- Posts: 15
- Liked: never
- Joined: Mar 10, 2010 1:01 am
- Full Name: Dan
- Contact:
Re: Best replication compression ESX 3.5 > ESX 4 environment
Hi Gostev,
It's a file server; the company's work is transaction-based, so I'm assuming they update a lot of documents regularly.
Thanks
Dan
-
- VP, Product Management
- Posts: 6032
- Liked: 2859 times
- Joined: Jun 05, 2009 12:57 pm
- Full Name: Tom Sightler
- Contact:
Re: Best replication compression ESX 3.5 > ESX 4 environment
pearsondan99 wrote: The obvious answer at the source end is to upgrade to ESX 4 and convert the VMs to hardware version 7, which I assume will greatly reduce the incremental replication size. In the interim, however, is there a better way to do it than the two options above?
Actually, changing the source end to ESX 4 probably wouldn't reduce the incremental size at all. Moving to ESX 4 allows Veeam to know what data needs to be transferred without having to scan the entire source drive, but it doesn't significantly change the amount of data that Veeam has to transfer.
The main reason for the large amount of replication traffic is many small updates. Veeam uses a relatively large block size (1MB), so even small changes trigger relatively large transfers. Think about it this way: in the worst case you could change just 4MB of data (1000 4K blocks) on a disk, and Veeam would have to transfer 1GB of data (1000 1MB blocks). You can mitigate this slightly by making sure the volume is defragmented (which may help changes clump closer together) and by disabling gratuitous updates like the "last accessed" time for NTFS files. This issue affects some storage systems that use large block-based snapshots as well. For example, EqualLogic uses a 16MB block size for snapshots. Here's an article on improving EqualLogic snapshots and replication that would likely help with Veeam as well: http://www.interworks.com/blogs/bfair/2 ... -snapshots
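To make that worst case concrete, here is a quick back-of-the-envelope calculation (plain Python; the numbers are just the illustration above, not measurements):

# Worst-case transfer amplification: many small scattered writes,
# each landing in a different large replication block.
changed_writes = 1000        # number of small writes since last sync
write_size_kb = 4            # each write touches one 4KB filesystem block
block_size_mb = 1            # replication block size (1MB in v4)

actual_change_mb = changed_writes * write_size_kb / 1024
worst_case_transfer_mb = changed_writes * block_size_mb   # one full block per write

print(f"actual change:       ~{actual_change_mb:.0f} MB")       # ~4 MB
print(f"worst-case transfer: ~{worst_case_transfer_mb} MB")     # ~1 GB
print(f"amplification:       ~{worst_case_transfer_mb / actual_change_mb:.0f}x")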
The thing that helped us most was WAN acceleration. We see roughly 95% compression on our Veeam replication with our WAN acceleration product. Hopefully Veeam will improve their replication to stop sending redundant data in the future (even if they keep the 1MB block size, they could modify the replication code to send only the differences within each block), but if Veeam is generating more data than you can replicate, then I believe WAN acceleration is the best current option, although it is quite expensive. Other options include replicating with Veeam locally, and then using tools like BigSync or rsync, which replicate only the byte-level changes, to copy the data to the remote site.
-
- Influencer
- Posts: 15
- Liked: never
- Joined: Mar 10, 2010 1:01 am
- Full Name: Dan
- Contact:
Re: Best replication compression ESX 3.5 > ESX 4 environment
So if you are replicating server roles such as file servers for companies with heavy transactional work patterns (i.e. Word/Excel documents changing a lot daily), then using Veeam to replicate the VMs is going to be costly, since it forces a large data change over the wire (which happens to be skinny in our case). From the Veeam side, are there any plans to address this issue?
Thanks for the info. We will have to do a less frequent replication and use another mechanism to replicate the file changes. WAN compression is most likely out here, as we are using an IPsec VPN as the point-to-point link to the core (the replication target).
-
- Influencer
- Posts: 15
- Liked: never
- Joined: Mar 10, 2010 1:01 am
- Full Name: Dan
- Contact:
Re: Best replication compression ESX 3.5 > ESX 4 environment
Sorry, on another note: is it logged anywhere in Veeam, when it's running a job, how much data it calculates has changed? I.e. on our 500GB file server example, after the job is complete, does it keep a record of how much data it replicated (the changes)? The only measure I'm currently using is our SNMP management tool, which tells me how much data is going across the wire...
-
- Chief Product Officer
- Posts: 31766
- Liked: 7266 times
- Joined: Jan 01, 2006 1:01 am
- Location: Baar, Switzerland
- Contact:
Re: Best replication compression ESX 3.5 > ESX 4 environment
pearsondan99 wrote: From the Veeam side, are there any plans to address this issue?
Dan - yes, we have reduced the block size from 1MB to 256KB in v5 for improved support of such workloads. In fact, Tom was the one who originally requested this change, and he helped us with some POC testing using an experimental 256KB agent for v4 a few months ago.
-
- Chief Product Officer
- Posts: 31766
- Liked: 7266 times
- Joined: Jan 01, 2006 1:01 am
- Location: Baar, Switzerland
- Contact:
Re: Best replication compression ESX 3.5 > ESX 4 environment
pearsondan99 wrote: Is it logged anywhere in Veeam, when it's running a job, how much data it calculates has changed?
Unfortunately, right now this information is only available in the debug logs. But you can also get an idea by looking at the VRB size produced by the corresponding replication run.
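If you want to track this per run, even a trivial script that lists the VRB files in the replica destination folder will do (a rough sketch; the folder path below is just a placeholder for wherever your job stores its replica files):

import datetime, glob, os

replica_dir = r"/vmfs/volumes/datastore1/VeeamReplica"   # placeholder path

for vrb in sorted(glob.glob(os.path.join(replica_dir, "*.vrb"))):
    st = os.stat(vrb)
    stamp = datetime.datetime.fromtimestamp(st.st_mtime)
    # Each VRB corresponds to one replication run; its size approximates
    # the (compressed) amount of data that run changed.
    print(f"{stamp:%Y-%m-%d %H:%M}  {st.st_size / 2**30:6.2f} GB  {os.path.basename(vrb)}")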
-
- VP, Product Management
- Posts: 6032
- Liked: 2859 times
- Joined: Jun 05, 2009 12:57 pm
- Full Name: Tom Sightler
- Contact:
Re: Best replication compression ESX 3.5 > ESX 4 environment
The move to 256KB blocks will help some, but there will still be quite a bit of overhead with Veeam. Ideally, Veeam could use any block size it wanted but actually send only the bytes that changed within each block. That would be a really nice enhancement, especially for replication.
-
- Influencer
- Posts: 15
- Liked: never
- Joined: Mar 10, 2010 1:01 am
- Full Name: Dan
- Contact:
Re: Best replication compression ESX 3.5 > ESX 4 environment
Gostev wrote: You can also get an idea by looking at the VRB size produced by the corresponding replication run.
I had a look at the VRB size for the file server (500GB); the job took around 7 hours @ 22MB/s (currently running locally). The VRB file size is around 3GB, which is interesting, as that job generated a lot more data traffic while it was running (around 15GB). Presumably there is a lot of traffic generated through comparisons between the original VM and the replica?
Happy to look at upgrading to v5 if it's going to reduce the replicated data volume, as our internet data charges will go through the roof at this replication rate!
-
- Chief Product Officer
- Posts: 31766
- Liked: 7266 times
- Joined: Jan 01, 2006 1:01 am
- Location: Baar, Switzerland
- Contact:
Re: Best replication compression ESX 3.5 > ESX 4 environment
pearsondan99 wrote: The VRB file size is around 3GB, which is interesting, as that job generated a lot more data traffic while it was running (around 15GB).
Hmm, the difference between the VRB size and the traffic should not be this large with an ESX target. Do you have service console connection settings specified in the target ESX host's properties? (Right-click the target ESX host in the Veeam Backup Servers tree and open its properties to check this.)
-
- VP, Product Management
- Posts: 6032
- Liked: 2859 times
- Joined: Jun 05, 2009 12:57 pm
- Full Name: Tom Sightler
- Contact:
Re: Best replication compression ESX 3.5 > ESX 4 environment
pearsondan99 wrote: Presumably there is a lot of traffic generated through comparisons between the original VM and the replica?
Well, the data is not compressed as it's read from the source, but it should be compressed as it's written to the target. When you say there was 15GB of data, does that include both the source reads and the target writes?
To give you an example of what I typically see, I'll use a small VM (20GB) that is replicated via a slow (2Mb) WAN link. We replicate this VM only once a day. Veeam typically reads about 2.5GB of data from disk, while the VRB file is only about 1GB. Thankfully, WAN acceleration limits the amount of data transferred via the WAN to ~150MB, which is probably very close to the amount of data that actually changed. Depending on how compressible the data is and how much duplicate information it contains, the VRB can be significantly smaller than the amount of data read from the source disk.
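For what it's worth, the ratios from that example work out roughly like this (plain Python; rough numbers only):

# Rough numbers from the 20GB VM example above.
read_from_disk_gb = 2.5    # data Veeam reads at the source
vrb_size_gb = 1.0          # compressed rollback (VRB) file at the target
wan_transfer_gb = 0.15     # what actually crosses the WAN after acceleration

print(f"VRB vs. data read: {vrb_size_gb / read_from_disk_gb:.0%}")        # ~40%
print(f"WAN vs. data read: {wan_transfer_gb / read_from_disk_gb:.0%}")    # ~6%, i.e. ~94% reduction
print(f"WAN vs. VRB:       {wan_transfer_gb / vrb_size_gb:.0%}")          # ~15%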
-
- Chief Product Officer
- Posts: 31766
- Liked: 7266 times
- Joined: Jan 01, 2006 1:01 am
- Location: Baar, Switzerland
- Contact:
Re: Best replication compression ESX 3.5 > ESX 4 environment
tsightler wrote: Well, the data is not compressed as it's read from the source, but it should be compressed as it's written to the target.
Tom, I think you are confusing this with backup. In the case of a replica, data is applied to the replica VMDK file, which is not compressed. You are correct that replica VRB files do get compressed with replication, but that is not the case for the replica VMDK file. And when the service console agent is not enabled, Veeam Backup has to manipulate uncompressed replica VMDK blocks over the network. This is why I asked about the service console connection settings.
-
- VP, Product Management
- Posts: 6032
- Liked: 2859 times
- Joined: Jun 05, 2009 12:57 pm
- Full Name: Tom Sightler
- Contact:
Re: Best replication compression ESX 3.5 > ESX 4 environment
I meant that the VRB would be compressed as it was written to the target; I understand that the VMDK file cannot be compressed. The OP asked if there was a way to determine the amount of data that was actually replicated, because he wants to estimate how much data will actually have to traverse his WAN, and you indicated that he could get an idea by looking at the VRB files. He then noted that the VRB files were much smaller than the amount of data he saw actually transferred. My point was that the VRB files are not really a good way to see how much data is actually replicated, because they are compressed, and if your data is very compressible, the VRB can be much smaller than the amount of data actually transferred across the wire. In my experience, the VRB files are typically about 1/3 the size of the data transferred across the wire.
I did realize something when I typed this, though. We mostly "pull" replicas with Veeam; we don't push them. That means Veeam has to read the data uncompressed from the source ESX servers (via vStorage API network mode). We use the "pull" method because it's the only way to easily ensure that Veeam will be available to actually perform the failover in the event of a disaster at the remote site. With the "push" method, where a Veeam server at the remote site "pushes" the replicas to the datacenter, Veeam would likely compress the data prior to sending it over the network. The disadvantage of that method is that, if the remote site experiences a disaster during a replication cycle, you may be left without a Veeam server capable of recovering the replica, since I don't believe you can import a replica into another Veeam server. That means you need a "disaster" plan for your remote Veeam servers, since they are the only systems that contain the critical information needed to recover a previous rollback if disaster strikes in the middle of a replication cycle.
-
- Chief Product Officer
- Posts: 31766
- Liked: 7266 times
- Joined: Jan 01, 2006 1:01 am
- Location: Baar, Switzerland
- Contact:
Re: Best replication compression ESX 3.5 > ESX 4 environment
That's right... but unfortunately in this case the source host is ESX 3.5, so there is no "pull" option...
-
- Influencer
- Posts: 15
- Liked: never
- Joined: Mar 10, 2010 1:01 am
- Full Name: Dan
- Contact:
Re: Best replication compression ESX 3.5 > ESX 4 environment
Ok, so to summarize, the steps to reduce the data transferred between the source ESX and the target ESX are:
- upgrade to ESX 4 at the source
- ensure the service console agent is used for optimal compression
- upgrade to v5 for the improved block size (1MB to 256KB)
Would it be reasonable to assume these steps should resolve the data volume transfer issue?
thanks guys
-
- Chief Product Officer
- Posts: 31766
- Liked: 7266 times
- Joined: Jan 01, 2006 1:01 am
- Location: Baar, Switzerland
- Contact:
Re: Best replication compression ESX 3.5 > ESX 4 environment
Ensuring that the console agent is used with the target ESX is the key; disabling last-access time updates on NTFS is also a good idea (fsutil behavior set disablelastaccess 1 inside the guest, followed by a reboot). Finally, moving the swap file to a separate disk and excluding it from replication (search this forum to read up more on that) will obviously improve the traffic usage. All of this can be done today, and v5 should give an additional improvement due to the smaller block size.
Upgrading to ESX 4 at the source will not change the traffic amount in the "push" scenario, but it will provide much faster incremental sync cycles. The "pull" scenario enabled by having ESX 4 at the source is actually worse traffic-wise than a "push" scenario with a "fat" ESX target running the service console agent, because there is no network traffic compression in that case.
And of course it should be understood that while all these steps may give a significant improvement, there is no magic behind them: if the source VMs have many GB of data changing daily, that data will still need to be replicated to the target site...
-
- Influencer
- Posts: 21
- Liked: 1 time
- Joined: Mar 24, 2010 3:44 pm
- Full Name: Michael
- Contact:
Re: Best replication compression ESX 3.5 > ESX 4 environment
Gah, now you are starting to hurt my brain! But it looks like you might have some insights. Ahh, I think I made the connection - is this your blog? http://www.tuxyturvy.com/blog/ I have read your stuff about EqualLogic replication etc... thanks for the informative blog posts!
tsightler wrote: We mostly "pull" replicas with Veeam; we don't push them. ... We use the "pull" method because it's the only way to easily ensure that Veeam will be available to actually perform the failover in the event of a disaster at the remote site.
I am in a similar situation, trying to find a more efficient way to replicate data to a DR location. We purchased Veeam and I have been pretty pleased with it, but I am trying to speed it up with some WAN optimization as well. I have been testing vSphere with Veeam through Riverbeds via WANem http://wanem.sourceforge.net/ and was wondering what WAN optimization you use?
Would you recommend the pull method you mention here?
Thanks for any thoughts/hints!
-Michael
-
- Enthusiast
- Posts: 61
- Liked: 10 times
- Joined: Mar 01, 2010 5:57 pm
- Full Name: Glenn Santa Cruz
- Contact:
Re: Best replication compression ESX 3.5 > ESX 4 environment
@Gostev & @tsightler -
Could you please help clarify what you mean by pull versus push replication? I thought I had a decent handle on it, but some of the comments contradicted my understanding. This is important to us from a planning perspective, especially with regard to WAN optimization and how to ensure proper protection for the Veeam SQL databases.
For instance, could you comment on whether this is correct:
Assumptions for this example:
- 1) Two separate environments ( physical datacenters, different rooms, etc. ): Datacenter1 (DC1) & Datacenter2 (DC2)
2) Both environments are running ESX classic ( ESX1 in DC1, ESX2 in DC2 )
3) Both environments are at vSphere 4.0u1, to remove any doubt regarding network throttling in the COS
4) Each environment has Veeam installed ( Veeam1 in DC1 & Veeam2 in DC2 )
5) Both Veeam servers are virtual machines, and configured to run jobs in Virtual Appliance mode
6) We want to replicate a virtual machine ("TestVM"), which is running in DC1 (on ESX1)
- 1) Pull Replication: the VM will be "pulled" from DC1 via Veeam2
- a) Veeam2 runs the replication job ; since TestVM is "distant" from Veeam2, we need a "helper" in order to access the TestVM data
b) Veeam2 will install a helper agent on ESX2 (to read VM data)
c) Veeam2 will also install a helper agent on ESX1 (the replication target)
d) ESX1 agent (as a client) will issue a connection to ESX2 agent (as a server), and retrieve data until replication completes
e) Veeam2 will monitor ESX2 agent for status of the replication job
- 2) Push Replication: the VM will be "pushed" to DC2 via Veeam1
- a) Veeam1 runs the replication job ; TestVM is "local" to Veeam1, so Veeam1 can access the data directly via SCSI hot-add
b) Veeam1 will install a helper agent on ESX1 (the replication target)
c) ESX1 agent (as a client) will issue a connection to Veeam1 (as a server), and retrieve data until replication completes
d) Veeam1 will monitor ESX1 agent for status of the replication job
- 1) Is this accurate? If not, can you please describe where it may be off, or modify it to be correct?
2) From our experience with the product, it looks like replication traffic ultimately is a "pull" (regardless of the above two scenarios, there's still an agent installed on the target ESX host, and that agent initiates a connection to the source server in order to begin data exchange). Can you confirm this?
3) For 1.d, are the helper agents compressing any of this network traffic?
4) For 2.c, does the Veeam server compress data as it's being transferred to the helper agent?
5) Is 1.c accurate, or does the Veeam server connect directly to the helper agent (at the source ESX1) and write resulting data directly to the target ESX2 (via SSH?)
6) What, if anything, changes if we choose a different "mode" for the replication job ( vStorage API (each option) vs. Network replication )?
-
- VP, Product Management
- Posts: 6032
- Liked: 2859 times
- Joined: Jun 05, 2009 12:57 pm
- Full Name: Tom Sightler
- Contact:
Re: Best replication compression ESX 3.5 > ESX 4 environment
mellerbeck wrote: Ahh, I think I made the connection - is this your blog? http://www.tuxyturvy.com/blog/
Yes, that's my blog.
mellerbeck wrote: I have been testing vSphere with Veeam through Riverbeds via WANem and was wondering what WAN optimization you use?
We use Cisco WAAS. We really liked the Riverbed solution, but I was never able to justify it. When we upgraded our Cisco routers a couple of years ago I was able to get approval for WAAS network modules in each one, so it was basically an easier sell. We hit a lot of bugs and I opened a lot of tickets, but finally, after over a year, we're down to just one, and even that one is pretty obscure and has a reasonable workaround. I created a custom WAAS policy for Veeam which performs full optimization. We get well over 90% compression for all of our Veeam replication cycles, which is pretty good; many times we get better than 95%. We get reliable replication over 2Mb links from Europe to our US datacenter, which is what we were looking for.
mellerbeck wrote: Would you recommend the pull method you mention here?
The main reason I'm using the "pull" method, which basically means that the Veeam server is on the target side of the replication link, is that I'm very concerned about what happens if a "disaster" strikes during a replication cycle. In other words, let's say I'm replicating a server from the EU to the US via our WAN, the replication cycle takes almost an hour to complete, and the "disaster" strikes at the 30-minute mark. As long as I have the Veeam server available, I can still fail over to a previous rollback. With the "push" method the Veeam server has to sit on the same side as the source server, which means a disaster at the source site could easily take out your Veeam server as well; then all you'll be left with is a half-copied replica file.
I suppose if you use "Network Mode" this might be better, but that mode doesn't use changed block tracking, so that's a big negative. It feels like Veeam gives you two choices for replication, neither of which is ideal.
-
- VP, Product Management
- Posts: 6032
- Liked: 2859 times
- Joined: Jun 05, 2009 12:57 pm
- Full Name: Tom Sightler
- Contact:
Re: Best replication compression ESX 3.5 > ESX 4 environment
Obviously I'm just a user, so my answers are as likely to be wrong as they are to be right, but I'll give my thoughts and we'll see.
You have me completely confused on this one; perhaps we're using different terms. My choice of the term "target" was probably poor, because I know different products use it to mean different things (some for the source machine, others for the destination). I will try to remember to more clearly use "source" and "destination".
glennsantacruz wrote:
- 1) Pull Replication: the VM will be "pulled" from DC1 via Veeam2
- a) Veeam2 runs the replication job ; since TestVM is "distant" from Veeam2, we need a "helper" in order to access the TestVM data
b) Veeam2 will install a helper agent on ESX2 (to read VM data)
c) Veeam2 will also install a helper agent on ESX1 (the replication target)
d) ESX1 agent (as a client) will issue a connection to ESX2 agent (as a server), and retrieve data until replication completes
e) Veeam2 will monitor ESX2 agent for status of the replication job
First of all, for 1a), "we need a "helper" in order to access the TestVM data" - I'm not sure I follow this. We still use vStorage API mode for our "pull" replication; it just falls back to NBD mode rather than SAN mode. That way we still get changed block tracking, and the console network performance is not a big deal for most of our replication, since our WAN links are generally <10Mb anyway. I suspect you might be referring to "Network" mode, which would use a "helper agent" but has the disadvantage of needing to scan the entire VMDK.
Then, for 1b), "Veeam2 will install a helper agent on ESX2 (to read VM data)". While I agree that ESX2 would need a helper agent, wouldn't it be for writing data, not for reading it? Based on your layout I think you're talking about pulling TestVM from ESX1 to ESX2 using Veeam2.
1c) "Veeam2 will also install a helper agent on ESX1 (the replication target)". I think this is true only for Network mode. Once again, we use vStorage API mode, and the source host does not get any agent.
glennsantacruz wrote: 2) Push Replication: the VM will be "pushed" to DC2 via Veeam1
- a) Veeam1 runs the replication job ; TestVM is "local" to Veeam1, so Veeam1 can access the data directly via SCSI hot-add
- b) Veeam1 will install a helper agent on ESX1 (the replication target)
- c) ESX1 agent (as a client) will issue a connection to Veeam1 (as a server), and retrieve data until replication completes
- d) Veeam1 will monitor ESX1 agent for status of the replication job
This still has me confused. Why would ESX1 be involved at all in this scenario? The VM is being pushed to DC2, where there is only ESX2. Wouldn't that make ESX2 the destination, and thus the host that gets the agent? The Veeam1 server would hot-add the drive and "push" it to the agent on ESX2.
Questions:
2) From our experience with the product, it looks like replication traffic ultimately is a "pull" (regardless of the above two scenarios, there's still an agent installed on the target ESX host, and that agent initiates a connection to the source server in order to begin data exchange). Can you confirm this?
When I'm using the term "push" or "pull" I'm referring exclusively to which side of the link the Veeam server is located on. I believe that, for safety's sake, the Veeam server pretty much has to be on the side of the replica destination (the side to which you would fail over). Otherwise, what happens when disaster strikes in the middle of a replication? If your Veeam1 server is located in DC1 and is in the middle of replicating TestVM when an explosion/flood/fire takes your DC offline, how do you fail over? Since your Veeam2 server in DC2 knows nothing about the replica, and replicas can't be imported, you can't revert to a previous rollback. You've now got a nice, half-replicated VM at your DR site that's of no value at all.
3) For 1.d, are the helper agents compressing any of this network traffic?
4) For 2.c, does the Veeam server compress data as it's being transferred to the helper agent?
5) Is 1.c accurate, or does the Veeam server connect directly to the helper agent (at the source ESX1) and write resulting data directly to the target ESX2 (via SSH?)
6) What, if anything, changes if we choose a different "mode" for the replication job ( vStorage API (each option) vs. Network replication )?
I'll wait for you to clarify my existing questions prior to attempting to answer any of these; actually, Anton probably knows these answers better anyway. In general, I believe that communication between agents, and between the Veeam server and the agents, is compressed. We don't use Network replication at all, because the lack of changed block tracking makes the load on the back-end storage too high.
-
- Enthusiast
- Posts: 61
- Liked: 10 times
- Joined: Mar 01, 2010 5:57 pm
- Full Name: Glenn Santa Cruz
- Contact:
Re: Best replication compression ESX 3.5 > ESX 4 environment
First, with respect to terminology, I think we are on the same page in referring to the source and target environments: a "pull" would be done by a Veeam server residing at the target datacenter, and a "push" would be done by a Veeam server residing in the source datacenter.
I agree about leveraging vStorage API in order to get the benefit of CBT ; I should have pointed that out in the above.
You are correct. My scenario is wrong. 2.b and 2.c should refer to ESX2, not ESX1
I may also have a fundamental misunderstanding of the vStorage API with respect to replication. You pointed out that replication will fall back to NBD mode in 1.a (instead of using a helper agent). Dumb question follows, but does this mean that the vStorage API fallback to NBD allows the Veeam server itself to "remotely" attach to the TestVM disks? In that case, I definitely see there's no need for an agent on the source side.
With regard to your point about failure mid-replication -- I also have considered the downside of this approach, and opted for a different method. Here's the layout in brief:
- 1. Veeam servers in both source and target datacenters.
2. Each Veeam server runs local SQLExpress
3. Each Veeam server is a VM ( in Virtual Appliance mode )
4. All replication jobs are "push" (source Veeam pushes to target datacenter)
5. We replicate in both directions ( some VMs from DC1 go to DC2, and some VMs go from DC2 to DC1 )
6. Each replication job is for a single VM
7. After each replication job completes, we issue a full database backup, compress the backup, then FTP it to the other Veeam server (in the target datacenter) - a rough sketch of this step follows after this list.
8. If we lose a datacenter, we clone the surviving Veeam (and apply a customization specification). This clone already contains the most recent database backup from the failed datacenter (since we FTP this backup each time), so we just restore SQLExpress from this backup (into the new Veeam server), and we have all of the jobs from the now-dead datacenter.
9. The newly created Veeam server can be used to failover the replicas to a known-good state (or roll them back, depending on requirements)
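As a rough illustration of step 7 above, the post-job script amounts to something like this (all names, paths and credentials here are placeholders, not our real ones; "VeeamBackup" is assumed as the database name):

import gzip, shutil, subprocess
from ftplib import FTP

DB_NAME  = "VeeamBackup"                    # assumed Veeam B&R database name
BAK_PATH = r"C:\Temp\VeeamBackup.bak"
GZ_PATH  = BAK_PATH + ".gz"
FTP_HOST = "veeam2.example.com"             # Veeam server in the other datacenter

# 1) Full database backup from the local SQLExpress instance.
subprocess.run([
    "sqlcmd", "-S", r".\SQLEXPRESS", "-Q",
    f"BACKUP DATABASE [{DB_NAME}] TO DISK = N'{BAK_PATH}' WITH INIT",
], check=True)

# 2) Compress the backup.
with open(BAK_PATH, "rb") as src, gzip.open(GZ_PATH, "wb") as dst:
    shutil.copyfileobj(src, dst)

# 3) Ship it to the Veeam server in the target datacenter.
with FTP(FTP_HOST) as ftp:
    ftp.login("veeamrepl", "secret")        # placeholder credentials
    with open(GZ_PATH, "rb") as f:
        ftp.storbinary("STOR VeeamBackup.bak.gz", f)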
Although more complicated than a "pull", this method should allow for better overall reduction in network traffic. (And we have to prove that, which is the whole reason for my asking these questions in the first place - to get a good understanding of how replication works.)
-
- VP, Product Management
- Posts: 6032
- Liked: 2859 times
- Joined: Jun 05, 2009 12:57 pm
- Full Name: Tom Sightler
- Contact:
Re: Best replication compression ESX 3.5 > ESX 4 environment
glennsantacruz wrote: Dumb question follows, but does this mean that the vStorage API fallback to NBD allows the Veeam server itself to "remotely" attach to the TestVM disks?
Yes, as far as I know, that's exactly what it means. NBD is "Network Block Device"; it's been around in Linux for a long, long time. Basically, it's a simple way to take a block device, or even a file (everything's a file in Unix/Linux), and allow another machine on the network to access it via a simple TCP connection. You can think of it as iSCSI without all the overhead. VMware needed a way for VCB to access disks in non-SAN scenarios, like customers with simple DAS setups, so they basically used NBD to access the underlying disks via the network and included this functionality within their ESX management agent. When the vStorage API falls back to NBD mode, it talks directly to the VMware native agent. You can see it talk via TCP port 902, the very same port used for other VMware management tasks.
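To get a feel for the concept, here is a toy sketch in Python - emphatically not the real NBD protocol, just the idea of "read a block of a remote file over a plain TCP socket":

import socket, struct

def serve_image(image_path, port=10809):
    # Toy "block device" server: each request is (offset, length) packed
    # as two 64-bit ints; the reply is the raw bytes at that position.
    srv = socket.socket()
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("", port))
    srv.listen(1)
    conn, _ = srv.accept()
    with open(image_path, "rb") as img, conn:
        while (hdr := conn.recv(16)):
            offset, length = struct.unpack("!QQ", hdr)
            img.seek(offset)
            conn.sendall(img.read(length))

def read_remote_block(host, offset, length, port=10809):
    # Toy "client": fetch one block, the way an NBD client (or the
    # vStorage API in NBD mode) reads VMDK blocks over the network.
    with socket.create_connection((host, port)) as s:
        s.sendall(struct.pack("!QQ", offset, length))
        data = b""
        while len(data) < length:
            chunk = s.recv(length - len(data))
            if not chunk:
                break
            data += chunk
    return data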
glennsantacruz wrote: ...I clipped the steps, I understand them as we considered this very thing, but... If we do have a failure mid-replication, we will already have the "last known good" state of the database and can roll back replicas using that database. If we have a network failure during the transmission of the database backup itself, we still have the "last known good" database. However, we cannot properly rely on replica rollback points, because the database is out of date with respect to the actual on-disk replica. In this situation, we can only fail over the replicas to the "current state".
How exactly can this "cloned" Veeam server roll back the "in-process" replica that it doesn't know about? (It can't know about it, because you don't send the DB backup until after the replica completes.) The replica that was in process at the time of the failure has already created a new VRB file, written changed blocks to the VMDK file, and moved the old blocks to the VRB rollback file. The database that you "cloned" only has data about the previous replica and believes the VMDK is in a "clean" state. It is not aware that a new replication cycle started and didn't complete, and that it needs to "repair" the VMDK by rolling the changed blocks from the failed VRB back into the VMDK.
glennsantacruz wrote: Although more complicated than a "pull", this method should allow for better overall reduction in network traffic.
I agree that this should allow for a better overall reduction in network traffic; however, I think you're leaving yourself a recovery hole that you wouldn't be able to recover from. But perhaps I'm missing something important.
-
- Enthusiast
- Posts: 61
- Liked: 10 times
- Joined: Mar 01, 2010 5:57 pm
- Full Name: Glenn Santa Cruz
- Contact:
Re: Best replication compression ESX 3.5 > ESX 4 environment
Point taken on the NBD explanation. That clarifies *a lot* of misunderstanding on my part.
tsightler wrote: How exactly can this "cloned" Veeam server roll back the "in-process" replica that it doesn't know about?
This doesn't feel so good anymore. Your point implies that Veeam is applying replication changes *in real time* to the resultant VMDK files; this is contrary to what I had understood until now. We had come from a different backup product (esXpress), so our conceptual model was slightly different (and, to be fair, there's really no architectural/concepts guide in the Veeam literature to explain all of this -- yes, there's a forum post explaining how synthetic backups work, but it is an overview, not low-level). Until now, I had the impression that the VMDK is held "golden" until replication completes, at which point changes are applied. From your description, it appears that the VMDK is tainted during the replication cycle itself (changes are written directly into the VMDK as replication progresses). That is indeed dangerous, and exposes a complete catastrophic flaw in our design (thank you). However, I do wonder how you are avoiding this as well? For instance, you're pulling replicas -- but there's no difference in how the VMDK gets changed in that scenario. So if you lose a datacenter mid-replication, you have a corrupt VMDK. Rollback to a former version will not work properly in this case, since the VRB file will not be consistent either (nor will the database reflect the current state of the now-failed replica). Simply put, the replica job could have modified block 5698, but the corresponding VRB file might not know about that block at the time of failure...
Please tell me I'm wrong and that this is a recoverable/safe situation. I do understand your point regarding our approach -- and I think we must certainly revisit it in great detail, as it appears to be fatally flawed. My concern now, however, is how to adopt the "pull" approach safely. I just don't see how either approach (push or pull) can be guaranteed safe if something bad happens mid-stream.
-
- VP, Product Management
- Posts: 6032
- Liked: 2859 times
- Joined: Jun 05, 2009 12:57 pm
- Full Name: Tom Sightler
- Contact:
Re: Best replication compression ESX 3.5 > ESX 4 environment
glennsantacruz wrote: Point taken on the NBD explanation. That clarifies *a lot* of misunderstanding on my part.
BTW, there's a nice article on NBD in Linux Journal at http://www.linuxjournal.com/article/3778
It's not specific to ESX or anything, but it's got some good background on NBD and gives you some use cases that are very similar in concept to the method used by VMware. Might be overkill to read the entire article, but interesting nonetheless.
glennsantacruz wrote: Your point implies that Veeam is applying replication changes *in real time* to the resultant VMDK files; this is contrary to what I had understood (until now).
Right, they're changing blocks in the VMDK. Actually, there are three files being updated during a replica cycle: there's of course the target VMDK; there's a "replica.vbk" file, which appears to contain metadata about the state of the replica, including, perhaps, some type of transaction number; and there's the VRB file, which holds the rollback blocks.
glennsantacruz wrote: However, I do wonder how you are avoiding this as well? For instance, you're pulling replicas -- but there's no difference in how the VMDK gets changed in that scenario.
I guess it's possible I only "think" I'm avoiding it. However, I believe this is what happens: to take your example, if block 5698 is changed, Veeam will copy and write this block to the VRB, then write the new block 5698 to the VMDK. This ensures that, before the block is changed in the VMDK, it is safely in the VRB file. If a replication is interrupted in process, the engine will see this "incomplete" VRB file and roll those blocks back into the VMDK file prior to performing a failover or starting a new replication pass, in the same way that it repairs an incomplete VBK file from an interrupted backup. Now, it's possible that the engine will do this even if it's "unaware" of a new replica session, and that your scenario is still safe; but when we tested replication, it appeared that if Veeam was not aware that the replica ended in an "errored" state, it simply trusted that the VMDK was good and would attempt to activate the replica with the tainted VMDK. That didn't seem like a safe situation to us, so we opted for the "pull" method, which seemed to work correctly, repairing the VMDK prior to failover.
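In rough pseudo-Python, the ordering I'm describing looks like this (my reading of the observed behavior, not Veeam's actual code; the file layout is simplified):

import os, struct

BLOCK = 1024 * 1024   # 1MB replication block (256KB in v5)

def apply_incremental(vmdk_path, vrb_path, changed_blocks):
    # For each changed block: save the block being replaced into the
    # rollback (VRB) file FIRST and flush it; only then overwrite the
    # block in the replica VMDK. If the cycle is interrupted, every
    # overwritten VMDK block already has a copy in the VRB, so the
    # engine can roll the VMDK back to the previous good state.
    with open(vmdk_path, "r+b") as vmdk, open(vrb_path, "ab") as vrb:
        for block_no, new_data in changed_blocks:
            vmdk.seek(block_no * BLOCK)
            old_data = vmdk.read(BLOCK)
            vrb.write(struct.pack("!Q", block_no) + old_data)  # 1) old block -> VRB
            vrb.flush()
            os.fsync(vrb.fileno())
            vmdk.seek(block_no * BLOCK)
            vmdk.write(new_data)                               # 2) new block -> VMDK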
glennsantacruz wrote: Please tell me I'm wrong and that this is a recoverable/safe situation.
Perhaps I'm wrong and maybe it doesn't, but our testing indicated that, as long as the Veeam server was aware that the replica ended in an "error" state, it was able to recover the VMDK to the previous state by rolling back the "incomplete" VRB file during the failover process. We couldn't figure out any way to do this with the "push" method, assuming the loss of the Veeam server at the source site, so we gave up the bandwidth for what we felt was the safer alternative. We have considered using
I'd much prefer the "push" method because it was quite a bit more bandwidth-friendly. It'd be nice if someone from Veeam would chime in on this. It would also be nice if there were a "Veeam Replication" whitepaper that covered all the options and the advantages/disadvantages of the various scenarios.
-
- Enthusiast
- Posts: 61
- Liked: 10 times
- Joined: Mar 01, 2010 5:57 pm
- Full Name: Glenn Santa Cruz
- Contact:
Re: Best replication compression ESX 3.5 > ESX 4 environment
I wholeheartedly agree that a "Veeam Replication" whitepaper would be very worthwhile (and would likely help avoid quite a lot of back-and-forth on the forums...)
If Veeam is indeed writing the replicated block to the VRB file first, and only subsequently writing into the VMDK upon a successful write to the VRB, I can see the pull approach being more resilient to in-transit replication failure. I realize that we have talked ourselves into this point, and that a Veeam engineer should weigh in to help clarify it. However, assuming this is indeed the expected behavior, I'm afraid we must also use the "pull" technique instead of "push". In our book, at least 90% of the reason for replication is to ensure a valid environment in the event of failure; if that failure event itself introduces corruption to the replicas, then we're just wasting time and effort. Granted, we're describing an issue that would present itself only during the replication window, but Murphy is a good friend of ours...
Can Veeam please interject here and give a good solid engineering explanation of how Veeam is doing replication, so we can plan accordingly?
-
- VP, Product Management
- Posts: 6032
- Liked: 2859 times
- Joined: Jun 05, 2009 12:57 pm
- Full Name: Tom Sightler
- Contact:
Re: Best replication compression ESX 3.5 > ESX 4 environment
I realized I left an incomplete thought in my last post. What I was going to say was that we have considered using the "push" method along with SQL transactional replication. The idea is basically to have a standby Veeam server at the remote site, but with all of the Veeam services stopped, and to configure MSSQL to transactionally replicate the database to that standby server. In the event of a disaster that takes out the master datacenter, including the Veeam server, you simply start the Veeam services on the standby server. Since the database was replicated transactionally, it will be aware of the entire state of your Veeam backups and replicas and should be able to "do the right thing". This seems to give us the best of all worlds.
What do you think?
-
- Chief Product Officer
- Posts: 31766
- Liked: 7266 times
- Joined: Jan 01, 2006 1:01 am
- Location: Baar, Switzerland
- Contact:
Re: Best replication compression ESX 3.5 > ESX 4 environment
Tom is absolutely correct above in explaining how Veeam replication works in transactions, and why failover to the previous "good" state is always possible even if disaster hits in the middle of a replication cycle. By the way, Glenn, we discussed this scenario of a failed replication cycle with you just a few weeks ago.
Tom's summary in the last post is also correct; in fact, to my knowledge this is exactly how most customers with a "fat" ESX replication target are doing it ("push" Veeam replication, plus SQL replication of the Veeam database). As I mentioned in the thread referenced above, we even have a white paper that provides step-by-step instructions for setting up this scenario with SQL replication.
-
- VP, Product Management
- Posts: 6032
- Liked: 2859 times
- Joined: Jun 05, 2009 12:57 pm
- Full Name: Tom Sightler
- Contact:
Re: Best replication compression ESX 3.5 > ESX 4 environment
So where's the white paper? White papers that no one knows about aren't really worth creating. Saying you don't distribute them on the web site seems silly - how would I know to ask my salesperson about it if I don't know it exists? The "best practice" issue seems to be one of Veeam's weakest points. Is this possibly to encourage customers to use Veeam partners for implementation services? I guess I could see that, but as a "do-it-ourselves" type of IT department, I want all the docs I can get.
-
- Chief Product Officer
- Posts: 31766
- Liked: 7266 times
- Joined: Jan 01, 2006 1:01 am
- Location: Baar, Switzerland
- Contact:
Re: Best replication compression ESX 3.5 > ESX 4 environment
Also, some notes on "pull" replication:
Advantages
- One-click failover in case of production site loss, without having to maintain replica of Veeam Backup server in the DR site.
- In case of ESXi targets: keeps the VMDK rebuild traffic local to the target site (not across the WAN), meaning 3x less traffic during incremental sync (read replaced block - write to VRB - write new block to VMDK); see the quick sketch after this list.
Disadvantages
- Cannot do replica seeding in such a configuration
- No network traffic compression (not applicable for ESXi targets)
- No direct SAN access, has to read source over network
- Only possible with ESX4 on source (requires CBT)
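To put rough numbers on the ESXi point above (an illustrative Python sketch only; it assumes every rebuild operation on the remote side crosses the link):

# Incremental sync rebuilds the replica VMDK in place: for every changed
# block the engine reads the block being replaced, writes it to the VRB,
# and writes the new block into the VMDK. With an ESXi target (no service
# console agent) the Veeam server performs all three operations over the
# network; "pull" keeps them on the target site's LAN.
changed_gb = 10                   # data changed since the last sync

push_wan_gb = 3 * changed_gb      # read old + write VRB + write new, all across the WAN
pull_wan_gb = 1 * changed_gb      # only the changed blocks cross the WAN

print(f"push to remote ESXi:    ~{push_wan_gb} GB across the WAN")
print(f"pull (Veeam at target): ~{pull_wan_gb} GB across the WAN")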