Discussions specific to the VMware vSphere hypervisor
unsichtbarre
Expert
Posts: 159
Liked: 32 times
Joined: Mar 08, 2010 4:05 pm
Full Name: John Borhek
Contact:

Networking Best Practices for Replication

Post by unsichtbarre » 5 people like this post

I know this topic comes up from time to time, but I thought I would share some conclusions (and hopefully gather others' conclusions too) about network and technical best practices for Replication and Remote Backup.

Our preferences for establishing the Site-to-Site connection:
  • We prefer IPsec VPN to MPLS
    • MPLS is ISP-dependent
    • IPsec VPN can not only be configured to fail over automatically between ISPs, but can also follow the sites from one location to another in the event of relocation.
  • We prefer Software VPN to Hardware VPN
    • We have found that even relatively high-end hardware firewall/VPN appliances do not have the horsepower to sustain VPN throughput above about 40-60 Mbps (regardless of proximity or WAN speed).
    • Software VPN (and there are many enterprise options available, plus grow-your-own based on BSD, RHEL, CentOS) seems to be limited only by how much horsepower (vCPU) you wish to devote to it.
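As one concrete sketch of the grow-your-own software VPN approach, a minimal site-to-site tunnel in strongSwan's /etc/ipsec.conf might look like the following. All addresses, subnets, and the connection name are hypothetical placeholders, not values from any real deployment:

```
# /etc/ipsec.conf -- minimal site-to-site sketch (hypothetical values)
conn site-to-dr
    left=203.0.113.10          # source-side public IP (placeholder)
    leftsubnet=10.0.10.0/24    # source-side LAN (placeholder)
    right=198.51.100.20        # DR/Replica-side public IP (placeholder)
    rightsubnet=10.0.20.0/24   # DR/Replica-side LAN (placeholder)
    keyexchange=ikev2
    authby=secret
    auto=start
```

The matching pre-shared key would go in /etc/ipsec.secrets. This is an illustration only; any production configuration would also need cipher proposals and dead-peer-detection settings tuned to the environment.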
We use iperf to test and baseline available bandwidth from point to point. Higher-bandwidth connections are more difficult to test accurately. Note: the IP address is an example only.
  • UDP (server side: iperf -s -u)
    iperf -c 123.124.125.126 -u -b 20M -t 30
    iperf -c 123.124.125.126 -u -b 100M -t 30
    iperf -c 123.124.125.126 -u -b 200M -t 30
  • TCP (server side: iperf -s)
    iperf -c 123.124.125.126 -t 30 -i 2
    iperf -c 123.124.125.126 -t 30 -i 2 -P 2
    iperf -c 123.124.125.126 -t 30 -P 10
    iperf -c 123.124.125.126 -t 30 -P 40
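If you run the UDP sweep regularly, a small helper can generate the command list so it can be reviewed or piped straight to a shell. This is a hypothetical convenience wrapper, not part of any Veeam tooling; the server address is an example only:

```shell
# Print the UDP baseline sweep commands for a given iperf server.
# Review the output, or pipe it to sh to execute the sweep.
udp_sweep() {
  server=${1:-123.124.125.126}   # example address only
  for bw in 20M 100M 200M; do
    echo "iperf -c $server -u -b $bw -t 30"
  done
}

udp_sweep 123.124.125.126
```

Usage: `udp_sweep 123.124.125.126 | sh` runs the three tests in sequence against a server started with `iperf -s -u`.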
For throughput of VPN or MPLS traffic, we have come to expect sustained rates of about 75%-85% of our WAN connectivity on 100+ Mbps connections.
  • You can get much closer to "stated bandwidth" on slower connections.
  • Maintaining Replication and Remote Backup becomes impractical with site-to-site connections below 20 Mbps and almost impossible with connections below 10 Mbps.
  • While, in most cases, day-to-day replication will not consume much bandwidth, operations such as seeding, reprotection, and full backup become impractical over lower-bandwidth connections.
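A quick back-of-the-envelope calculation shows why seeding over slow links is impractical. This sketch assumes roughly 80% of stated bandwidth is usable (per the 75%-85% observation above) and uses a hypothetical 1 TB seed; the numbers are illustrative, not benchmarks:

```shell
# Rough seed-time estimate in whole hours.
# $1 = data size in GB, $2 = stated WAN link speed in Mbps.
# Assumes ~80% of stated bandwidth is achievable (see above).
seed_hours() {
  size_gb=$1
  link_mbps=$2
  usable=$(( link_mbps * 80 / 100 ))        # usable Mbps
  echo $(( size_gb * 8192 / usable / 3600 )) # GB -> megabits -> seconds -> hours
}

seed_hours 1024 100   # ~1 TB over a 100 Mbps link -> 29 hours
seed_hours 1024 20    # the same seed over 20 Mbps -> 145 hours (~6 days)
```

At 10 Mbps the same seed approaches two weeks, which is why shipping a seed drive is usually the only realistic option on such links.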
The Veeam Server itself:
  • We have always preferred to place the Veeam Server at the DR/Replica site. In the event of bi-directional replication, we would place one Veeam Server at each side.
    • Keeping the Veeam Server at the Replica side and "pulling" the Replicas makes it possible to maintain the consistency of the Replica VM disks in the event of an actual disaster.
    • While a Replica is technically powerable with vCenter, it has approximately as many snapshots (plus or minus a few, due to ongoing replication or failover) as there are configured restore points. If possible, do not perform VM operations outside of Veeam, as this will leave the VM disks in an "inconsistent state" and render the Replica unmanageable with Veeam.
  • We build our Veeam Server with C:\ = 50 GB and D:\ = (minimum) 20 GB
    • For Replica-only instances of Veeam, this allows the default backup repository to exist on its own disk.
  • Up to about 8 vCPU, we prefer to increase our Veeam Server vCPU and facilitate more concurrent Veeam jobs on the localhost Veeam Proxy before we provision an additional Veeam Proxy server.
    • Over about 8 vCPU and 8 max concurrent tasks, we experience diminishing returns in our environment.
  • Provision one Veeam Server for each source-side location/network
    • This makes management much easier, especially when Veeam is delivered as a managed service.
  • Provision VLANs at your DR/Replica site proportionate to the requirements of the source-side sites.
    • This makes management of multiple source-side sites possible.
  • Place the Veeam Server on the network (VLAN) that is routed to the source-side site and multi-home it to the management network at your DR/Replica site.
    • Use hosts files for DNS on the Veeam Server.
    • Change Adapters and Bindings to prioritize the network where replication traffic will be carried (I got this on 2/3/15 from Ken Sauer at Veeam Support).
    • This is only applicable to multi-homed instances of the Veeam Server.
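For a multi-homed Veeam Server resolving source-side infrastructure by hosts file, the entries might look like the following. Every name and address here is a hypothetical placeholder for illustration:

```
# C:\Windows\System32\drivers\etc\hosts on the Veeam Server
# (hypothetical names and addresses, illustration only)
10.0.10.5     vcenter-source.example.local
10.0.10.11    esxi01-source.example.local
10.0.20.5     vcenter-dr.example.local
```

Pinning resolution this way keeps replication traffic on the intended VLAN even if the DR site's DNS returns addresses reachable only over the management network.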
Veeam Proxy Servers:
  • We have grown an intense distaste for co-locating the Veeam Proxy role with anything else. Where possible, we will always configure dedicated Veeam Proxies.
    • Environments that leverage Datacenter licensing for Windows Server do not pay a penalty for standing up additional Windows Servers.
  • Place at least one dedicated Veeam Proxy at the source side.
  • At the DR/Replica site, use the Veeam Server as proxy up to about 8 vCPU and 8 max concurrent tasks.
  • For replication, we prefer NBD (Network) transport mode.
    • We understand there are improvements to Virtual Appliance (Hot Add) and Direct SAN Access in Veeam Backup and Replication Version 8, but we have been badly burnt by Hot Add in the past (leaving orphaned disks all over the place). We are going to keep an open mind and watch the forums.
  • All of the same standards as for Veeam Servers would apply, if the proxy is multi-homed.
Network Settings for all Windows Servers running Veeam:
CMD
  • netsh int tcp set global dca=enabled
    netsh int tcp set global rss=enabled
    netsh int tcp set global chimney=disabled
    netsh int tcp set global autotuninglevel=disabled
    netsh int tcp set global congestionprovider=none
    netsh int tcp set global ecncapability=disabled
    netsh int tcp set global taskoffload=disabled
    netsh int tcp set global timestamps=disabled
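To confirm the settings above took effect after a reboot, the current values can be listed with the standard netsh query:

```
netsh int tcp show global
```

The output shows each global TCP parameter and its current state, so it is easy to spot a setting that a driver or hotfix has silently reverted.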

Adapter Properties
  • Receive Side Scaling = Enabled
  • Disable all Power Management


References:
http://www.vmware.com/files/pdf/techpap ... kloads.pdf

http://lifeofageekadmin.com/optimal-net ... s-2008-r2/

http://lifeofageekadmin.com/network-per ... r-2008-r2/

http://kb.vmware.com/selfservice/micros ... Id=1010071
-The Invisible Admin-
http://www.johnborhek.com

dellock6
Veeam Software
Posts: 6019
Liked: 1831 times
Joined: Jul 26, 2009 3:39 pm
Full Name: Luca Dell'Oca
Location: Varese, Italy
Contact:

Re: Networking Best Practices for Replication

Post by dellock6 »

Thanks for sharing John, really nice tips :)

Luca.
Luca Dell'Oca
Principal EMEA Cloud Architect @ Veeam Software

@dellock6
https://www.virtualtothecore.com/
vExpert 2011 -> 2021
Veeam VMCE #1
