Looking for potential bottlenecks in proposed architecture

by mikeely » Fri Jan 06, 2017 8:18 pm

Environment is a single vSphere 6.5 cluster (4 hosts) with about 100 VMs eating just shy of 3 TB on a Tintri array. Daily churn seems to be somewhere around 10% or so before compression/dedupe, based on the test backups I've got running. Network is 10G throughout and is robust, with jumbo frames and storage VLANs configured. The VMware environment is healthy and performs well.
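A quick back-of-the-envelope check on the incremental load those numbers imply (the throughput figure is an illustrative assumption, not a measurement):

```python
total_tb = 3.0        # total VM data, just shy of 3 TB
churn = 0.10          # ~10% daily change rate before compression/dedupe

daily_gb = total_tb * 1000 * churn   # changed data per day, in GB
# Assumed effective end-to-end backup throughput; 300 MB/s is a
# deliberately conservative figure for a 10G network, not a measurement.
throughput_mb_s = 300
minutes = daily_gb * 1000 / throughput_mb_s / 60

print(f"~{daily_gb:.0f} GB/day, ~{minutes:.0f} min per incremental pass")
```

So even at conservative throughput, the raw incremental volume itself is unlikely to be the bottleneck on a 10G network.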

One of the stacks is going to be somewhat remote, so having the ability to deal with failed disks without immediate access is important.

Aside from VMware and related (Veeam) stuff, we're 99% Linux so every Windows machine we are forced to add is a Martian in terms of licensing, management, etc.

Here's the proposed Veeam stack, with the decision points I'm unsure of noted inline:

Veeam manager: Server 2012 R2 VM. Current test environment runs 2 cores and 8GB RAM. Enough?
Backup proxies: Server 2012 Core VMs. We need N number, each with 2 cores and 4GB RAM(?).
Backup repo: Linux VM(s - how many?) serving as the headend for NFS/iSCSI from the filer.
Filer: 24-bay FreeNAS with boatloads of RAM and SSDs for SLOG. Spinning drives will be 4TB each and most likely configured as 11 two-way mirror vdevs with 2 spare disks (although we might go with 3-way mirrors, or do we want to go with a couple of 9-wide RAIDZ3 vdevs instead?)
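For what it's worth, a rough usable-capacity comparison of the candidate layouts (raw figures only, ignoring ZFS metadata overhead and the usual advice to keep pools well under full):

```python
DISK_TB = 4

# 11 two-way mirrors + 2 hot spares fills the 24 bays;
# each mirror pair contributes one disk's worth of capacity
mirrors_usable = 11 * DISK_TB

# 11 three-way mirrors would need 33 bays, so a 24-bay chassis
# caps you at 7 three-way mirrors + 3 spares
mirror3_usable = 7 * DISK_TB

# Two 9-wide RAIDZ3 vdevs + spares: each vdev nets 9 - 3 data disks
raidz3_usable = 2 * (9 - 3) * DISK_TB

print(mirrors_usable, mirror3_usable, raidz3_usable)  # 44 28 48
```

RAIDZ3 wins on raw capacity and redundancy per bay; mirrors win on resilver time and random I/O, which matters for restores and synthetic operations.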

Obviously the first goal is to get through incrementals every day with plenty of room to spare. Based on the above and the experience of the community, where will bottlenecks most likely appear?

One question of note: how many write streams would we expect to see coming into the filer during backup runs, and where would one expect the point of diminishing returns in performance as the number of streams increases?

Also, is there any movement in getting Linux-based backup proxies? Such a development would be most welcome :)
mikeely
Enthusiast
 
Posts: 51
Liked: 10 times
Joined: Mon Nov 07, 2016 7:39 pm
Full Name: Mike Ely

Re: Looking for potential bottlenecks in proposed architecture

by DaveWatkins » Fri Jan 06, 2017 10:30 pm

This might be an odd suggestion, but I'd be tempted to make the FreeNAS box a Windows Server 2016 machine and have it perform repo and proxy duties (Direct SAN proxy), connected over the 10Gb network to the SAN. You don't mention its hardware, but it may be good enough to perform the proxy role as well as being the repo without anything more. The SSDs could even be configured as a cache. This would take all the backup network load off the ESXi hosts, since backup data would move directly from the SAN to the repository.

You could then use Storage Spaces and ReFS and get all the benefits of those with respect to I/O reduction and space savings from synthetic fulls, as well as the advanced parity features of Storage Spaces in 2016.

I'm surprised you have VMware 6.5 running, as it's not supported by Veeam yet. 9.5 U1 will bring that support.

You could still run the B&R server as a VM without issue, or you could integrate it into the physical box as well, which would give you a single Windows host to manage and all the native features that have just been introduced to improve speed and backup space.
DaveWatkins
Expert
 
Posts: 271
Liked: 67 times
Joined: Sun Dec 13, 2015 11:33 pm

Re: Looking for potential bottlenecks in proposed architecture

by nmdange » Fri Jan 06, 2017 10:34 pm

I would very strongly recommend using a physical server running Windows Server 2016 for your backup repository, with the repository drive formatted as ReFS. You can install Veeam on this host instead of in a VM, so you won't need an additional Windows license. This also has the benefit of making recovery from a complete failure of your vSphere environment easier, since your Veeam host isn't running in your production VM environment.

Edit: Dave beat me to it :mrgreen:
nmdange
Expert
 
Posts: 215
Liked: 59 times
Joined: Thu Aug 20, 2015 9:30 pm

Re: Looking for potential bottlenecks in proposed architecture

by mikeely » Fri Jan 06, 2017 10:37 pm

Thanks for the suggestion. We're running the RC and I'm happy to report that it's fine so far.

With regard to putting Windows on the hardware we intend to deploy as a FreeNAS server, that's unlikely - see above about Windows being a Martian in this environment. It's bad enough that we have to run that OS at all for this project :(

In addition, making it a Windows box would make it much less useful to us for any other random storage needs we might want to address with it.
Unless otherwise specified, I am asking about something pertaining to Linux. We use Windows as infrequently as possible, and enthusiastically seek ways to reduce that usage further.

Re: Looking for potential bottlenecks in proposed architecture

by DaveWatkins » Sat Jan 07, 2017 9:03 pm

mikeely wrote:Thanks for the suggestion. We're running the RC and I'm happy to report that it's fine so far.

With regard to putting Windows on the hardware we intend to deploy as a FreeNAS server, that's unlikely - see above about Windows being a martian in this environment. It's bad enough that we have to run that OS at all for this project :(

In addition, making it a Windows box would make it much less useful to us for any other random storage needs we might want to address with it.


That was one of my points: assuming that physical box has the resources, it could be the one and only Windows server. You wouldn't need additional proxies or a master server VM; it could do it all.

Putting production data on your backup storage is generally a bad idea, if that storage fails you've then lost production data and the backups of it.

My personal rule for backups is that they must be as simple as possible to restore from. In a disaster situation (and I've been through a couple of natural disasters which have reinforced this), you want recovery to be as simple as possible. If you have to build and configure a Linux headend to put in front of the filer, and then some proxies, before you can start restoring data, that is going to significantly slow down recovery. With everything on your physical box, you could actually instant-restore critical VMs immediately.

Re: Looking for potential bottlenecks in proposed architecture

by Andreas Neufert » Mon Jan 09, 2017 11:08 am

Veeam manager: Server 2012 R2 VM. Current test environment runs 2 cores and 8GB RAM. Enough?

OK (but see below)
Backup proxies: Server 2012 Core VMs. We need N number, each with 2 cores and 4GB RAM(?).

Rule of thumb: 40 VMs per core in an 8-hour backup window => 3 cores and 6GB RAM needed.
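Applying that rule of thumb to the roughly 100 VMs in this environment:

```python
import math

vms = 100               # roughly 100 VMs in the cluster
vms_per_core = 40       # rule of thumb: 40 VMs per proxy core, 8-hour window

cores = math.ceil(vms / vms_per_core)
# 2 GB per core is inferred from the 3-core / 6GB figure above,
# not an official sizing rule.
ram_gb = cores * 2
print(cores, ram_gb)  # 3 6
```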
Backup repo: Linux VM(s - how many?) serving as headend for NFS/ISCSI from filer.

1
Filer: 24-bay FreeNAS with boatloads of RAM and SSDs for SLOG. Spinning drives will be 4TB each and most likely configured as 11 two-way mirror vdevs with 2 spare disks (although we might go with 3-way mirrors, or do we want to go with a couple of 9-wide RAIDZ3 vdevs instead?)

The amount of redundancy you need depends on the rebuild time when a disk is lost. If the rebuild process takes days, double redundancy is needed.
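A rough way to sanity-check that for 4TB drives (the resilver rate below is an illustrative assumption; real rates depend on pool fullness, fragmentation, and concurrent load):

```python
disk_tb = 4
resilver_mb_s = 100     # assumed sustained resilver rate; real-world
                        # rates vary widely with pool state and load

seconds = disk_tb * 1_000_000 / resilver_mb_s   # MB to copy / MB per second
hours = seconds / 3600
print(f"~{hours:.0f} hours to resilver one full {disk_tb}TB disk")
```

A mirror resilver at that rate finishes in roughly half a day; a wide RAIDZ vdev under load can take much longer, which is the case where triple parity earns its keep.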

Overall I would do the following in your situation:

Run a single server as B&R and proxy. Give it 8 vCPUs and 12GB RAM. As the resources are only consumed while backup jobs run, and you asked for some spare backup window time, this is a good configuration.
You could maybe install the Linux repository server directly on the FreeNAS box? That depends on the speed and RAM of that box.
Andreas Neufert
Veeam Software
 
Posts: 2250
Liked: 374 times
Joined: Wed May 04, 2011 8:36 am
Location: Germany
Full Name: @AndyandtheVMs Veeam PM

Re: Looking for potential bottlenecks in proposed architecture

by mikeely » Tue Jan 10, 2017 7:33 pm

Andreas Neufert wrote:OK ...

That was a great post. Thanks!

What is your experience with the number of incoming write streams to ZFS? Will it handle what we're doing well enough?

Re: Looking for potential bottlenecks in proposed architecture

by skrause » Tue Jan 10, 2017 8:23 pm

2 cores is definitely a bit underwhelming for the B&R server, since it has to run SQL Server as well as the direct Veeam tasks. I run my Veeam management VMs with 4 cores/16GB RAM (could probably do 12GB and be just fine).

Since you are not a Windows shop and all of your networking is 10Gb, you can probably get away without using HotAdd proxies at all: just use the B&R server as a proxy in network mode, like Andreas mentioned. (I guess you could do HotAdd with the B&R server in the proxy role; I have never done it myself, but I assume it would work.)
Steve Krause
Veeam Certified Architect
skrause
Expert
 
Posts: 307
Liked: 45 times
Joined: Mon Dec 08, 2014 2:58 pm
Full Name: Steve Krause

Re: Looking for potential bottlenecks in proposed architecture

by Andreas Neufert » Wed Jan 11, 2017 9:05 am

skrause wrote: (I guess you could do HotAdd with the B&R server in the proxy role, I have never done it myself but I assume it would work)

Yes, that is possible and a typical configuration.

