tom11011
Expert
Posts: 142
Liked: 2 times
Joined: Dec 01, 2010 8:40 pm
Full Name: Tom
Contact:

Performance issues

Post by tom11011 » Jan 26, 2012 12:56 pm

We have made 2 changes to our backup environment and are now experiencing performance issues.

First change: we switched from backing up to local storage to backing up to a new NAS server over a CIFS share. To explain further, we were backing up to a small physical server with only 4 TB of storage. We ran out of room, so we bought a new NAS device with 20 TB. I'm thinking maybe I should install the Veeam software on the new NAS device instead of leaving it on the old server. The catch is that we plan to place this new NAS device in a separate location and do backups across a WAN.

Second change: we upgraded from v5 to v6 as part of this project.

So, a few questions:

1.) Is it better to install Veeam on a virtual machine? Or is it better to install it on a separate physical server, using its local storage instead of writing the data to a CIFS share on a different server?

2.) We really like reverse incremental backups. Can we keep this mode if we back up across a WAN?

3.) What is the best way to set this up, knowing that we want to separate the backup location from the production VMware environment across a WAN? Where should the Veeam installation itself live?

Thanks for your help.

foggy
Veeam Software
Posts: 18264
Liked: 1561 times
Joined: Jul 11, 2011 10:22 am
Full Name: Alexander Fogelson
Contact:

Re: Performance issues

Post by foggy » Jan 26, 2012 2:11 pm

Tom, a couple of considerations below:

1.) and 3.) With the new v6 architecture, the placement of the backup server itself (production or DR site) no longer matters, as all the actual processing is performed by the so-called proxies (you can read about the new concept in the sticky FAQ topic and in the product documentation). What really matters more is whether to set up physical or virtual backup proxies. Here you can find a good discussion on this (never mind that it is for earlier versions; just replace 'backup server' with 'proxy' while reading).
Regarding the CIFS target, apparently this is the most common type of target among our customers.

2.) I don't see any reason why you cannot keep it.

Regarding the overall performance issues you are experiencing, it's hard to advise without seeing the bottleneck stats you are getting. But note that with the new v6 engine and architecture, the balance between all environment components (source/proxy/network/target) shifts, and you can in fact sometimes see a performance decrease as a result of the actual performance improvements ;) (some details in this topic).

tom11011
Expert
Posts: 142
Liked: 2 times
Joined: Dec 01, 2010 8:40 pm
Full Name: Tom
Contact:

Re: Performance issues

Post by tom11011 » Jan 27, 2012 3:43 am

Thanks for your response.

I have decommissioned the old backup server with two 2 TB drives and installed the Veeam software directly on the new NAS appliance with twelve 2 TB drives in RAID 5. It is way faster than anything I have seen before: sustained writes of 68 MB/s.

My feeling is that I am benefiting from going from two 2 TB drives in RAID 0 to twelve 2 TB drives in RAID 5.

Additionally, I have to believe that writing directly to local disk is better than writing across the network to a CIFS share, right?

I'll be curious to hear more opinions on people's setups, trials, and conclusions. At some point, our new server will move across the WAN and be connected over a 100 Mb link; I'm sure that will probably increase backup times, though.
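For a rough sense of what the move to the WAN might cost, here is a back-of-envelope transfer-time sketch. The 500 GB backup size is an illustrative assumption (not a figure from this thread), and it assumes the link is fully utilized, ignoring protocol overhead and Veeam's compression/deduplication:

```python
# Back-of-envelope transfer-time estimate. The 500 GB backup size and a
# fully utilized link are assumptions for illustration only.
def transfer_hours(size_gb: float, rate_mbps: float) -> float:
    """Hours to move size_gb gigabytes at rate_mbps megabits per second."""
    megabits = size_gb * 1000 * 8       # GB -> megabits (decimal units)
    return megabits / rate_mbps / 3600  # seconds -> hours

# Local NAS target at the observed 68 MB/s (= 544 Mb/s):
print(round(transfer_hours(500, 68 * 8), 1))  # -> 2.0 hours
# Same data over a saturated 100 Mb WAN link:
print(round(transfer_hours(500, 100), 1))     # -> 11.1 hours
```

So a 100 Mb link tops out around 12.5 MB/s, roughly a fifth of the 68 MB/s seen locally; in practice compression and smaller incremental runs would shrink the gap.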

Vitaliy S.
Product Manager
Posts: 22991
Liked: 1556 times
Joined: Mar 30, 2009 9:13 am
Full Name: Vitaliy Safarov
Contact:

Re: Performance issues

Post by Vitaliy S. » Jan 27, 2012 8:51 am

tom11011 wrote:Additionally, I have to believe that writing directly to disk is better than writing across the network to a cifs share right?
Yes, a direct connection should always be faster (depending on your hardware) than writing data through a network stack.

