Comprehensive data protection for all workloads
Bygmait
Novice
Posts: 7
Liked: never
Joined: Jul 11, 2011 1:13 pm
Full Name: BygmaIT

Direct SAN Access - Speed problems.

Post by Bygmait » Jul 26, 2011 2:05 pm

Hello

This is my first time in the support forum, so if any information is missing, please let me know.

My question goes like this.

I have a Veeam B&R installation on a dedicated backup server with an IBM 8Gb FC HBA installed.
The backup server is running Windows Server 2008 R2.

The SAN is an IBM DS3524, connected over 8Gb FC to an 8Gb SAN switch.

What speeds should I expect when running a backup?
The total size of the VMs is 5.79TB.
In total there are 63 VMs to process.

The local disk system is a RAID 5 on 6Gbit SAS disks, and the SAN is also running RAID 5 with 6Gbit disks.

How long should a full backup take?
And what average processing rate should I expect?

I hope someone can help me with some information about the speed and duration of my backup.

Thanks

Vitaliy S.
Product Manager
Posts: 23073
Liked: 1582 times
Joined: Mar 30, 2009 9:13 am
Full Name: Vitaliy Safarov

Re: Direct SAN Access - Speed problems.

Post by Vitaliy S. » Jul 26, 2011 2:59 pm

Hello,

Basically, performance bottlenecks can be caused by three main factors, which are described in the following thread: New user, need help optimizing backups

Regarding expected performance rates, please take a look at our [FAQ] v5 : Frequently Asked Questions > Answers topic.

Also make sure your backup server is configured properly to work in SAN mode and hasn't failed over to Network mode while running backup jobs. Thanks.

Bygmait
Novice
Posts: 7
Liked: never
Joined: Jul 11, 2011 1:13 pm
Full Name: BygmaIT

Re: Direct SAN Access - Speed problems.

Post by Bygmait » Jul 27, 2011 10:25 am

Hi

Thanks for the feedback.

I have now looked at my backup server and found that it was running at 100% CPU usage when backing up the heavy VMs.

If a VM is large, its backup takes a long time; if it is small, the backup is quick.
This was with "Optimal" compression.

I have tried setting compression to "None", and my backup time for a 530GB VM is down to 22 minutes.
Before, with Optimal compression, the same machine took 3 hours and 39 minutes.

So either the compression is slowing everything down and costing a lot of time, or something is wrong.

My backup server runs a Xeon E5606 @ 2.13GHz (4 cores) with 12GB DDR3 memory.

Vitaliy S.
Product Manager
Posts: 23073
Liked: 1582 times
Joined: Mar 30, 2009 9:13 am
Full Name: Vitaliy Safarov

Re: Direct SAN Access - Speed problems.

Post by Vitaliy S. » Jul 27, 2011 10:42 am

Could you please clarify whether you're comparing full runs (not incremental job runs) for both optimal compression and compression turned off?

As for CPU usage, high usage is expected while using compression. However, if the CPU is maxed out (as in your case), you should either use a lower compression level, run fewer jobs at a time, or choose a more powerful server.

Gostev
SVP, Product Management
Posts: 24947
Liked: 3622 times
Joined: Jan 01, 2006 1:01 am
Location: Baar, Switzerland

Re: Direct SAN Access - Speed problems.

Post by Gostev » Jul 27, 2011 11:00 am

Sounds like your storage is simply too fast for this CPU to handle real-time compression of the data feed your storage is able to provide (over 400MB/s according to my math; curious what storage you are using, by the way).
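That estimate is easy to reproduce from the figures posted earlier in the thread; a quick back-of-the-envelope sketch (assuming the 530GB VM backed up in 22 minutes with compression off, and 1GB = 1024MB):

```python
# Effective read throughput for the uncompressed run mentioned above.
vm_size_gb = 530      # size of the VM from the earlier post
duration_min = 22     # backup time with compression set to "None"

throughput_mb_s = vm_size_gb * 1024 / (duration_min * 60)
print(f"{throughput_mb_s:.0f} MB/s")  # roughly 411 MB/s, i.e. "over 400MB/s"
```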

You may still want to keep Optimal compression enabled, though (even if it makes a full backup take this long), because the difference in backup size will be pretty dramatic. Moreover, such a high load only occurs during full backups; incremental runs will be significantly faster because they only process the few changed blocks (not every block), so the CPU load will be much lower.

Of course, if you want the best of both worlds (very fast backups with compression enabled), the best option would be a server with an 8-12 core CPU (or even a dual-CPU server). Again, considering how much disk space you will save with Optimal compression enabled, such a powerful server might be a very good investment, as storage is generally much more expensive.
