hyvokar
Veteran
Posts: 406
Liked: 29 times
Joined: Nov 21, 2014 10:05 pm

Reverse Incremental performance issues

Post by hyvokar »

Hi!

This topic has been quite popular in the past.
I ran into some problems in a client's environment and could use some best practices to lower the backup time.

They have a dedicated physical backup server running Windows Server 2012 R2 with plenty of CPU and RAM.
They are doing reverse incremental backups, and they are not going to switch to forward incremental.

Backup performance is very poor. They have 30 fast SAS disks in RAID 5 on an HP Smart Array 800-series controller with 4 GB of cache.
The backup speed is anywhere between 10 and 20 MB/s, and the job reports the target as the bottleneck.

In previous versions (pre-v9), VBK file fragmentation was a big issue and frequent full backups were recommended. I guess in v9 this can be handled with backup file maintenance / defragmentation?

Will reducing the number of restore points help (currently 21)?
Will splitting the backup job into multiple smaller jobs help? I recall 1 TB was the maximum recommended VBK size. Is that correct?
The RAID controller's cache is currently set to 10% read / 90% write. What would be ideal for reverse incremental backups?
Would enabling the SAS disks' own physical cache help?
Which compression and deduplication settings would be easiest on the storage? The backup server has 2x 12-core CPUs and 64 GB of RAM, so no need to worry about those.
Anything else to speed up the reverse incremental backups?



Thanks in advance :)
Bed?! Beds for sleepy people! Lets get a kebab and go to a disco!
MS MCSA, MCITP, MCTS, MCP
VMWare VCP5-DCV
Veeam VMCE
foggy
Veeam Software
Posts: 21070
Liked: 2115 times
Joined: Jul 11, 2011 10:22 am
Full Name: Alexander Fogelson

Re: Reverse Incremental performance issues

Post by foggy »

hyvokar wrote:In previous versions (pre-v9), VBK file fragmentation was a big issue and frequent full backups were recommended. I guess in v9 this can be handled with backup file maintenance / defragmentation?
Correct.

As to the performance, RAID 10 is typically recommended for the reverse incremental mode (lower write penalty). Reducing the number of restore points will not help; however, you can try enabling per-VM backup chains.
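
As a rough illustration of that write penalty, here is a back-of-the-envelope sketch. The per-spindle IOPS figure, the "1 read + 2 writes per changed block" cost of reverse incremental on the target, and the classic penalty factors are assumptions for the sketch, not numbers from this thread, and the model ignores controller cache and VBK fragmentation:

# Back-of-the-envelope model of reverse incremental I/O cost on the target.
# Assumed figures (not from this thread): 30 spindles, ~150 random IOPS each.
# Per changed block, reverse incremental does roughly 1 read (old block from
# the VBK) + 2 writes (old block into the VRB, new block into the VBK).

SPINDLES = 30
IOPS_PER_SPINDLE = 150      # assumption; large backup blocks will do worse
READS_PER_BLOCK = 1
WRITES_PER_BLOCK = 2

def blocks_per_second(write_penalty):
    """Changed blocks/s the array can absorb for a given RAID write penalty."""
    disk_ios_per_block = READS_PER_BLOCK + WRITES_PER_BLOCK * write_penalty
    return SPINDLES * IOPS_PER_SPINDLE / disk_ios_per_block

raid5 = blocks_per_second(write_penalty=4)   # read-modify-write parity update
raid10 = blocks_per_second(write_penalty=2)  # mirrored write only
print(f"RAID 5 : ~{raid5:.0f} changed blocks/s")
print(f"RAID 10: ~{raid10:.0f} changed blocks/s ({raid10 / raid5:.1f}x RAID 5)")

With these assumed numbers the same 30 spindles absorb roughly 1.8x as many changed blocks in RAID 10 as in RAID 5, which is the "lower write penalty" point above.
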
hyvokar wrote:I recall 1 TB was the maximum recommended VBK size. Is that correct?
You most likely mean the recommended maximum VBK size for Windows deduplication repositories. There's no such global limitation.
hyvokar wrote:Which compression and deduplication settings would be easiest on the storage? The backup server has 2x 12-core CPUs and 64 GB of RAM, so no need to worry about those.
If the target is the bottleneck, these settings do not have much effect; you should pay attention to the storage itself. How is it added to Veeam B&R? Is it a CIFS share presented to the backup server?
hyvokar
Veteran
Posts: 406
Liked: 29 times
Joined: Nov 21, 2014 10:05 pm

Re: Reverse Incremental performance issues

Post by hyvokar »

foggy wrote:How is it added to Veeam B&R? Is it a CIFS share presented to the backup server?
Hi!

These are local disks on the backup server (DAS).
--kari
Bed?! Beds for sleepy people! Lets get a kebab and go to a disco!
MS MCSA, MCITP, MCTS, MCP
VMWare VCP5-DCV
Veeam VMCE
foggy
Veeam Software
Posts: 21070
Liked: 2115 times
Joined: Jul 11, 2011 10:22 am
Full Name: Alexander Fogelson

Re: Reverse Incremental performance issues

Post by foggy »

Then all compression and deduplication are performed by the backup server itself, which doesn't seem to be the bottleneck.
DaveWatkins
Veteran
Posts: 370
Liked: 97 times
Joined: Dec 13, 2015 11:33 pm

Re: Reverse Incremental performance issues

Post by DaveWatkins »

Is it just one giant 30-disk RAID 5 array? That seems like a lot to ask of the controller, having to calculate parity across a 30-disk stripe for every parity write. You might be better off with multiple smaller RAID 5 arrays striped together (RAID 50), but that may not buy you anything either; it depends on how the card works underneath. RAID 10 is probably the real solution, it's just expensive when it comes to disk space.
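
For a rough feel of the trade-off between those layouts on the same 30 spindles (the RAID 50 grouping and the per-spindle IOPS figure below are assumptions for illustration): on paper a small random write pays the same parity penalty in RAID 50 as in RAID 5, which is why it may not buy anything, while RAID 10 halves the penalty at the cost of capacity.

# Quick comparison of the layouts discussed above for the same 30 disks,
# assuming equal-size drives and the usual small-write penalties
# (4 for RAID 5/50, 2 for RAID 10). Per-spindle IOPS is an assumed figure.

SPINDLES = 30
IOPS_PER_SPINDLE = 150  # assumption

def layout(name, data_disks, write_penalty):
    usable_pct = 100 * data_disks / SPINDLES
    write_iops = SPINDLES * IOPS_PER_SPINDLE / write_penalty
    print(f"{name:<22} usable ~{usable_pct:.0f}%   random writes ~{write_iops:.0f} IOPS")

layout("RAID 5 (29+1)", data_disks=29, write_penalty=4)
layout("RAID 50 (5x 5+1)", data_disks=25, write_penalty=4)
layout("RAID 10 (15 mirrors)", data_disks=15, write_penalty=2)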

Don't enable the disks' own caches or you risk corrupting data in the event of a power failure; that's why your RAID card has 4 GB of cache and a battery, to protect against exactly that.
