Last week I had contact (case #00436398) with Veeam support because our Veeam console server was experiencing out-of-memory issues. The cause was a separate job for each VM, started at 1-minute intervals between 0:55 and 3:00. Since the backups couldn't be written to the target backup server quickly enough, quite a few jobs sat waiting for a proxy to become available. As each job starts a Veeam.Backup.Manager.exe process that uses around 115 MB of memory, this turned bad pretty quickly.
As the problem was two-fold, I've been slowly changing the backups from "single VM, reversed incremental" jobs to "10 VMs, incremental with weekly synthetic full (forever incremental)" jobs. I'm seeing a positive effect on backup times and on the load on the target backup server, so it seems to be doing what I expected, but I would love some insights from other users.
Our current setup comes down to:
Veeam console server (also proxy in SAN mode): quad-core Xeon, 8 GB RAM, simple RAID-1 with 500 GB 7,200 rpm drives
Backup server: quad-core Xeon, 16 GB RAM, Areca 1880 controller with 1 GB cache + BBU, RAID-6 with 15x 2 TB 5,900 rpm drives + 1 hot spare (soon to be 23x 2 TB 5,900 rpm drives + 1 hot spare)
Storage: EqualLogic PS6100XV (RAID-10 with 22x 15,000 rpm 600 GB drives + 2 hot spares)
VMs: 166 VMs with 6.4 TB of disk files
Former job settings: 28 days retention, reversed incremental, optimal compression, LAN target optimization, daily runs
New job settings: 28 days retention, forever incremental (weekly synthetic full), optimal compression, LAN target optimization, daily runs
With this change I expect the following trade-offs:
1) The daily IOPS of reversed incremental (1x read, 2x write during backup) are replaced with plain streamed writes, which our backup server handles much better.
2) The weekly IOPS are a bit higher, as a synthetic full operation requires 2x read, 2x write.
3) The ratio <VM size>:<backup size for 28 days retention> would change from 1:1.5 to approximately 1:3-4.
4) The total IOPS needed per week are half of what was needed before and are much more focused on writes (roughly 70% fewer reads), which suits our backup server well (*)
(*) To explain how I got there, assume we need "1000 IOPS" worth of I/O per daily backup. We know from the Veeam documentation that reversed incremental costs 1 read : 2 writes, so 7 daily backups come to 7,000 reads and 14,000 writes per week. Incremental backups don't need to read the old files, so we have six regular incrementals (6x 1,000 writes) plus the weekly synthetic full (2,000 reads + 2,000 writes), which totals 8,000 writes + 2,000 reads.
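For anyone who wants to play with the numbers, here's a small Python sketch of that weekly comparison. The 1,000-IOPS daily baseline and the per-mode read/write multipliers are the assumptions from the footnote above, not measured values:

```python
# Weekly I/O comparison under the assumptions stated above:
#   baseline: 1000 IOPS worth of I/O per daily backup
#   reversed incremental: 1x read, 2x write, every day
#   forever incremental: 1x write daily; weekly synthetic full is 2x read, 2x write
BASE = 1000
DAYS = 7

# Reversed incremental: every daily run reads 1x and writes 2x.
rev_reads = DAYS * 1 * BASE   # 7000
rev_writes = DAYS * 2 * BASE  # 14000

# Forever incremental: 6 plain incrementals (write only)
# plus 1 weekly synthetic full (2x read, 2x write).
fwd_reads = 2 * BASE                        # 2000
fwd_writes = (DAYS - 1) * BASE + 2 * BASE   # 8000

print(f"reversed: {rev_reads + rev_writes} total ({rev_reads} R / {rev_writes} W)")
print(f"forward:  {fwd_reads + fwd_writes} total ({fwd_reads} R / {fwd_writes} W)")
print(f"read reduction: {1 - fwd_reads / rev_reads:.0%}")
```

Which gives 21,000 weekly I/Os before versus 10,000 after, with reads down about 71% - matching the "half the IOPS, ~70% fewer reads" claim above.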
As I'm very curious whether others have run into this and what they did: what are your performance experiences, how did you analyse them, and what actions did you take to optimize your Veeam setup?