
Best Practices for Optimizing Veeam Backup Performance

Post by Joanna »

Hello everyone,

I’ve been using Veeam Backup & Replication for a while now, but I’ve been experiencing some performance issues, particularly with backup speed and storage efficiency. I’m hoping to get some insights from the community on best practices for optimizing backup performance.

Here are some details about my setup:
  • Veeam Version: [Specify your version]
  • Backup Mode: Forward Incremental / Reverse Incremental / Synthetic Full
  • Storage: [Type of storage – local NAS, SAN, cloud, etc.]
  • Network: [Network details – 1Gbps, 10Gbps, etc.]
  • Source VMs: [Number and size of VMs being backed up]
Some specific issues I’ve encountered:
  • Slow Backup Speeds – I’ve noticed that my incremental backups are taking longer than expected. Are there any particular tweaks (such as repository settings, transport modes, or proxy configurations) that can help speed things up?
  • Storage Usage Concerns – Even with deduplication and compression enabled, my storage consumption seems higher than expected. Are there specific settings or retention policies that have worked well for you to optimize storage use?
  • Best Practices for Synthetic Full Backups – I’m considering switching from active full to synthetic full backups to reduce impact on production workloads. What are the pros and cons, and is there a preferred schedule to follow?
If anyone has experience optimizing their Veeam setup or can point me toward relevant documentation, I’d really appreciate your input!

Thanks in advance for your help.

Regards
Joanna

Re: Best Practices for Optimizing Veeam Backup Performance

Post by david.domask »

Hi Joanna, welcome to the forums.

Sorry to hear about the challenges with backup performance and storage usage.

1. Backup Speeds

Can you share a few more details here? From the job statistics window, what is shown as the bottleneck for the most recent run? When we say "slow" here, can you put some numbers to it? How fast does it transfer according to the job statistics? Similarly, if you click on a VM in the job statistics window, you should see details on the proxy selection and what transport mode (Hotadd, NBD, SAN, NFS) was used.

I'm afraid we can't really suggest specific tweaks without knowing these details; performance is very much bound to the environment, so checking this first is the best bet.
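
To help put numbers on it, here is a very rough back-of-the-envelope sketch. The link speed, changed-data size, and overhead factor below are assumptions for illustration, not your environment's figures; substitute the values from your own job statistics.

    # Rough estimate of the hard floor on a network-bound backup window.
    # All inputs are assumptions for illustration; replace them with the
    # figures from the job statistics window (Processed / Read / Transferred).

    link_gbps = 1.0            # assumed network link speed in Gbit/s
    changed_data_gb = 500.0    # assumed changed (incremental) data to move, in GB
    protocol_overhead = 0.85   # assume ~85% of raw line rate is usable in practice

    usable_mb_per_s = link_gbps * 1000 / 8 * protocol_overhead   # ~106 MB/s on 1 Gbps
    minutes = changed_data_gb * 1024 / usable_mb_per_s / 60

    print(f"Usable throughput: ~{usable_mb_per_s:.0f} MB/s")
    print(f"Minimum window for {changed_data_gb:.0f} GB of changed data: ~{minutes:.0f} minutes")

If the job statistics show speeds well below that kind of figure, the reported bottleneck (Source, Proxy, Network, Target) tells you where to look next, which is why the actual numbers matter.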

2. Storage Usage concerns

The backup mode (Forward Incremental, Forever Forward Incremental, Reverse Incremental) doesn't have a huge impact on individual backup file sizes; what drives consumption is how many restore points and full backups you keep. Forward Incremental requires periodic full backups (Active Full or Synthetic), and naturally additional full backups require more space. If your repository supports Fast Clone (ReFS or XFS), Synthetic Fulls are effectively "space-less" because existing blocks are referenced rather than copied.
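
Purely as an illustration of why the number of retained fulls tends to dominate consumption, here is a rough sketch. The full size, change rate, and retention values are assumed, and real usage also depends on your dedupe and compression ratios.

    # Illustrative storage estimate for a forward incremental chain.
    # Assumed inputs; real consumption depends on dedupe/compression,
    # change rate, and whether Fast Clone (ReFS/XFS) is available.

    full_backup_gb = 2000.0      # assumed size of one full backup on disk
    daily_change_rate = 0.05     # assume ~5% of data changes per day
    retention_days = 30          # assumed retention in restore points
    fulls_kept = 5               # e.g. weekly synthetic fulls kept within retention

    incremental_gb = full_backup_gb * daily_change_rate
    incrementals_kept = retention_days - fulls_kept

    without_fast_clone = fulls_kept * full_backup_gb + incrementals_kept * incremental_gb
    # With Fast Clone, a synthetic full mostly references existing blocks,
    # so the chain behaves closer to one full plus the retained incrementals.
    with_fast_clone = full_backup_gb + (retention_days - 1) * incremental_gb

    print(f"Approx. space with periodic fulls copied in full: ~{without_fast_clone:.0f} GB")
    print(f"Approx. space with Fast Clone synthetic fulls:    ~{with_fast_clone:.0f} GB")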

But can you clarify whether there are simply more backup files than you're expecting, or whether the individual backups seem larger than expected? If it's the latter, can you explain a bit more about why they seem too big?

3. Synthetic Fulls

Your idea makes sense; that is exactly what Synthetic Fulls are for. The schedule is entirely up to you and your requirements, but a common configuration is a weekly Synthetic Full. Since Synthetic Fulls shift the workload to the repository, it's worth testing with a single job first to get an idea of the performance your storage can offer. You can also run a sample test with DiskSpd to see how well your storage handles the Synthetic Full IO pattern. Please note, however, that the recommended architecture is a Fast Clone capable repository (XFS or ReFS), as this greatly improves Synthetic Full performance.
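
To make the "shift the workload to the repository" point concrete, here is a rough sketch. The throughput figure and backup size are assumptions, not measured values; measure your own repository's mixed read/write throughput (for example with DiskSpd) before drawing conclusions.

    # Rough estimate of synthetic full transform time on the repository.
    # Without Fast Clone, building the synthetic full reads blocks from the
    # existing chain and writes them into a new full file (~read + write of
    # one full backup). With Fast Clone (ReFS/XFS), blocks are referenced
    # instead of copied, so the transform is mostly a metadata operation.
    # All numbers below are assumptions for illustration.

    full_backup_gb = 2000.0          # assumed size of the full backup file
    repo_mixed_io_mb_s = 150.0       # assumed repository throughput for the
                                     # mixed read/write pattern (measure it yourself)

    io_to_move_gb = 2 * full_backup_gb           # read old blocks + write new full
    hours = io_to_move_gb * 1024 / repo_mixed_io_mb_s / 3600

    print(f"Estimated transform time without Fast Clone: ~{hours:.1f} hours")
    print("With Fast Clone the same operation is largely metadata-only.")
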
David Domask | Product Management: Principal Analyst