Looking to upgrade our old Windows Server 2016 SOBR to a hardened Linux repository. I'm trying to make sure I understand the sizing calculations for CPU and memory; we're looking at a 45Drives box with 15x 20TB spinning drives.
Our inventory:
1 Hyper-V cluster with 3 hosts
3 stand-alone Hyper-V hosts
80 VMs - all Windows 2016, Windows 10, or newer guests
7 total jobs - most with 8-10 VMs per job, one job with 30 VMs
10Gb network
Reading the Task Limitations for Backup Repositories and System Requirements for Backup Repository Server docs: at 2 tasks per core, a 10-core CPU = 20 tasks. 20 tasks at 500MB each, plus the 4GB base, is 14GB RAM.
Do I have that right? Would I wind up maxing out the storage I/O before even getting to 20 tasks?
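The arithmetic above can be sketched as a quick back-of-the-envelope calculation, using only the figures from the question (2 tasks per core, 500MB per task, 4GB base; the per-task value gets revisited below):

```python
# Back-of-the-envelope repository sizing from the figures in the question.
cores = 10
tasks_per_core = 2
max_tasks = cores * tasks_per_core             # 20 concurrent tasks

base_ram_gb = 4
ram_per_task_gb = 0.5                          # 500 MB, the bare minimum
total_ram_gb = base_ram_gb + max_tasks * ram_per_task_gb

print(max_tasks)       # 20
print(total_ram_gb)    # 14.0
```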
I also saw https://www.veeam.com/blog/hardened-lin ... tices.html, which says that in most cases one can save time by simply getting around two CPUs with 16-24 cores each and 128GB RAM, and that for high-density servers with around 60 or more disks most vendors put in 192-256GB RAM. So I'm wondering which applies to me.
- Enthusiast
- Posts: 82
- Liked: 3 times
- Joined: May 06, 2015 10:57 pm
- Full Name: Mark Valpreda
- Chief Product Officer
- Posts: 32223
- Liked: 7590 times
- Joined: Jan 01, 2006 1:01 am
- Location: Baar, Switzerland
Re: CPU and memory calculations for hardened XFS repository
You have a very small environment, so you can basically stay with the minimal system requirements for all components.
However, I would not use 500MB, as this is the absolute bare minimum and does not even cover all use cases. For example, during Instant Recovery there's a 1GB cache per disk, which is twice as much!
I like using 4GB + 4GB per concurrently processed machine for estimations (as per the System Requirements in the Release Notes document). I would not go below 32GB in any case, simply because RAM is cheap, while running out of RAM is consistently the top reason for backup/restore reliability problems in support. Preferably even 64GB, so you don't have to worry about this in the future as Veeam develops new features and capabilities that might require more resources. I put 64GB in my new home workstation for the same reason: not to worry about it five years later if I need to work with something RAM-hungry (the price difference compared to 32GB was laughable).
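The 4GB + 4GB rule of thumb above, with the 32GB practical floor, can be sketched as a small helper (the function name is mine, and the 20-task figure is an assumption based on the 10-core CPU discussed in this thread):

```python
def repo_ram_gb(concurrent_machines, floor_gb=32):
    """4 GB base + 4 GB per concurrently processed machine,
    never below the practical floor (32 GB per the advice above)."""
    return max(4 + 4 * concurrent_machines, floor_gb)

print(repo_ram_gb(4))     # 32 -- the floor applies
print(repo_ram_gb(10))    # 44
print(repo_ram_gb(20))    # 84 -- all 20 task slots busy at once
```

Note that with all 20 task slots busy the formula lands well above 64GB; in a small environment like this, actual concurrency is typically limited by job scheduling, which is why the 32-64GB range is reasonable in practice.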
- Full Name: Mark Valpreda
Re: CPU and memory calculations for hardened XFS repository
I figured this was small in the grand scheme of things.
I did get a quote for the machine with 32GB of RAM, but I can get that adjusted to 64GB.
Would a single 10-core CPU be okay in this use case?

- Chief Product Officer
Re: CPU and memory calculations for hardened XFS repository
Totally, CPU is almost never a bottleneck for repositories.