Salesforce backup and restore for data, files and metadata
lowlander
Service Provider
Posts: 453
Liked: 30 times
Joined: Dec 28, 2014 11:48 am
Location: The Netherlands
Contact:

Scale-up or scale-out regarding performance

Post by lowlander »

As I understand it now, there are two servers required for backing up Salesforce:
- Management server
- Database server (PostgreSQL)

The database server (role) protects the Salesforce organization's data.
The management server (role) protects files and metadata.

The backup process is based on the APIs available from the Salesforce platform.
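
(For illustration only: a minimal Python sketch of what such an API-driven export looks like, assuming a hypothetical org URL and an OAuth token obtained beforehand. This shows the general shape of a paginated REST query, not how VBfS implements it internally.)

# Illustration only: pulling Account records through the Salesforce REST API.
# The instance URL, token and API version are placeholders; real tooling also
# handles the Bulk API, rate limits and incremental sync.
import requests

INSTANCE = "https://example.my.salesforce.com"  # hypothetical org
TOKEN = "<oauth-access-token>"                  # obtained via OAuth beforehand
API_VERSION = "v58.0"

def fetch_records(soql):
    """Run a SOQL query and follow nextRecordsUrl pages until done."""
    url = f"{INSTANCE}/services/data/{API_VERSION}/query"
    params = {"q": soql}
    headers = {"Authorization": f"Bearer {TOKEN}"}
    while True:
        resp = requests.get(url, params=params, headers=headers, timeout=60)
        resp.raise_for_status()
        body = resp.json()
        yield from body["records"]
        if body.get("done", True):
            break
        url = INSTANCE + body["nextRecordsUrl"]  # next page, no query params
        params = None

for record in fetch_records("SELECT Id, Name FROM Account"):
    print(record["Id"], record["Name"])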

How can we increase backup performance to the required level?

When combining these roles on one server, can we consider this a scale-up solution (adding more CPU and memory to increase performance), or is there a scale-out principle we should follow?
MIvanov
VP, Product Management
Posts: 276
Liked: 77 times
Joined: Dec 12, 2008 2:39 pm
Full Name: Maxim
Contact:

Re: Scale-up or scale-out regarding performance

Post by MIvanov »

@lowlander With the current version, scaling is UP only at the moment, and it is limited by what the Salesforce API limits allow.
Each job is a separate process, so you can run them in parallel and play with the schedule. You can assign objects to different schedules, while metadata and files stay on the main schedule.
On the management server, you need to look after CPU/memory for job starts, with 4 CPUs as a minimum.
On the DB server, the first priority for tuning would be IOPS and memory, in my experience. The impact grows over time as more data comes in and indexes are rebuilt.
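
(A minimal sketch of the kind of check that helps with that tuning, assuming the backup database is reachable with psycopg2 and using hypothetical connection details. It only reads the memory settings that usually matter first and the tables whose index footprint grows over time.)

# Sketch: inspect memory settings and the largest tables/indexes of the backup DB.
# Host, database name and credentials below are hypothetical; adjust to your instance.
import psycopg2

conn = psycopg2.connect(host="db-host", dbname="vbfs_org1",
                        user="postgres", password="***")
cur = conn.cursor()

# Memory-related settings that usually matter first
for setting in ("shared_buffers", "work_mem",
                "maintenance_work_mem", "effective_cache_size"):
    cur.execute("SHOW " + setting)
    print(setting, "=", cur.fetchone()[0])

# Largest tables and how much of their footprint is index data
cur.execute("""
    SELECT relname,
           pg_size_pretty(pg_total_relation_size(relid)) AS total_size,
           pg_size_pretty(pg_indexes_size(relid))        AS index_size
    FROM pg_catalog.pg_statio_user_tables
    ORDER BY pg_total_relation_size(relid) DESC
    LIMIT 10
""")
for name, total_size, index_size in cur.fetchall():
    print(name, total_size, index_size)

cur.close()
conn.close()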

In the future, there will be a possibility to scale out workers as containers for one management server.

Multi-tenant
Currently you are limited by the license and have to install one deployment per tenant with a Subscription license. If those tenants are not huge, you can use single-VM deployments.

In the future, there will be a way to run a multi-tenant license as soon as we get the green light from the business team.
MIvanov
VP, Product Management
Posts: 276
Liked: 77 times
Joined: Dec 12, 2008 2:39 pm
Full Name: Maxim
Contact:

Re: Scale-up or scale-out regarding performance

Post by MIvanov »

For multi-tenant, you can of course have one shared PostgreSQL server, and each Salesforce organization will use its own database.
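
(As a rough sketch of that layout, with hypothetical org and role names: one shared PostgreSQL instance simply carries one database, and ideally one owning role, per protected org. CREATE DATABASE cannot run inside a transaction, hence the autocommit.)

# Sketch: one database per Salesforce org on a shared PostgreSQL server.
# Org names, role names, passwords and connection details are hypothetical.
import psycopg2

orgs = ["tenant_a_org", "tenant_b_org"]

conn = psycopg2.connect(host="shared-db-host", dbname="postgres",
                        user="postgres", password="***")
conn.autocommit = True  # CREATE DATABASE cannot run inside a transaction block
cur = conn.cursor()

for org in orgs:
    # A separate owning role per org keeps tenants from reading each other's data
    cur.execute(f"CREATE ROLE \"{org}_owner\" LOGIN PASSWORD 'change-me'")
    cur.execute(f'CREATE DATABASE "{org}" OWNER "{org}_owner"')

cur.close()
conn.close()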
lowlander
Service Provider
Posts: 453
Liked: 30 times
Joined: Dec 28, 2014 11:48 am
Location: The Netherlands
Contact:

Re: Scale-up or scale-out regarding performance

Post by lowlander »

Thanks for the explanation, Maxim. We are reviewing possible boundaries/limitations for VBfS versus the size of a Salesforce organization with regard to backup and restore performance. Are there any guidelines (detailed or global) to determine these boundaries/limitations? For example, is an environment of 20k users with 20 TiB of data doable with a single-server configuration? #Feature request to build a VBfS sizing tool like the one for VBR :)
MIvanov
VP, Product Management
Posts: 276
Liked: 77 times
Joined: Dec 12, 2008 2:39 pm
Full Name: Maxim
Contact:

Re: Scale-up or scale-out regarding performance

Post by MIvanov »

@lowlander 20 TB for one Salesforce instance is a lot, this is big. I would plan a dedicated VM for the database with at least 32 GB RAM and 4 CPUs. You can try a large single server, but over time you might run into memory/CPU limitations, and moving the existing DB to a dedicated instance will be a hassle. I would love to know the final configuration you go with and what the performance was for that org.
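
(Until a sizing tool exists, a back-of-the-envelope helper is about all one can do. The cut-off and resource figures below encode nothing more than the rough numbers mentioned in this thread, a dedicated 4-CPU / 32 GB database VM for an org in the multi-TB range, and are assumptions rather than official guidance.)

# Back-of-the-envelope sizing helper. The cut-off and resource figures only
# reflect the rough numbers discussed in this thread; they are assumptions,
# not official VBfS guidance.
def suggest_db_layout(org_data_tb):
    if org_data_tb >= 10:  # arbitrary illustrative cut-off for "large" orgs
        # Large org: dedicated database VM so it can grow without a migration later
        return {"dedicated_db_vm": True, "cpu": 4, "ram_gb": 32}
    # Smaller org: a combined management + database server is usually workable
    return {"dedicated_db_vm": False, "cpu": 4, "ram_gb": 16}

print(suggest_db_layout(20))  # the ~20 TB org discussed above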

+1 for the sizing tool, we will consider that.