Backup of enterprise applications (Microsoft stack, IBM Db2, MongoDB, Oracle, PostgreSQL, SAP)
mdiver
Veeam Legend
Posts: 238
Liked: 39 times
Joined: Nov 04, 2009 2:08 pm
Contact:

Plugin backup to DataDomain - very high CPU consumption on gateway server

Post by mdiver »

In a customer project the HANA plugin backups have to go to a DataDomain system (non-FC, set up according to Dell guidelines -> decompress before storing, etc.).

According to https://bp.veeam.com/vbr/2_Design_Struc ... e_Plugins/ one should size 1 CPU / 1 GB RAM per enterprise plugin channel.

Our single Windows 2022 gateway VM has 12 vCPUs and 16 GB RAM, and even a single plugin job backing up a ~4 TB HANA DB with 6 channels already uses >70% of the host's CPU. With those 12 CPUs we should be able to accommodate 10x more channels (~60).

Any ideas why the CPU usage is so high?
It prevents us from scaling the solution.
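
For reference, the back-of-the-envelope behind that (a quick, purely illustrative Python sketch; the numbers are just the ones quoted above):

vcpus = 12          # gateway VM size
channels = 6        # channels used by the HANA plugin job
cpu_usage = 0.70    # observed overall CPU usage (>70%)

vcpu_per_channel = vcpus * cpu_usage / channels
print(f"observed: ~{vcpu_per_channel:.1f} vCPU per channel")
print(f"channels a fully loaded 12-vCPU gateway could run: ~{vcpus / vcpu_per_channel:.0f}")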

Thanks,
Mike
PetrM
Veeam Software
Posts: 3812
Liked: 643 times
Joined: Aug 28, 2013 8:23 am
Full Name: Petr Makarov
Location: Prague, Czech Republic
Contact:

Re: Plugin backup to DataDomain - very high CPU consumption on gateway server

Post by PetrM »

Hello,

My only hypothesis is that decompression consumes CPU. I will discuss this with the team to explore the possibility of influencing this process through plug-in configuration fine-tuning. Additionally, are you encountering similar issues with other workloads, such as VM or physical machine backups?

Thanks!
mdiver
Veeam Legend
Posts: 238
Liked: 39 times
Joined: Nov 04, 2009 2:08 pm
Contact:

Re: Plugin backup to DataDomain - very high CPU consumption on gateway server

Post by mdiver »

Thanks for your reply, Petr.

Currently we only have a single HANA plugin and two Linux agents backing up to the PoC VBR server in this customer's environment.

For the agent backups we did not observe anything unusual. But here - as usual with Linux agents - we only see a single data mover on the gateway per agent (not parallel tasks as with Windows Server agents).

The HANA plugin with 6 channels showed up with 6 data movers (veeamagent.exe) on the gateway, each consuming ~12% of the VM's total CPU (~72% for all 6 channels) for the complete runtime of the full DB backup (~3.5 h for ~4 TB).
Could DD Boost interfere here? It also runs on the gateway, as far as I understand.

Thanks,
Mike
PetrM
Veeam Software
Posts: 3812
Liked: 643 times
Joined: Aug 28, 2013 8:23 am
Full Name: Petr Makarov
Location: Prague, Czech Republic
Contact:

Re: Plugin backup to DataDomain - very high CPU consumption on gateway server

Post by PetrM »

Hi Mike,

Let's try to disable compression on the source side in the config /opt/veeam/VeeamPluginforSAPHANA/veeam_config.xml:
<AgentParams compression="NoCompression"/>
(the <AgentParams /> node is empty by default)

This will probably allow us to save resources on the decompression performed on the gateway server. If you don't see any improvement after that, please open a support case and share the case ID with us so that we can make sure it's on the right track.
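
If you prefer to script that change instead of editing the file by hand, a minimal sketch with the Python standard library could look like this (illustration only, not an official Veeam tool; the path is the one quoted above, and note that ElementTree does not preserve comments or formatting):

import shutil
import xml.etree.ElementTree as ET

CONFIG = "/opt/veeam/VeeamPluginforSAPHANA/veeam_config.xml"

shutil.copy2(CONFIG, CONFIG + ".bak")  # keep a backup of the original file

tree = ET.parse(CONFIG)
root = tree.getroot()

# The AgentParams node may be the root itself or a child element.
node = root if root.tag == "AgentParams" else root.find(".//AgentParams")
if node is None:
    raise SystemExit("AgentParams node not found - please edit the file manually")

node.set("compression", "NoCompression")
tree.write(CONFIG, xml_declaration=True, encoding="utf-8")
print("compression attribute set to NoCompression")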

Thanks!
Andreas Neufert
VP, Product Management
Posts: 7202
Liked: 1547 times
Joined: May 04, 2011 8:36 am
Full Name: Andreas Neufert
Location: Germany
Contact:

Re: Plugin backup to DataDomain - very high CPU consumption on gateway server

Post by Andreas Neufert » 1 person likes this post

To add some additional background information.

CPU consumption usually depends on the total throughput and is only influenced to a limited extent by the number of channels. Excessive channel usage does of course cause more overhead, but that is not the case here.

Regular backup planning assumes situations where you do not decompress the data before storing it. For your dedup storage you need to plan for additional CPU resources on the gateway server, compared with a regular repository or SMB gateway, because of the decompression task.

Writing compressed data into a dedup device is not good, as the dedup engine gets only unique data and cannot do its job.
Now you have 2 options:
1) Deactivate compression on the source (see above).
2) Spend the CPU resources on the gateway. Remember that in any case we try to consume as many CPU resources on the gateway as we can to speed up the processing (and not become the bottleneck there).

Use 1) if the network will not become the bottleneck (remember that compressed data is only 1/2 of the data to transport over the network).
Use 2) if you do not have enough network bandwidth for the overall processing and can afford the extra CPU on the gateway.
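
As a rough illustration of that trade-off (a sketch with assumed example values; the 2:1 ratio is just the "compressed data is only 1/2" rule of thumb from above):

backup_size_tb = 4.0       # size of the full backup
backup_window_h = 3.5      # time the backup may take
compression_ratio = 2.0    # source compression roughly halves the wire traffic
link_gbit = 10.0           # available network bandwidth

wire_uncompressed = backup_size_tb * 8000 / (backup_window_h * 3600)  # Gbit/s, decimal units
wire_compressed = wire_uncompressed / compression_ratio

print(f"wire rate without source compression: ~{wire_uncompressed:.1f} Gbit/s")
print(f"wire rate with source compression:    ~{wire_compressed:.1f} Gbit/s")

if wire_uncompressed < link_gbit:
    print("option 1: the link can take the uncompressed stream, save gateway CPU")
else:
    print("option 2: keep source compression, spend gateway CPU on decompression")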
mdiver
Veeam Legend
Posts: 238
Liked: 39 times
Joined: Nov 04, 2009 2:08 pm
Contact:

Re: Plugin backup to DataDomain - very high CPU consumption on gateway server

Post by mdiver »

Thanks Petr and Andy.

All that sounds totally reasonable. The throughput, even with the currently low number of channels (6), was quite high (~5 Gbit/s).
So this could explain the high CPU consumption according to what Andreas wrote.

But wouldn't the recommendation of 5 channels per CPU then always lead to CPU-constrained systems? If we can already saturate 50% of the bandwidth with 1.2 CPUs (5+1 channels), why would it make sense to have more than 3 CPUs in a repo/gateway with a 10 Gbit interface?
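
Just to make the math behind that question explicit (numbers as quoted above):

observed_gbit = 5.0    # throughput seen with 6 channels
cpus_sized = 1.2       # 6 channels at the "5 channels per CPU" sizing
link_gbit = 10.0       # repo/gateway network interface

gbit_per_cpu = observed_gbit / cpus_sized
print(f"~{gbit_per_cpu:.1f} Gbit/s per sized CPU")
print(f"CPUs needed to saturate the 10 Gbit link: ~{link_gbit / gbit_per_cpu:.1f}")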

We do not write compressed data into the dedupe appliance. Of course, we have the setting active to decompress before storing within the repo.

Depending on the settings in global.ini and veeam_config.xml, we might compress on the plugin side.
So the transfer from the HANA system to the gateway would be compressed in transit only.

The plugin runs in managed mode. There is no way to control this setting from within the Veeam console, I think.
Only the number of channels can be adjusted in the "SAP HANA" tab of the backup policy's advanced settings.
Is compression switched on in the plugin by default?

The PoC is currently in the decision phase with the customer.
I'll check and report back the values without the in-transit compression once I get back to the customer in the implementation phase.
Andreas Neufert
VP, Product Management
Posts: 7202
Liked: 1547 times
Joined: May 04, 2011 8:36 am
Full Name: Andreas Neufert
Location: Germany
Contact:

Re: Plugin backup to DataDomain - very high CPU consumption on gateway server

Post by Andreas Neufert » 1 person likes this post

It really depends on the CPU consumption on the repository. If the throughput is low, or decompression is not enabled on the repository, then the CPU just needs to handle the data transport and some checksum generation, which means the 5 channels per CPU core are okay-ish. Remember that the best practice is to land the primary backup on a non-dedup storage system, and the best practices and sizing recommendations are written that way. When you deviate from the best practices, you need to change the hardware setup accordingly.

Now that you perform decompression, the CPU consumption depends on the overall throughput, not on the number of channels. Do you mean 5 Gbit/s or 5 GByte/s?

As I am out of the office I will ask someone else to comment about how to disable compression on the HANA server in the Veeam config file.
Thanks.
PetrM
Veeam Software
Posts: 3812
Liked: 643 times
Joined: Aug 28, 2013 8:23 am
Full Name: Petr Makarov
Location: Prague, Czech Republic
Contact:

Re: Plugin backup to DataDomain - very high CPU consumption on gateway server

Post by PetrM » 1 person likes this post

Hello,

Even though the plug-in works in managed mode, the method to disable compression in veeam_config.xml that I shared above is still relevant. I suggest trying it to see if the CPU consumption decreases.

Thanks!