Enthusiast | Posts: 37 | Liked: 4 times | Joined: Mar 09, 2021 7:53 am
Veeam agent for Linux and multithreading
Hello,
I have a performance issue with Veeam Agent for Linux 4.0 and incremental (CBT) backups on a bare-metal RHEL 7 server.
I performed an active full backup of my 5 TB LVM logical volume and got 125 MB/s (hitting the 1 Gbit/s network bottleneck), which is what I expected.
I then had to restart the server (which means the veeamsnap module was unloaded).
The next day I ran an incremental backup of the same LVM logical volume, but I'm only getting around 30 MB/s.
Since the CBT information was lost during the reboot, I expected Veeam to scan all blocks of the logical volume to rebuild the index of blocks that need to be transferred to the backup repository, but I was not expecting that level of slowness.
Looking at the storage array, I see low IOPS usage (low IOPS on the source logical volume and also low IOPS on the target backup repository).
I also checked the RHEL 7 server: resource usage is low there too, except for CPU.
A single core is used at 100% by a process named "veeamdeferio" while the remaining 5 cores appear idle.
Does VAL support multithreading? Is there a way to increase the number of cores used by the backup job (VAL is managed by the VBR console, not standalone)?
If not, would disabling SSL between VAL and the backup proxy give better results?
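To illustrate what I mean by using more cores: below is a rough conceptual sketch in Python, not Veeam code, of how a full-volume block scan could be split across worker processes instead of running on a single core. The device path, block size and worker count are made-up values for the example.
Code:
# Conceptual sketch only -- not Veeam code. It shows how a full-volume block
# scan (hashing every block to rebuild a change map) could be split across
# worker processes instead of running on one core.
# The device path, block size and worker count below are assumptions.
import hashlib
import os
from concurrent.futures import ProcessPoolExecutor

DEVICE = "/dev/vg_data/lv_data"   # hypothetical LVM volume
BLOCK_SIZE = 1024 * 1024          # assumed 1 MiB scan granularity
WORKERS = 6                       # one worker per core on this 6-core host

def hash_range(args):
    """Hash one contiguous byte range of the device and return (offset, digests)."""
    start, length = args
    digests = []
    with open(DEVICE, "rb") as f:
        f.seek(start)
        remaining = length
        while remaining > 0:
            chunk = f.read(min(BLOCK_SIZE, remaining))
            if not chunk:
                break
            digests.append(hashlib.sha1(chunk).hexdigest())
            remaining -= len(chunk)
    return start, digests

def scan_parallel(device_size):
    """Split the device into WORKERS ranges and hash them concurrently."""
    step = device_size // WORKERS          # tail remainder ignored for brevity
    ranges = [(i * step, step) for i in range(WORKERS)]
    with ProcessPoolExecutor(max_workers=WORKERS) as pool:
        return dict(pool.map(hash_range, ranges))

if __name__ == "__main__":
    fd = os.open(DEVICE, os.O_RDONLY)
    size = os.lseek(fd, 0, os.SEEK_END)    # size of the block device in bytes
    os.close(fd)
    scan_parallel(size)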
Thanks,
Product Manager | Posts: 14837 | Liked: 3084 times | Joined: Sep 01, 2014 11:46 am | Full Name: Hannes Kasparick | Location: Austria
Re: Veeam agent for Linux and multithreading
Hello,
"Does VAL support multithreading?"
No, that's something we have on our radar, so we count your request as +1.

"If not, would disabling SSL between VAL and the backup proxy give better results?"
Assuming you mean "backup repository" (because there is no backup proxy involved), it's worth a try. It depends on the hardware. I would be interested in the result (and the hardware).
Best regards,
Hannes
Enthusiast | Posts: 37 | Liked: 4 times | Joined: Mar 09, 2021 7:53 am
Re: Veeam agent for Linux and multithreading
Yes, please count my request, thanks.
Regarding SSL, I meant disabling "Network encryption" in the network rules (which affects the backup proxy, I guess).
I will provide feedback on this change, but first I need to wait for the active full to complete.
Enthusiast | Posts: 37 | Liked: 4 times | Joined: Mar 09, 2021 7:53 am
Re: Veeam agent for Linux and multithreading
Hi,
Disabling SSL didn't improve the situation.
We are still seeing performance issues with VAL and CBT. After a large deletion on the filesystem, our backup throughput decreased even though veeamsnap was never unloaded (the server was not rebooted).
VAL reports the source as the bottleneck, however.
When a block is changed, the CBT map is updated on the VAL host. Is the block size the kernel one (4 KB) or a VAL-specific block size?
Is VAL multithreading planned for the VAL 6.0 release?
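To show why I'm asking about the block size, here is a rough back-of-the-envelope sketch; every number in it is hypothetical and nothing is a confirmed Veeam value. The idea is that if each small metadata update dirties a whole CBT block, a large deletion could force the next incremental to read far more data than was actually changed.
Code:
# Back-of-the-envelope sketch: how CBT tracking granularity could inflate an
# incremental after a large deletion. Every number below is hypothetical;
# the thread does not confirm Veeam's actual CBT block size.
DELETED_FILES = 2_000_000        # assumed number of files removed
for cbt_block_kib in (4, 256, 1024):
    cbt_block = cbt_block_kib * 1024
    # Worst case: every small inode/bitmap update dirties one whole CBT block,
    # so the next incremental has to read cbt_block bytes per touched location.
    read_volume_tib = DELETED_FILES * cbt_block / 1024**4
    print(f"CBT block {cbt_block_kib:>4} KiB -> up to {read_volume_tib:.2f} TiB read")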
Thanks,
Product Manager | Posts: 14837 | Liked: 3084 times | Joined: Sep 01, 2014 11:46 am | Full Name: Hannes Kasparick | Location: Austria
Re: Veeam agent for Linux and multithreading
Hello,
Yes, SSL usually has only a small impact on modern hardware.

"We are still seeing performance issues with VAL and CBT."
Would it be possible to quantify that with numbers? How long does an incremental backup of the 5 TB normally take, and how long did it take after the big deletion?

"VAL reports the source as the bottleneck, however."
What does that mean in numbers? There is always a bottleneck; that's normal.

There is no multi-threading planned in V6. For performance investigations, I recommend opening a support case (please post the case number for reference).
Best regards,
Hannes
Enthusiast | Posts: 37 | Liked: 4 times | Joined: Mar 09, 2021 7:53 am
Re: Veeam agent for Linux and multithreading
Here is some information:
- Source data is located on a RAID6 group of 12 NL-SAS 7.2k RPM disks.
- The source block device (the one being backed up) is 35 TB (23.6 TB used on the filesystem), presented via LVM and formatted as ext4.
- Incremental backups from VAL to the VBR repository took between 1 and 2 hours in recent weeks. For the last few days they have been taking around 10 hours. Backup speed is around 20 to 30 MB/s.
- I agree about the bottleneck; the point of that sentence was to say the bottleneck was the source, not to state the existence of a bottleneck.
Here are the stats from yesterday's Veeam job:
Duration: 17h02
Processing rate: 20 MB/s
Processed: 23.6 TB (100%)
Read: 1.2 TB
Transferred: 1.1 TB (1.1x)
Load: Source 87% > Proxy 8% > Network 13% > Target 15%
nmon shows a lot of CPU I/O wait; however, our SAN storage doesn't show heavy usage on the RAID group that exclusively stores the block device we are backing up (300 read IOPS, while it can do better).
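As a quick sanity check on those numbers (a sketch; the average I/O size used for the IOPS estimate is an assumption, not a value from the job log):
Code:
# Quick sanity check of the posted job numbers (sketch; the average I/O size
# used for the IOPS estimate is an assumption, not a value from the job log).
duration_s = 17 * 3600 + 2 * 60          # 17h02 job duration
read_bytes = 1.2e12                      # "Read: 1.2 TB"
print(f"Effective read rate: {read_bytes / duration_s / 1e6:.0f} MB/s")  # ~20 MB/s

# What ~300 read IOPS on this RAID group would deliver for a given average I/O size:
for io_kib in (4, 64, 256):
    print(f"300 IOPS @ {io_kib:>3} KiB -> {300 * io_kib * 1024 / 1e6:5.1f} MB/s")
If the agent's reads hit the NL-SAS spindles as relatively small, scattered I/Os, ~300 read IOPS would cap throughput at roughly the 20 to 30 MB/s we are seeing, even though the array is nowhere near its sequential limit.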
We'll open a case.
I'm interested if you have the information about the block size used by the Veeam agent:
"When a block is changed, the CBT map is updated on the VAL host. Is the block size the kernel one (4 KB) or a VAL-specific block size?"
Product Manager | Posts: 14837 | Liked: 3084 times | Joined: Sep 01, 2014 11:46 am | Full Name: Hannes Kasparick | Location: Austria
Re: Veeam agent for Linux and multithreading
"...between 1 and 2 hours in recent weeks. For the last few days they have been taking around 10 hours."
That should be investigated by support. Please post the support case number for reference.

"I'm interested if you have the information about the block size used by the Veeam agent."
Yes, we have the information. I remember a value higher than 4 KB, but before investing time searching for it, I would like to know how that information would help you. If you would like to change the software, the snapshot driver is open source: https://github.com/veeam/veeamsnap
Enthusiast | Posts: 37 | Liked: 4 times | Joined: Mar 09, 2021 7:53 am
Re: Veeam agent for Linux and multithreading
Hi,
The case is #05535877, followed by a colleague in the same timezone as our project.
I don't plan to change the software, but I will review the GitHub link if I get some time, thanks.