Standalone backup agents for Linux, Mac, AIX & Solaris workloads on-premises or in the public cloud
aj_potc
Expert
Posts: 138
Liked: 33 times
Joined: Mar 17, 2018 12:43 pm
Contact:

Problem with veeammount causing big delays?

Post by aj_potc »

Case #: 04629941

Hi there,

I've got a CentOS 8 system set up with software (md) RAID with four disks. I'm using B&R on a Windows server to manage the backups.

My initial backup didn't report any issues via B&R, but I noticed that incrementals seemed much slower than I expected (nearly two hours on a system with minimal changes).

I've found errors in the Linux agent logs related to the veeammount process. I suspect this is causing big delays in the backups of this system, as Veeam is attempting to mount multiple devices that (apparently) are not intended to be mounted.

Here's an excerpt from the logs:

Code:

[05.02.2021 20:40:36.713] vmnt | //////////////////////////////////////////////////////////////////////////
[05.02.2021 20:40:36.725] vmnt | name /usr/sbin/veeammount
[05.02.2021 20:40:36.725] vmnt | Mount
[05.02.2021 20:40:36.725] vmnt | deviceName=/dev/veeamimage0
[05.02.2021 20:40:36.725] vmnt | mountPointName=/tmp/{213ceac1-20b8-461a-9071-557c8b30b3c5}/
[05.02.2021 20:40:36.727] vmnt | Mount Failed.
[05.02.2021 20:40:36.727] vmnt | ERR |Value "TYPE" not found in probe.
[05.02.2021 20:40:36.727] vmnt | >> |Failed to get filesystem of device [/dev/veeamimage0].
[05.02.2021 20:40:36.727] vmnt | >> |--tr:Mounting device [/dev/veeamimage0] on mount point [/tmp/{213ceac1-20b8-461a-9071-557c8b30b3c5}/] failed.
[05.02.2021 20:40:36.727] vmnt | >> |An exception was thrown from thread [140555454233536].

These messages appear several times during the backup process. The system seems to wait 10-30 minutes after each error before it moves on to the next device (/dev/veeamimage[0 through 5]). Maybe this is some kind of timeout? The system appears completely idle during these periods.

Eventually, the backup does finish successfully according to the B&R server. But the backup process takes much longer than it should, and it's sitting idle most of the time.

It appears to me that the system is trying to mount devices that are not mountable.

Here's the output from lsblk that might be of interest. These veeamimage devices are listed after my four sd[a-d] devices:

Code:

veeamimage0 253:0    1    1M  0 disk
veeamimage1 253:1    1 1023M  0 disk
veeamimage2 253:2    1    1M  0 disk
veeamimage3 253:3    1  7.3T  0 disk
veeamimage4 253:4    1    1M  0 disk
veeamimage5 253:5    1    1M  0 disk
Veeam seems to get "stuck" on mounting devices 0, 1, 3, and 4.
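In case it helps with troubleshooting: the "Value "TYPE" not found in probe" error suggests the agent simply can't detect a filesystem on those snapshot devices. That's easy to confirm from a root shell while a job is running (the veeamimage devices only exist while the snapshot is held), using nothing more than lsblk:

```shell
# Check each Veeam snapshot device for a filesystem signature.
# An empty FSTYPE would match the "Value TYPE not found in probe" error.
for dev in /dev/veeamimage{0..5}; do
    fstype=$(lsblk -no FSTYPE "$dev" 2>/dev/null)
    echo "${dev}: ${fstype:-no filesystem detected}"
done
```

If the 1M devices report no filesystem, that would line up with the mount failures in the log.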

Does anyone know if these veeammount errors are normal? Or should I be looking elsewhere for troubleshooting the backup delays?

Thank you very much for any thoughts.

Re: Problem with veeammount causing big delays?

Post by aj_potc »

I thought I would give a few more details after capturing some statistics from a weekend incremental backup job, which ran for over two hours.

Just as a data point, the first full backup job of 2.6 TB ran at network line speed and took about 6 hours.

Here are the general statistics:

Code:

Sunday, February 7, 2021 2:00:04 AM
Success	1	Start time	2:00:04 AM	Total size	2.6 TB	Backup size	58.6 MB	 
Warning	0	End time	4:05:52 AM	Data read	15 GB	Dedupe	1.0x
Error	0	Duration	2:05:47	Transferred	58.6 MB	Compression	1.5x

Here are the job progress details:

Code:

2/7/2021 2:00:12 AM :: Total size: 21.8 TB (2.6 TB used)  
2/7/2021 2:00:16 AM :: Backup file will be encrypted  
2/7/2021 2:00:17 AM :: Queued for processing at 2/7/2021 2:00:17 AM  
2/7/2021 2:00:17 AM :: Required backup infrastructure resources have been assigned  
2/7/2021 2:00:31 AM :: Network traffic will be encrypted  
2/7/2021 2:00:37 AM :: Preparing to backup  
2/7/2021 2:00:37 AM :: Creating volume snapshot  
2/7/2021 2:00:39 AM :: Starting incremental backup to [backup_server]  
2/7/2021 2:00:43 AM :: File system indexing is disabled  
2/7/2021 2:01:00 AM :: Backed up md126 120.8 MB at 52.6 KB/s  
2/7/2021 2:40:12 AM :: Backing up BIOS bootloader on /dev/sdc  
2/7/2021 2:40:15 AM :: Backed up sdc 344 KB at 289 B/s
2/7/2021 3:00:31 AM :: Backed up md127 14.9 GB at 416.3 MB/s  
2/7/2021 3:01:09 AM :: Backing up BIOS bootloader on /dev/sdb  
2/7/2021 3:01:11 AM :: Backed up sdb 344 KB at 273 B/s  
2/7/2021 3:22:39 AM :: Backed up md125 2.5 MB at 288.9 KB/s  
2/7/2021 3:22:49 AM :: Backing up BIOS bootloader on /dev/sdd  
2/7/2021 3:22:51 AM :: Backed up sdd 344 KB at 281 B/s  
2/7/2021 3:43:43 AM :: Backing up BIOS bootloader on /dev/sda  
2/7/2021 3:43:46 AM :: Backed up sda 344 KB at 298 B/s
2/7/2021 4:03:33 AM :: Backing up summary.xml  
2/7/2021 4:03:37 AM :: Releasing snapshot  
2/7/2021 4:05:47 AM :: Network traffic verification detected no corrupted blocks  
2/7/2021 4:05:47 AM :: Processing finished at 2/7/2021 4:05:47 AM  
I find the pauses after Backed up sd[a-d] to be a possible smoking gun, especially considering that each one lasted almost exactly 20 minutes, during which time nothing seems to be happening.

Only the backups of the two md devices (md126 and md127) appear to actually do anything productive. This makes sense, as these are the actual storage devices presented to the OS.
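To put actual numbers on those pauses, the gaps can be computed straight from the job-log timestamps. Here's a small awk sketch over a few of the lines pasted above (no day-rollover handling, which is fine for a single overnight run):

```shell
# Measure the idle gaps between consecutive job-log lines.
# Sample lines are pasted from the job report above.
awk -F' :: ' '{
    split($1, a, " ")          # a[2] = time, a[3] = AM/PM
    split(a[2], t, ":")
    s = (t[1] % 12 + (a[3] == "PM" ? 12 : 0)) * 3600 + t[2] * 60 + t[3]
    if (prev) printf "%.0f min gap before: %s\n", (s - prev) / 60, $2
    prev = s
}' <<'EOF'
2/7/2021 2:01:00 AM :: Backed up md126 120.8 MB at 52.6 KB/s
2/7/2021 2:40:12 AM :: Backing up BIOS bootloader on /dev/sdc
2/7/2021 2:40:15 AM :: Backed up sdc 344 KB at 289 B/s
2/7/2021 3:00:31 AM :: Backed up md127 14.9 GB at 416.3 MB/s
EOF
```

Run over the full report, this shows a roughly 20-minute dead gap after each "Backed up sd[a-d]" line (and a 39-minute one between md126 and sdc), which fits the timeout theory.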

I hope this helps a little. Thanks again for any help!
PTide
Product Manager
Posts: 6408
Liked: 724 times
Joined: May 19, 2015 1:46 pm
Contact:

Re: Problem with veeammount causing big delays?

Post by PTide »

Hi,

What objects did you pick as a backup source? Mountpoints, raw disks, RAID devices? And what about the support team - have they found anything yet?

Thanks!

Re: Problem with veeammount causing big delays?

Post by aj_potc »

Thanks for your kind reply, @PTide.

I chose the entire system to be backed up. I didn't choose any specific devices.

I have provided a number of (hopefully useful) logs to support from both the agent and from the B&R server, but unfortunately they have not replied.

Are there known issues with backing up md RAID devices?

Thanks again!

Re: Problem with veeammount causing big delays?

Post by aj_potc »

An update:

After not getting any response from Veeam support, I decided to try changing the backup source. Instead of choosing "Entire computer," I've specified two md devices directly:

/dev/md125 (the /boot partition)
/dev/md126 (the root partition)

This cuts the incremental backup time pretty significantly.

This change seems to prevent veeammount from hitting any errors -- no "Mount Failed" errors are displayed in veeammount.log, as they were before. I suspected that those mount errors were part of the reason for my backup issues.
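In case it's useful to anyone setting this up the same way: findmnt (part of util-linux, so it's already on CentOS 8) is a quick way to confirm which md arrays back the live filesystems before selecting them explicitly:

```shell
# Map each mountpoint worth protecting back to its source device,
# to confirm which md arrays to pick as explicit backup sources.
for mp in / /boot; do
    findmnt -no SOURCE,FSTYPE,TARGET "$mp"
done
```

On this box that should print /dev/md126 xfs / and /dev/md125 xfs /boot, matching the two devices I picked.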

Am I on the right track by using volume-based backups instead of "entire computer" for md RAID systems? Will I still be able to do bare metal restores using this backup method? I just want to be sure that my Veeam backups will be complete. I do plan to test this, but it doesn't hurt to ask first!

Thanks for any feedback or tips.

Re: Problem with veeammount causing big delays?

Post by PTide »

Given the circumstances - yes, you are on the right track.
However, I would expect the Agent to back up the Entire machine in the same amount of time as it does when you explicitly select all md volumes.

Would you post your lsblk -af output, please?

Thanks!

Re: Problem with veeammount causing big delays?

Post by aj_potc »

Thanks again for your help. Here's the lsblk output you requested:

Code:

NAME      FSTYPE            LABEL                      UUID                                 MOUNTPOINT
loop0
loop1
sda
├─sda1
├─sda2    linux_raid_member [hostname]:swap ac384d68-ab80-b736-0f01-c5cdb4ac920e
│ └─md127 swap                                         bc9e71c5-d790-4bd3-a21e-2152746bb6b4 [SWAP]
├─sda3    linux_raid_member [hostname]:boot 5c3e45af-b567-e0c1-8faa-dee803284f92
│ └─md125 xfs                                          f1293733-da16-4875-afce-37bccb00a012 /boot
└─sda4    linux_raid_member [hostname]:root 5b911e90-58d0-26f2-0711-3ceea6868b16
  └─md126 xfs                                          f115c75c-076d-4069-916b-cf3b59c453ea /
sdb
├─sdb1
├─sdb2    linux_raid_member [hostname]:swap ac384d68-ab80-b736-0f01-c5cdb4ac920e
│ └─md127 swap                                         bc9e71c5-d790-4bd3-a21e-2152746bb6b4 [SWAP]
├─sdb3    linux_raid_member [hostname]:boot 5c3e45af-b567-e0c1-8faa-dee803284f92
│ └─md125 xfs                                          f1293733-da16-4875-afce-37bccb00a012 /boot
└─sdb4    linux_raid_member [hostname]:root 5b911e90-58d0-26f2-0711-3ceea6868b16
  └─md126 xfs                                          f115c75c-076d-4069-916b-cf3b59c453ea /
sdc
├─sdc1
├─sdc2    linux_raid_member [hostname]:swap ac384d68-ab80-b736-0f01-c5cdb4ac920e
│ └─md127 swap                                         bc9e71c5-d790-4bd3-a21e-2152746bb6b4 [SWAP]
├─sdc3    linux_raid_member [hostname]:boot 5c3e45af-b567-e0c1-8faa-dee803284f92
│ └─md125 xfs                                          f1293733-da16-4875-afce-37bccb00a012 /boot
└─sdc4    linux_raid_member [hostname]:root 5b911e90-58d0-26f2-0711-3ceea6868b16
  └─md126 xfs                                          f115c75c-076d-4069-916b-cf3b59c453ea /
sdd
├─sdd1
├─sdd2    linux_raid_member [hostname]:swap ac384d68-ab80-b736-0f01-c5cdb4ac920e
│ └─md127 swap                                         bc9e71c5-d790-4bd3-a21e-2152746bb6b4 [SWAP]
├─sdd3    linux_raid_member [hostname]:boot 5c3e45af-b567-e0c1-8faa-dee803284f92
│ └─md125 xfs                                          f1293733-da16-4875-afce-37bccb00a012 /boot
└─sdd4    linux_raid_member [hostname]:root 5b911e90-58d0-26f2-0711-3ceea6868b16
  └─md126 xfs                                          f115c75c-076d-4069-916b-cf3b59c453ea /