Standalone backup agents for Linux, Mac, AIX & Solaris workloads on-premises or in the public cloud
stephenc
Influencer
Posts: 12
Liked: never
Joined: Jun 18, 2020 12:23 am
Full Name: SC
Contact:

Linux Restore Fail to Mount 04196282

Post by stephenc »

New to Veeam and trying to set up a file-level backup of a CentOS 8 physical server. It's being backed up correctly via the Linux agent, and I can initiate a restore from any of the restore points. However, nothing gets mounted at /mnt/backup. When I try to run xfs_repair -n /dev/loop0, I get "Bad primary superblock - bad magic number." If I understand this correctly, /dev/loop0 is just the restore data being presented as a block device? Why would it have a bad superblock, and how can this be resolved?
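Would the output from something like this help? These are just standard tools (file, blkid from util-linux), nothing Veeam-specific, run against the loop device while the restore point is mounted:
# file -s /dev/loop0    # report any filesystem signature found on the device
# blkid /dev/loop0      # print TYPE/UUID if a known superblock is recognized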

Thanks.

Case 04196282
PTide
Product Manager
Posts: 6408
Liked: 724 times
Joined: May 19, 2015 1:46 pm
Contact:

Re: Linux Restore Fail to Mount 04196282

Post by PTide »

Hi,
If I understand this correctly, /dev/loop0 is just the restore data being presented as block device?
That's right.
Why would it have bad superblock? How can this be resolved?
My guess is that since the filesystem in the block device is not xfs (in fact, it's a tuned ext4), xfs_repair won't work with it.
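If you want to sanity-check the block device yourself, the read-only ext4 equivalent would be something along these lines (standard e2fsprogs tools, not Veeam utilities, and they may still complain if the on-disk format isn't plain ext4):
# e2fsck -n /dev/loop0     # -n = check only, answer "no" to everything, no writes
# dumpe2fs -h /dev/loop0   # dump superblock/header information if it is readable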

As for why the mount point is empty - it's hard to tell without seeing the logs. Please wait for the support engineer to respond.

Meanwhile, could you tell me what the backup job options were (source, target, filesystem of the source)?

Thanks!
stephenc
Influencer
Posts: 12
Liked: never
Joined: Jun 18, 2020 12:23 am
Full Name: SC
Contact:

Re: Linux Restore Fail to Mount 04196282

Post by stephenc »

Source is a physical Dell server running CentOS 8 with SAN volumes formatted with xfs.
Target is a Veeam B&R server running on Windows Server 2019 on VMware 6.7 with a local repository.
The backup goes through a Veeam proxy, also on Windows Server 2019 and VMware 6.7.
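If the exact source layout helps, I can pull it with the usual tools (the share path below is just a placeholder):
# lsblk -f                 # block devices with filesystem type, label and mount point
# df -Th /path/to/share    # size and filesystem type of the volume being backed up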

Thanks for the response!
stephenc
Influencer
Posts: 12
Liked: never
Joined: Jun 18, 2020 12:23 am
Full Name: SC
Contact:

Re: Linux Restore Fail to Mount 04196282

Post by stephenc »

Forgot to mention, the directory we're backing up on the Linux server is used as an SMB share. If this backup method doesn't work out, is there a way to back it up via the SMB share instead?
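For example, could I mount the share on another machine and back up the mount point there? Something like this (server name, share name and credentials file are placeholders):
# mount -t cifs //fileserver/backupshare /mnt/smb -o ro,credentials=/root/.smbcredentials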

This is the excerpt from the restore log that shows the mount failed (even though it says "Mounting ok" at the end):
[17.06.2020 09:17:08.682] <139649028230144> lpbcore| Mounting
[17.06.2020 09:17:08.682] <139649028230144> lodev | Mount via veeammount
[17.06.2020 09:17:08.682] <139649028230144> | Running [/usr/sbin/veeammount]
[17.06.2020 09:17:08.682] <139649028230144> | Creating child process: /usr/sbin/veeammount with arguments: --log, veeammount.log, --mount, --device, /dev/loop0, --point, /mnt/backup/, --syscall, false
[17.06.2020 09:17:08.735] <139649028230144> | Running [/usr/sbin/veeammount] Failed.
[17.06.2020 09:17:08.736] <139649028230144> lodev | Mount via veeammount Failed.
[17.06.2020 09:17:08.736] <139649028230144> lodev | WARN|Failed to mount: /dev/loop0
[17.06.2020 09:17:08.736] <139649028230144> lodev | >> |mount: /mnt/backup: wrong fs type, bad option, bad superblock on /dev/loop0, missing codepage or helper program, or other error.
[17.06.2020 09:17:08.736] <139649028230144> lodev | >> |--tr:Child process has failed. Exit code: [242].
[17.06.2020 09:17:08.736] <139649028230144> lodev | >> |Mount failed
[17.06.2020 09:17:08.736] <139649028230144> lpbcore| Mounting ok.
[17.06.2020 09:17:08.736] <139649028230144> vmb | [SessionLog][info] Restore point has been mounted.
And from veeammount.log:
/////////////////////////////////////////
[17.06.2020 09:17:08.686] <140514660641728> vmnt | name /usr/sbin/veeammount
[17.06.2020 09:17:08.686] <140514660641728> vmnt | Mount
[17.06.2020 09:17:08.686] <140514660641728> vmnt | deviceName=/dev/loop0
[17.06.2020 09:17:08.686] <140514660641728> vmnt | mountPointName=/mnt/backup/
[17.06.2020 09:17:08.708] <140514660641728> lodev | Mount via command
[17.06.2020 09:17:08.708] <140514660641728> | Running [mount]
[17.06.2020 09:17:08.708] <140514660641728> | Creating child process: mount with arguments: -t, ext4, /dev/loop0, /mnt/backup/
[17.06.2020 09:17:08.735] <140514660641728> | Running [mount] Failed.
[17.06.2020 09:17:08.735] <140514660641728> lodev | Mount via command Failed.
[17.06.2020 09:17:08.735] <140514660641728> vmnt | Mount Failed.
[17.06.2020 09:17:08.735] <140514660641728> vmnt | ERR |mount: /mnt/backup: wrong fs type, bad option, bad superblock on /dev/loop0, missing codepage or helper program, or other error.
[17.06.2020 09:17:08.735] <140514660641728> vmnt | >> |--tr:Child process has failed. Exit code: [32].
[17.06.2020 09:17:08.735] <140514660641728> vmnt | >> |--tr:Mounting device [/dev/loop0] on mount point [/mnt/backup/] failed.
[17.06.2020 09:17:08.735] <140514660641728> vmnt | >> |An exception was thrown from thread [140514660641728].
[17.06.2020 15:22:28.836] <140047271431104> vmnt | //////////////////////////////////////////////////////////////////////////
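If it's useful, I can try reproducing the error by hand while the restore point is mounted, using the same command veeammount runs according to the log above:
# mount -t ext4 /dev/loop0 /mnt/backup/   # the command from veeammount.log that exits with code 32
# dmesg | tail                            # kernel messages from the rejected mount attempt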
Another thing we noticed is that the size of /dev/loop0 is 120 TB. The volume is 109 TB, and we are backing up only gigabytes of it. Is it normal to mount this much data during a restore?
Disk /dev/loop0: 120 TiB, 131926394404864 bytes, 257668739072 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
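If it matters, I can also check what is backing the loop device and how big the mapping is (losetup is from util-linux):
# losetup -l /dev/loop0    # show the backing file, offset and size for the loop device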
Since you mentioned it's not xfs, I tried running fsck and it returned this:
# fsck /dev/loop0
fsck from util-linux 2.32.1
e2fsck 1.44.6 (5-Mar-2019)
ext2fs_open2: The ext2 superblock is corrupt
fsck.ext4: Superblock invalid, trying backup blocks...
fsck.ext4: The ext2 superblock is corrupt while trying to open /dev/loop0
fsck.ext4: Trying to load superblock despite errors...
Superblock would have too many inodes (8052154368).

The superblock could not be read or does not describe a valid ext2/ext3/ext4
filesystem. If the device is valid and it really contains an ext2/ext3/ext4
filesystem (and not swap or ufs or something else), then the superblock
is corrupt, and you might try running e2fsck with an alternate superblock:
e2fsck -b 8193 <device>
or
e2fsck -b 32768 <device>
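I haven't tried the alternate superblocks it suggests yet; read-only, that would be something like:
# e2fsck -n -b 8193 /dev/loop0     # -n keeps it read-only; 8193 is the first suggested backup superblock
# e2fsck -n -b 32768 /dev/loop0    # second suggested backup superblock location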