Hello,
on one system Veeam wrongly detects /dev/sda as RAID and is therefore unable to back up the disk.
Under "Known Issues" there is a problem mentioned with partitions created by cfdisk versions older than 2.25.
I don't know which tool the partitions on this system were created with.
Is there more information about the cfdisk problem, and how can I detect and fix it without reinstalling the system?
Or are there other known problems that can cause Veeam to detect a disk as RAID?
I'm using Veeam Agent for Linux FREE v2.0.1.665 on Debian GNU/Linux 9 (stretch).
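As far as I can tell I can only check the currently installed cfdisk version and the partition layout, not which tool originally created the partitions:
cfdisk --version   # version of the cfdisk that is installed now, not necessarily the one used back then
fdisk -l /dev/sda  # partition table and start sectors (output included further below)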
lpbcore| Found matching filter for object [DEV__dev_sda]. Record type: [Include], value: [DEV__dev_sda]
lpbcore| Device [/dev/sda] has usage type [raid] and will be skipped.
lpbcore| No matching filters for object [{18bd6f83-2c39-4789-b432-76e6cebc92fd}]
lpbcore| No matching filters for object [{18bd6f83-2c39-4789-b432-76e6cebc92fd}/o1048576l2995727106048]
lpbcore| Partition [/dev/sdb1] with index [1] should not be backed up.
lpbcore| Detecting bootloader on device [/dev/sdb].
lpbcore| No bootloader detected.
lpbcore| No bootloader has been detected on device.
lpbcore| Enumerating LVM volume groups...
lpbcore| LVM volume group: [pve].
lpbcore| Enumerating logical volumes for LVM volume group: [pve].
lpbcore| Found matching filter for object [7tVKDf-QOYf-eoc4-YkXR-jRYU-27Pd-OKFPfy]. Record type: [Include], value: [7tVKDf-QOYf-eoc4-YkXR-jRYU-27Pd-OKFPfy]
lpbcore| Found matching filter for object [7tVKDf-QOYf-eoc4-YkXR-jRYU-27Pd-OKFPfy/QRMzBr-PVpd-753U-4cRz-uWX6-4XcF-6Lbfkw]. Record type: [Include], value: [7tVKDf-QOYf-eoc4-YkXR-jRYU-27Pd-OKFPfy]
lpbcore| No matching filters for object [7tVKDf-QOYf-eoc4-YkXR-jRYU-27Pd-OKFPfy/QRMzBr-PVpd-753U-4cRz-uWX6-4XcF-6Lbfkw]
lpbcore| Logical volume [vm-101-disk-1] with ID [QRMzBr-PVpd-753U-4cRz-uWX6-4XcF-6Lbfkw] should be backed up.
lpbcore| [1] LVM volume groups were detected.
lpbcore| LVM volume group [pve] will be backed up.
vmb | [SessionLog][error] Unable to backup object: sda(not found).
lpbcore| Enumerating backup objects. Failed.
lpbcore| BackupJobPerformer: Creating backup. Failed.
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 1 952G 0 disk
|-sda1 8:1 1 285M 0 part /boot
|-sda2 8:2 1 7.5G 0 part [SWAP]
|-sda3 8:3 1 46.6G 0 part /
|-sda4 8:4 1 1K 0 part
`-sda5 8:5 1 897.7G 0 part
`-pve-vm--101--disk--1 253:0 0 307G 0 lvm
sdb 8:16 1 2.7T 0 disk
`-sdb1 8:17 1 2.7T 0 part /backup
mdadm -E /dev/sda4
/dev/sda4:
MBR Magic : aa55
Partition[0] : 1882617856 sectors at 2 (type 8e)
mdadm -E /dev/sda5
mdadm: No md superblock detected on /dev/sda5
Disk /dev/sda: 952 GiB, 1022202216448 bytes, 1996488704 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x00009263
Device Boot Start End Sectors Size Id Type
/dev/sda1 * 2048 585727 583680 285M 83 Linux
/dev/sda2 585728 16209919 15624192 7.5G 82 Linux swap / Solaris
/dev/sda3 16209920 113866751 97656832 46.6G 83 Linux
/dev/sda4 113868798 1996486655 1882617858 897.7G 5 Extended
/dev/sda5 113868800 1996486655 1882617856 897.7G 8e Linux LVM
In the Veeam GUI, under Configure > Backup mode > Volume level > "Choose volumes to backup", Veeam only shows the LVM volumes, sda (without its partitions), and sdb with its partition sdb1.
Thanks
Since you have a HW RAID, mdadm won't find any remaining metadata on the disks. I'd suggest trying the Adaptec utility.
Also, have you tried selecting the partitions separately: sda1, sda2, sda3?
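Something along these lines could show whether the controller or some leftover on-disk signature is involved (just a sketch: it assumes the Adaptec arcconf CLI is installed and that the controller number is 1; adjust for your system):
arcconf getconfig 1 ld        # Adaptec/Microsemi CLI: show logical devices on controller 1
lsblk -f /dev/sda             # what filesystem/raid signature is detected on each partition
blkid -p /dev/sda4 /dev/sda5  # low-level signature probing of the individual partitions (run as root)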
I have tested the raw device behind the controller (/dev/sgx) as well; nothing was detected there either.
But Veeam should only be affected by information accessible via /dev/sda(X).
"I have tested the raw device behind the controller (/dev/sgx) as well; nothing was detected there either."
This is by design.
And in Veeam I can only select sda and not sda1.
Have you tried adding sda or its partitions via the CLI? In either case, please contact our support team directly, as the described behaviour is unexpected and has to be investigated so the issue can be fixed.
Hello,
the problem was that lsblk -f classified the extended partition (which has no file system) as drbd, because some old DRBD metadata was still present on /dev/sda.
In case someone runs into the same problem: the solution was to overwrite that metadata with wipefs (be careful not to wipe data you still need).
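Roughly the commands, for reference (the device path is only an example from my layout; list the signatures first and keep a backup, because wipefs really removes them):
wipefs /dev/sda4                              # only lists the detected signatures, changes nothing
wipefs --backup --all --types drbd /dev/sda4  # back up the affected sectors, then erase only the drbd signature
After that, lsblk -f should no longer report drbd for the partition.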
That's interesting... Thanks for sharing! I was already about to follow up, as it's been almost two weeks. It's weird that the agent recognized DRBD metadata remnants as RAID though, we should take a closer look into that.
Also, I'd like to ask a few questions regarding DRBD: was that your setup? If so, did you use DRBD in production?