Standalone backup agents for Linux, Mac, AIX & Solaris workloads on-premises or in the public cloud
markhensler
Service Provider
Posts: 44
Liked: 4 times
Joined: Mar 10, 2015 6:16 pm
Full Name: Mark Hensler
Contact:

Backing up a physical Vyatta

Post by markhensler »

Has anyone tried to back up a physical Vyatta? Either the Brocade Vyatta or the AT&T Vyatta?

It appears to be based on Debian Linux, but they've obviously made changes. I'm hoping someone has gotten this to work and is willing to share their notes.

Thanks in advance!

Re: Backing up a physical Vyatta

Post by markhensler »

I will start by admitting that Veeam Support told me that Vyattas are not supported. Throwing caution to the wind, I made an attempt anyway (on a non-production Vyatta). I successfully installed the Veeam Agent for Linux; however, I am unable to get a successful backup. I will share my notes below.

Manual installation of VAL on a Vyatta (AT&T vRouter) 5600 went just as the Veeam docs instruct for Debian systems:

Code:

root@REDACTED:/root# apt-get install lynx
root@REDACTED:/root# lynx https://www.veeam.com/download_add_packs/backup-agent-linux/deb-64
root@REDACTED:/root# dpkg -i veeam-release-deb_1.0.7_amd64.deb
root@REDACTED:/root# apt-get update
root@REDACTED:/root# apt-get install veeam
root@REDACTED:/root# veeamconfig license install --server --path veeam_agents.lic
Create a login on the Vyatta:

Code:

configure
set system login user veeam authentication plaintext-password 'REDACTED'
set system login user veeam authentication public-keys veeam@veeam-server1 key 'REDACTED'
set system login user veeam authentication public-keys veeam@veeam-server1 type 'ssh-rsa'
set system login user veeam level 'admin'
commit comment "add login veeam"
exit
Add the "veeam" user to the "vyattasu" group in /etc/group
Change the shell for the "veeam" user from "vbash" to "bash"
Add firewall exceptions where required.
Add the new "veeam" credential to the Veeam console and enable privilege escalation.
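For reference, the group and shell tweaks above amount to the following. This is my own sketch, not an official procedure; it assumes a standard Debian userland, and it is applied to scratch copies of the files here so it can run anywhere without touching the real system:

```shell
# Sketch of the post-install tweaks (assumption: standard Debian file formats).
# On the router you would edit /etc/group and /etc/passwd directly (or use
# usermod/chsh if they are present on the image); here the same edits are
# made against scratch copies.

# Scratch copies standing in for /etc/group and /etc/passwd
# (group ID, user ID, and home dir below are made-up examples):
printf 'vyattasu:x:900:alice\n' > group.demo
printf 'veeam:x:1004:100::/home/veeam:/bin/vbash\n' > passwd.demo

# 1) Add "veeam" to the "vyattasu" group
#    (equivalent on the router: usermod -a -G vyattasu veeam):
sed -i 's/^\(vyattasu:.*\)$/\1,veeam/' group.demo

# 2) Change the veeam user's shell from vbash to bash
#    (equivalent on the router: chsh -s /bin/bash veeam):
sed -i 's#/bin/vbash#/bin/bash#' passwd.demo

cat group.demo passwd.demo
```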

Log excerpt containing system info:

Code:

[30.10.2019 14:08:23] <140405615807424> lpb    | {
[30.10.2019 14:08:23] <140405615807424> lpb    |   Veeam Agent for Linux: veeamjobman.
[30.10.2019 14:08:23] <140405615807424> lpb    |   Version: 3.0.2.1185
[30.10.2019 14:08:23] <140405615807424> lpb    |   PID: 125084
[30.10.2019 14:08:23] <140405615807424> vmb    |   hostname: REDACTED
[30.10.2019 14:08:23] <140405615807424> vmb    |   uname
[30.10.2019 14:08:23] <140405615807424> vmb    |     sysname : Linux
[30.10.2019 14:08:23] <140405615807424> vmb    |     release : 4.9.0-trunk-vyatta-amd64
[30.10.2019 14:08:23] <140405615807424> vmb    |     version : #1 SMP Debian 4.9.151-0vyatta2+1.2 (2019-01-26)
[30.10.2019 14:08:23] <140405615807424> vmb    |     machine : x86_64
[30.10.2019 14:08:23] <140405615807424> lpb    | }
[30.10.2019 14:08:23] <140405615807424> lpbcore| Connecting to veeamservice...
[30.10.2019 14:08:23] <140405615807424>        |   Configuration load.
[30.10.2019 14:08:23] <140405615807424>        |   Configuration load. ok.
[30.10.2019 14:08:23] <140405615807424> lpbcore| Connecting to veeamservice... ok.
[30.10.2019 14:08:23] <140405615807424> lpbman | Main thread has started.
[30.10.2019 14:08:23] <140405615807424> lpbcore| License information:
[30.10.2019 14:08:23] <140405615807424> lpbcore|   License source: Veeam Backup & Replication
[30.10.2019 14:08:23] <140405615807424> lpbcore|   Mode: Server
[30.10.2019 14:08:23] <140405615807424> lpbcore| LpbManSession: Processing commands.
[30.10.2019 14:08:23] <140405615807424> lpbcore|   Sending PID: [125084].
[30.10.2019 14:08:23] <140405615807424> lpbcore|   Sending Session UUID: [{566beef0-f8bc-474d-b637-19d73984b4e7}].
[30.10.2019 14:08:23] <140405615807424> lpbcore|   Waiting for a command.
[30.10.2019 14:08:23] <140405615807424> lpbcore|   LpbManSession: Starting managed backup job.
[30.10.2019 14:08:23] <140405615807424> lpbcore|     Job UUID: [{dfda4e58-0209-48d2-bfb2-e67c6e3c39a9}] (normal priority).
[30.10.2019 14:08:23] <140405615807424> lpbcore|     System information:
[30.10.2019 14:08:23] <140405615807424> lpbcore|       Running [lsb_release -a].
[30.10.2019 14:08:23] <140405615807424> lpbcore|         Q
[30.10.2019 14:08:23] <140405615807424> lpbcore|       Running [cat /etc/*release].
[30.10.2019 14:08:23] <140405615807424> lpbcore|         ID=vyatta
[30.10.2019 14:08:23] <140405615807424> lpbcore|         ID_LIKE=debian
[30.10.2019 14:08:23] <140405615807424> lpbcore|         HOME_URL="http://www.brocade.com/"
[30.10.2019 14:08:23] <140405615807424> lpbcore|         SUPPORT_URL="https://my.brocade.com"
[30.10.2019 14:08:23] <140405615807424> lpbcore|         BUG_REPORT_URL="https://my.brocade.com"
[30.10.2019 14:08:23] <140405615807424> lpbcore|         PRETTY_NAME="AT&T vRouter 5600"
[30.10.2019 14:08:23] <140405615807424> lpbcore|         NAME="AT&T vRouter"
[30.10.2019 14:08:23] <140405615807424> lpbcore|         VERSION="1801w"
[30.10.2019 14:08:23] <140405615807424> lpbcore|         VERSION_ID="1801w"
[30.10.2019 14:08:23] <140405615807424> lpbcore|         BUILD_ID="20190319T2212-vyatta-1801w-amd64-vrouter-B"
[30.10.2019 14:08:23] <140405615807424> lpbcore|         VYATTA_PROJECT_ID="VR:5600:1801:w"
[30.10.2019 14:08:23] <140405615807424> lpbcore|       Running [lsblk].
[30.10.2019 14:08:23] <140405615807424> lpbcore|         NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
[30.10.2019 14:08:23] <140405615807424> lpbcore|         loop0    7:0    0 318.8M  1 loop /lib/live/mount/rootfs/1801w.03192212.squashfs
[30.10.2019 14:08:23] <140405615807424> lpbcore|         sda      8:0    0   1.8T  0 disk
[30.10.2019 14:08:23] <140405615807424> lpbcore|         ├─sda1   8:1    0   244M  0 part
[30.10.2019 14:08:23] <140405615807424> lpbcore|         └─sda2   8:2    0   1.8T  0 part /lib/live/mount/persistence/sda2
Log excerpts from a failed attempt using "Entire computer" mode:

Code:

[30.10.2019 13:40:34] <139837366994688> lpbcore|           Detecting bootloader on device [/dev/sda].
[30.10.2019 13:40:34] <139837366994688> lpbcore|             Detected GRUB 2 (v1.97-1.99).
[30.10.2019 13:40:34] <139837366994688> lpbcore|             Device has GPT partition table.
[30.10.2019 13:40:34] <139837366994688> lpbcore|             Found BIOS boot partition, partition number [1]. Partition size: [255852544].
[30.10.2019 13:40:34] <139837366994688> lpbcore|           Detecting bootloader on device [/dev/sda]. Failed.
[30.10.2019 13:40:34] <139837366994688> lpbcore|         Build GPT disk object for [sda]. Failed.
[30.10.2019 13:40:34] <139837366994688> lpbcore|       Create disk backup object. Failed.
[30.10.2019 13:40:34] <139837366994688> lpbcore| ERR |BIOS boot partition size exceeds limit.
[30.10.2019 13:40:34] <139837366994688> lpbcore| >>  |Failed to detect bootloader on [/dev/sda].
[30.10.2019 13:40:34] <139837366994688> lpbcore| >>  |--tr:GptDiskBackupObjectBuilder: Failed to build backup objects.
[30.10.2019 13:40:34] <139837366994688> lpbcore| >>  |--tr:CDiskObjectsBuilder: Failed to create disk backup object.
[30.10.2019 13:40:34] <139837366994688> lpbcore| >>  |An exception was thrown from thread [139837366994688].
Log excerpts from a failed attempt using "Volume level backup" mode:

Code:

[30.10.2019 13:55:29] <139944599985088> lpbcore|     CJobFilterResolver: prepare mount points
[30.10.2019 13:55:29] <139944599985088> lpbcore|       User input path [/] resolved into mount point [/].
[30.10.2019 13:55:29] <139944599985088> lpbcore|     CJobFilterResolver: prepare mount points ok.
...SNIP...
[30.10.2019 13:55:29] <139944566654720> lpbcore| ManagedBackupJobPerformer: Creating backup.
[30.10.2019 13:55:29] <139944599985088> lpbcore|   LpbManSession: Starting managed backup job. ok.
[30.10.2019 13:55:29] <139944558262016>        | Thread started. Thread id: 139944558262016, parent id: 139944566654720, role: Session checker for Job: {27de2d0c-a2e2-4a41-a728-2cba4b7c649f}.
[30.10.2019 13:55:29] <139944566654720> lpbcore|   BObject ID: [{615ce9ee-6ba0-4ace-b7eb-ba1f78779798}].
[30.10.2019 13:55:30] <139944549869312>        | Thread started. Thread id: 139944549869312, parent id: 139944566654720, role: lease keeper
[30.10.2019 13:55:30] <139944566654720> vmb    |   [SessionLog][info] Preparing to backup.
[30.10.2019 13:55:30] <139944566654720> lpbcore|   Enumerating backup objects.
[30.10.2019 13:55:30] <139944566654720> lpbcore|   Enumerating backup objects. Failed.
[30.10.2019 13:55:30] <139944558262016>        | Thread finished. Role: 'Session checker for Job: {27de2d0c-a2e2-4a41-a728-2cba4b7c649f}.'.
[30.10.2019 13:55:30] <139944566654720> lpbcore| ManagedBackupJobPerformer: Creating backup. Failed.
[30.10.2019 13:55:30] <139944566654720> vmb    | [SessionLog][error] Failed to perform managed backup.
[30.10.2019 13:55:30] <139944566654720> vmb    | [SessionLog][error] There are no objects to backup.
[30.10.2019 13:55:30] <139944566654720> lpbcore| ERR |There are no objects to backup
[30.10.2019 13:55:30] <139944566654720> lpbcore| >>  |Managed backup job has failed.
[30.10.2019 13:55:30] <139944566654720> lpbcore| >>  |An exception was thrown from thread [139944566654720].
Log excerpts from a failed attempt using "File level backup (slower)" mode:

Code:

[30.10.2019 14:12:49] <140405615807424> lpbcore|     Job information:
[30.10.2019 14:12:49] <140405615807424> lpbcore|       Job name: [REDACTED]. ID: [{dfda4e58-0209-48d2-bfb2-e67c6e3c39a9}].
[30.10.2019 14:12:49] <140405615807424> lpbcore|       Job source:
[30.10.2019 14:12:49] <140405615807424> lpbcore|         Included [Directory]  /etc.
[30.10.2019 14:12:49] <140405615807424> lpbcore|         Included [Directory]  /config.
[30.10.2019 14:12:49] <140405615807424> lpbcore|         Included [Directory]  /home.
...
[30.10.2019 14:12:49] <140405582477056> lpbcore|   Enumerating backup objects.
[30.10.2019 14:12:49] <140405582477056> lpbcore|     Checking whether current system has stable btrfs driver version.
[30.10.2019 14:12:49] <140405582477056> lpbcore|     Enumerating file backup objects. snapshot required: true
[30.10.2019 14:12:49] <140405582477056> lpbcore|     Initializing file backup filter.
[30.10.2019 14:12:49] <140405582477056> lpbcore|       Including object [{d704a5cc-e2a8-444e-9ebb-bcf9e19b3064}] (value [/etc])
[30.10.2019 14:12:49] <140405582477056> lpbcore|       Including object [{e0fc3608-fcc2-47ba-ba96-fb760d3e35b0}] (value [/config])
[30.10.2019 14:12:49] <140405582477056> lpbcore|       Including object [{f6db06af-ad31-48d7-a145-ae83156b1876}] (value [/home])
[30.10.2019 14:12:49] <140405582477056> lpbcore|       /etc --> /
[30.10.2019 14:12:49] <140405582477056> lpbcore|       /config --> /
[30.10.2019 14:12:49] <140405582477056> lpbcore|       /home --> /
[30.10.2019 14:12:49] <140405582477056> lpbcore|     Enumerating all block devices...
[30.10.2019 14:12:49] <140405582477056> lpbcore|       Ignored devices mask: [].
[30.10.2019 14:12:49] <140405582477056> lpbcore|       Skip filtering: [false].
[30.10.2019 14:12:49] <140405582477056> lpbcore|       Verbose logging: [false].
[30.10.2019 14:12:49] <140405582477056> lpbcore| WARN|Device [/dev/ram0] (1:0) with type [ram] is not supported for backup and WILL SKIPPED.
[30.10.2019 14:12:49] <140405582477056> lpbcore| WARN|Device [/dev/ram1] (1:1) with type [ram] is not supported for backup and WILL SKIPPED.
[30.10.2019 14:12:49] <140405582477056> lpbcore| WARN|Device [/dev/ram2] (1:2) with type [ram] is not supported for backup and WILL SKIPPED.
[30.10.2019 14:12:49] <140405582477056> lpbcore| WARN|Device [/dev/ram3] (1:3) with type [ram] is not supported for backup and WILL SKIPPED.
[30.10.2019 14:12:49] <140405582477056> lpbcore| WARN|Device [/dev/ram4] (1:4) with type [ram] is not supported for backup and WILL SKIPPED.
[30.10.2019 14:12:49] <140405582477056> lpbcore| WARN|Device [/dev/ram5] (1:5) with type [ram] is not supported for backup and WILL SKIPPED.
[30.10.2019 14:12:49] <140405582477056> lpbcore| WARN|Device [/dev/ram6] (1:6) with type [ram] is not supported for backup and WILL SKIPPED.
[30.10.2019 14:12:49] <140405582477056> lpbcore| WARN|Device [/dev/ram7] (1:7) with type [ram] is not supported for backup and WILL SKIPPED.
[30.10.2019 14:12:49] <140405582477056> lpbcore| WARN|Device [/dev/ram8] (1:8) with type [ram] is not supported for backup and WILL SKIPPED.
[30.10.2019 14:12:49] <140405582477056> lpbcore| WARN|Device [/dev/ram9] (1:9) with type [ram] is not supported for backup and WILL SKIPPED.
[30.10.2019 14:12:49] <140405582477056> lpbcore| WARN|Device [/dev/ram10] (1:10) with type [ram] is not supported for backup and WILL SKIPPED.
[30.10.2019 14:12:49] <140405582477056> lpbcore| WARN|Device [/dev/ram11] (1:11) with type [ram] is not supported for backup and WILL SKIPPED.
[30.10.2019 14:12:49] <140405582477056> lpbcore| WARN|Device [/dev/ram12] (1:12) with type [ram] is not supported for backup and WILL SKIPPED.
[30.10.2019 14:12:49] <140405582477056> lpbcore| WARN|Device [/dev/ram13] (1:13) with type [ram] is not supported for backup and WILL SKIPPED.
[30.10.2019 14:12:49] <140405582477056> lpbcore| WARN|Device [/dev/ram14] (1:14) with type [ram] is not supported for backup and WILL SKIPPED.
[30.10.2019 14:12:49] <140405582477056> lpbcore| WARN|Device [/dev/ram15] (1:15) with type [ram] is not supported for backup and WILL SKIPPED.
[30.10.2019 14:12:49] <140405582477056> lpbcore| WARN|Device [/dev/loop0] (7:0) with type [loop] is not supported for backup and WILL SKIPPED.
[30.10.2019 14:12:49] <140405582477056> lpbcore| WARN|Device [/dev/loop1] (7:1) with type [loop] is not supported for backup and WILL SKIPPED.
[30.10.2019 14:12:49] <140405582477056> lpbcore| WARN|Device [/dev/loop2] (7:2) with type [loop] is not supported for backup and WILL SKIPPED.
[30.10.2019 14:12:49] <140405582477056> lpbcore| WARN|Device [/dev/loop3] (7:3) with type [loop] is not supported for backup and WILL SKIPPED.
[30.10.2019 14:12:49] <140405582477056> lpbcore| WARN|Device [/dev/loop4] (7:4) with type [loop] is not supported for backup and WILL SKIPPED.
[30.10.2019 14:12:49] <140405582477056> lpbcore| WARN|Device [/dev/loop5] (7:5) with type [loop] is not supported for backup and WILL SKIPPED.
[30.10.2019 14:12:49] <140405582477056> lpbcore| WARN|Device [/dev/loop6] (7:6) with type [loop] is not supported for backup and WILL SKIPPED.
[30.10.2019 14:12:49] <140405582477056> lpbcore| WARN|Device [/dev/loop7] (7:7) with type [loop] is not supported for backup and WILL SKIPPED.
[30.10.2019 14:12:49] <140405582477056> lpbcore|       Found device: [/dev/sda]. Device number: [8:0]; Type: [scsi].
[30.10.2019 14:12:49] <140405582477056> lpbcore|         Link: [/dev/sda].
[30.10.2019 14:12:49] <140405582477056> lpbcore|         Link: [/dev/block/8:0].
[30.10.2019 14:12:49] <140405582477056> lpbcore|         Link: [/dev/disk/by-path/pci-0000:af:00.0-scsi-0:2:0:0].
[30.10.2019 14:12:49] <140405582477056> lpbcore|         Link: [/dev/disk/by-id/wwn-0x600605b00d6949902430bc241571fbf7].
[30.10.2019 14:12:49] <140405582477056> lpbcore|         Link: [/dev/disk/by-id/scsi-3600605b00d6949902430bc241571fbf7].
[30.10.2019 14:12:49] <140405582477056> lpbcore|         Partition table type: [gpt].
[30.10.2019 14:12:49] <140405582477056> lpbcore|         Found device: [/dev/sda1]. Device number: [8:1]; Type: [scsi].
[30.10.2019 14:12:49] <140405582477056> lpbcore|           Link: [/dev/sda1].
[30.10.2019 14:12:49] <140405582477056> lpbcore|           Link: [/dev/block/8:1].
[30.10.2019 14:12:49] <140405582477056> lpbcore|           Link: [/dev/disk/by-partuuid/97bc28cd-db5e-435d-8c82-2bcb039b5779].
[30.10.2019 14:12:49] <140405582477056> lpbcore|           Link: [/dev/disk/by-partlabel/BIOS_PART].
[30.10.2019 14:12:49] <140405582477056> lpbcore|           Link: [/dev/disk/by-path/pci-0000:af:00.0-scsi-0:2:0:0-part1].
[30.10.2019 14:12:49] <140405582477056> lpbcore|           Link: [/dev/disk/by-id/wwn-0x600605b00d6949902430bc241571fbf7-part1].
[30.10.2019 14:12:49] <140405582477056> lpbcore|           Link: [/dev/disk/by-id/scsi-3600605b00d6949902430bc241571fbf7-part1].
[30.10.2019 14:12:49] <140405582477056> lpbcore|         Found device: [/dev/sda2]. Device number: [8:2]; Type: [scsi].
[30.10.2019 14:12:49] <140405582477056> lpbcore|           Link: [/dev/sda2].
[30.10.2019 14:12:49] <140405582477056> lpbcore|           Link: [/dev/block/8:2].
[30.10.2019 14:12:49] <140405582477056> lpbcore|           Link: [/dev/disk/by-uuid/97a041ed-7049-49fe-b6a1-1f2b5977c788].
[30.10.2019 14:12:49] <140405582477056> lpbcore|           Link: [/dev/disk/by-partuuid/566935ff-593b-4985-8fda-dd9739343564].
[30.10.2019 14:12:49] <140405582477056> lpbcore|           Link: [/dev/disk/by-partlabel/vRouter].
[30.10.2019 14:12:49] <140405582477056> lpbcore|           Link: [/dev/disk/by-label/vRouter].
[30.10.2019 14:12:49] <140405582477056> lpbcore|           Link: [/dev/disk/by-path/pci-0000:af:00.0-scsi-0:2:0:0-part2].
[30.10.2019 14:12:49] <140405582477056> lpbcore|           Link: [/dev/disk/by-id/wwn-0x600605b00d6949902430bc241571fbf7-part2].
[30.10.2019 14:12:49] <140405582477056> lpbcore|           Link: [/dev/disk/by-id/scsi-3600605b00d6949902430bc241571fbf7-part2].
[30.10.2019 14:12:49] <140405582477056> lpbcore|           Filesystem UUID: [97a041ed-7049-49fe-b6a1-1f2b5977c788]; Type: [ext4]; Mount points: [/lib/live/mount/persistence/sda2, /boot, /boot/grub, /lib/live/mount/persistence/sda2/boot/1801w.03192212/grub].
[30.10.2019 14:12:49] <140405582477056> lpbcore|     [3] block devices were detected.
[30.10.2019 14:12:49] <140405582477056> lpbcore|     Detecting whether we are running under recovery ISO.
[30.10.2019 14:12:49] <140405582477056> lpbcore|     Recovery ISO: [0].
[30.10.2019 14:12:49] <140405582477056> lpbcore|     Enumerating LVM devices...
[30.10.2019 14:12:49] <140405582477056> lpbcore|       Should skip inactive LVs? [true].
[30.10.2019 14:12:49] <140405582477056> lpbcore|     [0] LVM volume groups were detected.
[30.10.2019 14:12:49] <140405582477056> lpbcore|     Enumerating BTRFS subvolumes...
[30.10.2019 14:12:49] <140405582477056> lpbcore|     [0] BTRFS subvolumes were detected.
[30.10.2019 14:12:49] <140405582477056> lpbcore|     Skipping device {f529560e-fdc2-4234-95d8-8a63012497a3}: filesystem is missing or not mounted.
[30.10.2019 14:12:49] <140405582477056> lpbcore|     Skipping partition {f529560e-fdc2-4234-95d8-8a63012497a3}/o1048576l255852544: filesystem is missing or not mounted.
[30.10.2019 14:12:49] <140405582477056> lpbcore|     Skipping partition {f529560e-fdc2-4234-95d8-8a63012497a3}/o256901120l1999577808896: mountpoint /lib/live/mount/persistence/sda2 is not included.
[30.10.2019 14:12:49] <140405582477056> lpbcore|     Skipping partition {f529560e-fdc2-4234-95d8-8a63012497a3}/o256901120l1999577808896: mountpoint /boot is not included.
[30.10.2019 14:12:49] <140405582477056> lpbcore|     Skipping partition {f529560e-fdc2-4234-95d8-8a63012497a3}/o256901120l1999577808896: mountpoint /boot/grub is not included.
[30.10.2019 14:12:49] <140405582477056> lpbcore|     Skipping partition {f529560e-fdc2-4234-95d8-8a63012497a3}/o256901120l1999577808896: mountpoint /lib/live/mount/persistence/sda2/boot/1801w.03192212/grub is not included.
[30.10.2019 14:12:49] <140405582477056> lpbcore|     Process mountPoints
[30.10.2019 14:12:49] <140405582477056> lpbcore|       Skipping mountpoint /lib/live/mount/persistence/sda2 is not included.
[30.10.2019 14:12:49] <140405582477056> lpbcore|       Skipping mountpoint /lib/live/mount/rootfs/1801w.03192212.squashfs is not included.
[30.10.2019 14:12:49] <140405582477056> lpbcore|       Skipping mountpoint /.
[30.10.2019 14:12:49] <140405582477056> lpbcore|       Skipping mountpoint /boot is not included.
[30.10.2019 14:12:49] <140405582477056> lpbcore|       Skipping mountpoint /boot/grub is not included.
[30.10.2019 14:12:49] <140405582477056> lpbcore|       Skipping mountpoint /lib/live/mount/persistence/sda2/boot/1801w.03192212/grub is not included.
[30.10.2019 14:12:49] <140405582477056> lpbcore|       Skipping mountpoint /opt/vyatta/etc/config is not included.
[30.10.2019 14:12:49] <140405582477056> lpbcore|       Skipping mountpoint /mnt/huge is not included.
[30.10.2019 14:12:49] <140405582477056> lpbcore|     Process mountPoints ok.
[30.10.2019 14:12:49] <140405500802816>        | Thread finished. Role: 'Session checker for Job: {dfda4e58-0209-48d2-bfb2-e67c6e3c39a9}.'.
[30.10.2019 14:12:49] <140405582477056> lpbcore| ManagedBackupJobPerformer: Creating backup. Failed.
[30.10.2019 14:12:49] <140405582477056> vmb    | [SessionLog][error] Failed to perform managed backup.
[30.10.2019 14:12:49] <140405582477056> vmb    | [SessionLog][error] No objects to backup.
[30.10.2019 14:12:50] <140405582477056> lpbcore| ERR |No objects to backup.
[30.10.2019 14:12:50] <140405582477056> lpbcore| >>  |Managed backup job has failed.
[30.10.2019 14:12:50] <140405582477056> lpbcore| >>  |An exception was thrown from thread [140405582477056].
Looking at the devices and filesystems:

Code:

root@REDACTED:~# df -h
Filesystem      Size  Used Avail Use% Mounted on
udev             16G     0   16G   0% /dev
tmpfs           3.2G   15M  3.1G   1% /run
/dev/sda2       1.8T  9.6G  1.7T   1% /lib/live/mount/persistence/sda2
/dev/loop0      319M  319M     0 100% /lib/live/mount/rootfs/1801w.03192212.squashfs
tmpfs            16G     0   16G   0% /lib/live/mount/overlay
overlay         1.8T  9.6G  1.7T   1% /
tmpfs            16G   11M   16G   1% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs            16G     0   16G   0% /sys/fs/cgroup
tmpfs           3.2G     0  3.2G   0% /run/user/1000
tmpfs           3.2G     0  3.2G   0% /run/user/1004
 
root@REDACTED:~# mount
sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)
proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
udev on /dev type devtmpfs (rw,nosuid,relatime,size=16268328k,nr_inodes=4067082,mode=755)
devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000)
tmpfs on /run type tmpfs (rw,nosuid,noexec,relatime,size=3262004k,mode=755)
/dev/sda2 on /lib/live/mount/persistence/sda2 type ext4 (rw,noatime,stripe=64,data=ordered)
/dev/loop0 on /lib/live/mount/rootfs/1801w.03192212.squashfs type squashfs (ro,noatime)
tmpfs on /lib/live/mount/overlay type tmpfs (rw,relatime)
overlay on / type overlay (rw,noatime,lowerdir=/live/rootfs/1801w.03192212.squashfs/,upperdir=/live/persistence/sda2/boot/1801w.03192212/persistence/rw,workdir=/live/persistence/sda2/boot/1801w.03192212/persistence/work)
securityfs on /sys/kernel/security type securityfs (rw,nosuid,nodev,noexec,relatime)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev)
tmpfs on /run/lock type tmpfs (rw,nosuid,nodev,noexec,relatime,size=5120k)
tmpfs on /sys/fs/cgroup type tmpfs (ro,nosuid,nodev,noexec,mode=755)
cgroup on /sys/fs/cgroup/systemd type cgroup (rw,nosuid,nodev,noexec,relatime,xattr,release_agent=/lib/systemd/systemd-cgroups-agent,name=systemd)
pstore on /sys/fs/pstore type pstore (rw,nosuid,nodev,noexec,relatime)
cgroup on /sys/fs/cgroup/cpuset type cgroup (rw,nosuid,nodev,noexec,relatime,cpuset)
cgroup on /sys/fs/cgroup/cpu type cgroup (rw,nosuid,nodev,noexec,relatime,cpu)
cgroup on /sys/fs/cgroup/pids type cgroup (rw,nosuid,nodev,noexec,relatime,pids)
mqueue on /dev/mqueue type mqueue (rw,relatime)
debugfs on /sys/kernel/debug type debugfs (rw,relatime)
/dev/sda2 on /boot type ext4 (rw,noatime,stripe=64,data=ordered)
/dev/sda2 on /boot/grub type ext4 (rw,noatime,stripe=64,data=ordered)
/dev/sda2 on /lib/live/mount/persistence/sda2/boot/1801w.03192212/grub type ext4 (rw,noatime,stripe=64,data=ordered)
overlay on /opt/vyatta/etc/config type overlay (rw,noatime,lowerdir=/live/rootfs/1801w.03192212.squashfs/,upperdir=/live/persistence/sda2/boot/1801w.03192212/persistence/rw,workdir=/live/persistence/sda2/boot/1801w.03192212/persistence/work)
nodev on /mnt/huge type hugetlbfs (rw,relatime)
tmpfs on /run/user/1000 type tmpfs (rw,nosuid,nodev,relatime,size=3262000k,mode=700,uid=1000,gid=100)
tmpfs on /run/user/1004 type tmpfs (rw,nosuid,nodev,relatime,size=3262000k,mode=700,uid=1004,gid=100)

At this point, I believe the issue is that the Vyatta boots as a live image on an overlay filesystem (dark magic to me; I have no experience with overlay filesystems) and/or RAM disks, and Veeam does not support RAM disks as a backup source.
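The device types skipped in the logs above (ram, loop, overlay) can be spotted straight from /proc/mounts. A quick sketch of my own (not a Veeam tool) that lists mountpoints backed by filesystem types the agent rejected; on this Vyatta that includes "/" itself, which would explain why the snapshot-based modes find nothing to back up:

```shell
# Print mountpoints whose filesystem type is overlay, squashfs, or tmpfs,
# i.e. types the agent's logs above show being skipped or unsnapshotable.
# /proc/mounts columns: device, mountpoint, fstype, options, dump, pass.
awk '$3 ~ /^(overlay|squashfs|tmpfs)$/ {print $2 " (" $3 ")"}' /proc/mounts
```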
PTide
Product Manager
Posts: 6408
Liked: 724 times
Joined: May 19, 2015 1:46 pm
Contact:

Re: Backing up a physical Vyatta

Post by PTide »

Hi,

You might want to check whether the snapshot-less file-level mode works. That mode is agnostic to the type of source storage (it can grab data even from network shares). Just keep in mind that it is on the slow side in the current version; in the upcoming update (which is just around the corner) we are adding improvements to the file-level mode, and the results of our tests so far are promising.

Thanks!

Re: Backing up a physical Vyatta

Post by markhensler »

@PTide, great idea. I just gave it a try. Success!

Code:

[07.11.2019 16:03:13] <139873081088960> lpbcore|     Job information:
[07.11.2019 16:03:13] <139873081088960> lpbcore|       Job name: [REDACTED]. ID: [{ef83f08f-b9f9-47c7-986f-5c37fe276ee4}].
[07.11.2019 16:03:13] <139873081088960> lpbcore|       Job source:
[07.11.2019 16:03:13] <139873081088960> lpbcore|         Included [Directory]  /etc.
[07.11.2019 16:03:13] <139873081088960> lpbcore|         Included [Directory]  /home.
[07.11.2019 16:03:13] <139873081088960> lpbcore|         Included [Directory]  /config.
[07.11.2019 16:03:13] <139873081088960> lpbcore|       Job destination:
[07.11.2019 16:03:13] <139873081088960> lpbcore|         Target repository [[REDACTED] REDACTED] ID [{4802da35-02a4-44d5-8fe7-23397807ebeb}].
[07.11.2019 16:03:13] <139873081088960> lpbcore|         VBR server [REDACTED] ID [{939c9962-641c-4d45-932d-e1fad7b07c84}]. FQDN: [REDACTED].
[07.11.2019 16:03:13] <139873081088960> lpbcore|         Connection endpoints: [REDACTED:10006; REDACTED:10006; REDACTED:10006; REDACTED:10006].
[07.11.2019 16:03:13] <139873081088960> lpbcore|       Job options:
[07.11.2019 16:03:13] <139873081088960> lpbcore|         Compression: [Lz4]
[07.11.2019 16:03:13] <139873081088960> lpbcore|         Block size: [KbBlockSize1024]
[07.11.2019 16:03:13] <139873081088960> lpbcore|         Retention max points: [24].
[07.11.2019 16:03:13] <139873081088960> lpbcore|         Pre-freeze: []
[07.11.2019 16:03:13] <139873081088960> lpbcore|         Post-thaw: []
[07.11.2019 16:03:13] <139873081088960> lpbcore|         Pre-job: []
[07.11.2019 16:03:13] <139873081088960> lpbcore|         Post-job: []
[07.11.2019 16:03:13] <139873081088960> lpbcore|         IsSnapshotRequired: [false].
[07.11.2019 16:03:13] <139873081088960> lpbcore|         Retry count: [3]
[07.11.2019 16:03:13] <139873081088960> lpbcore|         Retry delay (ms): [600000]
[07.11.2019 16:03:13] <139873081088960> lpbcore|         Indexing: File system indexing is disabled.
[07.11.2019 16:03:13] <139873081088960> lpbcore|         Schedule: Disabled; Every day at 06:00.
[07.11.2019 16:03:13] <139873081088960> lpbcore|         Active full schedule: Disabled; Every 1 day of every month.
...
[07.11.2019 16:03:13] <139873047758592> lpbcore|   Enumerating backup objects.
[07.11.2019 16:03:13] <139873047758592> lpbcore|     Checking whether current system has stable btrfs driver version.
[07.11.2019 16:03:13] <139873047758592> lpbcore|     Enumerating file backup objects. snapshot required: false
[07.11.2019 16:03:13] <139873047758592> lpbcore|     Initializing file backup filter.
[07.11.2019 16:03:13] <139873047758592> lpbcore|       Including object [{18822d63-6809-4f78-af02-36ba32b9980e}] (value [/etc])
[07.11.2019 16:03:13] <139873047758592> lpbcore|       Including object [{79124ad7-3a60-4901-886d-75647ffed727}] (value [/home])
[07.11.2019 16:03:13] <139873047758592> lpbcore|       Including object [{d8316246-f749-40f9-bc4d-0048c2a05ce4}] (value [/config])
[07.11.2019 16:03:13] <139873047758592> lpbcore|       /etc --> /
[07.11.2019 16:03:13] <139873047758592> lpbcore|       /home --> /
[07.11.2019 16:03:13] <139873047758592> lpbcore|       /config --> /
[07.11.2019 16:03:13] <139873047758592> lpbcore|     Process mountPoints
[07.11.2019 16:03:13] <139873047758592> lpbcore|       Skipping mountpoint /lib/live/mount/persistence/sda2 is not included.
[07.11.2019 16:03:13] <139873047758592> lpbcore|       Skipping mountpoint /lib/live/mount/rootfs/1801w.03192212.squashfs is not included.
[07.11.2019 16:03:13] <139873047758592> lpbcore|       Including mountpoint / without snapshot
[07.11.2019 16:03:13] <139873047758592> lpbcore|       Skipping mountpoint /boot is not included.
[07.11.2019 16:03:13] <139873047758592> lpbcore|       Skipping mountpoint /boot/grub is not included.
[07.11.2019 16:03:13] <139873047758592> lpbcore|       Skipping mountpoint /lib/live/mount/persistence/sda2/boot/1801w.03192212/grub is not included.
[07.11.2019 16:03:13] <139873047758592> lpbcore|       Skipping mountpoint /opt/vyatta/etc/config is not included.
[07.11.2019 16:03:13] <139873047758592> lpbcore|       Skipping mountpoint /mnt/huge is not included.
[07.11.2019 16:03:13] <139873047758592> lpbcore|     Process mountPoints ok.
[07.11.2019 16:03:13] <139873047758592> lpbcore|   Querying used space on /etc, /home, /config.
[07.11.2019 16:03:13] <139873047758592> lpbcore|   Used space: 2164625817600.
[07.11.2019 16:03:13] <139873047758592> vmb    |   [JobSessionUpdater] Adding progress record size 2164625817600, used size: 2164625817600.
[07.11.2019 16:03:13] <139873047758592> lpbcore|   Taking snapshot.
[07.11.2019 16:03:13] <139873047758592> lpbcore|   No snapshots required
[07.11.2019 16:03:13] <139873047758592> vmb    |   [SessionLog][info] Starting full backup to [REDACTED] REDACTED.
[07.11.2019 16:03:13] <139873047758592> vmb    |   IP endpoints: [REDACTED:2500; REDACTED:2500; REDACTED:2500].
[07.11.2019 16:05:26] <139873047758592>        |   REDACTED:2500 connection status: system:110 (Connection timed out)
[07.11.2019 16:05:26] <139873047758592> net    |   Sending reconnect options: [disabled].
[07.11.2019 16:05:26] <139873047758592> vmb    |   Connection will be encrypted. Management keyset: 'ID: 4aea1c094e7b0d8737bbb16008f04fde (session), keys: 1, repair records: 0 (master keys:)'.
[07.11.2019 16:05:26] <139873022580480>        | Thread started. Thread id: 139873022580480, parent id: 139873047758592, role: Client receiver
...
[07.11.2019 16:05:28] <139873047758592> lpbcore|   Performing file-level backup: [/etc, /home, /config].
[07.11.2019 16:05:28] <139873047758592> vmb    |     [SessionLog][processing] Backing up files /etc, /home, /config.
[07.11.2019 16:05:28] <139873047758592> lpbcore|     Detecting whether we are running under recovery ISO.
[07.11.2019 16:05:28] <139873047758592> lpbcore|     Recovery ISO: [0].
[07.11.2019 16:05:28] <139873047758592> lpbcore|     Starting 0 mount session(s)
[07.11.2019 16:05:28] <139873047758592> lpbcore|     Starting 0 mount session(s) ok.
[07.11.2019 16:05:28] <139873047758592> lpbcore|     Found [1] snapshotless point for bind mounting
[07.11.2019 16:05:28] <139873047758592> lpbcore|     Starting mount bind sessions in [/tmp/veeam/snapmnt/{e5f115f0-b931-4888-81da-0ebab46073b6}/]
[07.11.2019 16:05:28] <139873047758592> lpbcore|       Mount subvolume [/]
[07.11.2019 16:05:28] <139873047758592> lpbcore|       Create directory [/tmp/veeam/snapmnt]
[07.11.2019 16:05:28] <139873047758592> lpbcore|       Create directory [/tmp/veeam/snapmnt/{e5f115f0-b931-4888-81da-0ebab46073b6}]
[07.11.2019 16:05:28] <139873047758592> lpbcore|       Bind mount directory [/] to [/tmp/veeam/snapmnt/{e5f115f0-b931-4888-81da-0ebab46073b6}/]
[07.11.2019 16:05:28] <139873047758592>        |       Creating child process: /bin/mount with arguments: --read-only, --bind, /, /tmp/veeam/snapmnt/{e5f115f0-b931-4888-81da-0ebab46073b6}/
[07.11.2019 16:05:28] <139873047758592> vmb    |     Starting backup client for agent with UID [{36214705-847e-4795-b42d-60b70e53a182}]. Client id: [8439]. Reconnect options: [disabled]
[07.11.2019 16:05:28] <139873047758592> vmb    |       IP endpoints: [127.0.0.1:2500].
[07.11.2019 16:05:28] <139873047758592> net    |       Sending reconnect options: [disabled].
[07.11.2019 16:05:28] <139873047758592> vmb    |       Connection will be encrypted. Management keyset: 'ID: a7871ca8e714fc00ebc148f46fb4e7a8 (session), keys: 1, repair records: 0 (master keys:)'.
...
[07.11.2019 16:17:33] <139872119551744> vmb    | Session progress: 0%; processed: [0/2164625817600] read: [0], transfer: [0] speed: 0 bottleneck: 0/0/0/0
[07.11.2019 16:17:33] <139872119551744> vmb    |  pex: 6380/6380/0/6380/10217734281 / 0/4/38/8/9/96
[07.11.2019 16:17:33] <139872119551744> vmb    | Session progress: 0%; processed: [6380/2164625817600] read: [6380], transfer: [10217734281] speed: 7 bottleneck: 0/4/8/96
[07.11.2019 16:17:33] <139872119551744> vmb    |  pex: 26304/26304/0/26304/10217734281 / 73/4/38/8/9/96
[07.11.2019 16:17:33] <139872119551744> vmb    | Session progress: 1%; processed: [26304/2164625817600] read: [26304], transfer: [10217734281] speed: 30 bottleneck: 73/4/8/96
[07.11.2019 16:17:33] <139872119551744> vmb    |  pex: 57914/57914/0/57914/10217734281 / 55/4/38/8/9/21
...
[07.11.2019 16:17:33] <139872119551744> vmb    | Session progress: 98%; processed: [2501291/2164625817600] read: [2501291], transfer: [10219204228] speed: 2907 bottleneck: 31/4/8/21
[07.11.2019 16:17:33] <139872119551744> vmb    |  pex: 2525953/2525953/0/2525953/10219204228 / 31/4/38/8/9/21
[07.11.2019 16:17:33] <139872119551744> vmb    | Session progress: 99%; processed: [2525953/2164625817600] read: [2525953], transfer: [10219204228] speed: 2936 bottleneck: 31/4/8/21
[07.11.2019 16:17:33] <139872119551744> vmb    |  pex: 2544865/2544865/0/2544865/10219204228 / 31/4/38/8/9/21
[07.11.2019 16:17:33] <139872119551744> vmb    | Session progress: 100%; processed: [2544865/2164625817600] read: [2544865], transfer: [10219204228] speed: 2958 bottleneck: 31/4/8/21
[07.11.2019 16:18:13] <139873030973184> vmb    | Lease keeper: sending keep-alive request.
[07.11.2019 16:18:43] <139873047758592> vmb    |     [8439] out:
[07.11.2019 16:18:43] <139873047758592> lpbcore|     Indexing skipped for [/etc, /home, /config]
[07.11.2019 16:18:43] <139873047758592> lpbcore|     Stopping mount bind session [/tmp/veeam/snapmnt/{e5f115f0-b931-4888-81da-0ebab46073b6}/]
[07.11.2019 16:18:43] <139873047758592> lpbcore|       Umount directory [/tmp/veeam/snapmnt/{e5f115f0-b931-4888-81da-0ebab46073b6}/]
[07.11.2019 16:18:43] <139873047758592>        |       Creating child process: /bin/umount with arguments: /tmp/veeam/snapmnt/{e5f115f0-b931-4888-81da-0ebab46073b6}/
[07.11.2019 16:18:43] <139873047758592> lpbcore|       Remove directory [/tmp/veeam/snapmnt/{e5f115f0-b931-4888-81da-0ebab46073b6}]
[07.11.2019 16:18:43] <139873047758592> lpbcore|       Remove directory [/tmp/veeam/snapmnt]
[07.11.2019 16:18:43] <139873047758592> lpbcore|     Checking state for 0 mount session(s)
[07.11.2019 16:18:43] <139873047758592> lpbcore|     Checking state for 0 mount session(s) ok.
[07.11.2019 16:18:43] <139873047758592> lpbcore|     Stopping 0 mount session(s)
...
[07.11.2019 16:18:47] <139873047758592> lpbcore| JOB STATUS: SUCCESS.
Disclaimer: I have yet to test a restore.
