Standalone backup agents for Linux, Mac, AIX & Solaris workloads on-premises or in the public cloud
prohand
Novice
Posts: 9
Liked: never
Joined: Jul 15, 2016 5:24 pm
Full Name: Kevin
Contact:

Job starts and fails immediately

Post by prohand »

Hello,
I installed Veeam Agent for Linux 1.0.0.499 (64-bit) on my physical server (Proxmox on Debian 8.5).

The veeam package installed without problems, but I immediately get an error when launching the backup.

Code: Select all

[15.07.2016 19:18:14] <139642639738624> net    |   Accepted incoming vRPC connection from '127.0.0.1:34142'.
[15.07.2016 19:18:14] <139642372339456>        | Thread started. Thread id: 139642372339456, parent id: 139642639738624, role: Client processor thread (127.0.0.1:34142)
[15.07.2016 19:18:14] <139642372339456> net    | Client connected...
[15.07.2016 19:18:14] <139642372339456> net    | Received reconnect options: [disabled].
[15.07.2016 19:18:14] <139641852253952>        | Thread started. Thread id: 139641852253952, parent id: 139642372339456, role: peer 127.0.0.1:34142
[15.07.2016 19:18:14] <139642372339456>        | Thread finished. Role: 'Client processor thread (127.0.0.1:34142)'.
[15.07.2016 19:18:14] <139641852253952> lpbcore| Starting proxystub protocol dispatch loop.
[15.07.2016 19:18:14] <139642639738624> net    |   Accepted incoming vRPC connection from '127.0.0.1:34144'.
[15.07.2016 19:18:14] <139642355554048>        | Thread started. Thread id: 139642355554048, parent id: 139642639738624, role: Client processor thread (127.0.0.1:34144)
[15.07.2016 19:18:14] <139642355554048> net    | Client connected...
[15.07.2016 19:18:14] <139642355554048> net    | Received reconnect options: [disabled].
[15.07.2016 19:18:14] <139642355554048> lpbcore| Starting new LPB session.
[15.07.2016 19:18:14] <139642355554048>        |   Closing socket device.
[15.07.2016 19:18:14] <139642355554048>        |   Closing socket device.
[15.07.2016 19:18:14] <139642355554048>        |   Closing socket device.
[15.07.2016 19:18:14] <139642372339456>        | Thread started. Thread id: 139642372339456, parent id: 139642355554048, role: (async) LPB database session (127.0.0.1:34144)
[15.07.2016 19:18:14] <139642355554048>        |   Closing socket device.
[15.07.2016 19:18:14] <139642355554048>        |   Closing socket device.
[15.07.2016 19:18:14] <139642355554048>        |   Closing socket device.
[15.07.2016 19:18:14] <139642355554048>        |   Closing socket device.
[15.07.2016 19:18:14] <139642355554048>        |   Closing socket device.
[15.07.2016 19:18:14] <139642355554048>        |   Closing socket device.
[15.07.2016 19:18:14] <139642355554048>        |   Closing socket device.
[15.07.2016 19:18:14] <139642355554048> lpbcore| Starting new LPB session. ok.
[15.07.2016 19:18:14] <139642355554048>        | Thread finished. Role: 'Client processor thread (127.0.0.1:34144)'.
[15.07.2016 19:18:14] <139642639738624> net    |   Accepted incoming vRPC connection from '127.0.0.1:34146'.
[15.07.2016 19:18:14] <139641869039360>        | Thread started. Thread id: 139641869039360, parent id: 139642639738624, role: Client processor thread (127.0.0.1:34146)
[15.07.2016 19:18:14] <139641869039360> net    | Client connected...
[15.07.2016 19:18:14] <139641869039360> net    | Received reconnect options: [disabled].
[15.07.2016 19:18:14] <139641869039360> lpbcore| Starting new LPB session.
[15.07.2016 19:18:14] <139641869039360> lpbcore| Starting new LPB session. ok.
[15.07.2016 19:18:14] <139641869039360>        | Thread finished. Role: 'Client processor thread (127.0.0.1:34146)'.
[15.07.2016 19:18:14] <139642355554048>        | Thread started. Thread id: 139642355554048, parent id: 139641869039360, role: (async) LPB database session (127.0.0.1:34146)
[15.07.2016 19:18:14] <139642639738624> net    |   Accepted incoming vRPC connection from '127.0.0.1:34148'.
[15.07.2016 19:18:14] <139641860646656>        | Thread started. Thread id: 139641860646656, parent id: 139642639738624, role: Client processor thread (127.0.0.1:34148)
[15.07.2016 19:18:14] <139641860646656> net    | Client connected...
[15.07.2016 19:18:14] <139641860646656> net    | Received reconnect options: [disabled].
[15.07.2016 19:18:14] <139641860646656> lpbcore| Starting new LPB session.
[15.07.2016 19:18:14] <139641860646656> lpbcore|   LpbCfgSession: Disconnecting.
[15.07.2016 19:18:14] <139641860646656>        |     Closing socket device.
[15.07.2016 19:18:14] <139641860646656> lpbcore|   LpbCfgSession: Disconnecting. ok.
[15.07.2016 19:18:14] <139641860646656>        |   Closing socket device.
[15.07.2016 19:18:14] <139641860646656> lpbcore|   LpbCfgSession: Disconnecting.
[15.07.2016 19:18:14] <139641860646656>        |     Closing socket device.
[15.07.2016 19:18:14] <139641860646656> lpbcore|   LpbCfgSession: Disconnecting. ok.
[15.07.2016 19:18:14] <139641860646656>        |   Closing socket device.
[15.07.2016 19:18:14] <139641869039360>        | Thread started. Thread id: 139641869039360, parent id: 139641860646656, role: (async) LPBConfig session (127.0.0.1:34148)
[15.07.2016 19:18:14] <139641860646656> lpbcore| Starting new LPB session. ok.
[15.07.2016 19:18:14] <139641869039360> lpbcore| LpbCfgSession: Tcp loop.
[15.07.2016 19:18:14] <139641860646656>        | Thread finished. Role: 'Client processor thread (127.0.0.1:34148)'.
[15.07.2016 19:18:17] <139641852253952> lpbcore|   Job execution service: starting worker (manager) process.
[15.07.2016 19:18:17] <139641852253952> lpbcore|     Starting manager process. Session UUID: [{c7e75dea-177c-428e-a63e-abe846eddb80}]. Logs path: [/var/log/veeam/Backup/Backup-Proxmox-Complete/Session_{c7e75dea-177c-428e-a63e-abe846eddb80}/Job.log]
[15.07.2016 19:18:17] <139641852253952> lpbcore|     JobMan has started. PID: [30383].
[15.07.2016 19:18:17] <139641852253952> lpbcore|     Manager started with PID [30383]. Waiting connection.
[15.07.2016 19:18:17] <139641860646656>        | Thread started. Thread id: 139641860646656, parent id: 139641852253952, role: Manager process [30383] shutdown handler.
[15.07.2016 19:18:17] <139642639738624> net    |   Accepted incoming vRPC connection from '127.0.0.1:34150'.
[15.07.2016 19:18:17] <139641843861248>        | Thread started. Thread id: 139641843861248, parent id: 139642639738624, role: Client processor thread (127.0.0.1:34150)
[15.07.2016 19:18:17] <139641843861248> net    | Client connected...
[15.07.2016 19:18:17] <139641843861248> net    | Received reconnect options: [disabled].
[15.07.2016 19:18:17] <139642397517568>        | Thread started. Thread id: 139642397517568, parent id: 139641843861248, role: peer 127.0.0.1:34150
[15.07.2016 19:18:17] <139642397517568> lpbcore| Starting proxystub protocol dispatch loop.
[15.07.2016 19:18:17] <139641843861248>        | Thread finished. Role: 'Client processor thread (127.0.0.1:34150)'.
[15.07.2016 19:18:17] <139642639738624> net    |   Accepted incoming vRPC connection from '127.0.0.1:34152'.
[15.07.2016 19:18:17] <139642363946752>        | Thread started. Thread id: 139642363946752, parent id: 139642639738624, role: Client processor thread (127.0.0.1:34152)
[15.07.2016 19:18:17] <139642363946752> net    | Client connected...
[15.07.2016 19:18:17] <139642363946752> net    | Received reconnect options: [disabled].
[15.07.2016 19:18:17] <139642363946752> lpbcore| Starting new LPB session.
[15.07.2016 19:18:17] <139642363946752> lpbcore| Starting new LPB session. ok.
[15.07.2016 19:18:17] <139641843861248>        | Thread started. Thread id: 139641843861248, parent id: 139642363946752, role: (async) LPB database session (127.0.0.1:34152)
[15.07.2016 19:18:17] <139642363946752>        | Thread finished. Role: 'Client processor thread (127.0.0.1:34152)'.
[15.07.2016 19:18:17] <139642639738624> net    |   Accepted incoming vRPC connection from '127.0.0.1:34154'.
[15.07.2016 19:18:17] <139642631345920>        | Thread started. Thread id: 139642631345920, parent id: 139642639738624, role: Client processor thread (127.0.0.1:34154)
[15.07.2016 19:18:17] <139642631345920> net    | Client connected...
[15.07.2016 19:18:17] <139642631345920> net    | Received reconnect options: [disabled].
[15.07.2016 19:18:17] <139642631345920>        | Thread finished. Role: 'Client processor thread (127.0.0.1:34154)'.
[15.07.2016 19:18:17] <139641852253952> lpbcore|   Sending command [StartBackupJob] to manager with PID [30383]. Job ID: [30383]
[15.07.2016 19:18:17] <139642639738624> net    |   Accepted incoming vRPC connection from '127.0.0.1:34156'.
[15.07.2016 19:18:17] <139642363946752>        | Thread started. Thread id: 139642363946752, parent id: 139642639738624, role: Client processor thread (127.0.0.1:34156)
[15.07.2016 19:18:17] <139642363946752> net    | Client connected...
[15.07.2016 19:18:17] <139642363946752> net    | Received reconnect options: [disabled].
[15.07.2016 19:18:17] <139642363946752> lpbcore| Starting new LPB session.
[15.07.2016 19:18:17] <139642363946752> lpbcore| Starting new LPB session. ok.
[15.07.2016 19:18:17] <139642631345920>        | Thread started. Thread id: 139642631345920, parent id: 139642363946752, role: (async) LPB database session (127.0.0.1:34156)
[15.07.2016 19:18:17] <139642363946752>        | Thread finished. Role: 'Client processor thread (127.0.0.1:34156)'.
[15.07.2016 19:18:17] <139642639738624> net    |   Accepted incoming vRPC connection from '127.0.0.1:34158'.
[15.07.2016 19:18:17] <139642622953216>        | Thread started. Thread id: 139642622953216, parent id: 139642639738624, role: Client processor thread (127.0.0.1:34158)
[15.07.2016 19:18:17] <139642622953216> net    | Client connected...
[15.07.2016 19:18:17] <139642622953216> net    | Received reconnect options: [disabled].
[15.07.2016 19:18:17] <139642622953216> lpbcore| Starting new LPB session.
[15.07.2016 19:18:17] <139642622953216> lpbcore| Starting new LPB session. ok.
[15.07.2016 19:18:17] <139642622953216>        | Thread finished. Role: 'Client processor thread (127.0.0.1:34158)'.
[15.07.2016 19:18:17] <139642363946752>        | Thread started. Thread id: 139642363946752, parent id: 139642622953216, role: (async) LPB database session (127.0.0.1:34158)
[15.07.2016 19:18:17] <139642397517568> lpbcore| WARN|Mount point [/proc/sys/fs/binfmt_misc] of device [binfmt_misc] is already assigned to the device [systemd-1].
[15.07.2016 19:18:17] <139642397517568> lpbcore|   Executing custom script: [mount]. Arguments: [-t cifs -o username=backupproxmox,password=*,rw,soft //192.168.0.250/VeeamBackup /tmp/veeam/192.168.0.250VeeamBackup]
[15.07.2016 19:18:17] <139641835468544>        | Thread started. Thread id: 139641835468544, parent id: 139642397517568, role: script error accum
[15.07.2016 19:18:17] <139642405910272>        | Thread started. Thread id: 139642405910272, parent id: 139642397517568, role: script output redirector
[15.07.2016 19:18:18] <139641835468544>        | Thread finished. Role: 'script error accum'.
[15.07.2016 19:18:18] <139642405910272>        | Thread finished. Role: 'script output redirector'.
[15.07.2016 19:18:18] <139642397517568> lpbcore|   Executing custom script: [mount]. Arguments: [-t cifs -o username=backupproxmox,password=*,rw,soft //192.168.0.250/VeeamBackup /tmp/veeam/192.168.0.250VeeamBackup] ok.
[15.07.2016 19:18:18] <139642397517568> lpbcore|   Snapshot service: creating snapshot.
[15.07.2016 19:18:18] <139642397517568> lpbcore| WARN|Mount point [/proc/sys/fs/binfmt_misc] of device [binfmt_misc] is already assigned to the device [systemd-1].
[15.07.2016 19:18:18] <139642397517568> lpbcore| WARN|Multiple mountpoints for device [0:33]. Mount point: [/run/lxcfs/controllers/pids].
[15.07.2016 19:18:18] <139642397517568> lpbcore| WARN|Multiple mountpoints for device [0:32]. Mount point: [/run/lxcfs/controllers/hugetlb].
[15.07.2016 19:18:18] <139642397517568> lpbcore| WARN|Multiple mountpoints for device [0:31]. Mount point: [/run/lxcfs/controllers/perf_event].
[15.07.2016 19:18:18] <139642397517568> lpbcore| WARN|Multiple mountpoints for device [0:30]. Mount point: [/run/lxcfs/controllers/net_cls,net_prio].
[15.07.2016 19:18:18] <139642397517568> lpbcore| WARN|Multiple mountpoints for device [0:29]. Mount point: [/run/lxcfs/controllers/freezer].
[15.07.2016 19:18:18] <139642397517568> lpbcore| WARN|Multiple mountpoints for device [0:28]. Mount point: [/run/lxcfs/controllers/devices].
[15.07.2016 19:18:18] <139642397517568> lpbcore| WARN|Multiple mountpoints for device [0:27]. Mount point: [/run/lxcfs/controllers/memory].
[15.07.2016 19:18:18] <139642397517568> lpbcore| WARN|Multiple mountpoints for device [0:26]. Mount point: [/run/lxcfs/controllers/blkio].
[15.07.2016 19:18:18] <139642397517568> lpbcore| WARN|Multiple mountpoints for device [0:25]. Mount point: [/run/lxcfs/controllers/cpu,cpuacct].
[15.07.2016 19:18:18] <139642397517568> lpbcore| WARN|Multiple mountpoints for device [0:24]. Mount point: [/run/lxcfs/controllers/cpuset].
[15.07.2016 19:18:18] <139642397517568> lpbcore| WARN|Multiple mountpoints for device [0:22]. Mount point: [/run/lxcfs/controllers/name=systemd].
[15.07.2016 19:18:18] <139642397517568> lpbcore| WARN|Multiple mountpoints for device [0:45]. Mount point: [/mnt/pve/Backup_Web-Mysql-FTP].
[15.07.2016 19:18:18] <139642397517568> lpbcore| WARN|Multiple mountpoints for device [0:45]. Mount point: [/mnt/pve/Backup_Zimbra].
[15.07.2016 19:18:18] <139642397517568> lpbcore| WARN|Multiple mountpoints for device [0:45]. Mount point: [/mnt/pve/Backup_Sophos].
[15.07.2016 19:18:18] <139642397517568> lpbcore|   GPT type: [{21686148-6449-6e6f-744e-656564454649}].
[15.07.2016 19:18:18] <139642397517568> lpbcore|   GPT type: [{c12a7328-f81f-11d2-ba4b-00a0c93ec93b}].
[15.07.2016 19:18:18] <139642397517568> lpbcore|   GPT type: [{e6d6d379-f507-44c2-a23c-238f2a3df928}].
[15.07.2016 19:18:18] <139642397517568> lpbcore|   Enumerating LVM volume groups...
[15.07.2016 19:18:18] <139642397517568> lpbcore|     LVM volume group: [pve].
[15.07.2016 19:18:18] <139642397517568> lpbcore|     Enumerating logical volumes for LVM volume group: [pve].
[15.07.2016 19:18:18] <139642397517568> lpbcore|   [1] LVM volume groups were detected.
[15.07.2016 19:18:18] <139642397517568> vsnap  |   Checking whether veeamsnap kernel module is loaded.
[15.07.2016 19:18:18] <139642397517568> vsnap  |   Module is not loaded.
[15.07.2016 19:18:18] <139642397517568> vsnap  |   Loading kernel module veeamsnap with parameters [deferiocache=0 debuglogging=0].
[15.07.2016 19:18:18] <139642397517568>        |     Argument [modprobe].
[15.07.2016 19:18:18] <139642397517568>        |     Argument [veeamsnap].
[15.07.2016 19:18:18] <139642397517568>        |     Argument [deferiocache=0].
[15.07.2016 19:18:18] <139642397517568>        |     Argument [debuglogging=0].
[15.07.2016 19:18:18] <139642397517568> vsnap  |   Loading kernel module veeamsnap with parameters [deferiocache=0 debuglogging=0]. Failed.
[15.07.2016 19:18:18] <139642397517568> vsnap  |   Opening VeeamSnap control.
[15.07.2016 19:18:18] <139642397517568> vsnap  |   Closing VeeamSnap control.
[15.07.2016 19:18:18] <139642397517568> lpbcore| ERR |Child execution has failed. Exit code: [1].
[15.07.2016 19:18:18] <139642397517568> lpbcore| >>  |--tr:Failed to execute [modprobe].
[15.07.2016 19:18:18] <139642397517568> lpbcore| >>  |Failed to load module [veeamsnap] with parameters [deferiocache=0 debuglogging=0].
[15.07.2016 19:18:18] <139642397517568> lpbcore| >>  |--tr:Unable to create snapshot for session [{c7e75dea-177c-428e-a63e-abe846eddb80}].
[15.07.2016 19:18:18] <139642397517568> lpbcore| >>  |--tr:Failed to execute method [0] for class [N10lpbcorelib11interaction9proxystub21CResourcesServiceStubE].
[15.07.2016 19:18:18] <139642397517568> lpbcore| >>  |An exception was thrown from thread [125822720].
[15.07.2016 19:18:20] <139642639738624> net    |   Accepted incoming vRPC connection from '127.0.0.1:34162'.
[15.07.2016 19:18:20] <139642622953216>        | Thread started. Thread id: 139642622953216, parent id: 139642639738624, role: Client processor thread (127.0.0.1:34162)
[15.07.2016 19:18:20] <139642622953216> net    | Client connected...
[15.07.2016 19:18:20] <139642622953216> net    | Received reconnect options: [disabled].
[15.07.2016 19:18:20] <139642622953216> lpbcore| Starting new LPB session.
[15.07.2016 19:18:20] <139642622953216> lpbcore| Starting new LPB session. ok.
[15.07.2016 19:18:20] <139642622953216>        | Thread finished. Role: 'Client processor thread (127.0.0.1:34162)'.
[15.07.2016 19:18:20] <139642405910272>        | Thread started. Thread id: 139642405910272, parent id: 139642622953216, role: (async) LPBConfig session (127.0.0.1:34162)
[15.07.2016 19:18:20] <139642405910272> lpbcore| LpbCfgSession: Tcp loop.
[15.07.2016 19:18:20] <139642405910272> lpbcore|   LpbCfgSession: Session is finished.
[15.07.2016 19:18:20] <139642405910272> lpbcore|     Session ID: [{c7e75dea-177c-428e-a63e-abe846eddb80}].
[15.07.2016 19:18:20] <139642405910272> lpbcore|     LpbCfgSession: Finding Manager [{c7e75dea-177c-428e-a63e-abe846eddb80}].
[15.07.2016 19:18:20] <139642405910272> lpbcore|       Manager [{c7e75dea-177c-428e-a63e-abe846eddb80}] is found and active.
[15.07.2016 19:18:20] <139642405910272> lpbcore|     LpbCfgSession: Finding Manager [{c7e75dea-177c-428e-a63e-abe846eddb80}]. ok.
[15.07.2016 19:18:20] <139642405910272> lpbcore|   LpbCfgSession: Session is finished. ok.
[15.07.2016 19:18:20] <139642405910272> lpbcore|   LpbCfgSession: Disconnecting.
[15.07.2016 19:18:20] <139642405910272>        |     Closing socket device.
[15.07.2016 19:18:20] <139642405910272> lpbcore|   LpbCfgSession: Disconnecting. ok.
[15.07.2016 19:18:20] <139642405910272> lpbcore| LpbCfgSession: Tcp loop. ok.
[15.07.2016 19:18:20] <139642405910272>        | Thread finished. Role: '(async) LPBConfig session (127.0.0.1:34162)'.
[15.07.2016 19:18:20] <139642631345920>        | Closing socket device.
[15.07.2016 19:18:20] <139642363946752>        | Closing socket device.
[15.07.2016 19:18:20] <139641843861248>        | Closing socket device.
[15.07.2016 19:18:20] <139642631345920>        | Thread finished. Role: '(async) LPB database session (127.0.0.1:34156)'.
[15.07.2016 19:18:20] <139642363946752>        | Thread finished. Role: '(async) LPB database session (127.0.0.1:34158)'.
[15.07.2016 19:18:20] <139641843861248>        | Thread finished. Role: '(async) LPB database session (127.0.0.1:34152)'.
[15.07.2016 19:18:20] <139642397517568> lpbcore| Starting proxystub protocol dispatch loop. ok.
[15.07.2016 19:18:20] <139642397517568>        | Closing socket device.
[15.07.2016 19:18:20] <139642397517568>        | Thread finished. Role: 'peer 127.0.0.1:34150'.
[15.07.2016 19:18:20] <139641860646656> lpbcore| Executing custom script: [umount]. Arguments: [-l /tmp/veeam/192.168.0.250VeeamBackup]
[15.07.2016 19:18:20] <139641835468544>        | Thread started. Thread id: 139641835468544, parent id: 139641860646656, role: script output redirector
[15.07.2016 19:18:20] <139641827075840>        | Thread started. Thread id: 139641827075840, parent id: 139641860646656, role: script error accum
[15.07.2016 19:18:20] <139641835468544>        | Thread finished. Role: 'script output redirector'.
[15.07.2016 19:18:20] <139641827075840>        | Thread finished. Role: 'script error accum'.
[15.07.2016 19:18:20] <139641860646656> lpbcore| Executing custom script: [umount]. Arguments: [-l /tmp/veeam/192.168.0.250VeeamBackup] ok.
[15.07.2016 19:18:20] <139641860646656> lpbcore| Manager process [30383] has been shutdown.
[15.07.2016 19:18:20] <139641860646656>        | Thread finished. Role: 'Manager process [30383] shutdown handler.'.
[15.07.2016 19:18:29] <139641852253952> lpbcore| Starting proxystub protocol dispatch loop. ok.
[15.07.2016 19:18:29] <139641852253952>        | Closing socket device.
[15.07.2016 19:18:29] <139641852253952>        | Thread finished. Role: 'peer 127.0.0.1:34142'.
[15.07.2016 19:18:29] <139642372339456>        | Closing socket device.
[15.07.2016 19:18:29] <139642372339456>        | Thread finished. Role: '(async) LPB database session (127.0.0.1:34144)'.
[15.07.2016 19:18:29] <139641869039360> lpbcore| LpbCfgSession: Tcp loop. Failed.
[15.07.2016 19:18:29] <139641869039360> lpbcore| ERR |LpbCfgSession failed.
[15.07.2016 19:18:29] <139641869039360> lpbcore| >>  |read: End of file
[15.07.2016 19:18:29] <139641869039360> lpbcore| >>  |--tr:Cannot read data from the socket. Requested data size: [4].
[15.07.2016 19:18:29] <139641869039360> lpbcore| >>  |An exception was thrown from thread [-402655488].
[15.07.2016 19:18:29] <139641869039360>        | Thread finished. Role: '(async) LPBConfig session (127.0.0.1:34148)'.
[15.07.2016 19:18:29] <139642355554048> lpbcore| ERR |LpbDbSession failed.
[15.07.2016 19:18:29] <139642355554048> lpbcore| >>  |read: End of file
[15.07.2016 19:18:29] <139642355554048> lpbcore| >>  |--tr:Cannot read data from the socket. Requested data size: [4].
[15.07.2016 19:18:29] <139642355554048> lpbcore| >>  |An exception was thrown from thread [83859200].
[15.07.2016 19:18:29] <139642355554048>        | Thread finished. Role: '(async) LPB database session (127.0.0.1:34146)'.
[15.07.2016 19:18:41] <139642389124864> lpbcore|   Job manager process with PID [30383] is terminating.
[15.07.2016 19:18:41] <139642380732160> lpbcore|   Terminating job manager process with PID [30383].
[15.07.2016 19:18:41] <139642380732160>        |   Closing socket device.
Thank you for your help
PTide
Product Manager
Posts: 6408
Liked: 724 times
Joined: May 19, 2015 1:46 pm
Contact:

Re: Job starts and fails immediately

Post by PTide »

Hi,

Please try to remove and reinstall the veeam and veeamsnap packages and run the backup job again. By the way, what are the job settings? Do you have any containers on your system? Please archive and share the /var/log/veeam directory so we can pass it to our QA team.
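For example, a minimal sketch of those steps (the package and .deb file names are taken from this thread; adjust the paths to wherever you downloaded the agent, and note the archive name is just an example):

Code: Select all

# Remove both packages, then reinstall from the downloaded .deb files
dpkg -r veeam veeamsnap
dpkg -i veeamsnap_1.0.0.499_all.deb veeam_1.0.0.499_amd64.deb

# Archive the agent logs so they can be shared
tar -czf veeam-logs.tar.gz /var/log/veeam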

Thank you.
prohand
Novice
Posts: 9
Liked: never
Joined: Jul 15, 2016 5:24 pm
Full Name: Kevin
Contact:

Re: Job starts and fails immediately

Post by prohand »

Hi,

Deleting and reinstalling did not solve the problem.

Job Settings:

Backup Mode: Entire Machine
Destination: Shared Folder
Restore Points: 7

Logs:

https://1drv.ms/u/s!AiFY3S6HqjUQauKjmOVQjUOXP8Q

Thanks
nielsengelen
Product Manager
Posts: 5619
Liked: 1177 times
Joined: Jul 15, 2013 11:09 am
Full Name: Niels Engelen
Contact:

Re: Job starts and fails immediately

Post by nielsengelen »

It looks like modprobe is failing to load the kernel module. Which user are you logged in as on the system when trying to perform the backup? Could you try the following:
lsmod | grep veeam

If this returns anything, could you try:
modprobe -r veeamsnap

And then open up 'veeam' and start the job again.

Also, please post the output of 'lsblk' and 'dmesg -T' if possible.
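A quick way to check whether the veeamsnap module was ever built and installed for the running kernel (a sketch; it relies on the dkms tool, which the veeamsnap package uses):

Code: Select all

# Show DKMS build/install status for veeamsnap across installed kernels
dkms status veeamsnap

# Look for a veeamsnap module under the currently running kernel
ls /lib/modules/$(uname -r)/kernel/drivers/block/ | grep veeamsnap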
Personal blog: https://foonet.be
GitHub: https://github.com/nielsengelen
prohand
Novice
Posts: 9
Liked: never
Joined: Jul 15, 2016 5:24 pm
Full Name: Kevin
Contact:

Re: Job starts and fails immediately

Post by prohand »

The result:

Code: Select all

root@proxmox:~#
root@proxmox:~# lsmod | grep veeam
root@proxmox:~# modprobe -r veeamsnap
modprobe: FATAL: Module veeamsnap not found.
root@proxmox:~#
And the output of "dmesg -T":

Code: Select all

[Fri Jul  8 14:40:41 2016] r8169 0000:02:00.0 eth1: link up
[Fri Jul  8 14:40:41 2016] vmbr1: port 1(eth1) entered forwarding state
[Fri Jul  8 14:40:41 2016] vmbr1: port 1(eth1) entered forwarding state
[Fri Jul 15 18:50:50 2016] systemd-sysv-generator[26323]: Ignoring creation of an alias umountiscsi.service for itself
[Fri Jul 15 18:50:50 2016] systemd-sysv-generator[26354]: Ignoring creation of an alias umountiscsi.service for itself
[Fri Jul 15 18:50:50 2016] systemd-sysv-generator[26378]: Ignoring creation of an alias umountiscsi.service for itself
[Fri Jul 15 18:53:08 2016] FS-Cache: Netfs 'cifs' registered for caching
[Fri Jul 15 18:53:08 2016] Key type cifs.spnego registered
[Fri Jul 15 18:53:08 2016] Key type cifs.idmap registered
[Fri Jul 15 18:53:09 2016] Status code returned 0xc000006d NT_STATUS_LOGON_FAILURE
[Fri Jul 15 18:53:09 2016] CIFS VFS: Send error in SessSetup = -13
[Fri Jul 15 18:53:09 2016] CIFS VFS: cifs_mount failed w/return code = -13
[Sun Jul 17 20:12:59 2016] systemd-sysv-generator[1490]: Ignoring creation of an alias umountiscsi.service for itself
[Sun Jul 17 20:13:00 2016] systemd-sysv-generator[1526]: Ignoring creation of an alias umountiscsi.service for itself
[Sun Jul 17 20:13:00 2016] systemd-sysv-generator[1541]: Ignoring creation of an alias umountiscsi.service for itself
[Sun Jul 17 20:13:32 2016] systemd-sysv-generator[1846]: Ignoring creation of an alias umountiscsi.service for itself
[Sun Jul 17 20:13:32 2016] systemd-sysv-generator[1877]: Ignoring creation of an alias umountiscsi.service for itself
[Sun Jul 17 20:13:32 2016] systemd-sysv-generator[1901]: Ignoring creation of an alias umountiscsi.service for itself
nielsengelen
Product Manager
Posts: 5619
Liked: 1177 times
Joined: Jul 15, 2013 11:09 am
Full Name: Niels Engelen
Contact:

Re: Job starts and fails immediately

Post by nielsengelen »

It looks like the installation wasn't successful. Could you upload the veeamsnap .deb file and try to install it with dpkg -i veeamsnap*?

Please let us know the output, as the module currently seems to be missing.
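For example (a sketch, assuming the .deb files are still in the extracted agent directory; note that any stray non-file argument on the dpkg command line will produce a "cannot access archive" error):

Code: Select all

cd ~/VeeamAgentLinux_1.0.0.499BETA/x64/deb
# Install only the actual .deb files
dpkg -i veeamsnap_1.0.0.499_all.deb veeam_1.0.0.499_amd64.deb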
Personal blog: https://foonet.be
GitHub: https://github.com/nielsengelen
prohand
Novice
Posts: 9
Liked: never
Joined: Jul 15, 2016 5:24 pm
Full Name: Kevin
Contact:

Re: Job starts and fails immediately

Post by prohand »

Error during installation:

Code: Select all

root@proxmox:~/VeeamAgentLinux_1.0.0.499BETA/x64/deb# dpkg -i * veeamsnap
(Reading database ... 62252 files and directories currently installed.)
Preparing to unpack veeam_1.0.0.499_amd64.deb ...
Unpacking veeam (1.0.0.499) over (1.0.0.499) ...
Preparing to unpack veeamsnap_1.0.0.499_all.deb ...

------------------------------
Deleting module version: 1.0.0.499
completely from the DKMS tree.
------------------------------
Done.
Unpacking veeamsnap (1.0.0.499) over (1.0.0.499) ...
dpkg: error processing archive veeamsnap (--install):
 cannot access archive: No such file or directory
Setting up veeamsnap (1.0.0.499) ...
Loading new veeamsnap-1.0.0.499 DKMS files...
Building for 4.4.10-1-pve and 4.4.13-1-pve
Module build for the currently running kernel was skipped since the
kernel source for this kernel does not seem to be installed.
Module build for the currently running kernel was skipped since the
kernel source for this kernel does not seem to be installed.
Setting up veeam (1.0.0.499) ...
Processing triggers for systemd (215-17+deb8u4) ...
Errors were encountered while processing:
 veeamsnap
root@proxmo
nielsengelen
Product Manager
Posts: 5619
Liked: 1177 times
Joined: Jul 15, 2013 11:09 am
Full Name: Niels Engelen
Contact:

Re: Job starts and fails immediately

Post by nielsengelen »

I notice your kernel headers are missing, which makes the module build fail.

Could you perform the following:
apt-get update
apt-get install linux-headers-`uname -r`

And afterwards try the install again:
cd ~/VeeamAgentLinux_1.0.0.499BETA/x64/deb
dpkg -i *

Let me know the output so we can see if everything is OK.
Personal blog: https://foonet.be
GitHub: https://github.com/nielsengelen
prohand
Novice
Posts: 9
Liked: never
Joined: Jul 15, 2016 5:24 pm
Full Name: Kevin
Contact:

Re: Job starts and fails immediately

Post by prohand »

It's a Proxmox installation.

Code: Select all

root@proxmox:~/VeeamAgentLinux_1.0.0.499BETA/x64/deb# apt-get install linux-headers-`uname -r`
Reading package lists... Done
Building dependency tree
Reading state information... Done
E: Unable to locate package linux-headers-4.4.10-1-pve
E: Couldn't find any package by regex 'linux-headers-4.4.10-1-pve'
root@proxmox:~/VeeamAgentLinux_1.0.0.499BETA/x64/deb#
nielsengelen
Product Manager
Posts: 5619
Liked: 1177 times
Joined: Jul 15, 2013 11:09 am
Full Name: Niels Engelen
Contact:

Re: Job starts and fails immediately

Post by nielsengelen »

How about:
apt-get install pve-headers-4.4.10-1-pve

Or do a search and get that package:
apt-cache search pve-headers
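A generic form that picks the headers matching the running kernel (a sketch; it assumes the Proxmox repository carries a pve-headers package for that exact kernel version):

Code: Select all

apt-get install pve-headers-$(uname -r)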

I currently don't have a Proxmox installation, so I don't know the exact package name; however, it shouldn't be hard to find.

To be clear, this can only be done on the host, not within a VM.
Personal blog: https://foonet.be
GitHub: https://github.com/nielsengelen
prohand
Novice
Posts: 9
Liked: never
Joined: Jul 15, 2016 5:24 pm
Full Name: Kevin
Contact:

Re: Job starts and fails immediately

Post by prohand »

Code: Select all

root@proxmox:~/VeeamAgentLinux_1.0.0.499BETA/x64/deb# apt-cache search pve-headers
pve-headers-4.4.8-1-pve - The Proxmox PVE Kernel Headers
pve-headers-4.2.3-2-pve - The Proxmox PVE Kernel Headers
pve-headers-4.2.6-1-pve - The Proxmox PVE Kernel Headers
pve-headers - Latest Proxmox VE Kernel Headers
pve-headers-4.2.3-1-pve - The Proxmox PVE Kernel Headers
pve-headers-4.4.10-1-pve - The Proxmox PVE Kernel Headers
pve-headers-4.2.8-1-pve - The Proxmox PVE Kernel Headers
pve-headers-4.4.13-2-pve - The Proxmox PVE Kernel Headers
pve-headers-4.4.6-1-pve - The Proxmox PVE Kernel Headers
pve-headers-4.4.13-1-pve - The Proxmox PVE Kernel Headers
pve-headers-4.2.2-1-pve - The Proxmox PVE Kernel Headers
root@proxmox:~/VeeamAgentLinux_1.0.0.499BETA/x64/deb# apt-get install pve-headers-4.4.10-1-pve
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following NEW packages will be installed:
  pve-headers-4.4.10-1-pve
0 upgraded, 1 newly installed, 0 to remove and 10 not upgraded.
Need to get 7,249 kB of archives.
After this operation, 0 B of additional disk space will be used.
Get:1 http://download.proxmox.com/debian/ jessie/pve-no-subscription pve-headers-4.4.10-1-pve amd64 4.4.10-54 [7,249 kB]
Fetched 7,249 kB in 0s (11.7 MB/s)
Selecting previously unselected package pve-headers-4.4.10-1-pve.
(Reading database ... 62252 files and directories currently installed.)
Preparing to unpack .../pve-headers-4.4.10-1-pve_4.4.10-54_amd64.deb ...
Unpacking pve-headers-4.4.10-1-pve (4.4.10-54) ...
Setting up pve-headers-4.4.10-1-pve (4.4.10-54) ...
root@proxmox:~/VeeamAgentLinux_1.0.0.499BETA/x64/deb# dpkg -i * veeamsnap
(Reading database ... 82904 files and directories currently installed.)
Preparing to unpack veeam_1.0.0.499_amd64.deb ...
Unpacking veeam (1.0.0.499) over (1.0.0.499) ...
Preparing to unpack veeamsnap_1.0.0.499_all.deb ...

------------------------------
Deleting module version: 1.0.0.499
completely from the DKMS tree.
------------------------------
Done.
Unpacking veeamsnap (1.0.0.499) over (1.0.0.499) ...
dpkg: error processing archive veeamsnap (--install):
 cannot access archive: No such file or directory
Setting up veeamsnap (1.0.0.499) ...
Loading new veeamsnap-1.0.0.499 DKMS files...
Building for 4.4.10-1-pve and 4.4.13-1-pve
Building initial module for 4.4.10-1-pve
Done.

veeamsnap:
Running module version sanity check.
 - Original module
   - No original module exists within this kernel
 - Installation
   - Installing to /lib/modules/4.4.10-1-pve/kernel/drivers/block/

depmod....

DKMS: install completed.
Module build for the currently running kernel was skipped since the
kernel source for this kernel does not seem to be installed.
Setting up veeam (1.0.0.499) ...
Processing triggers for systemd (215-17+deb8u4) ...
Errors were encountered while processing:
 veeamsnap
root@proxmox:~/VeeamAgentLinux_1.0.0.499BETA/x64/deb#
nielsengelen
Product Manager
Posts: 5619
Liked: 1177 times
Joined: Jul 15, 2013 11:09 am
Full Name: Niels Engelen
Contact:

Re: Job starts and fails immediately

Post by nielsengelen »

I will set up a Proxmox machine and do some testing. I will get back with more info ASAP.
Personal blog: https://foonet.be
GitHub: https://github.com/nielsengelen
prohand
Novice
Posts: 9
Liked: never
Joined: Jul 15, 2016 5:24 pm
Full Name: Kevin
Contact:

Re: Job starts and fails immediately

Post by prohand »

Thank you :)
nielsengelen
Product Manager
Posts: 5619
Liked: 1177 times
Joined: Jul 15, 2013 11:09 am
Full Name: Niels Engelen
Contact:

Re: Job starts and fails immediately

Post by nielsengelen »

So I installed Debian 8.5 (64-bit) and then installed Proxmox on it (the latest release: Proxmox Virtual Environment 4.2-17/e1400248).
To make sure I didn't have any leftovers from the old Debian kernel, I removed those as well, and after a reboot the Proxmox system was up and running.

I then installed the Linux headers (or, in this case, the pve headers):
apt-get install pve-headers-`uname -r`
apt-get install pve-headers


Afterwards I installed veeamsnap & veeam and it worked.

However, what might be the issue for you is the missing link to the pve headers on your system. I don't know whether this is a live system or a test system, so here are two options:
ONLY DO THIS IF IT IS A TEST MACHINE: reboot the server and see if it is possible to load the veeamsnap module.
modprobe veeamsnap
lsmod | grep veeamsnap

If it is a production machine:
Please go to the folder /lib/modules/4.4.10-1-pve, run ls -al, and paste the output.
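For reference, a sketch of what to check there (the drivers/block path is where DKMS installs the module, as shown in the earlier dpkg output):

Code: Select all

ls -al /lib/modules/4.4.10-1-pve/
ls -al /lib/modules/4.4.10-1-pve/kernel/drivers/block/ | grep veeamsnap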
Personal blog: https://foonet.be
GitHub: https://github.com/nielsengelen
prohand
Novice
Posts: 9
Liked: never
Joined: Jul 15, 2016 5:24 pm
Full Name: Kevin
Contact:

Re: Job starts and fails immediately

Post by prohand »

Hello,

The backup ran tonight without any problems.
I did:
apt-get update && apt-get dist-upgrade && apt-get upgrade
and then rebooted my server.
I restarted the job and it completed successfully.

Thanks
nielsengelen
Product Manager
Posts: 5619
Liked: 1177 times
Joined: Jul 15, 2013 11:09 am
Full Name: Niels Engelen
Contact:

Re: Job starts and fails immediately

Post by nielsengelen »

After the reboot, the kernel headers that were needed were in place and the module could load. Great to hear it works.
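For anyone hitting the same issue, a quick post-reboot sanity check (a sketch) is to confirm the running kernel and that the module now loads:

Code: Select all

uname -r
modprobe veeamsnap
lsmod | grep veeamsnap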
Personal blog: https://foonet.be
GitHub: https://github.com/nielsengelen