Host-based backup of KVM-based VMs (Red Hat Virtualization, Oracle Linux Virtualization Manager and Proxmox VE)
Adrian-Cox
Service Provider
Posts: 26
Liked: 5 times
Joined: Aug 22, 2023 11:30 am
Full Name: Adrian Cox

Proxmox restores failing

Post by Adrian-Cox »

Hi,

I'm trying to restore some VMs to a new Proxmox 8.3 deployment. They were backed up with VBR 12.3 from a vSphere 8 hypervisor, and I'm restoring them to PVE with the same version of VBR. I'm hitting errors on the entire VM restores to PVE, though. There's no pattern as such that I can find, although larger VMs (~6 TB) seem to fail after 1:14 to 1:15, give or take a minute. Support suggests it's something within Proxmox, which it may well be, but I'm wondering if anyone else has come across this? The support case is 07590196 if anyone from Veeam can take a look and offer any advice. It's getting quite important that I crack this and get the restores onto the new PVE server. The parts of the log files support have highlighted are as follows:

While reviewing the logs, I could see the following behavior during the restore process:
• 6 disks are mapped on the Proxmox Cluster

Code:

2025-02-02 22:25:30.5471 00026 [1530] INFO  | [NbdEngine]: Mapping the disk "D1286FBCB6A7721A0023EC7BED0B40AB" to block devices...
2025-02-02 22:25:30.5482 00026 [1530] INFO  | [NbdEngine]: The disk "D1286FBCB6A7721A0023EC7BED0B40AB" has been mapped to the block device "drive-sata0" using files "/dev/zvol/Data/vm-113-disk-4":""
2025-02-02 22:25:30.5482 00026 [1530] INFO  | [NbdEngine]: Mapping the disk "E089D29C6DA891DCF5A7C23EAC89BD38" to block devices...
2025-02-02 22:25:30.5482 00026 [1530] INFO  | [NbdEngine]: The disk "E089D29C6DA891DCF5A7C23EAC89BD38" has been mapped to the block device "drive-sata1" using files "/dev/zvol/Data/vm-113-disk-5":""
2025-02-02 22:25:30.5482 00026 [1530] INFO  | [NbdEngine]: Mapping the disk "DD614D7C8193556B20A686E1965B2CC9" to block devices...
2025-02-02 22:25:30.5482 00026 [1530] INFO  | [NbdEngine]: The disk "DD614D7C8193556B20A686E1965B2CC9" has been mapped to the block device "drive-sata2" using files "/dev/zvol/Data/vm-113-disk-6":""
2025-02-02 22:25:30.5482 00026 [1530] INFO  | [NbdEngine]: Mapping the disk "2E7044557627B7E400B8BBEF13A77837" to block devices...
2025-02-02 22:25:30.5482 00026 [1530] INFO  | [NbdEngine]: The disk "2E7044557627B7E400B8BBEF13A77837" has been mapped to the block device "drive-sata3" using files "/dev/zvol/Data/vm-113-disk-7":""
2025-02-02 22:25:30.5487 00026 [1530] INFO  | [NbdEngine]: Mapping the disk "58153C9711826370336D28D5315C3E62" to block devices...
2025-02-02 22:25:30.5487 00026 [1530] INFO  | [NbdEngine]: The disk "58153C9711826370336D28D5315C3E62" has been mapped to the block device "drive-sata4" using files "/dev/zvol/Data/vm-113-disk-8":""
2025-02-02 22:25:30.5487 00026 [1530] INFO  | [NbdEngine]: Mapping the disk "3A1B4E736A51948D6262FB323ADB0960" to block devices...
2025-02-02 22:25:30.5487 00026 [1530] INFO  | [NbdEngine]: The disk "3A1B4E736A51948D6262FB323ADB0960" has been mapped to the block device "drive-sata5" using files "/dev/zvol/Data/vm-113-disk-9":""
2025-02-02 22:25:30.5504 00026 [1530] INFO  | [NbdEngine]: Performing export "VeeamTmp_exported_drive-sata0" of the device "#block178":"drive-sata0"
2025-02-02 22:25:30.5516 00026 [1530] INFO  | [QmpCommands]: Exporting the device "#block178" with the export name "VeeamTmp_exported_drive-sata0"...
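As far as I understand it (my own assumption, not something support has confirmed), this mapping/export step boils down to standard QEMU NBD exports driven over the VM's QMP socket, i.e. roughly the exchange below. The socket path, node name and export name are copied from the log; the host name, the NBD socket path and the exact QMP commands are guesses on my part, so treat this as an illustration rather than the actual mechanism.

Code:

# Rough illustration only - assumes the worker uses QEMU's standard QMP commands
# over the VM's monitor socket (path taken from the log); root@pve-node and the
# NBD socket path are placeholders.
ssh root@pve-node 'socat - UNIX-CONNECT:/var/run/qemu-server/113.qmp' <<'EOF'
{ "execute": "qmp_capabilities" }
{ "execute": "nbd-server-start", "arguments": { "addr": { "type": "unix", "data": { "path": "/tmp/veeam-test-nbd.sock" } } } }
{ "execute": "block-export-add", "arguments": { "id": "test-export0", "type": "nbd", "node-name": "#block178", "name": "VeeamTmp_exported_drive-sata0", "writable": true } }
EOF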
• then the restore process is started

Code:

2025-02-02 22:25:30.5712 00026 [1530] INFO  | [VbrRestoreProgressReporter]: OnStartDiskRestore
2025-02-02 22:25:30.5712 00026 [1530] INFO  | [NbdEngine]: Restoring the disk "D1286FBCB6A7721A0023EC7BED0B40AB" (key "XXXXXXXX_12-flat.vmdk")
2025-02-02 22:25:30.5712 00026 [1530] INFO  | [NbdEngine]: Zeroed regions will NOT be written on the disk ["XXXXXXXX_12-flat.vmdk"] on ["zfspool"]
2025-02-02 22:25:30.5737 00026 [1530] INFO  | [SshClientUtils]: Start executing ssh command "comm -23 <(seq 49152 65535 | sort) <(ss -Htan | awk '{{print $4}}' | cut -d':' -f2 | sort -u) | shuf | head -n 1"
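As far as I can tell, that last SSH command just picks a random free TCP port in the 49152-65535 range on the PVE host, presumably for the worker's forwarded connection (the later error mentions port 55294, which sits in that range). My reading of it, broken into steps:

Code:

# Functionally the same as the one-liner in the log, split up for readability:
seq 49152 65535 | sort > /tmp/all_ports            # every candidate port in the ephemeral range
ss -Htan | awk '{print $4}' | cut -d':' -f2 \
  | sort -u > /tmp/used_ports                      # local ports that already have a TCP socket
comm -23 /tmp/all_ports /tmp/used_ports \
  | shuf | head -n 1                               # keep only the free ports, pick one at random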
• then the Worker is failing to communicate with SshQmpSocket on the Proxmox Cluster side:

Code:

2025-02-02 22:35:12.2928 00031 [1530] INFO  | [MemoryLoggerWorkflow][MemoryLogger]: GC mem: 26209608, Process mem: 506036224
2025-02-02 22:39:41.4380 00024 [1530] INFO  | [ForwardedSocket]: Renci.SshNet.Common.ExceptionEventArgs
2025-02-02 22:39:41.4408 00004 [1530] INFO  | [SshQmpSocket]: An error occurred while executing the socat command on "/var/run/qemu-server/113.qmp":55294: Renci.SshNet.Common.SshConnectionException: An established connection was aborted by the server.
2025-02-02 22:55:52.4607 00027 [1530] INFO  | [VbrVmDiskRestore]: Successfully completed writer tasks
2025-02-02 22:55:55.0978 00027 [1530] ERROR | [SshQmpSocket]: Failed to connect to the QMP device using SSH: Veeam.Vbf.Common.Exceptions.ExceptionWithDetail: Failed to connect to the QMP socket: SSH connection is inactive.
2025-02-02 22:55:55.0978 00027 [1530] ERROR | [SshQmpSocket]:    at Veeam.Vbf.BackupAgent.BackupProxmox.Qmp.SshQmpSocket.ConnectAsync(String path, CancellationToken cancellationToken)
2025-02-02 22:55:55.0978 00027 [1530] ERROR | [HotAddConfigurator]: Failed to connect to the worker QMP socket: Veeam.Vbf.Common.Exceptions.ExceptionWithDetail: Failed to connect to the QMP socket: SSH connection is inactive.
2025-02-02 22:55:55.0978 00027 [1530] ERROR | [HotAddConfigurator]:    at Veeam.Vbf.BackupAgent.BackupProxmox.Qmp.SshQmpSocket.ConnectAsync(String path, CancellationToken cancellationToken)
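Since the failure mode is the SSH session to the QMP socket being dropped mid-restore, the two things I'm planning to check on the PVE side are whether the socket itself still responds and whether long-lived or idle SSH sessions are being killed. Both commands below are generic checks I put together myself (the host name is a placeholder), not anything support has asked for:

Code:

# Quick connectivity check against the VM's QMP socket (path taken from the log);
# a healthy socket should answer with the QMP greeting and {"return": {}}:
ssh root@pve-node \
  "echo '{\"execute\": \"qmp_capabilities\"}' | socat - UNIX-CONNECT:/var/run/qemu-server/113.qmp"

# Generic SSH keepalive options to rule out idle-session drops; whether this is
# relevant to the aborted connection above is only a guess on my part:
ssh -o ServerAliveInterval=30 -o ServerAliveCountMax=6 root@pve-node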
Any pointers or help appreciated!

Thanks
Mildur
Product Manager
Posts: 10300
Liked: 2749 times
Joined: May 13, 2017 4:51 pm
Full Name: Fabian K.
Location: Switzerland

Re: Proxmox restores failing

Post by Mildur »

Hi Adrian

I suggest continuing with our customer support for now.
According to my colleague, QA is already involved in your case.

Best,
Fabian
Product Management Analyst @ Veeam Software
