
I have been doing some testing and noted some issues and areas for enhancement:
1. After a backup and then a restore to a new VM, the VirtIO bridge NICs get mixed up; for example, 'dlan' becomes 'lan' and 'lan' becomes 'dlan', etc.
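To illustrate what I mean (a hypothetical excerpt; the VMID and MACs are placeholders, 'lan' and 'dlan' are the bridge names from my setup):

```
# Original VM, /etc/pve/qemu-server/100.conf
net0: virtio=BC:24:11:AA:BB:01,bridge=lan
net1: virtio=BC:24:11:AA:BB:02,bridge=dlan

# Restored VM: bridge assignments swapped, new MACs generated
net0: virtio=BC:24:11:CC:DD:01,bridge=dlan
net1: virtio=BC:24:11:CC:DD:02,bridge=lan
```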
2. The restore does not preserve the NIC MACs or the SMBIOS string; it looks like it generates new ones.
>> Are we backing up the VM config? We need to let the user choose whether to keep the guest config values as stored in the backup or to generate new values where appropriate, like the Proxmox Backup Server functionality does.
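As a rough sketch of the manual workaround I am using in the meantime (assuming VMID 100 and that the original values were noted down before the backup; the MAC and UUID below are placeholders):

```
# Reapply the original MAC address on the first NIC
qm set 100 --net0 virtio=BC:24:11:AA:BB:01,bridge=lan

# Reapply the original SMBIOS UUID
qm set 100 --smbios1 uuid=11111111-2222-3333-4444-555555555555
```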
3. Can backing up VMs with thin-provisioned disks read only the actual data rather than wasting time processing the whole thinly provisioned drive? I.e. I have only 2 GB of data on an 8 TB thin-provisioned disk, and I have to wait a crazy amount of time for it to process 8 TB of nothing. This isn't an issue with Hyper-V backups using dynamically expanding disks; can the same not be achieved for Proxmox thin-provisioned disks?
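For reference, the actual allocation is already visible on the host, so the information is there to be used (a sketch assuming an LVM-thin volume in the 'pve' volume group, or a qcow2 file on file-based storage; the names are placeholders):

```
# LVM-thin: the Data% column shows how much of the volume is actually allocated
lvs pve/vm-100-disk-0

# qcow2: compare "disk size" (actual) with "virtual size" (provisioned)
qemu-img info /var/lib/vz/images/100/vm-100-disk-0.qcow2
```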
4. Can the worker VM MTU match the bridge MTU? (Normally, if you specify 1 as the MTU value in a Proxmox VirtIO bridge NIC config, it uses the bridge NIC's MTU automatically.) The current workaround is to change the worker VM MTU manually, which is not ideal, of course.
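For example, this is how a normal guest NIC is set to inherit the bridge MTU (assuming VMID 100 and bridge vmbr1; presumably the worker VM could be configured the same way):

```
# mtu=1 tells a VirtIO NIC to use the bridge's MTU automatically
qm set 100 --net0 virtio,bridge=vmbr1,mtu=1
```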
5. Proxmox backups are not stored in the storage pool in the same fashion as other non-PVE Veeam jobs, i.e. a job-name folder in the pool containing all the VM backups. Currently, all the backed-up PVE VMs are just dumped into the storage pool root as individual VM folders, which is a stressful annoyance; the storage layout should match the well-structured folder topology of existing Veeam jobs.
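Something like this, in other words (the job and VM folder names are placeholders):

```
# Current layout: VM folders dumped into the pool root
<pool>/
    vm-100/
    vm-101/

# Expected layout, matching other Veeam jobs
<pool>/
    PVE-Backup-Job/
        vm-100/
        vm-101/
```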
6. After I restore a backup to a new VM on the same PVE host and then later delete it from PVE, when I re-run the same backup job originally used for the backup (which is configured to back up all VMs on the PVE host, with no explicitly defined VMs), for some reason the following error appears and the backup job fails. I should not be seeing this error message?
