I'm trying to restore a VM on a remote ESXi host from a backup copy job.
The source VM is latency-sensitive.
The target host is "slower" than the source.
During the restore operation I'm getting the following error:
Restore job failed Error: A specified parameter was not correct: spec.cpuAllocation (The latency-sensitive virtual machine must have the CPU reservation set to at least 9996 MHz (the number of low latency virtual CPUs multiplied by the measured physical CPU speed). To disable this check, set latency.enforceCpuMin to FALSE in the virtual machine configuration.)
I don't see a way to change the config during/before the restore. Any suggestions?
Small correction/update.
The source host has a lower per-core frequency. During the restore operation VBR attempts to create a VM with the same settings, including the CPU reservation and latency sensitivity settings, which fails because for latency-sensitive VMs 100% of the memory and CPU resources must be reserved.
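For reference, the check vSphere applies here is plain multiplication: the required CPU reservation equals the number of low-latency vCPUs times the measured physical core speed, so a target host with slower cores lowers the ceiling and the source VM's reservation no longer fits. A minimal sketch; the 4-vCPU / 2499 MHz figures are my own assumption, chosen only to reproduce the 9996 MHz from the error above:

```python
def required_cpu_reservation_mhz(low_latency_vcpus: int, core_speed_mhz: int) -> int:
    """Minimum CPU reservation vSphere enforces for a latency-sensitive VM:
    number of low-latency vCPUs multiplied by the measured physical CPU speed."""
    return low_latency_vcpus * core_speed_mhz

# Hypothetical example: a 4-vCPU VM on 2499 MHz cores needs 9996 MHz reserved.
# On a host with slower cores, the same VM's saved reservation exceeds what
# the check allows, which is why the restore fails.
print(required_cpu_reservation_mhz(4, 2499))  # 9996
```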
Did you try changing the VM configuration, backing it up, and running the restore once again? As stated in the error message:
To disable this check, set latency.enforceCpuMin to FALSE in the virtual machine configuration.
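For reference, that is a VM advanced configuration option. In the vSphere Client it can be added under VM Options > Advanced > Configuration Parameters, which corresponds to a .vmx entry like the following (a generic vSphere-side sketch, not a Veeam setting):

```
latency.enforceCpuMin = "FALSE"
```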
One more option might be Instant VM Recovery to the original host, followed by replication/quick migration to the remote one. Also, you can create a new VM on the remote host and perform a VM hard disk restore.
If none of the suggestions above helps, please open a support case, upload debug logs, and share the support case ID here for our reference.
Thank you for your feedback. To answer your questions:
* No, I didn't try Instant Recovery; I would guess it has the same limitations.
* I don't need to restore the VM to the original host; the purpose of this test is to verify the recovery process. But to avoid any doubt, yes, I can restore to the original host.
* Yes, I can restore individual disks.
I can submit a ticket; I'm just wondering if somebody has faced a similar problem. I see one for "SureBackup", but it was marked as resolved about a year ago.
The idea was to run Instant Recovery to restore the VM to the original host and then replicate/migrate it to the target one. However, now I think that such a replication/migration would fail with the same error. I believe it would be best to let our support engineers have a look at your infrastructure and find the most appropriate solution. You can rely on the hard disk restore workaround while our engineers are researching the issue.
While I'm waiting for feedback on the case, I would like to bring this thread up for a more generic discussion.
So what are the recommendations / best practices in a scenario with latency-sensitive VMs?
To simplify the scenario:
Assume we have two data centers, DC1 and DC2. Each DC has an ESXi host and a local Veeam repository backed by a standalone Linux host.
DC1 has the VBR11 VM.
DC2 has a VM that acts as a Veeam proxy.
The connection between DC1 and DC2 is adequate to keep up with the deltas, but too slow to transfer all VM data in a reasonable amount of time.
I would assume the most straightforward option is a regular ESXi VM backup job running in DC1 with a backup copy job copying the backups to DC2.
But this approach seems very limited if, for whatever reason, DC1's production and backup storage are gone and we have to restore the VMs on the DC2 side.
Entire VM restore would not work (due to the CPU reservation difference), Instant Recovery would not be possible since it would mount the vNFS on the DC1/VBR11 side, and virtual disk restore would "kinda" work, but it would require manually creating a new VM and restoring the disks for each VM individually...
So is there a better way to back up latency-sensitive VMs and avoid all that pain in a worst-case scenario?