Standalone backup agents for Linux, Mac, AIX & Solaris workloads on-premises or in the public cloud
aj_potc
Expert
Posts: 150
Liked: 37 times
Joined: Mar 17, 2018 12:43 pm
Contact:

Problem booting VM after restoration on another hypervisor

Post by aj_potc »

Hello,

I'm currently evaluating the Linux Agent for backing up several CentOS 7 VMs, but I'm having trouble booting these VMs after performing a restoration. What I'd like to find out is whether my intended use of Veeam is supported, or if I'm trying to do something that the software was not designed to do.

The VMs in question are currently running under VMware virtualization at a managed hosting provider, and I don't have access to the hypervisor itself. However, since I do have root access to each VM, I can successfully install and run the Veeam Linux agent to create backups. This process completes with no trouble, and I've created backups of each system and saved them remotely via NFS.

My ultimate goal is to be able to restore the VMs to another hypervisor in a disaster recovery scenario, and this is where I'm having problems. As part of my testing, I've restored each of my three VMs to a different hypervisor platform (VirtualBox). To try to get them running, I've completed the following steps:
1. Created a fresh VM for each system to be restored
2. Booted into the Veeam Linux recovery ISO and performed a full disk recovery (which recreates the partitions to match the original configuration)
3. Booted into the CentOS 7 ISO, entered rescue mode, mounted the partitions, and entered a chroot jail (see the sketch after this list)
4. Ran dracut to regenerate the initramfs for the current kernel version: dracut /boot/initramfs-$(uname -r).img $(uname -r)
5. Rebooted
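For step 3, the chroot setup looked roughly like this (a sketch; the device names are assumptions based on my layout, and the CentOS rescue mode may already have mounted everything under /mnt/sysimage for you):

Code: Select all

# from the CentOS 7 rescue environment; /dev/sda1 = /boot and /dev/sda2 = / are assumptions
mkdir -p /mnt/sysimage
mount /dev/sda2 /mnt/sysimage
mount /dev/sda1 /mnt/sysimage/boot
mount --bind /dev /mnt/sysimage/dev
mount --bind /proc /mnt/sysimage/proc
mount --bind /sys /mnt/sysimage/sys
chroot /mnt/sysimage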
After doing this, I get different results for each of my three VMs:
  • One system boots and runs fine.
  • One system boots and allows regular users to log in, but will not allow root to log in, nor will it let a regular user su to root. Any attempt to log in as root produces a PAM error in /var/log/audit: requirement "uid >= 1000" not met by user "root". (This error is not present on the original system.)
  • One system will not boot at all; dracut displays the error "Warning: Could not boot. Warning: /dev/disk/by-uuid/[UUID] does not exist".
So, it's clear that simply regenerating the initramfs isn't enough. My research on this hasn't uncovered anything else useful to try.

I have opened a support case for this issue (Case #02683615), but wanted to check here just in case anyone had any feedback. Does anyone know if there's a way to restore VMs to a different hypervisor and get a working system?

Thank you very much for any guidance.
PTide
Product Manager
Posts: 6551
Liked: 765 times
Joined: May 19, 2015 1:46 pm
Contact:

Re: Problem booting VM after restoration on another hypervis

Post by PTide »

Hi,

I assume the VMs differ somehow; could you provide some details about their hardware and disk layout? Regarding the third VM: have you checked fstab?

Thanks
aj_potc
Expert
Posts: 150
Liked: 37 times
Joined: Mar 17, 2018 12:43 pm
Contact:

Re: Problem booting VM after restoration on another hypervis

Post by aj_potc »

Thanks for your reply.

Hardware-wise, I don't know if the differences are very significant. The systems have different amounts of memory and disk space assigned to them. But the configurations are about as standard as can be:
  • one /boot partition
  • one root partition (/)
  • one swap partition
There is no LVM or software RAID, and no other devices mounted. All three VMs were installed at the same time based on OS templates from the datacenter, so this makes me think that they are about as close to identical as you can get.

Regarding fstab, yes, I have checked it. However, I believe the problem is somehow related to the GRUB2 config, which is complaining about the missing device; that's where I'm being dropped to the dracut prompt. I've checked that the device UUIDs shown by blkid for each partition match what's in both fstab and the GRUB2 config (/boot/grub2/grub.cfg). I even tried regenerating the config with grub2-mkconfig. Still no joy: the system continues to complain about not finding a device by UUID that I believe exists.
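For reference, the checks looked roughly like this (a sketch; the paths assume a BIOS-booted CentOS 7):

Code: Select all

# run inside the chroot
blkid                                       # UUIDs the devices actually have
grep UUID /etc/fstab                        # UUIDs the system expects to mount
grep -o 'root=[^ ]*' /boot/grub2/grub.cfg   # root device GRUB passes to the kernel
grub2-mkconfig -o /boot/grub2/grub.cfg      # regenerate the GRUB config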
PTide
Product Manager
Posts: 6551
Liked: 765 times
Joined: May 19, 2015 1:46 pm
Contact:

Re: Problem booting VM after restoration on another hypervis

Post by PTide »

Then kindly keep working with the support engineer and let us know about the outcome.

Meanwhile, I have a few more questions just to summarize what we have here:

1. Is there any chance that the initial template had been updated by the provider before you deployed the 2nd and 3rd VMs?
2. The restored system can actually see the drive where the "missing" partition resides, and the partition can be manually mounted from the dracut shell via its device file in /dev, right?
3. Have you tried specifying the exact version of the required kernel when generating the initramfs, instead of using uname -r?
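For example, something along these lines (the kernel version shown is just a placeholder; use whatever actually sits under /boot):

Code: Select all

ls /boot/vmlinuz-*    # list the installed kernels
dracut -f /boot/initramfs-3.10.0-693.21.1.el7.x86_64.img 3.10.0-693.21.1.el7.x86_64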

Thank you
aj_potc
Expert
Posts: 150
Liked: 37 times
Joined: Mar 17, 2018 12:43 pm
Contact:

Re: Problem booting VM after restoration on another hypervis

Post by aj_potc »

Thanks for your questions. I have closed the case with support; unfortunately, we weren't able to resolve any of the problems related to restoring the VMs to a different hypervisor.

I've learned that this kind of migration (so-called V2V) is much more difficult than I first expected. Apparently, the VMs are not "plain vanilla" Linux installations that can be migrated easily; they must include some customizations related to the original hypervisor, VMware. These modifications cause strange bugs when the systems are booted under a different virtualization technology (if you can get them to boot at all). So far I have tried restoring to XenServer, Hyper-V, and VirtualBox, and all of them exhibit various strange problems.

The only successful restore so far has been to VMware Player, which closely mimics the original hypervisor (VMware ESXi).

To answer your questions:

1. No, I find it very unlikely that the provider's OS template was modified. This is a large provider that relies on an automated deployment system, so I don't believe there's much chance of that.

2. The non-booting system's root partition can be mounted successfully when I boot from the CentOS ISO into a recovery session. However, when I try to boot normally, I'm dropped to the dracut prompt, and inside that environment the partition can't be found in /dev. In fact, no disks appear there at all, other than a Linux LiveCD I have attached to the VM, which shows up as a virtual CD drive. (See the sketch after my answers.)

3. Yes, I have tried placing the exact kernel version in my dracut command to rebuild the initramfs. Unfortunately this doesn't help.
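For what it's worth, the checks from the dracut emergency shell were roughly these (a sketch; the driver names were just guesses at this point):

Code: Select all

ls /dev/sd* /dev/disk/by-uuid 2>/dev/null   # which block devices exist at all?
lsmod | grep -E 'ahci|ata|scsi|virtio'      # which storage drivers are loaded?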

Thanks very much for any further suggestions or thoughts.
PTide
Product Manager
Posts: 6551
Liked: 765 times
Joined: May 19, 2015 1:46 pm
Contact:

Re: Problem booting VM after restoration on another hypervis

Post by PTide »

Thanks for getting back to us with that info. Since the system does not see the disks at all, it makes me think this might be related to the types of storage controllers on the original VMs and on the restored VMs; do they differ? On the other hand, regenerating the initrd should have resolved that. Would you kindly check that the new initramfs contains the appropriate driver? For example:

Code: Select all

lsinitrd test.img | grep ahci
Thanks
aj_potc
Expert
Posts: 150
Liked: 37 times
Joined: Mar 17, 2018 12:43 pm
Contact:

Re: Problem booting VM after restoration on another hypervis

Post by aj_potc » 1 person likes this post

Thanks for the reply.

I'm not aware of what actual storage controllers are used on the original host system; I would imagine that it uses some type of enterprise SAN. Does this make a difference? I thought that the hypervisor (no matter which one) would present the storage to the guest VMs in a generic way, since the VM is interacting with the hypervisor and not directly with the underlying storage hardware. Or do I misunderstand how they function?

I booted into a rescue environment again and ran the command you listed against the initramfs file I regenerated earlier. It produced no results. I also tried running it against several older kernel versions, but that didn't turn up anything either.

Your idea about a missing driver made me ask: why can the rescue system see the storage when the kernels built for the production system can't? So I grepped the recovery system's initramfs image, and, sure enough, I do get results for ahci. So, I think you're on to something here.

Do you have any idea how I would go about adding the missing drivers? Or perhaps you have an idea why regenerating the initrd would not automatically include them?

Thanks again for your help. I think we are finally getting to the root of the problem.
aj_potc
Expert
Posts: 150
Liked: 37 times
Joined: Mar 17, 2018 12:43 pm
Contact:

Re: Problem booting VM after restoration on another hypervis

Post by aj_potc » 1 person likes this post

I believe I've answered the questions I raised in the previous post after a little research.

I tried regenerating the initrd once again, but this time I explicitly requested that the AHCI kernel module be included. Here's the command I used for my specific kernel version:

Code: Select all

dracut -f --add-drivers ahci /boot/initramfs-3.10.0-693.21.el7.x86_64.img 3.10.0-693.21.el7.x86_64
So far, I've been able to fix the issues with both of the broken VMs after restoring them to a new hypervisor. I haven't done extensive testing yet, but I'm feeling quite sure that this was the fix I was looking for.

Just to summarize the problem and solution:

Apparently, the initramfs on VMware ESXi virtual machines doesn't include the AHCI driver, presumably because dracut in host-only mode only packs drivers for hardware the VM actually has, and ESXi doesn't present an AHCI controller. This means that restoring those systems onto a different hypervisor can fail with boot or disk access problems. However, regenerating the initrd with the AHCI module explicitly added fixes the problem.
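In case it helps anyone else: to keep a future kernel update from generating a new initramfs without the module, you can add a dracut drop-in so it's pulled in automatically (a sketch; the drop-in file name is my own invention):

Code: Select all

# persist the driver list for all future initramfs builds (hypothetical file name)
echo 'add_drivers+=" ahci "' > /etc/dracut.conf.d/hypervisor-migration.conf

# verify the module actually made it into the image
lsinitrd /boot/initramfs-3.10.0-693.21.el7.x86_64.img | grep ahci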

Thank you very much, PTide, for steering me to the right answer!
PTide
Product Manager
Posts: 6551
Liked: 765 times
Joined: May 19, 2015 1:46 pm
Contact:

Re: Problem booting VM after restoration on another hypervis

Post by PTide »

Glad that you've nailed it! We'll probably consider automating this process in the future. Regarding your question about md RAID: yes, we have that on our radar too.

Thank you
tsightler
VP, Product Management
Posts: 6035
Liked: 2860 times
Joined: Jun 05, 2009 12:57 pm
Full Name: Tom Sightler
Contact:

Re: Problem booting VM after restoration on another hypervis

Post by tsightler »

aj_potc wrote:I'm not aware of what actual storage controllers are used on the original host system; I would imagine that it uses some type of enterprise SAN. Does this make a difference? I thought that the hypervisor (no matter which one) would present the storage to the guest VMs in a generic way, since the VM is interacting with the hypervisor and not directly with the underlying storage hardware. Or do I misunderstand how they function?
The hypervisor does present storage controllers to the guest VMs in a generic way, regardless of the underlying host hardware, but hypervisors from different vendors use different approaches. For example, VMware typically emulates an LSI SCSI/SAS controller or, if configured, a VMware-specific paravirtual SCSI controller, which is really just a special lightweight device that spares the hypervisor from emulating a physical SCSI controller. Many other hypervisors, on the other hand, emulate some type of SATA controller instead, although some can do either.

Based on what your hypervisor presents, you may need to modify the initrd image to include the appropriate SATA/SCSI driver when you switch, as you have discovered. Outside of that, restoring across hypervisors should largely be possible, although other hardware issues might occur as well. Network adapters, for example, might require some modification to come up, since the MAC addresses will change and the device names might change too, based on how the hypervisor provides emulated hardware for the VM.
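As a rough illustration of the kind of network cleanup that's sometimes needed on RHEL/CentOS guests after such a move (a sketch, not an exhaustive list; the interface name is just an example):

Code: Select all

# drop the stale MAC pinning so the config applies to the new virtual NIC
sed -i '/^HWADDR=/d' /etc/sysconfig/network-scripts/ifcfg-eth0

# remove cached udev naming rules, if present, so NICs are re-detected on next boot
rm -f /etc/udev/rules.d/70-persistent-net.rules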

Glad you were able to get it working!
claudio.rigolio
Novice
Posts: 9
Liked: never
Joined: Mar 15, 2017 8:49 am
Full Name: Claudio Rigolio
Contact:

[MERGED] CentOS with LVM restore not working

Post by claudio.rigolio »

Hi,
I'm running some backup and restore tests with a cloud VM (CentOS with LVM), and everything is fine until I try to restore to a VMware VM for testing.
After a complete restore, I reboot the VM and always end up at the dracut console with "Warning: /dev/root does not exist"...

I did a full VM backup with Veeam Agent for Linux (Server edition) and a complete VM restore without any errors.

What am I missing?

Thanks in advance guys
PTide
Product Manager
Posts: 6551
Liked: 765 times
Joined: May 19, 2015 1:46 pm
Contact:

Re: Problem booting VM after restoration on another hypervis

Post by PTide »

Hi,

Cross-hypervisor migration/restore might require you to rebuild the initramfs.

In a rescue shell, execute:

Code: Select all

dracut -f /boot/initramfs-<your kernel version>.img <your kernel version>
and try to boot again.

Thanks
claudio.rigolio
Novice
Posts: 9
Liked: never
Joined: Mar 15, 2017 8:49 am
Full Name: Claudio Rigolio
Contact:

Re: Problem booting VM after restoration on another hypervis

Post by claudio.rigolio »

Hi,
sorry for the delay; I was sick.
I've made some more tests, but without any luck.

The last one was as follows:
1. Created a new CentOS 7 VM on ESXi with the default LVM configuration:
[screenshot: default LVM partition layout]

2. Updated the system and installed the Veeam agent:

Code: Select all

yum update
yum install veeam-release-el7-1.0-1.x86_64.rpm
yum update
reboot

yum install veeam
reboot
3. Executed a backup of the entire VM to a shared folder without any problems.

4. Created a new VM on the same ESXi host with a single vmdk and mounted the Veeam recovery media.

5. Booted the Veeam recovery media and restored the entire volumes, selecting "sda (boot)" from the backup as follows:
[screenshot: volume restore selection]

6. Rebooted the VM, and this is the screen:
[screenshot: boot failure console]

As you can see, the dracut command is not found there....

What am I missing?

Thanks
claudio.rigolio
Novice
Posts: 9
Liked: never
Joined: Mar 15, 2017 8:49 am
Full Name: Claudio Rigolio
Contact:

Re: Problem booting VM after restoration on another hypervis

Post by claudio.rigolio »

I also just tried these steps:
1. Pick sda, choose "restore whole disk from ...", and pick sda from the backup.
2. Pick sda2 and choose "create LVM physical volume". Enter the VG name, making sure it matches the name in the backup you're restoring from.
3. Pick the free space inside the VG, choose "restore volume from ...", and pick the root LV from the backup. Do the same for the remaining free space and the swap LV. Start the restore.

But no changes...
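In case it's useful to others, the restored LVM layout can be sanity-checked from a rescue shell roughly like this (a sketch; VG/LV names depend on what the backup contained):

Code: Select all

vgscan          # discover volume groups on the restored disk
vgchange -ay    # activate them
lvs             # list logical volumes and their sizes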
claudio.rigolio
Novice
Posts: 9
Liked: never
Joined: Mar 15, 2017 8:49 am
Full Name: Claudio Rigolio
Contact:

Re: Problem booting VM after restoration on another hypervis

Post by claudio.rigolio »

I made another test, re-executing everything on VMware Workstation on my PC, and the restore worked!
I also tried restoring the Workstation VM to ESXi, and that worked too!
So the problem is with backups taken by the agent inside ESXi VMs!

How can I get all the drivers installed in the VM included in the backup?

Thanks
claudio.rigolio
Novice
Posts: 9
Liked: never
Joined: Mar 15, 2017 8:49 am
Full Name: Claudio Rigolio
Contact:

Re: Problem booting VM after restoration on another hypervis

Post by claudio.rigolio »

Here are the drivers from the Workstation VM:
[screenshot: device list from the Workstation VM]

Here are those from the ESXi 6.0 VM:
[screenshots: device list from the ESXi 6.0 VM]

The real difference is the hard drive controller:

Workstation: 00:10.0 SCSI storage controller: LSI Logic / Symbios Logic 53c1030 PCI-X Fusion-MPT Dual Ultra320 SCSI (rev 01)
ESXi 6.0: 03:00.0 Serial Attached SCSI controller: VMware PVSCSI SCSI Controller (rev 02)
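A quick way to see which controller a given VM has and which kernel driver is bound to it (a sketch; lspci -k is standard pciutils, and the module names below are the usual ones for these controllers):

Code: Select all

lspci -k | grep -A 3 -i 'scsi\|sata'     # controller model + the kernel driver in use
lsinitrd /boot/initramfs-$(uname -r).img | grep -iE 'vmw_pvscsi|mptspi|ahci'   # is the driver in the initramfs?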

If I try to restore from the Workstation backup to an ESXi VM with the default LSI Logic Parallel controller, it works.

[screenshot: VM storage controller setting]
If I try to restore from the Workstation backup to another ESXi 6.5 VM that uses the VMware Paravirtual controller, it doesn't work.

In any case, if I try to restore the ESXi 6.5 backup image to the same ESXi 6.0 or to ESXi 6.5, it doesn't work.

I discovered this problem because I tried to restore a cloud VM backup to ESXi 6.0 without any luck.
Here are the drivers of the cloud VM:
[screenshot: device list from the cloud VM]

My conclusion is that it's impossible to restore this last VM to ESXi or to any other hypervisor.

Any help is kindly appreciated.
Thanks
claudio.rigolio
Novice
Posts: 9
Liked: never
Joined: Mar 15, 2017 8:49 am
Full Name: Claudio Rigolio
Contact:

Re: Problem booting VM after restoration on another hypervis

Post by claudio.rigolio »

OK, after further tests I managed to start the restored VM in rescue mode and from there executed:

Code: Select all

dracut -f /boot/initramfs-3.10.0-693.21.1.el7.x86_64.img 3.10.0-693.21.1.el7.x86_64
After rebooting, it works.

The strange thing is that this is needed only when I manage ESXi VMs....
PTide
Product Manager
Posts: 6551
Liked: 765 times
Joined: May 19, 2015 1:46 pm
Contact:

Re: Problem booting VM after restoration on another hypervis

Post by PTide »

Hi Claudio and sorry for the late response,
claudio.rigolio wrote: The strange thing is that this is needed only when I manage ESXi VMs....
I'm not sure what you mean, would you elaborate on that please? Does that mean that you see the issue only with VMware VMs while everything works fine with Hyper-V VMs?

Thanks