Linux agent recovery to different hardware

Backup agent for Linux servers and workstations on-premises or in the public cloud

Re: Linux agent recovery to different hardware

by PTide » Mon Apr 23, 2018 6:43 pm

Robert,

I fixed a typo in my initial instruction in the post above (sent you a PM with details). Sorry for the confusion.

Thanks
PTide
Veeam Software
 
Posts: 3905
Liked: 330 times
Joined: Tue May 19, 2015 1:46 pm

[MERGED] P2V linux agent

by biohazard156 » Thu Jun 07, 2018 9:39 pm

Hello,

I am attempting to move an application server P2V from a Veeam backup. We currently have the Veeam agent on the physical server, set to do a whole-system backup. When I attempt the restore with the Veeam Linux recovery ISO on the VM side, it goes through and recovers successfully, but when I attempt to boot, it just freezes up. Are there tweaks I have to make in rescue mode that would make this VM work? This is an RHEL 6.9 physical machine.

Another question I have: can this be restored onto a bare-metal VM with nothing on it, or does it already have to have an RHEL 6.9 image installed?

I've had no luck so far and have made dozens of attempts, trying every scenario I could think of. I even tried converting the drives to VMDKs and attaching them to the VM. That way seemed the most time-consuming, so I've been trying to stick with the Veeam recovery ISO.
biohazard156
Novice
 
Posts: 5
Liked: never
Joined: Thu Apr 26, 2018 1:12 pm
Full Name: Austin Peters

Re: Linux agent recovery to different hardware

by PTide » Fri Jun 08, 2018 12:33 pm

Hi Austin,

I've merged your post into the existing thread regarding restore issues to different hardware. Although the thread was initially dedicated to discussing restores to another hypervisor, the issue you are experiencing is likely related to the kernel initramfs image not containing the necessary drivers, which can occur during a P2V migration/restore as well.

Here is the KB with instructions. Please review the thread, try the instructions I've provided, and feel free to ask additional questions should you have any.
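(For reference, the usual shape of that fix is to rebuild the initramfs from the recovery environment with the storage drivers the VM needs. This is only a rough sketch under assumptions: the mount paths, kernel version, and driver names below are examples, not taken from the KB; adjust them for your distro and hypervisor.)

```shell
# Sketch: run from the recovery ISO's shell. Paths/driver names are examples.

# Mount the restored root filesystem and chroot into it
mount /dev/mapper/vg00-root /mnt
mount /dev/sda1 /mnt/boot
mount --bind /dev  /mnt/dev
mount --bind /proc /mnt/proc
mount --bind /sys  /mnt/sys
chroot /mnt

# Use the kernel version actually installed on the restored system,
# NOT the rescue kernel (a common gotcha with $(uname -r) inside a chroot)
KVER=2.6.32-696.el6.x86_64   # example value

# Force the hypervisor's storage drivers into the initramfs
# (vmw_pvscsi/mptspi for VMware, virtio_blk/virtio_scsi for KVM)
dracut -f --add-drivers "vmw_pvscsi mptspi" /boot/initramfs-$KVER.img $KVER
```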

Thanks
PTide

Re: Linux agent recovery to different hardware

by biohazard156 » Fri Jun 08, 2018 8:56 pm

My direct issue ended up being the lvm.conf filters. I only knew this because I came across a similar issue when I was building out the crash kernel. Once I changed the filters and ran dracut -f, I was able to successfully boot the virtual server.
Other cleanup included getting rid of the bonds and installing VMware Tools. Other than that, it was easy once I got past the first issue.
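(The fix described above typically looks something like this. The exact filter value depends on your disk layout; the ones below are illustrative assumptions, not the poster's actual configuration.)

```shell
# In /etc/lvm/lvm.conf, relax or correct the device filter so the LVM PVs
# on the new virtual disks are actually scanned. For example:
#     filter = [ "a|.*|" ]                   # accept every device (simplest)
# or, more restrictively:
#     filter = [ "a|^/dev/sda|", "r|.*|" ]   # accept sda, reject the rest

# Then regenerate the initramfs so the corrected lvm.conf is baked into it:
dracut -f
```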


Re: Linux agent recovery to different hardware

by PTide » Sat Jun 09, 2018 10:56 am

Thank you for sharing; we will add that to the KB as well.

[MERGED] Re: Agent Restore to VMWare

by CatSpirent » Tue Jun 12, 2018 4:51 pm

Just to add more: I tried creating a diskless VM, then used Veeam to restore the disks as .vmdk. The only oddity was that during the restore it indicated:
Disk 0 (2.7 TB): 141.0 MB restored at 38 MB/s
Disk 1 (1.1 TB): ... (this part hadn't finished yet when I snipped a picture)

My physical server has only the one 2.7 TB physical disk, partitioned into 3 partitions.

# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 2.7T 0 disk
├─sda1 8:1 0 600M 0 part /boot
├─sda2 8:2 0 1.4T 0 part
│ ├─vg00-root (dm-0) 253:0 0 6G 0 lvm /
│ ├─vg00-swap (dm-1) 253:1 0 16G 0 lvm [SWAP]
│ ├─vg00-tmp (dm-2) 253:2 0 2G 0 lvm /tmp
│ ├─vg00-afscache (dm-3) 253:3 0 2G 0 lvm /usr/vice/cache
│ ├─vg00-usr (dm-4) 253:4 0 7G 0 lvm /usr
│ ├─vg00-var (dm-5) 253:5 0 3G 0 lvm /var
│ ├─vg00-home (dm-6) 253:6 0 2G 0 lvm /home
│ ├─vg00-stagearea (dm-7) 253:7 0 30G 0 lvm /stage-area
│ └─vg00-lvol1 (dm-8) 253:8 0 200G 0 lvm /lvol1
└─sda3 8:3 0 1.4T 0 part
└─vg00-lvol2 (dm-9) 253:9 0 860G 0 lvm /lvol2
sr0 11:0 1 1024M 0 rom



The VM seems to be booting OK. I'm still working on cleaning up some things, so I've booted it into single-user mode, off the network, until I'm done; since it is just a test, I don't want it to conflict with production.
I will have to try the .iso recovery image another time to see if the disk sizing looks more accurate.
CatSpirent
Enthusiast
 
Posts: 32
Liked: 2 times
Joined: Fri Dec 30, 2016 4:10 pm
Full Name: Caterine Kieffer

Re: Agent Restore to VMWare

by CatSpirent » Tue Jun 12, 2018 5:37 pm

I added it to the network, and the new disk layout looks like this:

# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sr0 11:0 1 1024M 0 rom
sda 8:0 0 2.7T 0 disk
├─sda1 8:1 0 600M 0 part /boot
├─sda2 8:2 0 1.4T 0 part
└─sda3 8:3 0 1.4T 0 part
sdb 8:16 0 1.1T 0 disk
├─vg00-root (dm-0) 253:0 0 6G 0 lvm /
├─vg00-swap (dm-1) 253:1 0 16G 0 lvm [SWAP]
├─vg00-tmp (dm-2) 253:2 0 2G 0 lvm /tmp
├─vg00-afscache (dm-3) 253:3 0 2G 0 lvm /usr/vice/cache
├─vg00-usr (dm-4) 253:4 0 7G 0 lvm /usr
├─vg00-var (dm-5) 253:5 0 3G 0 lvm /var
├─vg00-home (dm-6) 253:6 0 2G 0 lvm /home
├─vg00-stagearea (dm-7) 253:7 0 30G 0 lvm /stage-area
├─vg00-lvol1 (dm-8) 253:8 0 200G 0 lvm /lvol1
└─vg00-lvol2 (dm-9) 253:9 0 860G 0 lvm /lvol2

I get warnings about the 2nd disk being much smaller than the OS thinks it is, which is to be expected given the difference between the original server and this attempted restore to a VM.

Re: Agent Restore to VMWare

by CatSpirent » Wed Jun 13, 2018 3:01 pm

I wanted to add a little bit more. I forgot to mention that the server is CentOS EL6. Creating this VM will let me use it as a test system for updating to the latest kernel and patches, as well as for testing application modifications before pushing them to production.

I booted off a CentOS .iso and ran the troubleshooter. I was able to delete the sda2 and sda3 partitions and create a 2 TB sda2 partition, then ran dd from /dev/sdb to /dev/sda2.
That at least allowed me to get rid of the second disk.

# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sr0 11:0 1 1024M 0 rom
sda 8:0 0 2.7T 0 disk
├─sda1 8:1 0 600M 0 part /boot
└─sda2 8:2 0 2T 0 part
├─vg00-root (dm-0) 253:0 0 6G 0 lvm /
├─vg00-swap (dm-1) 253:1 0 16G 0 lvm [SWAP]
├─vg00-tmp (dm-2) 253:2 0 2G 0 lvm /tmp
├─vg00-afscache (dm-3) 253:3 0 2G 0 lvm /usr/vice/cache
├─vg00-usr (dm-4) 253:4 0 7G 0 lvm /usr
├─vg00-var (dm-5) 253:5 0 3G 0 lvm /var
├─vg00-home (dm-6) 253:6 0 2G 0 lvm /home
├─vg00-stagearea (dm-7) 253:7 0 30G 0 lvm /stage-area
├─vg00-lvol1 (dm-8) 253:8 0 200G 0 lvm /lvol1
└─vg00-lvol2 (dm-9) 253:9 0 860G 0 lvm /lvol2
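(The consolidation steps described above, as a rough sketch. The poster used the CentOS troubleshooter interactively; the commands below are an assumed equivalent, with example partition boundaries and block sizes. Raw dd over a partition table is destructive, so verify device names first.)

```shell
# Run from a live CentOS ISO. Device names match the lsblk output above;
# sizes and partition numbers are illustrative.

# 1. Delete sda2/sda3 and create a single ~2 TB sda2
parted /dev/sda rm 2
parted /dev/sda rm 3
parted /dev/sda mkpart primary 601MiB 2TiB

# 2. Copy the LVM physical volume from the temporary disk onto the new partition
dd if=/dev/sdb of=/dev/sda2 bs=4M conv=sync,noerror

# 3. Re-read the partition table and re-activate the VG at its new location
partprobe /dev/sda
pvscan
vgchange -ay vg00
```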

Re: Linux agent recovery to different hardware

by danswartz » Wed Jun 13, 2018 5:43 pm

I'm not surprised you ran into issues. All of my Linux systems (except the JBOD/NAS) are virtual, so there is no advantage to using MD RAID or LVM; I always create them with simple partitions.
danswartz
Expert
 
Posts: 116
Liked: 13 times
Joined: Fri Apr 26, 2013 4:53 pm
Full Name: Dan Swartzendruber

Re: Linux agent recovery to different hardware

by CatSpirent » Wed Jun 20, 2018 3:25 pm

Interesting. VM or not, I have found LVM helpful. I work in a development environment, but even in production I would think it makes management easier. I do prefer hardware RAID, though I am familiar with the arguments from those who prefer MD RAID.

It is sometimes more difficult to deal with OS issues that crop up from running out of space in / than with application issues that occur when applications run out of space in their own file system, leaving the OS intact and still functioning correctly. Production probably grows more slowly, so it's easier to plan changes, but developers can eat up a lot of space in a very short time for one reason or another.

Plus, with the way our developers work, they test something which suddenly becomes production. They hate downtime, so adding another disk to a VG and extending a file system is faster and easier than taking the VM down to regrow the / partition.
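(The online-growth workflow mentioned above looks like this as a sketch; the device name, VG/LV names, and size are example assumptions, not from this thread. On ext3/ext4 the filesystem can be grown while mounted.)

```shell
# Grow a logical volume onto a freshly added virtual disk, with no downtime
pvcreate /dev/sdc                  # initialize the new disk as an LVM PV
vgextend vg00 /dev/sdc             # add it to the existing volume group
lvextend -L +50G /dev/vg00/lvol1   # grow the logical volume by 50 GB
resize2fs /dev/vg00/lvol1          # grow the ext filesystem online
```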

I am not familiar with all the fancy features of VMware; ours is a fairly basic environment conjured up from old repurposed servers. vCenter came along a bit after there were several hosts, and then of course grew as more repurposed servers became hosts.
