-
- Influencer
- Posts: 13
- Liked: 1 time
- Joined: Apr 04, 2018 4:06 pm
- Full Name: Robert Dick
- Location: Victoria, BC CANADA
- Contact:
Linux agent recovery to different hardware
I've been noodling about with the Veeam Agent for Linux and love the backup! Where I'm losing a bit of my mind (and hair) is in trying a recovery to different hardware, as a test of what I could do if all hell breaks loose. The machine that I'm backing up has LVM inside of MD volumes. The machine I'm trying to recover to is as basic as basic can be: a single SATA drive. Overall, drive sizing is similar on the target recovery machine. I have tried every variation on a theme in terms of recovery options from the booted recovery ISO. I do seem to be able to get the data and even the LVM to restore, but no matter what I do, whatever I end up with on the recovery box ends up not being bootable.
The recovery documentation is thin at best ... does anyone have any advice or tricks that would help me with my task? Is it even possible to recover to different physical hardware? I suppose I could always convert backup to a VHD or VMDK and mount up in a virtual environment but I'd really like to be able to recover on bare metal.
Thanks for any comments or suggestions.
-
- Product Manager
- Posts: 6551
- Liked: 765 times
- Joined: May 19, 2015 1:46 pm
- Contact:
Re: Linux agent recovery to different hardware
Hi,
Robert Dick wrote: I do seem to be able to get data and even the LVM to restore but no matter what I do, whatever I end up with on the recovery box ends up not being bootable.
I assume that the target machine stops booting and drops into a dracut shell, is that correct? What is the last message that you see? Please also provide more info on the original partitioning scheme and the restored one.
Thanks
-
- Influencer
- Posts: 13
- Liked: 1 time
- Joined: Apr 04, 2018 4:06 pm
- Full Name: Robert Dick
- Location: Victoria, BC CANADA
- Contact:
Re: Linux agent recovery to different hardware
PTide, thanks for your reply. I'll get this info posted as soon as possible, although it may take a couple of days as I'm possibly away from the office. Depending on how I do the restore, the machine either attempts to boot and just flashes a cursor and "hangs", or I get a "no OS installed" message.
-
- Influencer
- Posts: 13
- Liked: 1 time
- Joined: Apr 04, 2018 4:06 pm
- Full Name: Robert Dick
- Location: Victoria, BC CANADA
- Contact:
Re: Linux agent recovery to different hardware
PTide: OK, I have played around a bit more and I really don't get anywhere. I did try building a simple "one drive" Linux box on the weekend, tested the Linux Agent against it, and all worked as expected. So I know my issue here is how to "map" the backup of the system using LVM and MD onto the test box that is just using a single disk drive, so no MD.
Partition table from the Veeam agent listing for the backed up system is as follows:
sda (boot) 223.5G
md0 (boot) 223.4G
md0p1 243.1M /boot (ext2)
md0p2 223.2G
md0p5 (lvm) 223.2G (LVM2_member)
voip-vg 223.2G
root 214.1G / (ext4)
swap_1 9.07G (swap)
The single disk drive on the target machine lists as sda 223.5G.
Hope you can shed some light for me, as no matter what I do I can't seem to get a proper, bootable disk built on the target machine.
-
- Product Manager
- Posts: 6551
- Liked: 765 times
- Joined: May 19, 2015 1:46 pm
- Contact:
Re: Linux agent recovery to different hardware
Did I get it right:
On your original system you have RAID (md0) made of one physical drive (sda). md0 is split in two partitions as follows:
md0p1 - /boot partition
md0p2 - partition serving as the only LVM physical volume (PV) for voip-vg volume group.
The voip volume group is split into 2 logical volumes: root, and swap_1
Is everything correct?
-
- Influencer
- Posts: 13
- Liked: 1 time
- Joined: Apr 04, 2018 4:06 pm
- Full Name: Robert Dick
- Location: Victoria, BC CANADA
- Contact:
Re: Linux agent recovery to different hardware
PTide wrote: Did I get it right:
On your original system you have RAID (md0) made of one physical drive (sda). md0 is split in two partitions as follows:
md0p1 - /boot partition
md0p2 - partition serving as the only LVM physical volume (PV) for voip-vg volume group.
The voip volume group is split into 2 logical volumes: root, and swap_1
Is everything correct?
Yes, that is correct. The original physical system is TWO physical SSDs in a software RAID mirror (md0), which contains the drive (sda). For recovery test purposes I'm attempting to recover to a PC with a single physical drive, so there would be no RAID (md0) possible.
Robert
-
- Product Manager
- Posts: 6551
- Liked: 765 times
- Joined: May 19, 2015 1:46 pm
- Contact:
Re: Linux agent recovery to different hardware
Let's start with checking general things. Since the recovery media does not recreate the RAID during restore, my guess is that you need to check the following:
1. fstab /boot entry
On your original system /boot is located on /dev/md0p1. On the restored box there is no such partition, therefore the boot process fails.
2. grub config
Your original "set root" probably looks like this:
Code: Select all
set root='(md/0,1)'
There is also a chance that instead of md/0,1 (which stands for md0p1) there is a UUID.
You can try using grub-mkconfig to create a new grub.cfg; it might be able to autodetect your single-disk installation.
Please check those things and let me know if it works. If it does not, please provide all possible details about where it gets stuck.
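For illustration, both checks might look something like this from the recovery shell (a sketch on my part, assuming the restored root is mounted at /mnt and /boot was restored to sda1; device names as in the rest of this thread):
Code: Select all
blkid /dev/sda1              # note the UUID of the restored /boot partition
grep boot /mnt/etc/fstab     # a /dev/md0p1 entry here must be changed to
                             # that UUID (or to /dev/sda1)
mount /dev/sda1 /mnt/boot    # grub.cfg lives on the /boot partition
mount --bind /dev /mnt/dev   # grub tools need the real /dev, /proc and /sys
mount --bind /proc /mnt/proc
mount --bind /sys /mnt/sys
chroot /mnt grub-mkconfig -o /boot/grub/grub.cfg
Thanks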
-
- Influencer
- Posts: 13
- Liked: 1 time
- Joined: Apr 04, 2018 4:06 pm
- Full Name: Robert Dick
- Location: Victoria, BC CANADA
- Contact:
Re: Linux agent recovery to different hardware
Yes, that all makes sense. But it certainly points out a massive weakness with the product as it now sits. It's absolutely wonderful that it makes it dead easy to back up a physical Linux box, but as a recovery tool it is horribly lacking if it can't automate the bare metal recovery process. I realize that Linux is the "wild west" compared to Windows ... everyone does their "own thing", and how I've built my system may not have much in common with how you have built yours. That said, software RAID (md) and LVM are pretty much a part of the landscape, so if you claim to produce a product that can back up, then it should also have the ability to make BMR as simple as possible. In my case, I have been scouring the interwebs to find examples of Linux BMR processes to follow so that I can try and get something to work.
Sorry for the rant ... I'll continue to work away at this, and if I get something that does work I'll post the steps. I can say that the Linux agent handles "simple" recoveries (LVM but no software RAID (md)) well, as I tested that and it worked.
-
- Product Manager
- Posts: 6551
- Liked: 765 times
- Joined: May 19, 2015 1:46 pm
- Contact:
Re: Linux agent recovery to different hardware
I agree that we've overlooked something important, and recovery of a RAID setup requires way too much manual intervention. What makes it even sadder is the fact that the most complicated part of the restore procedure is not automated. I would say that there are many people out there who don't grasp the boot process, judging by the number of subject-related guides and questions I've been seeing on the Web.
I can't promise that full-auto recovery for RAID setups will be delivered right away; however, adding at least the mdadm utilities to the Recovery Media is on our roadmap. Regarding the "recovery instruction" in the user guide - I'm not sure it's the proper place for detailed instructions, but I think we should place there a link to a KB describing the recovery steps for complicated setups. Thank you for bringing up this issue.
Btw, I wanted to ask you about the distro - which one do you use? AFAIK not all installation wizards allow assembling RAID with LVM on top of it, so I guess it must be either Debian/SUSE, or you did that manually.
Thanks
-
- Influencer
- Posts: 13
- Liked: 1 time
- Joined: Apr 04, 2018 4:06 pm
- Full Name: Robert Dick
- Location: Victoria, BC CANADA
- Contact:
Re: Linux agent recovery to different hardware
PTide wrote: Btw, I wanted to ask you about the distro - which one do you use? AFAIK not all installation wizards allow assembling RAID with LVM on top of it, so I guess it must be either Debian/SUSE, or you did that manually.
Debian. And, to be frank, I'm cursing having built it LVM on top of MD. My former boss beat it into me that this was the "best" way to build out a RAIDed Linux box, and it kinda made sense based on my years in the Sun world, where it was a normal occurrence. But now I wish I had spent the money on a decent hardware RAID controller and just done LVM by itself. When I do the restore I can see a PV with my root and swap file systems in it (I just did the super simple layout) from the recovery CD. I do not have /boot anywhere on the restored disk. If I can figure out how to pull that PV into something "workable" then I can live with that short term. Sadly, the interwebs are full of BAD instructions, so I haven't found any good guides to this point. Any ideas?
Thanks
-
- Product Manager
- Posts: 6551
- Liked: 765 times
- Joined: May 19, 2015 1:46 pm
- Contact:
Re: Linux agent recovery to different hardware
Ok, here is what I tried in my lab:
Original VM: Debian 9
During installation I chose to install the bootloader on the sda device only.
lsblk output:
Code: Select all
root@debian:~# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
fd0 2:0 1 4K 0 disk
sda 8:0 0 16G 0 disk
└─sda1 8:1 0 16G 0 part
└─md0 9:0 0 16G 0 raid1
├─md0p1 259:0 0 478.5M 0 md /boot
└─md0p2 259:1 0 15.5G 0 md
├─voip-root 253:0 0 9.3G 0 lvm /
└─voip-swap 253:1 0 6.2G 0 lvm [SWAP]
sdb 8:16 0 16G 0 disk
└─sdb1 8:17 0 16G 0 part
└─md0 9:0 0 16G 0 raid1
├─md0p1 259:0 0 478.5M 0 md /boot
└─md0p2 259:1 0 15.5G 0 md
├─voip-root 253:0 0 9.3G 0 lvm /
└─voip-swap 253:1 0 6.2G 0 lvm [SWAP]
sdc 8:32 0 16G 0 disk
sr0 11:0 1 1024M 0 rom
My config seems to be different from yours. Can you confirm that you don't have any partitions and your RAID just sits on raw devices (sda, sdb)?
Repository: SMB share
Recovery VM: identical, but with only one drive
In the recovery media I chose to restore the whole sda from md0.
After the restore had finished I rebooted and saw only a blinking cursor. I booted back into the recovery media and did this:
Code: Select all
pvscan
vgchange -ay
mkdir /mnt
mount /dev/mapper/voip-root /mnt
mount /dev/sda1 /mnt/boot
mount --bind /dev /mnt/dev
mount --bind /proc /mnt/proc
mount --bind /sys /mnt/sys
chroot /mnt
grub-install /dev/sda
Then I rebooted and the system was able to start. lsblk output:
Code: Select all
root@debian:~# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
fd0 2:0 1 4K 0 disk
sda 8:0 0 16G 0 disk
├─sda1 8:1 0 478.5M 0 part /boot
└─sda2 8:2 0 15.5G 0 part
├─voip-root 254:0 0 9.3G 0 lvm /
└─voip-swap 254:1 0 6.2G 0 lvm [SWAP]
sr0 11:0 1 1024M 0 rom
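One extra step that can help if the restored system still stalls in the initramfs (an assumption on my part, not something that was needed in the lab run above): rebuild the initramfs from the same chroot so it stops expecting the md0 array.
Code: Select all
# still inside the chroot from the step above (Debian):
update-initramfs -u -k all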
-
- Influencer
- Posts: 13
- Liked: 1 time
- Joined: Apr 04, 2018 4:06 pm
- Full Name: Robert Dick
- Location: Victoria, BC CANADA
- Contact:
Re: Linux agent recovery to different hardware
This is my production box config:
Code: Select all
root@voip:~# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 223.6G 0 disk
└─sda1 8:1 0 223.6G 0 part
└─md0 9:0 0 223.5G 0 raid1
├─md0p1 259:0 0 243.1M 0 md /boot
├─md0p2 259:1 0 1K 0 md
└─md0p5 259:2 0 223.2G 0 md
├─voip--vg-root 253:0 0 214.1G 0 lvm /
└─voip--vg-swap_1 253:1 0 9.1G 0 lvm [SWAP]
sdb 8:16 0 223.6G 0 disk
└─sdb1 8:17 0 223.6G 0 part
└─md0 9:0 0 223.5G 0 raid1
├─md0p1 259:0 0 243.1M 0 md /boot
├─md0p2 259:1 0 1K 0 md
└─md0p5 259:2 0 223.2G 0 md
├─voip--vg-root 253:0 0 214.1G 0 lvm /
└─voip--vg-swap_1 253:1 0 9.1G 0 lvm [SWAP]
sdc 8:32 1 983.5M 0 disk
└─sdc1 8:33 1 983.4M 0 part
root@voip:~#
-
- Influencer
- Posts: 13
- Liked: 1 time
- Joined: Apr 04, 2018 4:06 pm
- Full Name: Robert Dick
- Location: Victoria, BC CANADA
- Contact:
Re: Linux agent recovery to different hardware
OK, I can do most of what you did. I have to sub sda5 for sda1, and I have to create directories in /mnt (boot, dev, proc, etc.). I can execute chroot and can also execute grub-install, but it then fails saying there is nothing in /dev. When I check in the chroot shell, /dev is empty (/mnt/dev). When I exit chroot then it is populated. So I am obviously missing something.
-
- Product Manager
- Posts: 6551
- Liked: 765 times
- Joined: May 19, 2015 1:46 pm
- Contact:
Re: Linux agent recovery to different hardware
Well, that's really strange. Why would you substitute sda5 for sda1? Provided you had performed a direct restore from md0 (backup) to sda (new drive), you should have got:
sda1, sda2 (extended), and sda5 (logical), where sda1 is your /boot instead of the former md0p1, sda2 is the extended partition instead of the former md0p2, and sda5 (logical) is your root instead of the former md0p5.
I've recreated the exact same config (md0p2 extended containing md0p5 logical) and performed the very same steps that I described above, and it worked.
Robert Dick wrote: I can execute chroot and can also execute grub-install but it then fails saying there is nothing in /dev. When I check in the chroot shell, /dev is empty (/mnt/dev).
This could have happened if you had skipped the mount --bind /dev /mnt/dev part.
Robert Dick wrote: When I exit chroot then it is populated.
Right, because when you exit chroot you get back to the recovery media environment, which has knowledge of the hardware.
May I ask you to double check everything? If still no luck, please PM me your email so I can pass it to the support team and have them contact you to take a peek at what's actually going on. It's not the easiest thing to troubleshoot a boot process via a forum.
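A quick sanity check before running grub-install, in case it helps (my own sketch, not part of the steps above; sda is the target disk from this thread):
Code: Select all
# from the recovery shell, before entering the chroot:
mount --bind /dev /mnt/dev
chroot /mnt
# inside the chroot, the target disk should be visible:
ls /dev/sda*    # if this prints nothing, exit and redo the bind mount
Thanks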
-
- Influencer
- Posts: 13
- Liked: 1 time
- Joined: Apr 04, 2018 4:06 pm
- Full Name: Robert Dick
- Location: Victoria, BC CANADA
- Contact:
Re: Linux agent recovery to different hardware
OK, I completely redid the restore, ensuring I go from md0 to sda. I then booted up off the recovery ISO and confirmed the layout is as you describe. I followed all the steps and still get a failure with the grub install; this time the error says it cannot find a device for /boot/grub. I have a feeling that something is being missed once I am in the chroot environment, but I'm not sure what. But at least I'm much closer than I was!
-
- Product Manager
- Posts: 6551
- Liked: 765 times
- Joined: May 19, 2015 1:46 pm
- Contact:
Re: Linux agent recovery to different hardware
Robert,
I fixed a typo in my initial instructions in my post above (sent you a PM with details). Sorry for the confusion.
Thanks
-
- Novice
- Posts: 7
- Liked: never
- Joined: Apr 26, 2018 1:12 pm
- Full Name: Austin Peters
- Contact:
[MERGED] P2V linux agent
Hello,
I am attempting to move an application server P2V from a Veeam backup. We currently have the Veeam agent on the physical server, set to do a whole-system backup. When I attempt the restore with the Veeam Linux recovery ISO on the VM side, it goes through and successfully recovers, but when I attempt to boot, the VM just freezes up. Are there tweaks I have to make in a rescue mode that would make this VM work? This is an RHEL 6.9 physical machine.
Another question I have: can this be restored on a bare-metal VM with nothing on it, or does it already have to have an RHEL 6.9 image installed?
I've had no luck so far and have tried dozens of times, trying every scenario I could think of. I even tried converting the drives to VMDKs and attaching them to the VM. That way seemed the most time consuming, so I've been trying to stick with the Veeam recovery ISO.
-
- Product Manager
- Posts: 6551
- Liked: 765 times
- Joined: May 19, 2015 1:46 pm
- Contact:
Re: Linux agent recovery to different hardware
Hi Austin,
I've merged your post into the existing thread regarding restore issues to different hardware. Although the thread was initially dedicated to discussing restore to another hypervisor, the issue that you are experiencing is likely related to the kernel initramfs image not having the necessary drivers, which may occur during a P2V migration/restore as well.
Here is the KB with instructions. Please review the thread, try the instructions I've provided, and feel free to ask additional questions should you have any.
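For RHEL 6, rebuilding the initramfs generally looks like this (a sketch on my part, not quoted from the KB; run it from a chroot into the restored system, and use the kernel version shown by ls /boot on the restored box - the one below is illustrative - rather than uname -r from the recovery media):
Code: Select all
chroot /mnt
dracut -f /boot/initramfs-2.6.32-696.el6.x86_64.img 2.6.32-696.el6.x86_64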
Thanks
-
- Novice
- Posts: 7
- Liked: never
- Joined: Apr 26, 2018 1:12 pm
- Full Name: Austin Peters
- Contact:
Re: Linux agent recovery to different hardware
My direct issue ended up being the lvm.conf filters. I knew this because I came across a similar issue when I was building out the crash kernel. Once I changed the filters and ran a dracut -f, I was able to successfully boot the virtual server.
Other cleanup included getting rid of bonds and installing VMware Tools. Other than that, it was easy once I got past the first issue.
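In case it helps the next reader, the kind of change involved looks roughly like this (the filter values are illustrative - the original pattern depends on the physical box's hardware):
Code: Select all
# /etc/lvm/lvm.conf on the restored VM:
# the old filter only admitted the physical server's devices, e.g.
#     filter = [ "a|/dev/cciss/.*|", "r|.*|" ]
# relax it so the VM's disks are scanned again:
#     filter = [ "a|.*|" ]
# then rebuild the initramfs so the new filter takes effect at boot:
dracut -f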
-
- Product Manager
- Posts: 6551
- Liked: 765 times
- Joined: May 19, 2015 1:46 pm
- Contact:
Re: Linux agent recovery to different hardware
Thank you for sharing; we will add that to the KB as well.
-
- Enthusiast
- Posts: 53
- Liked: 3 times
- Joined: Dec 30, 2016 4:10 pm
- Full Name: Caterine Kieffer
- Contact:
[MERGED] Re: Agent Restore to VMWare
Just to add more: I tried creating a diskless VM, then used Veeam to restore the disks as .vmdk.
The only oddity was that during the restore it indicated:
Disk 0 (2.7 TB): 141.0 MB restored at 38MB/s
Disk 1 (1.1 TB): ...This part hadn't finished yet when I snipped a picture...
My physical server only has the one 2.7TB physical disk, partitioned into 3 partitions.
# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 2.7T 0 disk
├─sda1 8:1 0 600M 0 part /boot
├─sda2 8:2 0 1.4T 0 part
│ ├─vg00-root (dm-0) 253:0 0 6G 0 lvm /
│ ├─vg00-swap (dm-1) 253:1 0 16G 0 lvm [SWAP]
│ ├─vg00-tmp (dm-2) 253:2 0 2G 0 lvm /tmp
│ ├─vg00-afscache (dm-3) 253:3 0 2G 0 lvm /usr/vice/cache
│ ├─vg00-usr (dm-4) 253:4 0 7G 0 lvm /usr
│ ├─vg00-var (dm-5) 253:5 0 3G 0 lvm /var
│ ├─vg00-home (dm-6) 253:6 0 2G 0 lvm /home
│ ├─vg00-stagearea (dm-7) 253:7 0 30G 0 lvm /stage-area
│ └─vg00-lvol1 (dm-8) 253:8 0 200G 0 lvm /lvol1
└─sda3 8:3 0 1.4T 0 part
└─vg00-lvol2 (dm-9) 253:9 0 860G 0 lvm /lvol2
sr0 11:0 1 1024M 0 rom
The VM seems to be booting OK. I'm still working on cleaning up some things, so I have booted it to single-user mode, off the network, until that's done; since it is just a test, I don't want it to conflict with production.
I will have to try the .iso recovery image another time to see if the disk sizing looks more accurate.
-
- Enthusiast
- Posts: 53
- Liked: 3 times
- Joined: Dec 30, 2016 4:10 pm
- Full Name: Caterine Kieffer
- Contact:
Re: Agent Restore to VMWare
Added it to the network and the new disk layout looks like this:
# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sr0 11:0 1 1024M 0 rom
sda 8:0 0 2.7T 0 disk
├─sda1 8:1 0 600M 0 part /boot
├─sda2 8:2 0 1.4T 0 part
└─sda3 8:3 0 1.4T 0 part
sdb 8:16 0 1.1T 0 disk
├─vg00-root (dm-0) 253:0 0 6G 0 lvm /
├─vg00-swap (dm-1) 253:1 0 16G 0 lvm [SWAP]
├─vg00-tmp (dm-2) 253:2 0 2G 0 lvm /tmp
├─vg00-afscache (dm-3) 253:3 0 2G 0 lvm /usr/vice/cache
├─vg00-usr (dm-4) 253:4 0 7G 0 lvm /usr
├─vg00-var (dm-5) 253:5 0 3G 0 lvm /var
├─vg00-home (dm-6) 253:6 0 2G 0 lvm /home
├─vg00-stagearea (dm-7) 253:7 0 30G 0 lvm /stage-area
├─vg00-lvol1 (dm-8) 253:8 0 200G 0 lvm /lvol1
└─vg00-lvol2 (dm-9) 253:9 0 860G 0 lvm /lvol2
I get warnings about the 2nd disk being much smaller than the OS thinks it is, which is to be expected given the difference between the original server and this attempted restore to a VM.
-
- Enthusiast
- Posts: 53
- Liked: 3 times
- Joined: Dec 30, 2016 4:10 pm
- Full Name: Caterine Kieffer
- Contact:
Re: Agent Restore to VMWare
I wanted to add a little bit more. I forgot to mention the server is CentOS EL6. Creating this VM will allow me to use it as a test system for updating to the latest kernel and patches, as well as for application modifications, before pushing to production.
I booted off of a CentOS .iso and ran the troubleshooter. I was able to delete the sda2 and sda3 partitions and create a 2TB sda2 partition, then ran dd from /dev/sdb to /dev/sda2 (see the sketch after the listing below).
That at least allowed me to get rid of the second disk.
# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sr0 11:0 1 1024M 0 rom
sda 8:0 0 2.7T 0 disk
├─sda1 8:1 0 600M 0 part /boot
└─sda2 8:2 0 2T 0 part
├─vg00-root (dm-0) 253:0 0 6G 0 lvm /
├─vg00-swap (dm-1) 253:1 0 16G 0 lvm [SWAP]
├─vg00-tmp (dm-2) 253:2 0 2G 0 lvm /tmp
├─vg00-afscache (dm-3) 253:3 0 2G 0 lvm /usr/vice/cache
├─vg00-usr (dm-4) 253:4 0 7G 0 lvm /usr
├─vg00-var (dm-5) 253:5 0 3G 0 lvm /var
├─vg00-home (dm-6) 253:6 0 2G 0 lvm /home
├─vg00-stagearea (dm-7) 253:7 0 30G 0 lvm /stage-area
├─vg00-lvol1 (dm-8) 253:8 0 200G 0 lvm /lvol1
└─vg00-lvol2 (dm-9) 253:9 0 860G 0 lvm /lvol2
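A sketch of that copy step with the device names from the listings above (the pvresize at the end is my addition, on the assumption one wants the PV to grow into the full 2TB partition; the steps above stopped at dd):
Code: Select all
# copy the LVM PV from the temporary second disk onto the recreated partition
dd if=/dev/sdb of=/dev/sda2 bs=4M
# optional: let LVM rescan, then grow the PV to fill the larger partition
pvscan
pvresize /dev/sda2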
-
- Veteran
- Posts: 266
- Liked: 30 times
- Joined: Apr 26, 2013 4:53 pm
- Full Name: Dan Swartzendruber
- Contact:
Re: Linux agent recovery to different hardware
Not surprised you ran into issues. All of my Linux systems (except the JBOD/NAS) are virtual, so there is no advantage to using MD RAID or LVM, and I always create them with simple partitions.
-
- Enthusiast
- Posts: 53
- Liked: 3 times
- Joined: Dec 30, 2016 4:10 pm
- Full Name: Caterine Kieffer
- Contact:
Re: Linux agent recovery to different hardware
Interesting. VM or not, I have found LVM helpful. I work in a development environment, but even in production I would think it makes management easier. I do prefer hardware RAID, though I am familiar with the arguments from those who prefer MD RAID.
It is sometimes more difficult to deal with OS issues that crop up from running out of space in / than with application issues that occur when applications run out of space in their own file system, leaving the OS intact and still functioning correctly. Production probably grows more slowly, so it is easier to plan changes, but developers can eat up a lot of space in a very short time for some reason or another.
Plus, with the way our developers work, they test something which suddenly becomes production. They hate downtime, so adding another disk to a VG and extending a file system (sketched below) is faster/easier than taking the VM down to regrow the / partition.
I am not familiar with all the fancy features of VMware; ours is a fairly basic environment conjured up from old repurposed servers. vCenter came a bit after there were several hosts, and then of course grew as more repurposed servers became hosts.
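For completeness, the online grow path described above looks something like this, using the vg00 naming from the listings earlier in the thread (the target logical volume and size are hypothetical; resize2fs assumes ext4):
Code: Select all
pvcreate /dev/sdc                      # new virtual disk presented to the VM
vgextend vg00 /dev/sdc                 # add it to the volume group
lvextend -L +50G /dev/vg00/stagearea   # grow the logical volume
resize2fs /dev/vg00/stagearea          # grow the filesystem online (ext4)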