Support case id# 04234200
I have developed a procedure for recovering an entire OS RAID1 system to new bare metal.
It takes a bit of work, but it's not that hard. I've run through this procedure on two systems and it works well. Any suggestions for improvement are welcome. Presented as is.
Veeam RAID System Recovery Steps
================================
The Veeam recovery disk will not directly restore a RAID mirror or other RAID system to new
drive(s). It WILL, in most cases, create a non-RAID single-drive (sda1, sda2, etc.) bootable system
from a Veeam RAID image, but newer Red Hat, CentOS, Ubuntu, etc. may STILL not boot on a
single-drive system without extra steps taken to correct the boot loader and initramfs
for the changes in hardware and RAID devices.
Procedure Summary - RAID1 Mirrored
==================================
Suggestion for all servers/workstations - run fdisk -l /dev/sda (etc.) for the OS and
other important drives and redirect the output to /root/drivepart.txt or some such, so it's captured on the backup images.
You can then retrieve it using the File Recovery option on the Veeam menu and dropping to the prompt;
the backup partitions are mounted on /mnt/backup, so a simple cat /mnt/backup/root/drivepart.txt gets the drive partition info. A minimal example is sketched below.
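For example, one way to capture this (a sketch only; the drive names and file path are just the examples above, adjust for your own layout):
  fdisk -l /dev/sda /dev/sdb > /root/drivepart.txt
  mdadm --detail --scan >> /root/drivepart.txt   # optional: also record the source RAID UUIDs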
1. Target server has drive(s) installed.
2. Boot with Veeam Recovery disk 3.x or 4.x.
3. Exit to the # prompt and partition the disk using the start/end blocks from the source server as
type "fd" (Linux RAID). STICKY WICKET PART if you don't have the partition table data.
4. Use mdadm to create the RAID devices with only one member drive each.
5. Re-enter Veeam Menu and recover by volume.
6. Restore bootloader from saved images of sda or sdb.
7. Restore whole disk of each source RAID > destination RAID, ignoring sda1, sda2, ...
entries on target (left-hand) side.
8. Commence restore and see if it boots. Most likely drops out at grub-rescue prompt.
9. Boot from a CentOS / Fedora / Red Hat / Ubuntu installation disk, selecting
Troubleshooting >> Rescue a system.
10. Perform steps for chroot.
11. Translate the RAID UUIDs from the newly created RAID partitions for BOOT, ROOT and SWAP. Replace them in
/etc/default/grub and edit the /boot/grub2/grub.cfg CMDLINEs.
12. Re-generate initramfs and install new grub boot loader on sda.
13. Should now have a working system.
Procedure
=========
1. I recommend that you work with one drive and add the second drive later, once the restore has completed
and the system has booted successfully.
2. Boot with Veeam Recovery disk 3.x or 4.x. Exit to command prompt.
3. Partition the target drive with the same or slightly larger partitions than the original.
a. Change the partition type to "fd" (Linux RAID) for all raided partitions. Write the changes to disk.
b. Use mdadm to create the RAID devices with a single member each. EG: mdadm --create /dev/md0 -l raid1 -f -n 1 /dev/sda1, and so on
for each partition to be RAIDed.
(I had issues making the partitions the exact size shown on the source server's disk. The Veeam agent said there was not enough
space, so I had to use mdadm to stop the md devices and zero the superblocks of the individual sdX partitions,
then re-partition the drive, adding some blocks to increase the size. Remember to take any
increase into account when calculating the successive partitions.)
c. Copy the new RAID partition UUIDs now; you will need them later to edit the grub config files.
Run mdadm --detail --scan, then take a photo, put the output into a file you can copy, or write them down.
You will need the ROOT ("/"), /BOOT and SWAP partition UUIDs. A worked sketch of this whole step follows.
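Pulled together, step 3 looks roughly like this (a sketch only; the partition and md numbers are illustrative, and /tmp/new-raid-uuids.txt is just a hypothetical place to stash the output):
  fdisk /dev/sda        # n = new partition (start/end blocks from drivepart.txt), t = set type fd, w = write
  mdadm --create /dev/md0 -l raid1 -f -n 1 /dev/sda1
  mdadm --create /dev/md1 -l raid1 -f -n 1 /dev/sda2
  mdadm --create /dev/md2 -l raid1 -f -n 1 /dev/sda3
  cat /proc/mdstat                                  # confirm the md devices are running
  mdadm --detail --scan > /tmp/new-raid-uuids.txt   # hypothetical file; copy it off or write the UUIDs down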
4. Type "exit" to return to the Recovery Menu and Select Volume Recovery.
a. Select source and date to recover from.
b. Look at the Source (Image) side and select the SDA (preferred) or SDB bootloader, press enter and select restore to SDA.
c. RAID partitions - select each one from the right-hand side, press enter and choose Restore Whole Disk To, selecting the
MD device it should restore to and ignoring all of the sdaX entries.
d. Select "S" to restore, review the entries and press enter to start the recovery.
5. Select Reboot from the main Veeam menu to restart the system and see if it boots. You will most likely be
stopped at a grub-rescue prompt showing an error, EG:
Error: disk 'md-uuid-30bc0880:dbcb8250:8a4c1367:727d43fa' not found. This was the UUID of the original source
/boot RAID partition. Examination showed that the /boot/grub2/grub.cfg file had the boot, root and swap
RAID UUIDs embedded in the cmdline of each kernel entry used at startup.
a. Restart the server, insert a copy of the CentOS / Red Hat / Fedora / Ubuntu installation disk
and select the Rescue a System entry. For CentOS it's Troubleshooting >> Rescue a System.
b. Select (1) Continue from the Rescue Menu and then press enter after reading the info there.
c. Verify that your RAID partitions are seen: cat /proc/mdstat. You should see all md devices running.
d. Change Root - CentOS 7 / Red Hat 7 / newer Fedora need an extra command. For CentOS 7 and a multipathed
root ('/'), issue the following before chroot-ing to '/mnt/sysimage':
1. mount --bind /run /mnt/sysimage/run
2. systemctl start multipathd.service
3. chroot /mnt/sysimage
You should now be able to ls -l the recovered filesystems in their semi-normal state.
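A quick sanity check inside the chroot (read-only, nothing here changes the system):
  cat /proc/mdstat    # all md devices should still show as active
  ls -l /boot         # the restored kernels and initramfs images should be listed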
e. Discover the old MD devices' top-level (array) UUIDs that need to be replaced: cat /etc/mdadm.conf. EG:
# mdadm.conf written out by anaconda
MAILADDR root
AUTO +imsm +1.x -all
    ARRAY /dev/md/1_0 level=raid1 num-devices=2 UUID=30bc0880:dbcb8250:8a4c1367:727d43fa
    ARRAY /dev/md/colossus.schoolpathways.com:2 level=raid1 num-devices=2 UUID=9cdf44f4:b1f0cf5f:d718cf64:2105f1ca
ARRAY /dev/md/colossus.schoolpathways.com:3 level=raid1 num-devices=2 UUID=a3e74d07:bd3213ca:f38e1841:a62e282b
ARRAY /dev/md/colossus.schoolpathways.com:5 level=raid1 num-devices=2 UUID=87da949a:4ffa26f7:51455e52:e97e14eb
    ARRAY /dev/md/colossus.schoolpathways.com:6 level=raid1 num-devices=2 UUID=0150934b:d058ce65:71f26e58:ecda2b67
ARRAY /dev/md/colossus.schoolpathways.com:7 level=raid1 num-devices=2 UUID=0a4f0169:cc7f4910:ba01c597:78a97e27
I've indented the 3 significant RAID devices on my sample system. These 3 UUIDs are the ones which need to be
replaced in both /etc/default/grub and /boot/grub2/grub.cfg. Make copies before you change them!!!
But first, mv /etc/mdadm.conf /etc/mdadm.conf-source and run mdadm --detail --scan > /etc/mdadm.conf to
create a valid new mdadm.conf file, as sketched below.
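For example (a minimal sketch; the -source suffix is just the naming convention used above):
  mv /etc/mdadm.conf /etc/mdadm.conf-source
  mdadm --detail --scan > /etc/mdadm.conf
  cat /etc/mdadm.conf    # the ARRAY lines should now carry the new UUIDs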
f. Edit the grub files with the new MD device UUIDs obtained in STEP 3.
1. Make a copy of /etc/default/grub > /etc/default/grub-source
2. Edit /etc/default/grub
3. Make a copy of /boot/grub2/grub.cfg > /boot/grub2/grub.cfg-source
EG: /etc/default/grub, which still has the SOURCE UUIDs in it:
GRUB_CMDLINE_LINUX="crashkernel=auto rd.md.uuid=9cdf44f4:b1f0cf5f:d718cf64:2105f1ca
rd.md.uuid=30bc0880:dbcb8250:8a4c1367:727d43fa rd.md.uuid=0150934b:d058ce65:71f26e58:ecda2b67 rhgb quiet
The 1st UUID is the ROOT partition, the 2nd UUID is the /boot partition and the 3rd is the swap partition.
You may want to verify that this order is the same for your OS before proceeding.
The changed /etc/default/grub with the new UUIDs from STEP 3:
GRUB_CMDLINE_LINUX="crashkernel=auto rd.md.uuid=f7276bf6:4eceac1d:7fdd3faa:176919f1
rd.md.uuid=7d183f50:3e98b495:fa66269c:d99e1556 rd.md.uuid=b370675c:8f6ea1e6:63c2525f:b3386bd4 rhgb quiet"
Save changes.
4. Use the grub2-install command to re-write the MBR on your boot device. The boot device is usually /dev/sda.
# grub2-install /dev/sda
5. Edit /boot/grub2/grub.cfg, making the same changes as in /etc/default/grub. There will be multiple
entries, as this file usually has several kernels available for the Grub boot menu. Note: the grub2-install
command MAY make the changes for you, but you should verify, for example as sketched below.
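One quick way to verify (a sketch; 30bc0880 is the old source /boot UUID from the example error above, substitute your own old UUIDs):
  grep 'rd.md.uuid=' /boot/grub2/grub.cfg   # each kernel entry should list the three new UUIDs
  grep '30bc0880' /boot/grub2/grub.cfg      # should print nothing once the old UUIDs are gone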
g. Re-generate the initramfs boot file to reflect the changes.
1. Make a copy - look for the newest version. EG:
cp /boot/initramfs-3.10.0-1062.12.1.el7.x86_64.img /boot/initramfs-3.10.0-1062.12.1.el7.x86_64.img.bak
2. Generate the new initramfs.
dracut -f /boot/initramfs-3.10.0-1062.12.1.el7.x86_64.img 3.10.0-1062.12.1.el7.x86_64
Note that the second argument is the kernel version, not a file, so it has no ".img" ending.
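To double-check that the new initramfs picked up the updated mdadm.conf, lsinitrd (shipped with dracut) can list its contents (a sketch; file name as in the example above):
  lsinitrd /boot/initramfs-3.10.0-1062.12.1.el7.x86_64.img | grep mdadm.conf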
6. All done! Fingers crossed! Reboot and check the system.
Re: Veeam Agent Raid Recovery Solved A Procedure
Addendum for those who need to stop and remove MD devices in order to resize.
EG: md5 using /dev/sda5
1. mdadm --stop /dev/md5
2. mdadm --zero-superblock /dev/sda5
3. Go back to fdisk to resize and re-create the raid device.
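The full resize cycle as a sketch (md5/sda5 are the example names above; -f -n 1 matches the single-drive create used in step 3b):
  mdadm --stop /dev/md5
  mdadm --zero-superblock /dev/sda5
  fdisk /dev/sda                                  # delete sda5, re-create it larger, set type fd, write
  mdadm --create /dev/md5 -l raid1 -f -n 1 /dev/sda5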
Re: Veeam Agent Raid Recovery Solved A Procedure
Hi,
Great job! Thank you for this post, we'll consider making our restore procedure more automated.
Thanks!