Influencer
Restore Files (FLR) from Thin-Provisioned Linux Drive
Hello,
I've had a support case open for a few months now (01762630) about not seeing any data files when attempting a Linux FLR on our Linux VM. The FLR appliance opens and runs fine; it just doesn't mount all the drives, only the boot device sda1. After I was escalated to Tier 3 support, they informed me that there currently isn't any support for Linux FLR when the Linux VM has thin-provisioned drives. The appliance can see the devices; it just can't mount them. Support will eventually get back to me with a report on the fix, but there's no guarantee whether it'll ship in Veeam B&R v9 or in a future release.
I should point out that Microsoft's best practice for Linux VMs in Hyper-V is to thin provision the drive: https://technet.microsoft.com/en-us/lib ... 20239.aspx
Here's the command for creating the dynamic VHDX:
Code:
New-VHD -Path C:\MyVHDs\test.vhdx -SizeBytes 127GB -Dynamic -BlockSizeBytes 1MB
We've been holding off on putting this Linux VM into production until we have a way to restore files from it; it'll be one of our main development servers. I'm currently looking for workarounds and wonder if anyone has any suggestions.
Option 1: Restore the whole VM
Not reasonable; we can't wait over an hour to restore the whole VM just to get at a few files.
Option 2: Mount device remotely from FLR appliance (can't get this working)
One slick feature is the ability to log into the FLR appliance while it's running. I'm able to PuTTY into it and move around the successfully mounted file system (just the grub2 boot loader device, sda1). From there, I could see the block-level device sda2 that I wanted to mount. So I installed sshfs and mounted the FLR appliance's /dev folder, which contains sda2, under a mount folder on my Linux VM using the command below:
[: mnt]# sudo sshfs root@192.168.123.152:/dev/ /mnt/veeam_temp/
From here, I've been trying to mount the /mnt/veeam_temp/sda2 device on my Linux VM, but haven't been able to because the device is read-only. I've looked at a ton of resources online, and none of the suggestions work. Trying a plain mount:
[: mnt]# mount /mnt/veeam_temp/sda2 /mnt/veeam_sda2/
mount: /mnt/veeam_temp/sda2 is write-protected, mounting read-only
mount: cannot mount /mnt/veeam_temp/sda2 read-only
Trying to mount with -o ro doesn't work either:
[: mnt]# mount -o ro /mnt/veeam_temp/sda2 /mnt/veeam_sda2/
mount: cannot mount /mnt/veeam_temp/sda2 read-only
I can't seem to find a way to get past this particular issue.
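One more idea I've been toying with (untested, and probably impractical at this size) is copying the raw device over SSH and loop-mounting the image locally; the paths below are placeholders:
Code:
# Stream the raw contents of sda2 from the FLR appliance into a local image.
# sda2 is ~800GB here, so this needs a lot of free space and time:
ssh root@192.168.123.152 "dd if=/dev/sda2 bs=4M" > /restore/sda2.img
# Attach the image to a free loop device, read-only:
losetup -f --show -r /restore/sda2.img
# If the image holds LVM rather than a plain filesystem, the volume group
# would still need activating before anything can be mounted:
vgscan && vgchange -ay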
I'd like to try using nbd to access the block device remotely, but the FLR appliance doesn't recognize nbd, as it isn't on the boot drive. Even if it did, I'm not certain this would work, since FLR serves up the files read-only.
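For reference, if the tooling were present on both ends, I'd expect the nbd approach to look roughly like this (the port and device names are illustrative):
Code:
# On the FLR appliance: export sda2 read-only over the network
# (nbd-server isn't actually available there, per the above):
nbd-server 10809 /dev/sda2 -r
# On the live VM: load the client module and attach the export:
modprobe nbd
nbd-client 192.168.123.152 10809 /dev/nbd0
# In principle the device could then be mounted read-only:
mount -o ro /dev/nbd0 /mnt/veeam_sda2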
Option 3: Use Instant VM Recovery
This seems like it would be a good option, but I'm wondering if anyone else has attempted it. I can't allow this VM onto our network, as the source VM is live and running on our domain. It should be possible to start the VM without it being on the network, then use an internal Hyper-V switch to get access to the host. From there, the files can be transferred from the recovered VM to the host, and from the host to the live VM. I'm still working through this process.
Does anyone have any other suggestions?
Product Manager
Re: Restore Files (FLR) from Thin-Provisioned Linux Drive
Hi,
That's exactly how Virtual Lab works: you can create one using Veeam, or manually, and perform an Instant Recovery inside the isolated network. You've mentioned that you were able to see the sda2 device; have you tried detaching the virtual drive from the FLR appliance, attaching it to the original Linux VM, and mounting it there?
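On the Linux side, once the disk is attached, I'd expect it to go roughly like this (device and mount point names are just examples):
Code:
# Scan for the newly attached disk's LVM metadata and activate its VG:
pvscan
vgchange -ay
# Mount a logical volume read-only to browse the files:
mount -o ro /dev/mapper/<vg>-data /mnt/restore
# Note: if the VG name clashes with one already active on that VM (likely
# when attaching to the original VM), vgimportclone can rename it first.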
Thanks
Influencer
Re: Restore Files (FLR) from Thin-Provisioned Linux Drive
I haven't been able to get around to testing the Virtual Lab functionality yet, though most of it is configured. I ran into an issue where the Windows Firewall ended up enabled on our remote DR server after the appliance was turned on, which blocked all RDP access. I'll start looking into this again.
As for the sda2 device, how would I go about detaching it from the FLR appliance? Maybe that's why I can't mount it on the live VM: it's in use even though it isn't mounted?
I'm not able to run some commands in the PuTTY session on the FLR appliance, lsblk and findmnt among them. I am able to execute blkid:
Code:
# blkid
/dev/sda1: UUID="b2b32058-b53a-4255-8f8f-dd234ee16126" TYPE="xfs" PARTUUID="0009bdec-01"
/dev/sda2: UUID="uvPr7F-M4fF-vOWz-pff2-E83r-v7Qc-tbRARo" TYPE="LVM2_member" PARTUUID="0009bdec-02"
/dev/mapper/rhel_redactedin1-swap: LABEL="swap" UUID="5b4d8c48-f84b-4079-88f6-e78163346ef1" TYPE="swap"
Here's what df shows:
Code:
# df
Filesystem 1K-blocks Used Available Use% Mounted on
none 65536 20012 45524 31% /tmp
/dev/sda1 508588 176620 331968 35% /media/sda1
/sys/dev/block lists the major:minor number (8:2) of the device as it appears on the live VM, but not the device-mapper node for the mount point (253:6):
Code:
# ls /sys/dev/block
11:0 1:1 1:11 1:13 1:15 1:3 1:5 1:7 1:9 253:0 2:0 7:1 7:3 7:5 7:7 8:1
1:0 1:10 1:12 1:14 1:2 1:4 1:6 1:8 252:0 253:1 7:0 7:2 7:4 7:6 8:0 8:2
If it helps at all, here's the setup on the live VM:
Code:
[: mnt]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
fd0 2:0 1 4K 0 disk
sda 8:0 0 800G 0 disk
|-sda1 8:1 0 500M 0 part /boot
`-sda2 8:2 0 799.4G 0 part
|-rhel_redactedin1-pool00_tmeta 253:0 0 100M 0 lvm
| `-rhel_redactedin1-pool00-tpool 253:2 0 778.5G 0 lvm
| |-rhel_redactedin1-root 253:3 0 80G 0 lvm /
| |-rhel_redactedin1-pool00 253:5 0 778.5G 0 lvm
| `-rhel_redactedin1-data 253:6 0 698.5G 0 lvm /data
|-rhel_redactedin1-pool00_tdata 253:1 0 778.5G 0 lvm
| `-rhel_redactedin1-pool00-tpool 253:2 0 778.5G 0 lvm
| |-rhel_redactedin1-root 253:3 0 80G 0 lvm /
| |-rhel_redactedin1-pool00 253:5 0 778.5G 0 lvm
| `-rhel_redactedin1-data 253:6 0 698.5G 0 lvm /data
`-rhel_redactedin1-swap 253:4 0 5G 0 lvm [SWAP]
sr0 11:0 1 26.3M 0 rom
[: mnt]# blkid
/dev/block/8:2: UUID="uvPr7F-M4fF-vOWz-pff2-E83r-v7Qc-tbRARo" TYPE="LVM2_member"
/dev/block/253:3: LABEL="root" UUID="e80fb2da-d897-4f57-aa85-d987957d53e1" TYPE="xfs"
/dev/block/8:1: UUID="b2b32058-b53a-4255-8f8f-dd234ee16126" TYPE="xfs"
/dev/block/253:4: LABEL="swap" UUID="5b4d8c48-f84b-4079-88f6-e78163346ef1" TYPE="swap"
/dev/mapper/rhel_redactedin1-data: LABEL="Data" UUID="d64b320f-ed78-4599-9cc3-22c21247ae48" TYPE="xfs"
Product Manager
Re: Restore Files (FLR) from Thin-Provisioned Linux Drive
I meant detaching the VHDX that contains sda2 from the FLR appliance at the Hyper-V level, attaching that VHDX to some other Linux VM (the original one, for example), and seeing whether that VM can mount the drive.
UPDATE: Could you please post the output of the lvs command?
Thanks
Influencer
Re: Restore Files (FLR) from Thin-Provisioned Linux Drive
Hi PTide,
I've got a usable workaround with Instant VM Recovery and a Hyper-V internal switch on the VM. After the recovered VM is running, I change its IP, PuTTY in, mount a share from the host server, and transfer files to the VM host; then I transfer the files from the VM host to the current live VM. This is about as much effort as (actually slightly less than) attaching the FLR VHD to another Linux VM.
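The transfer step from inside the recovered VM amounts to something like this (the address and share name are made up):
Code:
# Mount a share exported by the Hyper-V host over the internal switch:
mount -t cifs //192.168.200.1/restore /mnt/host -o username=restoreuser
# Copy the needed files out of the recovered VM onto the host:
cp -a /data/needed-files /mnt/host/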
We do not need to pursue this further unless you'd like to find out more information about this issue or the FLR appliance. Thanks for all your help thus far.
To answer your question, the lvs command on the FLR appliance returns:
Code:
# lvs
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
data rhel_redactedin1 Vwi---tz-- 698.49g pool00
pool00 rhel_redactedin1 twi---tz-- 778.49g
root rhel_redactedin1 Vwi---tz-- 80.00g pool00
swap rhel_redactedin1 -wi-a----- 4.94g
And here's the output of the lvdisplay command on the FLR appliance; the data volume is my destination (note the LV Status on the majority of the volumes):
Code:
# lvdisplay
--- Logical volume ---
LV Name pool00
VG Name rhel_redactedin1
LV UUID 36pr27-KQZV-cx6G-Jk8L-1RGT-ZsmV-7toDtf
LV Write Access read/write
LV Creation host, time localhost, 2015-10-16 21:44:15 +0400
LV Pool metadata pool00_tmeta
LV Pool data pool00_tdata
LV Status NOT available
LV Size 778.49 GiB
Current LE 199293
Segments 1
Allocation inherit
Read ahead sectors auto
--- Logical volume ---
LV Path /dev/rhel_redactedin1/root
LV Name root
VG Name rhel_redactedin1
LV UUID KSYWiw-nmGo-IhS4-50qg-PWJv-kQJG-Zwe55j
LV Write Access read/write
LV Creation host, time localhost, 2015-10-16 21:44:15 +0400
LV Pool name pool00
LV Status NOT available
LV Size 80.00 GiB
Current LE 20480
Segments 1
Allocation inherit
Read ahead sectors auto
--- Logical volume ---
LV Path /dev/rhel_redactedin1/data
LV Name data
VG Name rhel_redactedin1
LV UUID ClUjBN-H0j1-4Wsl-EJDr-r49L-AVVs-xqNR2i
LV Write Access read/write
LV Creation host, time localhost, 2015-10-16 21:44:16 +0400
LV Pool name pool00
LV Status NOT available
LV Size 698.49 GiB
Current LE 178813
Segments 1
Allocation inherit
Read ahead sectors auto
--- Logical volume ---
LV Path /dev/rhel_redactedin1/swap
LV Name swap
VG Name rhel_redactedin1
LV UUID oUoRSR-AtRd-eek0-hQny-ekEv-TbCD-WaqZEr
LV Write Access read/write
LV Creation host, time localhost, 2015-10-16 21:44:19 +0400
LV Status available
# open 0
LV Size 4.94 GiB
Current LE 1264
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 252:0
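Those NOT available statuses (and the thin-pool 't' attributes in the lvs output) line up with the kernel angle below: on a kernel with dm-thin support, I'd expect activation and a read-only mount to work along these lines (the mount point is a placeholder):
Code:
# Activate the thin volumes; this is what fails on the FLR appliance:
lvchange -ay rhel_redactedin1/data
# Mount the data volume read-only to get at the files:
mount -o ro /dev/rhel_redactedin1/data /mnt/data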
Support has just reported to me that a fix for this issue would require the FLR appliance kernel to be updated.