by mascaroit » Mon Apr 02, 2012 12:18 pm
I am evaluating Veeam B&R v6. I have a few virtual servers that use iSCSI targets for data. I have read several mentions of mounting these volumes as vRDMs and then cold-migrating them with Storage vMotion to another datastore, since that converts them to VMDKs, which are then visible to Veeam.
Does anyone know of a rock-solid walkthrough to accomplish this?
by foggy » Mon Apr 02, 2012 1:55 pm
Tony, you can simply map the LUN to the guest using virtual RDM mode; there is actually no need to convert it to a VMDK, as Veeam B&R can correctly process vRDM disks (moreover, it converts vRDMs to VMDK itself during backup).
by mascaroit » Mon Apr 02, 2012 5:20 pm
OK, here is what I did:
1. Disconnected the volume from the iSCSI initiator within the guest OS.
2. Set the iSCSI initiator service to manual.
3. Shut the virtual machine down.
4. Edited the virtual machine's settings to add a vRDM mapping to the data volume on my SAN.
5. Booted the virtual machine.
The drive and data appeared as expected, and not through the iSCSI initiator. WIN!
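For anyone who wants to script step 4 later, here is a rough pyVmomi sketch of adding the vRDM to the powered-off VM. The vCenter address, credentials, VM name, SCSI slot, LUN path, and size are placeholders rather than my real environment, so treat it as a starting point only:

# Rough sketch only: attach a SAN LUN to a powered-off VM as a virtual-mode RDM.
# The vCenter address, credentials, VM name, LUN path and size are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host='vcenter.example.local', user='administrator',
                  pwd='secret', sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

# Find the VM by name (placeholder name)
view = content.viewManager.CreateContainerView(content.rootFolder,
                                               [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == 'MyDataServer')

# Reuse the VM's existing SCSI controller for the new disk
controller = next(d for d in vm.config.hardware.device
                  if isinstance(d, vim.vm.device.VirtualSCSIController))

# Back the disk with the raw LUN in virtual compatibility mode so that
# snapshots (and therefore Veeam backups) can work against it
backing = vim.vm.device.VirtualDisk.RawDiskMappingVer1BackingInfo()
backing.deviceName = '/vmfs/devices/disks/naa.xxxxxxxxxxxxxxxx'  # placeholder LUN path
backing.compatibilityMode = 'virtualMode'
backing.diskMode = 'persistent'
backing.fileName = ''          # store the mapping file with the VM

disk = vim.vm.device.VirtualDisk()
disk.backing = backing
disk.controllerKey = controller.key
disk.unitNumber = 1            # assumes SCSI 0:1 is free on this VM
disk.capacityInKB = 520 * 1024 * 1024  # size of the mapped LUN (placeholder)

spec = vim.vm.device.VirtualDeviceSpec()
spec.operation = vim.vm.device.VirtualDeviceSpec.Operation.add
spec.fileOperation = 'create'  # create the RDM mapping file on the VM's datastore
spec.device = disk

vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(deviceChange=[spec]))
Disconnect(si)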
Then I opened Veeam B&R v6 and added that virtual machine to a backup job. The VMDK for the guest OS was discovered, but not the vRDM. Going by what foggy posted earlier, if I run a backup of the virtual machine, Veeam will somehow convert this vRDM to a VMDK even though it does not appear in the backup selection. Is this correct?
by Gostev » Mon Apr 02, 2012 8:34 pm
Not sure what you mean by "backup selection", as there is no place in the backup job wizard where we show individual disks for the VM. Yes, during the backup your vRDM disk should get written into the backup file as a VMDK file.
by mascaroit » Tue Apr 03, 2012 6:53 pm
OK. I ran a backup and the data was written to the backup file. However, in order to get the job to process correctly, I had to create a volume on my EqualLogic SAN slightly bigger than the one I was backing up and point the vRDM mapping to this empty volume. I do not get it. Is Veeam storing the snapshot in the empty volume? Now I am occupying over a terabyte of my SAN just to back up a 510GB volume. I guess I am totally lost now.
by Gostev » Tue Apr 03, 2012 7:06 pm
You really want to check out the VMware admin guide, or you will remain lost forever.
VMware snapshots do require a place to store the snapshot file; this is where any writes are redirected while the snapshot exists. Since the snapshot is only present for the duration of the backup (minutes to hours), the disk space required is usually just a few GB (just enough to hold all the blocks changed in the virtual disk for the duration of the backup).
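As a rough back-of-envelope illustration (the change rate and backup duration below are made-up numbers, not measurements from your environment):

# Snapshot space needed ~= write rate to the disk x time the snapshot exists.
disk_size_gb = 510            # size of the vRDM being backed up
change_rate_gb_per_hour = 2   # assumed write rate while the snapshot is open
backup_duration_hours = 2.5   # assumed length of the full backup

snapshot_growth_gb = change_rate_gb_per_hour * backup_duration_hours
print(f"Expected snapshot growth: ~{snapshot_growth_gb:.0f} GB "
      f"(versus the {disk_size_gb} GB base disk)")
# -> Expected snapshot growth: ~5 GB (versus the 510 GB base disk)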
by mascaroit » Tue Apr 03, 2012 7:16 pm
So instead of having a volume greater than or equal to the source, I should just create a small 4 or 5 GB volume to store the snapshot data during backup? I have read the storage section of the admin guide, and it is very vague about how to accomplish what I am trying to do. I cannot be the only guy here in the same boat.
by mascaroit » Tue Apr 03, 2012 7:33 pm
The volume where the VMX for the VM resides only has 53GB of free space. Initially I added the vRDM and told it to "Store with the virtual machine", then ran the backup. It failed. The following is the entry in Enterprise Manager:

2012-04-03 08:47:12 Error: File is larger than the maximum size supported by datastore

So that tells me I need more than 53GB to store the snapshot blocks. This is the first time I have backed up this VM and its data in Veeam, so I know it is a full backup. Now, if I run another backup and leave that 520GB empty volume present to process the snapshot, subsequent backups are just changed blocks, which should not amount to much. That leads me to believe I could discard the 520GB volume after the first full backup has run and then re-map the vRDM to the datastore where the VMX resides. Am I on the right track here?
by Gostev » Tue Apr 03, 2012 7:42 pm
This error has nothing to do with free space; it is about the block size the VMX datastore is formatted with. Just search this forum for the error you are getting, there are plenty (hundreds?) of posts regarding it. Or better yet, upgrade to ESXi 5 and VMFS5, which does not have those stupid variable block sizes at all.
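For reference, the quick check below uses the documented VMFS-3 block-size / maximum-file-size pairs against the roughly 520 GB disk involved in this thread (the real limits are a shade under these round numbers):

# VMFS-3 block size (MB) -> approximate maximum file size (GB)
VMFS3_MAX_FILE_GB = {1: 256, 2: 512, 4: 1024, 8: 2048}

disk_gb = 520  # approximate size of the mapped volume in this thread
for block_mb, max_gb in sorted(VMFS3_MAX_FILE_GB.items()):
    verdict = "fits" if disk_gb <= max_gb else "too large"
    print(f"{block_mb} MB blocks -> max file ~{max_gb} GB: a {disk_gb} GB disk {verdict}")
# With the default 1 MB block size the 520 GB mapping is 'too large', which is
# exactly the 'File is larger than the maximum size supported by datastore' error.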
by Gostev » Tue Apr 03, 2012 7:46 pm
And even better, get rid of your vRDM disk altogether and use a VMDK, and you will thank me later. There are literally no benefits to using vRDM these days; however, there are plenty of drawbacks.
by mascaroit » Tue Apr 03, 2012 8:25 pm
I am already on ESXi 5, but we did not upgrade the datastores to VMFS 5 because it seemed very risky. It seems that upgraded VMFS 5 volumes retain the block sizes (1MB in our case) of the original volumes where our VMs are stored. So in reality, should I create new VMFS 5 volumes and just Storage vMotion the VMs? It seems there is less risk in going that route.
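If that is the way to go, something like this rough pyVmomi sketch is what I had in mind for the move (the vCenter address, VM name, and datastore name are just placeholders):

# Rough sketch only: Storage vMotion a VM onto a newly created VMFS-5 datastore.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host='vcenter.example.local', user='administrator',
                  pwd='secret', sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

def find_by_name(vimtype, name):
    """Look up a managed object of the given type by its inventory name."""
    view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
    return next(obj for obj in view.view if obj.name == name)

vm = find_by_name(vim.VirtualMachine, 'MyDataServer')          # placeholder VM name
target_ds = find_by_name(vim.Datastore, 'NewVMFS5-Datastore')  # placeholder datastore name

# Relocate the VM's configuration files and virtual disks to the new datastore
spec = vim.vm.RelocateSpec(datastore=target_ds)
task = vm.RelocateVM_Task(spec=spec)
Disconnect(si)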
I cannot say how much I appreciate your help. I have got to be trying your patience. The reason I wanted to use vRDM was that, as far as I know, there is no real way to get the data being stored via the iSCSI initiator in the VM into a VMDK other than robocopy.