I have a stack of 6 ESXi hosts in a cluster, and I installed the software iSCSI adapter on one of the hosts. The two paths go to OpenFiler iSCSI disks of 6.07TB each, which are presented to my VM as two hard drives. I need to retire that iSCSI storage and move these drives to my shiny new vSAN, but I can't figure out how.
I first tried a simple right-click / migrate / storage only, and get this error: "Virtual disk 'Hard disk 2' is a mapped direct-access LUN and larger than 2 TB. This configuration is not supported on the datastore 'vsanDatastore'."
Next, I tried to replicate the VM and all 3 drives (it has one "normal" vmdk as the c: drive) and tell the replication job to move the drives to vsan, but get the following error when I try: "Disk abc_1.vmdk has been skipped due to an unsupported type (raw device mapping in physical compatibility mode)."
How can I get these migrated to my vSAN?
Tom Sightler (VP, Product Management) replied:
Re: How do I migrate 6TB iSCSI attached hard drives to vSAN?
If I understand your setup correctly, you've created an ESXi cluster using vSAN in which one of the hosts is also connected to your OpenFiler via iSCSI and is presenting that volume to your VM using a physical raw device mapping. You'd like to move this physical RDM to a traditional VMDK sitting on the vSAN.
If my understanding is correct, I can see several possible options:
1) Convert the physical raw device mapping (pRDM) to a virtual raw device mapping (vRDM); then you can use standard vSphere features like Storage vMotion to migrate the disk to a VMDK on the vSAN datastore. This wouldn't use Veeam in any way, and it's probably the option with the least downtime, since you only need to shut down the VM for a few minutes to convert from pRDM to vRDM, and the Storage vMotion can then run while the VM is powered on. There are plenty of articles online that explain this process; here's just one example: http://www.enterprisedaddy.com/2016/06/ ... m-to-vmdk/ (there's also a rough sketch of the conversion at the end of this post).
2) Similar to the above, convert the pRDM to a vRDM and then use Veeam to replicate the VM, since Veeam is able to replicate vRDM volumes to VMDK. This approach would also minimize downtime, but you would need two small outages: one to convert from pRDM to vRDM, and another when you perform your final replication and failover.
3) If you don't want to convert from pRDM to vRDM for whatever reason (perhaps you don't want to take the system down for the conversion, or you don't want to make any changes to the original VM), you could install a Veeam Agent and back up the volume. You could then use the Veeam console to export the backed-up volume as a VMDK, storing it on the vSAN datastore. Once the VMDK is on the datastore, you just remove the pRDM from the VM and add the restored VMDK. This approach likely requires the most downtime, since you'd need to keep the source VM shut down for the entire time it takes to restore the 6TB VMDK; you wouldn't want the source data to change until after the new disk was attached.
I'm sure there are other methods, but these are the ones that jumped out immediately as the most obvious methods. Good luck!
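To make option 1 a little more concrete, below is a minimal pyVmomi sketch of the conversion step. It is only an illustration, not something from this thread: the vCenter address, credentials, and VM name ("myvm") are placeholders, the VM is assumed to be powered off, and doing the same two steps in the vSphere client (delete the pRDM from disk, then re-add the same LUN in virtual compatibility mode) works just as well.

Code:
# Sketch of option 1 (pRDM -> vRDM) with pyVmomi. Placeholder connection details
# and VM name; the VM is assumed to be powered off before running this.
import ssl
from pyVim.connect import SmartConnect
from pyVim.task import WaitForTask
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.local", user="administrator@vsphere.local",
                  pwd="secret", sslContext=ssl._create_unverified_context())
view = si.content.viewManager.CreateContainerView(si.content.rootFolder,
                                                  [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "myvm")          # placeholder VM name

# Find the physical-mode RDM and note which LUN and controller slot it uses.
prdm = next(d for d in vm.config.hardware.device
            if isinstance(d, vim.vm.device.VirtualDisk)
            and isinstance(d.backing, vim.vm.device.VirtualDisk.RawDiskMappingVer1BackingInfo)
            and d.backing.compatibilityMode == "physicalMode")
lun_path = prdm.backing.deviceName                           # e.g. /vmfs/devices/disks/vml.xxxx
ctrl_key, unit = prdm.controllerKey, prdm.unitNumber

# Step 1: remove the pRDM. "destroy" deletes only the small mapping-file .vmdk
# (a pointer); the data on the iSCSI LUN itself is untouched.
remove = vim.vm.device.VirtualDeviceSpec(
    operation=vim.vm.device.VirtualDeviceSpec.Operation.remove,
    fileOperation=vim.vm.device.VirtualDeviceSpec.FileOperation.destroy,
    device=prdm)
WaitForTask(vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(deviceChange=[remove])))

# Step 2: re-add the same LUN as a virtual-mode RDM on the same controller slot.
vrdm = vim.vm.device.VirtualDisk(
    key=-101,                                                # temporary key for a new device
    controllerKey=ctrl_key,
    unitNumber=unit,
    backing=vim.vm.device.VirtualDisk.RawDiskMappingVer1BackingInfo(
        deviceName=lun_path,
        compatibilityMode="virtualMode",
        diskMode="persistent",
        fileName=""))                                        # empty -> vSphere creates the mapping file
add = vim.vm.device.VirtualDeviceSpec(
    operation=vim.vm.device.VirtualDeviceSpec.Operation.add,
    fileOperation=vim.vm.device.VirtualDeviceSpec.FileOperation.create,
    device=vrdm)
WaitForTask(vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(deviceChange=[add])))

Once the disk is back as a virtual-mode RDM, a normal Storage vMotion to the vSAN datastore (choosing a thin or thick destination format rather than "same format as source") converts it to an ordinary VMDK.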
Re: How do I migrate 6TB iSCSI attached hard drives to vSAN?
Thanks tsightler; your understanding is correct! And I see now that this turns out to be mostly a VMware thing, not necessarily a Veeam thing.
I *think* when I attached the RDMs (it's been almost a year ago now) that I was unable to attach them as vRDMs because they were over 2TB. However, the latest VMware KBs say that vSphere 5.5 and up can handle vRDMs up to 62TB, so I'm not sure what's up with that (I'm running vSphere 6). Today at lunchtime I'm going to try converting the drives to vRDMs and see what happens.
Oh...it actually may be that the OpenFiler setup doesn't support VMFS 5; that would explain why I had to choose "physical". Ugh. If that's the case, then I'll have to go the painfully slow route you describe in point 3: "shut down the VM, export disks to VMDK and store them on vSAN, attach those to the VM, spin it back up".
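Whether the datastore that would hold the vRDM mapping file is actually VMFS 5 is quick to verify. Below is a small pyVmomi sketch (connection details and the VM name are placeholders, not from this thread) that prints the filesystem version of every datastore visible to the VM's host; a vRDM larger than 2TB needs its mapping file on VMFS 5.

Code:
# Sketch: list datastore filesystem versions, since a >2TB vRDM needs its mapping
# file on a VMFS 5 datastore. All connection details below are placeholders.
import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.local", user="administrator@vsphere.local",
                  pwd="secret", sslContext=ssl._create_unverified_context())
view = si.content.viewManager.CreateContainerView(si.content.rootFolder,
                                                  [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "myvm")          # placeholder VM name

for ds in vm.runtime.host.datastore:
    info = ds.info
    if isinstance(info, vim.host.VmfsDatastoreInfo):
        print(f"{ds.name}: VMFS {info.vmfs.majorVersion} ({info.vmfs.version})")
    else:
        print(f"{ds.name}: {ds.summary.type}")               # e.g. NFS or vsan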
I'll update this when I find out for sure.
Re: How do I migrate 6TB iSCSI attached hard drives to vSAN?
I was able to detach / re-attach those RDMs as virtual, no issues at all. I'll begin a Storage vMotion to my vSAN tonight.
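For anyone who prefers to script that Storage vMotion rather than use the migrate wizard, here is a similar hedged pyVmomi sketch, again with placeholder connection details and VM name; it is an illustration, not the exact procedure used in this thread. The detail that matters is requesting a flat (thin or thick) destination format for each disk; leaving the format as "same as source" would only relocate the vRDM pointer files instead of converting them to regular VMDKs.

Code:
# Sketch: Storage vMotion of the whole VM to vsanDatastore, asking for thin flat
# VMDKs so the virtual-mode RDMs are converted during the move. Placeholder names.
import ssl
from pyVim.connect import SmartConnect
from pyVim.task import WaitForTask
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.local", user="administrator@vsphere.local",
                  pwd="secret", sslContext=ssl._create_unverified_context())
view = si.content.viewManager.CreateContainerView(si.content.rootFolder,
                                                  [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "myvm")          # placeholder VM name
vsan_ds = next(ds for ds in vm.runtime.host.datastore if ds.name == "vsanDatastore")

# One locator per virtual disk: destination is vsanDatastore, format is thin flat VMDK.
locators = [
    vim.vm.RelocateSpec.DiskLocator(
        diskId=d.key,
        datastore=vsan_ds,
        diskBackingInfo=vim.vm.device.VirtualDisk.FlatVer2BackingInfo(
            thinProvisioned=True, diskMode="persistent", fileName=""))
    for d in vm.config.hardware.device
    if isinstance(d, vim.vm.device.VirtualDisk)
]

spec = vim.vm.RelocateSpec(datastore=vsan_ds, disk=locators)
WaitForTask(vm.RelocateVM_Task(spec=spec))                   # runs while the VM is powered on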