Hi all,
Just want to share a subject with you about DirectSAN configuration, and more specifically about the way to protect VMFS datastores that are presented (zoned and mapped) to a physical Veeam server.
Two months ago, we put a new Veeam server (V8u3) into production:
- A physical Dell server with its own local SAS storage as the repository
- Only one Veeam proxy: the Veeam server itself
- The Veeam server is FC-attached to our production SAN, to be able to back up VMs using DirectSAN access.
No real problems in the end putting the infrastructure in place (thanks, Veeam, for the simplicity…).
And no regrets, the DirectSAN access performance is there…
… Until we decided to test a full VM restore:
As you may know, when you enable DirectSAN access on your physical Veeam proxy server, the default behavior for full restores is to use DirectSAN access too: when the restore process begins, the proxy is set up for DirectSAN, the datastores are connected, so let's go…
But by default, the Veeam installer has set the Windows Server SAN policy to "Offline Shared", so all your connected datastores are, by default and by design, not initialized, and therefore in read-only mode.
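If you want to verify the SAN policy on your own proxy, you can check it from an elevated PowerShell prompt. A quick sketch (assuming the Storage module's Get-StorageSetting cmdlet, available on Windows Server 2012 R2 and later; on older systems diskpart's "san" command shows the same information):

```powershell
# Show the current SAN policy; after a Veeam install you would expect
# to see "OfflineShared" here.
Get-StorageSetting | Select-Object NewDiskPolicy

# Equivalent check with diskpart:
#   DISKPART> san
#   SAN Policy  : Offline Shared
```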
At this point, any attempt to restore a VM via DirectSAN will fail with a pretty "VDDK Error 16000" (i.e. unable to write to your datastore).
So what… if we want to take advantage of DirectSAN for restores, we have to put our datastores in write-access mode… but HOW? What is the best way to do that? After searching for a while, I can say that nothing about this is clearly mentioned in the Veeam documentation (and support confirmed this to me after some rich discussions).
As far as I know, you have these options:
- You decide to give up DirectSAN access for restore operations. But even if you configure your proxy to fail over to network (NBD) mode, this failover does not occur if the datastore is write-protected (perhaps a new feature for this in v9…). The only way to use network mode is to install another Veeam proxy server and force it to act as the endpoint sending data to your vCenter; not a really "sexy" solution…
- Initialize each disk/datastore from the Veeam server's point of view (right-click -> Initialize, in your Veeam server's Disk Management console). In theory this carries no risk (I'll let you read the excellent article about this:
http://rickardnobel.se/vmfs-exposed-to- ... an-policy/ ). Until one of your colleagues decides to work in the Disk Management console for some reason and makes THE mistake he shouldn't have made (deleting a volume on a datastore, for example). VMFS header corrupted, and so on… Several dozen VMs lost; not really reassuring…
- The option that we chose: we all agreed that leaving the datastores in read-only mode by default was the best way to protect our production VMware infrastructure. So we scripted, with PowerShell, a way to simply put the relevant datastore in write mode using the Set-Disk cmdlet. Example: you want to restore a full VM to the datastore that your Veeam server sees as disk number 6; you run this command before restoring:
* Set-Disk 6 -IsReadOnly $false
After the restore operation, you roll the datastore back to read-only mode:
* Set-Disk 6 -IsReadOnly $true
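To reduce the risk of flipping the wrong LUN, the two Set-Disk calls above can be wrapped in a small helper that prints the disk's details before touching it. This is only a sketch; the function name and the output format are mine, not Veeam's:

```powershell
# Sketch: toggle a datastore LUN between read-only and writable,
# showing the disk details first so you can double-check the number
# before a restore. Requires an elevated session.
function Set-DatastoreWritable {
    param(
        [Parameter(Mandatory)][int]$DiskNumber,
        [bool]$Writable = $true
    )

    # Show what we are about to modify.
    $disk = Get-Disk -Number $DiskNumber
    Write-Host "Disk $($disk.Number): $($disk.FriendlyName), ReadOnly=$($disk.IsReadOnly)"

    # Flip the read-only flag; -Writable $true clears it for the restore.
    Set-Disk -Number $DiskNumber -IsReadOnly (-not $Writable)
}

# Before the restore: make disk 6 writable
# Set-DatastoreWritable -DiskNumber 6 -Writable $true

# After the restore: lock it again
# Set-DatastoreWritable -DiskNumber 6 -Writable $false
```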
I don't know if there is another solution… and I'm curious to hear about your experience with this DirectSAN feature in production.
I think that Veeam should manage this datastore access layer automatically in the job (it's really simple in PowerShell). It would be the easiest and most secure way of using DirectSAN access. But today that's not the case, and I hope v9 will be the one…
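To give an idea of how simple the automation could be, here is how we imagine wrapping a restore so the LUN is only writable for the duration of the operation. Heavy assumptions here: the Veeam cmdlet names (VeeamPSSnapin, Get-VBRBackup, Get-VBRRestorePoint, Start-VBRRestoreVM) and the job/VM names are from my memory of the v8 PowerShell snap-in, not verified against this exact build, and disk 6 standing for the target datastore is our environment's specific case:

```powershell
# Sketch only: unlock the datastore LUN, run the restore, relock it,
# even if the restore fails (hence the try/finally).
Add-PSSnapin VeeamPSSnapin

Set-Disk 6 -IsReadOnly $false
try {
    # Pick the latest restore point of the VM (job and VM names are placeholders).
    $rp = Get-VBRBackup -Name "MyBackupJob" |
        Get-VBRRestorePoint -Name "MyVM" |
        Sort-Object CreationTime -Descending |
        Select-Object -First 1
    Start-VBRRestoreVM -RestorePoint $rp -Reason "Full VM restore test"
}
finally {
    # Always roll the datastore back to read-only.
    Set-Disk 6 -IsReadOnly $true
}
```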