Host-based backup of VMware vSphere VMs.
timstumbo
Enthusiast
Posts: 25
Liked: 2 times
Joined: Mar 24, 2013 6:18 pm
Full Name: Tim Stumbo
Contact:

Direct SAN Access connection on virtual backup server

Post by timstumbo »

I have a single physical host, separate from our production environment, running VMware with three virtual servers that make up my backup environment.

I have one VM acting as the primary backup server, a second VM that acts as the proxy and also hosts the WAN accelerator, and a third VM that is the repository server.

I'm currently using Direct SAN Access mode on the proxy and it's working; I'm just not sure if I have it set up the correct way. The SAN access is configured on the ESXi host, and the VMs on that host can see the datastores through that. After reading around, I've seen a lot about using the Microsoft iSCSI Initiator in Windows to connect the servers to the SAN. Obviously that method would have to be used if the backup server were a physical server, but is it also the recommended method even for a virtual backup server?

Thanks
PTide
Product Manager
Posts: 6408
Liked: 724 times
Joined: May 19, 2015 1:46 pm
Contact:

Re: Direct SAN Access connection on virtual backup server

Post by PTide »

Hi,

Firstly, Direct SAN is only applicable if you use either an FC or an iSCSI connection between the proxy and the production storage.

Secondly, there is no point in using Direct SAN with a virtual proxy, because Direct SAN is intended to bypass the ESXi I/O stack and thus increase backup performance.

May I ask what your bottleneck statistics are? Also, do you see [san] anywhere in your session log?
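
If it helps, here is a minimal sketch of how you could count those markers across a job's session log (assuming the usual per-job log folder under C:\ProgramData\Veeam\Backup; the job folder name below is just a placeholder):

Code:

import re
from collections import Counter
from pathlib import Path

# Count transport-mode markers ([san], [hotadd], [nbd]) in a Veeam job session log.
# The folder below is a placeholder - point it at your backup job's log directory.
log_dir = Path(r"C:\ProgramData\Veeam\Backup\MyBackupJob")
marker = re.compile(r"\[(san|hotadd|nbd)\]", re.IGNORECASE)

counts = Counter()
for log_file in log_dir.glob("*.log"):
    text = log_file.read_text(errors="ignore")
    counts.update(m.lower() for m in marker.findall(text))

print(counts)  # e.g. Counter({'nbd': 12, 'san': 3}) would indicate a mix of modes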

Thank you.
foggy
Veeam Software
Posts: 21069
Liked: 2115 times
Joined: Jul 11, 2011 10:22 am
Full Name: Alexander Fogelson
Contact:

Re: Direct SAN Access connection on virtual backup server

Post by foggy »

In your case I'd put a proxy VM on the host where the VMs you back up reside, to utilize the hot-add transport mode.

Also, are you storing your backups on the repository VM? That is not considered a best practice.
timstumbo
Enthusiast
Posts: 25
Liked: 2 times
Joined: Mar 24, 2013 6:18 pm
Full Name: Tim Stumbo
Contact:

Re: Direct SAN Access connection on virtual backup server

Post by timstumbo »

Thanks for the responses!

PTide, I was under the impression that I was utilizing an iSCSI connection between the proxy and the production storage. The iSCSI connection is just made within the VMware configuration instead of using the Microsoft iSCSI Initiator. I currently have Direct SAN mode selected with the failover option unchecked, and the backup is running and completing. That means it has to be working over Direct SAN Access, right?

So there's no point in using Direct SAN on a virtual host even if that host doesn't reside within the production host cluster?

The bottleneck is showing as the source on all the test backups I've done so far. I'm getting about 75 MB/s right now on the backups.

Foggy, I'll try installing another proxy within the production cluster to see if that increases backup performance.

No, we have a dedicated VM that's directly connected to a Synology RS815+; the repository lives on the iSCSI LUNs on the Synology.

Thanks for all the help!
PTide
Product Manager
Posts: 6408
Liked: 724 times
Joined: May 19, 2015 1:46 pm
Contact:

Re: Direct SAN Access connection on virtual backup server

Post by PTide »

timstumbo wrote: The iSCSI connection is just made within the VMware configuration instead of using the Microsoft iSCSI Initiator.
Do you mean that your production storage is connected to your standalone physical host (the one with the proxy VM inside) via iSCSI and presented to the VMware host as a datastore? And that there is no iSCSI connection between the production storage and the proxy itself?
timstumbo wrote: So there's no point in using Direct SAN on a virtual host even if that host doesn't reside within the production host cluster?
Correct.
timstumbo wrote: No, we have a dedicated VM that's directly connected to a Synology RS815+; the repository lives on the iSCSI LUNs on the Synology.
Why not connect your iSCSI LUNs directly to the VBR VM and assign the repository role to VBR?
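
If you decide to do that, here is a rough sketch of what the direct connection could look like from the VBR VM, using the standard Windows iSCSI initiator cmdlets (the portal address and target IQN are placeholders for your Synology's values):

Code:

import subprocess

def ps(command: str) -> str:
    """Run a PowerShell command on the VBR VM and return its output."""
    result = subprocess.run(
        ["powershell", "-NoProfile", "-Command", command],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

# Register the Synology's iSCSI portal on the isolated storage network,
# then log in to the target persistently so the session survives reboots.
ps("New-IscsiTargetPortal -TargetPortalAddress 10.10.10.50")
print(ps("Get-IscsiTarget | Format-List NodeAddress, IsConnected"))
ps('Connect-IscsiTarget -NodeAddress "iqn.2000-01.com.synology:rs815.target-1" -IsPersistent $true')

# The LUN then shows up as a local disk: bring it online, format it,
# and point the Veeam repository at the resulting volume.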
timstumbo
Enthusiast
Posts: 25
Liked: 2 times
Joined: Mar 24, 2013 6:18 pm
Full Name: Tim Stumbo
Contact:

Re: Direct SAN Access connection on virtual backup server

Post by timstumbo »

Yes, the production storage is connected directly to the standalone physical host, bypassing the production network (utilizing the isolated storage network).

I can connect the iSCSI LUNs directly to the primary backup server; I was just testing it on a separate server to see if it increased performance. That's also why I set up a separate VM for the proxy, just to see if it made a difference from a performance standpoint.

I'm going to try running everything off a single VM and see if I get the same performance. If so, I'll get rid of the other two VMs. It's a lot easier to manage one server rather than three.

Is 75 MB/s a decent speed considering my setup?

Thanks
timstumbo
Enthusiast
Posts: 25
Liked: 2 times
Joined: Mar 24, 2013 6:18 pm
Full Name: Tim Stumbo
Contact:

Re: Direct SAN Access connection on virtual backup server

Post by timstumbo »

We have Veeam Essentials licensing, which is why we can only use the three hosts in the production cluster. We went ahead and removed the older fourth host so we could stay with the Essentials licensing until we could budget for upgrading our licensing.

That's why I have the isolated fourth host that's not in our production environment. It already had ESXi set up on it, so I figured it would be easier from a management standpoint to run B&R as a VM instead of turning the host into a physical box and installing B&R on it. I'm willing to do that if it would benefit us; I just don't see how it could.
PTide
Product Manager
Posts: 6408
Liked: 724 times
Joined: May 19, 2015 1:46 pm
Contact:

Re: Direct SAN Access connection on virtual backup server

Post by PTide »

A backup proxy using the Direct SAN Access transport mode must have direct access to the production storage via a hardware or software HBA. Please check this screenshot and compare it to your own job session log - if Direct SAN was utilized, the log will contain [san]. I suspect that in your case it's [nbd].
timstumbo wrote: Is 75 MB/s a decent speed considering my setup?
Not bad; however, faster storage could increase the speed.
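
To put that number in perspective, a quick back-of-envelope calculation (the 500 GB figure below is just an example size for a backup set):

Code:

# Rough throughput math for a 75 MB/s processing rate.
rate_mb_s = 75
backup_set_gb = 500  # example size; substitute your own
gb_per_hour = rate_mb_s * 3600 / 1024
hours_for_full = backup_set_gb * 1024 / rate_mb_s / 3600
print(f"{rate_mb_s} MB/s is roughly {gb_per_hour:.0f} GB per hour")
print(f"A {backup_set_gb} GB full pass would take about {hours_for_full:.1f} hours")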
timstumbo
Enthusiast
Posts: 25
Liked: 2 times
Joined: Mar 24, 2013 6:18 pm
Full Name: Tim Stumbo
Contact:

Re: Direct SAN Access connection on virtual backup server

Post by timstumbo »

Looks like it's utilizing both. Here is a portion of the log:

Code:

[18.12.2015 08:51:07] <01> Info     [ProxyDetector] Proxy [BACKUPPROXY]: Detecting san access level
[18.12.2015 08:51:07] <01> Info     [ProxyDetector] Proxy [BACKUPPROXY]: disks ['60060160ca913a00b8aeed542e342937565241494420','60060160ca913a0033b2ed5409f3d9c8565241494420','6c81f660e99fed001c89ce7d1611c51a504552432048','60060160ca913a0061b1ed5452afcea0565241494420','60060160ca913a0066ec1355dca7df53565241494420','60060160ca913a00dcafed5461dfc803565241494420','60060160ca913a00acb6ed54016cfab5565241494420','6c81f660e99fed001c89ce5213806420504552432048','60060160ca913a0089b0ed54b5cdb8a6565241494420','60060160ca913a0048b3ed5407276e0f565241494420','60060160ca913a009fb5ed54b5e14f29565241494420','60060160ca913a0040b4ed5423737dbd565241494420','60060160ca913a002b8bf15528241834565241494420','6848f690ee7597001c889c580aff0f1c504552432048','6b083fe0eb0291001c7ef20105648455504552432048']
[18.12.2015 08:51:07] <01> Info     [ProxyDetector] Proxy [BACKUPPROXY]: Proxy disk [Local DELL Disk (naa.6c81f660e99fed001c89ce7d1611c51a)] is accessible through san, diskName (vmfs lun) is [naa.6c81f660e99fed001c89ce7d1611c51a], uuid = [02000000006c81f660e99fed001c89ce7d1611c51a504552432048]
[18.12.2015 08:51:07] <01> Info     [ProxyDetector] Proxy [BACKUPPROXY]: Proxy disk [Local DELL Disk (naa.6c81f660e99fed001c89ce7d1611c51a)] is accessible through san, diskName (vmfs lun) is [naa.6c81f660e99fed001c89ce7d1611c51a], uuid = [02000000006c81f660e99fed001c89ce7d1611c51a504552432048]
[18.12.2015 08:51:07] <01> Info     [ProxyDetector] Proxy [BACKUPPROXY]: Proxy disk [Local DELL Disk (naa.6c81f660e99fed001c89ce7d1611c51a)] is accessible through san, diskName (vmfs lun) is [naa.6c81f660e99fed001c89ce7d1611c51a], uuid = [02000000006c81f660e99fed001c89ce7d1611c51a504552432048]
[18.12.2015 08:51:07] <01> Info     [ProxyDetector] Proxy [BACKUPPROXY]: Proxy disk [Local DELL Disk (naa.6c81f660e99fed001c89ce7d1611c51a)] is accessible through san, diskName (vmfs lun) is [naa.6c81f660e99fed001c89ce7d1611c51a], uuid = [02000000006c81f660e99fed001c89ce7d1611c51a504552432048]
[18.12.2015 08:51:07] <01> Info     [ProxyDetector] Proxy [BACKUPPROXY]: Proxy disk [Local DELL Disk (naa.6c81f660e99fed001c89ce7d1611c51a)] is accessible through san, diskName (vmfs lun) is [naa.6c81f660e99fed001c89ce7d1611c51a], uuid = [02000000006c81f660e99fed001c89ce7d1611c51a504552432048]
[18.12.2015 08:51:07] <01> Info     [ProxyDetector] Proxy [BACKUPPROXY]: Proxy disk [Local DELL Disk (naa.6c81f660e99fed001c89ce7d1611c51a)] is accessible through san, diskName (vmfs lun) is [naa.6c81f660e99fed001c89ce7d1611c51a], uuid = [02000000006c81f660e99fed001c89ce7d1611c51a504552432048]
[18.12.2015 08:51:07] <01> Info     [ProxyDetector] Proxy [BACKUPPROXY]: Proxy disk [Local DELL Disk (naa.6c81f660e99fed001c89ce7d1611c51a)] is accessible through san, diskName (vmfs lun) is [naa.6c81f660e99fed001c89ce7d1611c51a], uuid = [02000000006c81f660e99fed001c89ce7d1611c51a504552432048]
[18.12.2015 08:51:07] <01> Info     [ProxyDetector] Proxy [BACKUPPROXY]: Proxy disk [Local DELL Disk (naa.6c81f660e99fed001c89ce7d1611c51a)] is accessible through san, diskName (vmfs lun) is [naa.6c81f660e99fed001c89ce7d1611c51a], uuid = [02000000006c81f660e99fed001c89ce7d1611c51a504552432048]
[18.12.2015 08:51:07] <01> Info     [ProxyDetector] Proxy [BACKUPPROXY]: Proxy disk [Local DELL Disk (naa.6c81f660e99fed001c89ce7d1611c51a)] is accessible through san, diskName (vmfs lun) is [naa.6c81f660e99fed001c89ce7d1611c51a], uuid = [02000000006c81f660e99fed001c89ce7d1611c51a504552432048]
[18.12.2015 08:51:07] <01> Info     [ProxyDetector] Proxy [BACKUPPROXY]: Proxy disk [Local DELL Disk (naa.6c81f660e99fed001c89ce7d1611c51a)] is accessible through san, diskName (vmfs lun) is [naa.6c81f660e99fed001c89ce7d1611c51a], uuid = [02000000006c81f660e99fed001c89ce7d1611c51a504552432048]
[18.12.2015 08:51:07] <01> Info     [ProxyDetector] Proxy [BACKUPPROXY]: Proxy disk [Local DELL Disk (naa.6c81f660e99fed001c89ce7d1611c51a)] is accessible through san, diskName (vmfs lun) is [naa.6c81f660e99fed001c89ce7d1611c51a], uuid = [02000000006c81f660e99fed001c89ce7d1611c51a504552432048]
[18.12.2015 08:51:07] <01> Info     [ProxyDetector] Proxy [BACKUPPROXY]: Proxy disk [Local DELL Disk (naa.6c81f660e99fed001c89ce7d1611c51a)] is accessible through san, diskName (vmfs lun) is [naa.6c81f660e99fed001c89ce7d1611c51a], uuid = [02000000006c81f660e99fed001c89ce7d1611c51a504552432048]
[18.12.2015 08:51:07] <01> Info     [ProxyDetector] Proxy [BACKUPPROXY]: Proxy disk [Local DELL Disk (naa.6c81f660e99fed001c89ce7d1611c51a)] is accessible through san, diskName (vmfs lun) is [naa.6c81f660e99fed001c89ce7d1611c51a], uuid = [02000000006c81f660e99fed001c89ce7d1611c51a504552432048]
[18.12.2015 08:51:07] <01> Info     [ProxyDetector] Proxy [BACKUPPROXY]: Proxy disk [Local DELL Disk (naa.6c81f660e99fed001c89ce7d1611c51a)] is accessible through san, diskName (vmfs lun) is [naa.6c81f660e99fed001c89ce7d1611c51a], uuid = [02000000006c81f660e99fed001c89ce7d1611c51a504552432048]
[18.12.2015 08:51:07] <01> Info     [ProxyDetector] Proxy [BACKUPPROXY]: Proxy disk [Local DELL Disk (naa.6c81f660e99fed001c89ce7d1611c51a)] is accessible through san, diskName (vmfs lun) is [naa.6c81f660e99fed001c89ce7d1611c51a], uuid = [02000000006c81f660e99fed001c89ce7d1611c51a504552432048]
[18.12.2015 08:51:07] <01> Info     [ProxyDetector] Proxy [BACKUPPROXY]: only some disks are accessible through san and can failover to network mode
[18.12.2015 08:51:07] <01> Info                     (15 proxy disk(s) correspond to san, but vm's disks are on 1 datastores) 
[18.12.2015 08:51:07] <01> Info     [ProxyDetector] Detected san storage access level for proxy [BACKUPPROXY] - [PartialSan]
[18.12.2015 08:51:07] <01> Info     [ProxyDetector] Detected mode [san;nbd] for proxy [BACKUPPROXY]
PTide
Product Manager
Posts: 6408
Liked: 724 times
Joined: May 19, 2015 1:46 pm
Contact:

Re: Direct SAN Access connection on virtual backup server

Post by PTide »

Then, if you are 100% positive that your proxy server is NOT connected to your production storage via the Microsoft iSCSI Initiator, please open a case with support and post your case ID here, because what I can see in your log is not normal.

Thank you.
PTide
Product Manager
Posts: 6408
Liked: 724 times
Joined: May 19, 2015 1:46 pm
Contact:

Re: Direct SAN Access connection on virtual backup server

Post by PTide »

Could you please clarify whether any of the production datastore LUNs are visible in the proxy's Device Manager?
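
If it's easier than checking Device Manager, here is a quick sketch you could run on the proxy itself (Get-Disk is a standard Windows storage cmdlet; for SAN LUNs its UniqueId usually contains the naa identifier seen in the job log):

Code:

import subprocess

# List the disks the Windows proxy can see so their UniqueId values can be
# compared against the naa.* identifiers from the ProxyDetector log lines.
output = subprocess.run(
    ["powershell", "-NoProfile", "-Command",
     "Get-Disk | Select-Object Number, FriendlyName, UniqueId, Size | Format-Table -AutoSize"],
    capture_output=True, text=True,
).stdout
print(output)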

Thank you.
danswartz
Veteran
Posts: 264
Liked: 30 times
Joined: Apr 26, 2013 4:53 pm
Full Name: Dan Swartzendruber
Contact:

Re: Direct SAN Access connection on virtual backup server

Post by danswartz »

PTide wrote: Secondly, there is no point in using Direct SAN with a virtual proxy, because Direct SAN is intended to bypass the ESXi I/O stack and thus increase backup performance.
This isn't totally true. I run a virtual backup proxy and found Direct SAN mode to be about 2x as fast as NBD. I use a vmxnet3 vNIC on the proxy, which is a virtualized 10 GbE NIC. I had also played with passing a 10 GbE NIC through to the virtualized proxy, and that worked extremely well too.
PTide
Product Manager
Posts: 6408
Liked: 724 times
Joined: May 19, 2015 1:46 pm
Contact:

Re: Direct SAN Access connection on virtual backup server

Post by PTide »

danswartz wrote: found Direct SAN mode to be about 2x as fast as NBD. I use a vmxnet3 vNIC on the proxy, which is a virtualized 10 GbE NIC
If so, may I ask what kind of NIC you were using as your ESXi management interface?

Keep in mind that NBD is actually the least efficient method; however, it works well over 10 Gb. Also, with a physical proxy the performance difference between Direct SAN and NBD would be even more noticeable.
danswartz
Veteran
Posts: 264
Liked: 30 times
Joined: Apr 26, 2013 4:53 pm
Full Name: Dan Swartzendruber
Contact:

Re: Direct SAN Access connection on virtual backup server

Post by danswartz »

The management NIC is 1 Gb only. Each host has only one 10 Gb interface, and that is for storage only. I know physical would be even better, but that would involve spinning up a physical host for only that purpose - not worth it to me...
PTide
Product Manager
Posts: 6408
Liked: 724 times
Joined: May 19, 2015 1:46 pm
Contact:

Re: Direct SAN Access connection on virtual backup server

Post by PTide »

danswartz wrote: The management NIC is 1 Gb only.
This. If NBD mode is utilized, the system retrieves the VM disks via the ESX(i) management interface. In the case you've described, NBD via a 1 Gb NIC competes with Direct SAN via a 10 Gb NIC, which is not a fair fight :wink: Another thing to mention about Direct SAN via 10 Gbit vs NBD via 1 Gbit - only 2x faster is not very impressive.
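
For reference, the raw line rates involved:

Code:

# Theoretical line rates, ignoring protocol overhead, to show why "only 2x faster"
# is unimpressive when the NBD side is capped by a 1 Gb management NIC.
for gbit in (1, 10):
    print(f"{gbit} GbE is about {gbit * 1000 / 8:.0f} MB/s of raw bandwidth")
# 1 GbE tops out around 125 MB/s (closer to 110-115 MB/s in practice),
# while 10 GbE has roughly ten times that headroom.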
danswartz
Veteran
Posts: 264
Liked: 30 times
Joined: Apr 26, 2013 4:53 pm
Full Name: Dan Swartzendruber
Contact:

Re: Direct SAN Access connection on virtual backup server

Post by danswartz »

I seem to recall (back when I had only 1 Gb) that NBD was still quite a bit less performant than Direct SAN. It's certainly possible that the performance would be acceptable if I *did* have 10 Gb for the management interface, but I can't justify it at this time :)
GarethUK
Influencer
Posts: 21
Liked: 2 times
Joined: Mar 21, 2014 11:41 am
Full Name: Gareth
Contact:

Re: Direct SAN Access connection on virtual backup server

Post by GarethUK »

PTide wrote: This. If NBD mode is utilized, the system retrieves the VM disks via the ESX(i) management interface. In the case you've described, NBD via a 1 Gb NIC competes with Direct SAN via a 10 Gb NIC, which is not a fair fight :wink: Another thing to mention about Direct SAN via 10 Gbit vs NBD via 1 Gbit - only 2x faster is not very impressive.
There are other issues with NBD. Backing up direct from SAN has many advantages. We use a large number of virtualised proxy servers as data movers, moving data from HP VSA via the stock Windows iSCSI initiator over vmxnet3, and we get very fast backups. We are planning to remove the virtualised proxy servers because we can now get 40 Gbps+ between our SAN and our backup repos, which wasn't previously possible (network topology issues). The repos we are using have ample capacity to undertake the data mover role.

What danswartz describes is similar to some of our blade configs, so it's not unusual.

Our installation is not small and we generally get 100% backup success rates.

Regards,

Gareth
PTide
Product Manager
Posts: 6408
Liked: 724 times
Joined: May 19, 2015 1:46 pm
Contact:

Re: Direct SAN Access connection on virtual backup server

Post by PTide »

GarethUK wrote: There are other issues with NBD. Backing up direct from SAN has many advantages.
I didn't want it to seem like I was defending NBD; I just wanted to point out that comparing 1 Gb NBD with 10 Gb Direct SAN via a virtual proxy is not a valid comparison. I totally agree that NBD is the last resort when no other option is available. :)
GarethUK wrote: The repos we are using have ample capacity to undertake the data mover role.
Sounds good. Just don't forget to assign the proxy role to the repos. Otherwise, if no suitable proxies are detected, your data may flow through the Veeam server (the default proxy), or the whole job will fail over to network mode. Also keep in mind the system requirements.
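
For reference, a rough sketch of assigning the proxy role from PowerShell (this assumes the Veeam PowerShell snap-in is installed on the backup server; "repo01" is a placeholder for a repository server already registered in VBR):

Code:

import subprocess

# Assign the VMware proxy role to a managed Windows server via the Veeam snap-in.
# "repo01" is a placeholder; substitute the name of your repository server in VBR.
command = (
    "Add-PSSnapin VeeamPSSnapin; "
    'Add-VBRViProxy -Server (Get-VBRServer -Name "repo01") -Description "Proxy role on repo01"'
)
subprocess.run(["powershell", "-NoProfile", "-Command", command], check=True)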

Thank you for sharing your experience!