-
- Enthusiast
- Posts: 25
- Liked: 2 times
- Joined: Mar 24, 2013 6:18 pm
- Full Name: Tim Stumbo
- Contact:
Direct SAN Access connection on virtual backup server
I have a single physical host, separate from our production environment, running VMware with three virtual servers for my backup environment.
One VM acts as the primary backup server, a second VM acts as the proxy and also hosts the WAN accelerator, and the third VM is the repository server.
I'm currently using the Direct SAN Access mode on the proxy and it's working; I'm just not sure if I have it set up the correct way. The SAN access is configured on the ESXi host, and the VMs on that host can see the datastores through that. After reading around, I've seen a lot on using the Microsoft iSCSI Initiator in Windows for connecting the servers to the SAN. Obviously that method would have to be used if the backup server were physical, but is it also the recommended method even for a virtual backup server?
Thanks
-
- Product Manager
- Posts: 6551
- Liked: 765 times
- Joined: May 19, 2015 1:46 pm
- Contact:
Re: Direct SAN Access connection on virtual backup server
Hi,
Firstly, Direct SAN is only applicable if you use either an FC or an iSCSI connection between the proxy and the production storage.
Secondly, there is no point in using Direct SAN with a virtual proxy, because Direct SAN is intended to bypass the ESXi I/O stack, thus increasing backup performance.
May I ask what your bottleneck statistics are? Also, do you see [san] anywhere in your session log?
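For reference, a quick way to check is to search the job session log for the ProxyDetector verdict. A minimal sketch in Python (the log path and job name below are assumptions based on a typical install; adjust to your own):
Code:
import re
from pathlib import Path

# Assumed default log location - adjust to your installation and job name.
log = Path(r"C:\ProgramData\Veeam\Backup") / "YourJob" / "Job.YourJob.Backup.log"
text = log.read_text(errors="ignore")

# The ProxyDetector records its verdict, e.g. "Detected mode [san;nbd] for proxy ..."
match = re.search(r"Detected mode \[([^\]]+)\] for proxy", text)
print("Detected mode:", match.group(1) if match else "no ProxyDetector line found")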
Thank you.
-
- Veeam Software
- Posts: 21139
- Liked: 2141 times
- Joined: Jul 11, 2011 10:22 am
- Full Name: Alexander Fogelson
- Contact:
Re: Direct SAN Access connection on virtual backup server
In your case I'd put a proxy VM on the host where the VMs you back up reside, to utilize the hot-add transport mode.
Also, are you storing your backups on the repository VM? That is not considered best practice.
-
- Enthusiast
- Posts: 25
- Liked: 2 times
- Joined: Mar 24, 2013 6:18 pm
- Full Name: Tim Stumbo
- Contact:
Re: Direct SAN Access connection on virtual backup server
Thanks for the responses!
PTide, I was under the impression I was utilizing an iSCSI connection between the proxy and the production storage. The iSCSI connection is just made within the VMware configuration instead of using the Microsoft iSCSI Initiator. I currently have Direct SAN mode selected with the Failover option unchecked, and the backup is running and completing. That means it has to be working over Direct SAN Access, right?
So there's no point in using Direct SAN on a virtual host even if that host doesn't reside within the production host cluster?
The bottleneck is showing as the source on all the test backups I've done so far. I'm getting about 75 MB/s right now on the backups.
Foggy, I'll try installing another proxy within the production cluster to see if that increases backup performance.
No, we have a dedicated VM that's directly connected to a Synology RS815+; the repository lives on the iSCSI LUNs on the Synology.
Thanks for all the help!
-
- Product Manager
- Posts: 6551
- Liked: 765 times
- Joined: May 19, 2015 1:46 pm
- Contact:
Re: Direct SAN Access connection on virtual backup server
"The iSCSI connection is just made within the VMware configuration instead of using the Microsoft iSCSI Initiator."
Do you mean that your production storage is connected to your standalone physical host (the one with the proxy VM inside) via iSCSI and presented to the VMware host as a datastore, with no iSCSI connection between the production storage and the proxy?
"So there's no point in using Direct SAN on a virtual host even if that host doesn't reside within the production host cluster?"
Correct.
"No, we have a dedicated VM that's directly connected to a Synology RS815+; the repository lives on the iSCSI LUNs on the Synology."
Why not connect your iSCSI LUNs directly to the VBR VM and assign the repository role to VBR?
-
- Enthusiast
- Posts: 25
- Liked: 2 times
- Joined: Mar 24, 2013 6:18 pm
- Full Name: Tim Stumbo
- Contact:
Re: Direct SAN Access connection on virtual backup server
Yes, the production storage is connected directly to the standalone physical host, bypassing the production network (utilizing the isolated storage network).
I can connect the iSCSI LUNs directly to the primary backup server; I was just testing it on a separate server to see if it increased performance. That's also why I set up a separate VM for the proxy, just to see if it made a difference from a performance standpoint.
I'm going to try running everything off a single VM and see if I get the same performance. If so, I'll get rid of the other two VMs. It's a lot easier to manage one server rather than three.
Is 75 MB/s a decent speed considering my setup?
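For context, here's the back-of-the-envelope math I'm using to judge that rate (the dataset size below is a hypothetical example, not our actual footprint):
Code:
# Rough backup-window estimate at the observed processing rate.
rate_mb_s = 75        # observed rate from the job statistics, MB/s
dataset_gb = 500      # hypothetical full-backup size, GB

hours = dataset_gb * 1024 / rate_mb_s / 3600
print(f"{dataset_gb} GB at {rate_mb_s} MB/s ~= {hours:.1f} hours")
# -> 500 GB at 75 MB/s ~= 1.9 hours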
Thanks
-
- Enthusiast
- Posts: 25
- Liked: 2 times
- Joined: Mar 24, 2013 6:18 pm
- Full Name: Tim Stumbo
- Contact:
Re: Direct SAN Access connection on virtual backup server
We have Veeam Essentials licensing; that's why we can only use the three hosts in the production cluster. We went ahead and removed the older fourth host so we could stay on Essentials licensing until we can budget for upgrading.
That's why I have the isolated fourth host that's not in our production environment. It already had ESXi set up on it, so I figured it would be easier to run B&R as a VM from a management standpoint instead of turning the host into a physical box and installing B&R on it. I'm willing to do that if it would benefit us; I just don't see how it could.
-
- Product Manager
- Posts: 6551
- Liked: 765 times
- Joined: May 19, 2015 1:46 pm
- Contact:
Re: Direct SAN Access connection on virtual backup server
The backup proxy using the Direct SAN Access transport mode must have direct access to the production storage via a hardware or software HBA. Please check this screenshot and compare it to your own job session log - if Direct SAN was utilized, the log will contain [san]. I suspect that in your case it's [nbd].
"Is 75 MB/s a decent speed considering my setup?"
Not bad; however, faster storage could increase the speed.
-
- Enthusiast
- Posts: 25
- Liked: 2 times
- Joined: Mar 24, 2013 6:18 pm
- Full Name: Tim Stumbo
- Contact:
Re: Direct SAN Access connection on virtual backup server
Looks like it's utilizing both. Here is a portion of the log:
Code:
[18.12.2015 08:51:07] <01> Info [ProxyDetector] Proxy [BACKUPPROXY]: Detecting san access level
[18.12.2015 08:51:07] <01> Info [ProxyDetector] Proxy [BACKUPPROXY]: disks ['60060160ca913a00b8aeed542e342937565241494420','60060160ca913a0033b2ed5409f3d9c8565241494420','6c81f660e99fed001c89ce7d1611c51a504552432048','60060160ca913a0061b1ed5452afcea0565241494420','60060160ca913a0066ec1355dca7df53565241494420','60060160ca913a00dcafed5461dfc803565241494420','60060160ca913a00acb6ed54016cfab5565241494420','6c81f660e99fed001c89ce5213806420504552432048','60060160ca913a0089b0ed54b5cdb8a6565241494420','60060160ca913a0048b3ed5407276e0f565241494420','60060160ca913a009fb5ed54b5e14f29565241494420','60060160ca913a0040b4ed5423737dbd565241494420','60060160ca913a002b8bf15528241834565241494420','6848f690ee7597001c889c580aff0f1c504552432048','6b083fe0eb0291001c7ef20105648455504552432048']
[18.12.2015 08:51:07] <01> Info [ProxyDetector] Proxy [BACKUPPROXY]: Proxy disk [Local DELL Disk (naa.6c81f660e99fed001c89ce7d1611c51a)] is accessible through san, diskName (vmfs lun) is [naa.6c81f660e99fed001c89ce7d1611c51a], uuid = [02000000006c81f660e99fed001c89ce7d1611c51a504552432048]
[18.12.2015 08:51:07] <01> Info [ProxyDetector] Proxy [BACKUPPROXY]: only some disks are accessible through san and can failover to network mode
[18.12.2015 08:51:07] <01> Info (15 proxy disk(s) correspond to san, but vm's disks are on 1 datastores)
[18.12.2015 08:51:07] <01> Info [ProxyDetector] Detected san storage access level for proxy [BACKUPPROXY] - [PartialSan]
[18.12.2015 08:51:07] <01> Info [ProxyDetector] Detected mode [san;nbd] for proxy [BACKUPPROXY]
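For anyone following along, a rough sketch of how one could summarize those ProxyDetector lines in Python (the excerpt file name is hypothetical - paste the log portion into it first):
Code:
import re

# Read a saved excerpt of the job session log (hypothetical file name).
with open("proxydetector_excerpt.log", encoding="utf-8", errors="ignore") as f:
    text = f.read()

# Distinct LUNs the proxy can reach directly over the SAN, by NAA identifier.
san_disks = set(re.findall(r"accessible through san.*?\[(naa\.[0-9a-f]+)\]", text))
print(f"{len(san_disks)} distinct LUN(s) visible to the proxy over SAN")

# The final verdict, e.g. [PartialSan], which explains the mixed [san;nbd] mode.
level = re.search(r"Detected san storage access level .* - \[(\w+)\]", text)
if level:
    print("Access level:", level.group(1))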
-
- Product Manager
- Posts: 6551
- Liked: 765 times
- Joined: May 19, 2015 1:46 pm
- Contact:
Re: Direct SAN Access connection on virtual backup server
Then, if you are 100% positive that your proxy server is NOT connected to your production storage via the Microsoft iSCSI Initiator, please open a case with support and post your case ID here, because what I can see in your log is not normal.
Thank you.
-
- Product Manager
- Posts: 6551
- Liked: 765 times
- Joined: May 19, 2015 1:46 pm
- Contact:
Re: Direct SAN Access connection on virtual backup server
Could you please clarify whether any of the production datastore LUNs are visible in the proxy's Device Manager panel?
Thank you.
-
- Veteran
- Posts: 266
- Liked: 30 times
- Joined: Apr 26, 2013 4:53 pm
- Full Name: Dan Swartzendruber
- Contact:
Re: Direct SAN Access connection on virtual backup server
PTide wrote: "There is no point in using Direct SAN with a virtual proxy, because Direct SAN is intended to bypass the ESXi I/O stack, thus increasing backup performance."
This isn't totally true. I run a virtual backup proxy and found Direct SAN mode to be about 2x as fast as NBD. I use a vmxnet3 vNIC on the proxy, which is a virtualized 10 GbE NIC. I had also played with passing a 10 GbE NIC through to the virtualized proxy, and that worked extremely well too.
-
- Product Manager
- Posts: 6551
- Liked: 765 times
- Joined: May 19, 2015 1:46 pm
- Contact:
Re: Direct SAN Access connection on virtual backup server
"I found Direct SAN mode to be about 2x as fast as NBD. I use a vmxnet3 vNIC on the proxy, which is a virtualized 10 GbE NIC."
If so, may I ask what kind of NIC you were using as your ESXi management interface?
Keep in mind that NBD is actually the least efficient method; however, it works well over 10 Gb. Also, with a physical proxy the performance difference between Direct SAN and NBD would be even more noticeable.
-
- Veteran
- Posts: 266
- Liked: 30 times
- Joined: Apr 26, 2013 4:53 pm
- Full Name: Dan Swartzendruber
- Contact:
Re: Direct SAN Access connection on virtual backup server
The management NIC is 1 Gb only. Each host has only one 10 Gb interface, and that is for storage only. I know physical would be even better, but that would involve spinning up a physical host for only that purpose - not worth it to me...
-
- Product Manager
- Posts: 6551
- Liked: 765 times
- Joined: May 19, 2015 1:46 pm
- Contact:
Re: Direct SAN Access connection on virtual backup server
"The management NIC is 1 Gb only."
This. If NBD mode is utilized, the system retrieves the VM disks via the ESX(i) management interface. In the case you've described, NBD via a 1 Gb NIC competes with Direct SAN via a 10 Gb NIC, which is not a fair comparison. Another thing to mention about Direct SAN via 10 Gbit vs NBD via 1 Gbit: only 2x faster is not very impressive.
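To put rough numbers on that (line rates only, ignoring protocol overhead):
Code:
# Theoretical line-rate ceilings, ignoring protocol and storage overhead.
for name, gbit in [("1 GbE (NBD via management NIC)", 1),
                   ("10 GbE (Direct SAN path)", 10)]:
    mb_s = gbit * 1000 / 8  # Gbit/s -> MB/s (decimal units)
    print(f"{name}: ~{mb_s:.0f} MB/s ceiling")
# A 10x faster link yielding only ~2x faster backups suggests the real
# bottleneck is elsewhere (the source storage, in this thread's case).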
-
- Veteran
- Posts: 266
- Liked: 30 times
- Joined: Apr 26, 2013 4:53 pm
- Full Name: Dan Swartzendruber
- Contact:
Re: Direct SAN Access connection on virtual backup server
I seem to recall (back when I had only 1 Gb) that NBD was still quite a bit less performant than Direct SAN. It's certainly possible that performance would be acceptable if I *did* have 10 Gb for the management interface, but I can't justify that at this time.
-
- Influencer
- Posts: 22
- Liked: 2 times
- Joined: Mar 21, 2014 11:41 am
- Full Name: Gareth
- Contact:
Re: Direct SAN Access connection on virtual backup server
PTide wrote: "If NBD mode is utilized, the system retrieves the VM disks via the ESX(i) management interface. In the case you've described, NBD via a 1 Gb NIC competes with Direct SAN via a 10 Gb NIC, which is not a fair comparison."
There are other issues with NBD, and backup direct from SAN has many advantages. We use a large number of virtualised proxy servers as data movers, moving data from HP VSA via the stock Windows iSCSI initiator over vmxnet3, and get very fast backups. We are planning to remove the virtualised proxy servers because we can now get 40 Gbps+ between our SAN and our backup repos, which wasn't previously possible (network topology issues). The repos we are using have ample capacity to take on the data mover role.
What danwatz describes is similar to some of our blade configs, so it's not unusual.
Our installation is not small, and we generally get 100% backup success rates.
Regards,
Gareth
-
- Product Manager
- Posts: 6551
- Liked: 765 times
- Joined: May 19, 2015 1:46 pm
- Contact:
Re: Direct SAN Access connection on virtual backup server
"There are other issues with NBD, and backup direct from SAN has many advantages."
I didn't want it to seem like I was defending NBD; I just wanted to point out that comparing 1 Gb NBD against 10 Gb Direct SAN via a virtual proxy is not valid. I totally agree that NBD is the last resort when no other option is available.
"The repos we are using have ample capacity to take on the data mover role."
Sounds good. Just don't forget to assign proxy roles to the repos. Otherwise, if no suitable proxies are detected, your data may flow through the Veeam server (the default proxy), or the whole job will fail over to network mode. Also keep in mind the system requirements.
Thank you for sharing your experience!