Direct SAN slower?

VMware specific discussions


by Thernlund » Mon Jun 20, 2016 8:54 pm

I have my backup VM connected directly to the SAN fabric. The idea was to increase backup speed. I'm finding, however, that the speed is being reported as about 20 MB/s slower, and backups are taking about a third longer than they did when the VM wasn't directly connected.

Is this unusual? What are the reasons (generally speaking) that it might be slower vs just being network connected?

The bottleneck is consistently reported as the Source. But that's always been true, even before directly connecting to the SAN.

I have no problem going back to the old way. Being connected directly to the SAN makes me a little uncomfortable anyway. I'm just wondering whether I should be concerned that something is broken, or whether this is sometimes just the way it is.

Any general tips are appreciated.
Thernlund
Enthusiast
 
Posts: 29
Liked: 3 times
Joined: Wed Sep 09, 2015 12:02 am
Full Name: Terry Hernlund

Re: Direct SAN slower?

by PTide » Tue Jun 21, 2016 12:11 pm

Hi,

Thernlund wrote: I have my backup VM connected directly to the SAN fabric.
Do you mean that you connected your VM to the FC fabric using VMDirectPath, or is it just an iSCSI target connected in-guest to the VM? Please elaborate.

Thanks
PTide
Veeam Software
 
Posts: 3022
Liked: 247 times
Joined: Tue May 19, 2015 1:46 pm

Re: Direct SAN slower?

by Pat490 » Tue Jun 21, 2016 12:40 pm

For us, recently switching from hotadd to direct SAN improved backup times and speed greatly :)
Pat490
Expert
 
Posts: 135
Liked: 24 times
Joined: Tue Apr 28, 2015 7:18 am
Location: Germany
Full Name: Patrick

Re: Direct SAN slower?

by Thernlund » Tue Jun 21, 2016 8:48 pm

PTide wrote: Hi,

Do you mean that you connected your VM to the FC fabric using VMDirectPath, or is it just an iSCSI target connected in-guest to the VM? Please elaborate.

Thanks



I've connected the VM that Veeam is installed on to the iSCSI SAN as described in this article, steps 1, 2, 4, and 5...

https://www.veeam.com/kb1446

...and this blog post...

https://www.veeam.com/blog/using-the-is ... -a-vm.html

I can confirm in the backup status that it did change from 'hotadd' to 'san'.
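For reference, the in-guest connection amounts to roughly the following. This is illustrative only — the portal IP and IQN are placeholders, not my real values, and the KB article above is the authoritative procedure — but it shows the shape of it, including the automount step that keeps Windows from touching the VMFS LUNs:

```shell
rem Run inside the Windows VM that hosts the Veeam proxy (placeholder values)
rem Point the Microsoft iSCSI initiator at the SAN's discovery portal
iscsicli QAddTargetPortal 10.0.0.50

rem List the discovered targets, then log in to the one backing the VMFS datastores
iscsicli ListTargets
iscsicli QLoginTarget iqn.1992-08.com.example:storage.lun0

rem Important: stop Windows from auto-mounting the VMFS volumes it can now see
echo automount disable | diskpart
```

After logging in, the VMFS LUNs show up as offline/raw disks in Disk Management; the point of the automount step is that they must stay that way.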


EDIT: Here are a couple of log snippets from an individual VM that are mostly indicative of the general experience across the whole job...

Example VM w/ Direct SAN...

Code:
6/19/2016 9:03:56 PM :: Using backup proxy VMware Backup Proxy for disk Hard disk 1 [san]
6/19/2016 9:04:07 PM :: Hard disk 1 (1.5 TB) 110.6 GB read at 31 MB/s [CBT]
6/19/2016 10:04:57 PM :: Removing VM snapshot
6/19/2016 10:19:02 PM :: Saving GuestMembers.xml
6/19/2016 10:19:06 PM :: Finalizing
6/19/2016 10:19:07 PM :: Truncating Exchange transaction logs
6/19/2016 10:19:21 PM :: Swap file blocks skipped: 3.0 GB
6/19/2016 10:19:22 PM :: Busy: Source 97% > Proxy 82% > Network 13% > Target 5%
6/19/2016 10:19:22 PM :: Primary bottleneck: Source
6/19/2016 10:19:22 PM :: Network traffic verification detected no corrupted blocks


Example VM w/ hotadd...

Code:
6/20/2016 9:04:12 PM :: Using backup proxy VMware Backup Proxy for disk Hard disk 1 [hotadd]
6/20/2016 9:05:08 PM :: Hard disk 1 (1.5 TB) 136.1 GB read at 44 MB/s [CBT]
6/20/2016 9:58:49 PM :: Removing VM snapshot
6/20/2016 10:13:04 PM :: Saving GuestMembers.xml
6/20/2016 10:13:09 PM :: Finalizing
6/20/2016 10:13:11 PM :: Truncating Exchange transaction logs
6/20/2016 10:13:28 PM :: Swap file blocks skipped: 3.0 GB
6/20/2016 10:13:30 PM :: Busy: Source 93% > Proxy 71% > Network 16% > Target 13%
6/20/2016 10:13:30 PM :: Primary bottleneck: Source
6/20/2016 10:13:30 PM :: Network traffic verification detected no corrupted blocks


Note that that's not the first VM in the job. I have two threads running in this job. The job usually wraps up right as this VM completes with hotadd. It goes on about 20 minutes longer with SAN access.
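As a sanity check on those two runs (my own arithmetic, using the GB-read and MB/s figures from the logs above):

```shell
# Minutes to read a given amount at a given rate: GB * 1024 / (MB/s) / 60
mins() { awk -v gb="$1" -v mbs="$2" 'BEGIN { printf "%d\n", gb * 1024 / mbs / 60 + 0.5 }'; }

mins 110.6 31   # direct SAN run: prints 61, matching 9:04 -> 10:04 in the log
mins 136.1 44   # hotadd run:     prints 53, roughly matching 9:05 -> 9:58
```

So the wall-clock times in the logs line up with the reported read rates; the difference really is the source read speed, not something downstream.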
Thernlund
Enthusiast
 
Posts: 29
Liked: 3 times
Joined: Wed Sep 09, 2015 12:02 am
Full Name: Terry Hernlund

Re: Direct SAN slower?

by PTide » Wed Jun 22, 2016 12:23 pm

All network traffic going to a VM passes through the ESXi network stack, which adds some overhead depending on the overall load going through the physical NICs. Although you can use a VM for direct SAN access, it is better to use a physical machine as the direct access proxy.
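If you want to observe this on your own host, one option is esxtop's network view (assuming SSH access to the ESXi host; the output filename below is just an example):

```shell
# Interactive: run esxtop on the ESXi host, press 'n' for the network view,
# and compare MbRX/s on the proxy VM's vNIC against the physical vmnics
# while a direct SAN job is running
esxtop

# Batch mode: capture 10 samples at 5-second intervals for offline analysis
esxtop -b -d 5 -n 10 > /tmp/net-stats.csv
```

If the proxy VM's vNIC and the vmnic carrying iSCSI are both busy during the job, that contention is the overhead being described.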

Thanks.
PTide
Veeam Software
 
Posts: 3022
Liked: 247 times
Joined: Tue May 19, 2015 1:46 pm

Re: Direct SAN slower?

by Thernlund » Wed Jun 22, 2016 8:03 pm

That had occurred to me a while back, and it does make sense. I guess I just wanted to see.

I'm probably just going to go back to the regular way. No big deal. :-)

Thanks!
Thernlund
Enthusiast
 
Posts: 29
Liked: 3 times
Joined: Wed Sep 09, 2015 12:02 am
Full Name: Terry Hernlund

