myrandomuseraccount9
Novice
Posts: 3
Liked: never
Joined: Mar 09, 2020 6:46 pm
Full Name: C M G
Contact:

Direct Storage Access vs Virtual Appliance Performance

Post by myrandomuseraccount9 »

Hi,

I recently configured our Veeam server and proxies to use Direct SAN, since I read that this gives the best performance for backups. We were previously using the "Virtual Appliance" transport mode and were seeing around 50-70 MB/s on backups. Now that jobs run with Direct SAN, I'm seeing around 10-20 MB/s, which is pretty much the opposite of what I expected.

However, when I run Direct SAN, the total job time seems a little shorter, which I'm attributing to the job not having to hot-add the VM disks to the proxy before backing up. Not sure if that assumption is correct?

When I set up Direct SAN, I was expecting over 100 MB/s. Is that a reasonable expectation? Has anyone else noticed transfer speeds going down when using Direct SAN?

I opened a case with Veeam support to identify possible bottlenecks, but they couldn't find one. We ran a bigger job to get good sample data, and the bottleneck summary looked like this: Source: 62, Proxy: 8, Network: 43, Target: 44. So it seems like the job is balanced and not hitting any single bottleneck.

I'm just trying to understand why Direct SAN would be slower, and whether there is anything special I need to do to improve performance.

When I set up Direct SAN, I used the iSCSI initiator to connect to my source and destination iSCSI targets. I am using the same NIC and only a single connection (per IP) to the iSCSI targets. Does that matter? If I were running into an iSCSI issue, I would expect performance to be capped at a round number, like 100 MB/s, based on the NIC. Is that a correct assumption?
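
For what it's worth, here is my back-of-the-envelope math for what a single GbE link should be able to push (a rough sketch, assuming a 1 Gbit/s NIC and roughly 10% protocol overhead):

    # Back-of-the-envelope ceiling for a single iSCSI connection over one NIC.
    # Assumes a 1 Gbit/s link and ~10% TCP/IP + iSCSI protocol overhead;
    # real numbers vary with MTU, offloads and storage latency.
    link_speed_gbps = 1.0          # hypothetical 1 GbE NIC
    overhead = 0.10                # assumed protocol overhead
    usable_mb_per_s = link_speed_gbps * 1000 / 8 * (1 - overhead)
    print(f"~{usable_mb_per_s:.0f} MB/s usable on a {link_speed_gbps:g} Gbit/s link")
    # -> roughly 112 MB/s, so a single GbE path tops out around 100-115 MB/s,
    #    well above the 10-20 MB/s I am seeing.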

I also set up Direct SAN on a VM, even though Veeam recommends a physical server for this. I expected slightly lower performance, but not as low as what I am seeing. Could this issue be related to the proxy being a VM?

Any recommendations or insights on optimizing Direct SAN would be greatly appreciated! Thanks!
PetrM
Veeam Software
Posts: 3622
Liked: 608 times
Joined: Aug 28, 2013 8:23 am
Full Name: Petr Makarov
Location: Prague, Czech Republic
Contact:

Re: Direct Storage Access vs Virtual Appliance Performance

Post by PetrM » 1 person likes this post

Hello,

There is always a bottleneck in every system, and according to the statistics you provided ("Source: 62, Proxy: 8, Network: 43, Target: 44"), in your case it is the source.
Most likely the data read speed is limited by the iSCSI channel throughput; it is rare for a single system or virtual machine (the proxy in your case) to make full use of the network speed.
On the other hand, 10-20 MB/s is significantly lower than expected.

I'd recommend the following:
1. Check the dropped packet statistics on the switch; when a switch discards packets with any regularity, network throughput suffers significantly. Please take a look at this article for more information. (See also the sketch after this list for a quick proxy-side check.)
2. If it's possible in your environment, try to provide a dedicated path between the proxy and the storage system to minimize load from other services that may use the same network during backup.
3. You may request an escalation of the support case. It makes sense to run a read benchmark with a special VDDK tool that reads data in the corresponding transport mode; this test will show the maximum achievable speed in Direct SAN mode for your environment.
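
As a quick first check before you get the switch-side counters, you can also look at the drop/error counters on the proxy itself. Below is a minimal sketch using Python's psutil package (my assumption; any OS tool that shows per-NIC drops works equally well):

    # Per-NIC error/drop counters on the backup proxy.
    # Requires the psutil package (pip install psutil); this complements,
    # not replaces, checking discard counters on the physical switch ports.
    import psutil

    for nic, stats in psutil.net_io_counters(pernic=True).items():
        print(f"{nic}: dropin={stats.dropin} dropout={stats.dropout} "
              f"errin={stats.errin} errout={stats.errout}")
    # Counters that keep growing point to a congested or misconfigured path
    # (duplex mismatch, undersized buffers, bad cable).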

Thanks!
Andreas Neufert
VP, Product Management
Posts: 7076
Liked: 1510 times
Joined: May 04, 2011 8:36 am
Full Name: Andreas Neufert
Location: Germany
Contact:

Re: Direct Storage Access vs Virtual Appliance Performance

Post by Andreas Neufert » 2 people like this post

Virtual Appliance (HotAdd), Direct NFS and Backup from Storage Snapshots are the fastest transport modes, because they use asynchronous IO processing, which allows the storage system to optimize the reads and writes.
Direct SAN has to go through a specific VMware component (VDDK) integrated into our product and can only do synchronous IO processing, which basically sends one IO, waits for the acknowledgement, and then sends the next one.
Even in an ideal world, Direct SAN can be 2-3x slower, depending on the load on the storage controller: the more load there is, the bigger the performance advantage of asynchronous processing.
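
To illustrate the difference (a simplified toy model, not how VDDK or our data movers actually issue IO): keeping several requests in flight hides the per-IO latency, while one-at-a-time processing pays it on every request.

    # Toy model of synchronous vs asynchronous read processing.
    # Each "IO" takes the same service time; the only difference is how
    # many requests are kept in flight.
    import time
    from concurrent.futures import ThreadPoolExecutor

    IO_LATENCY = 0.005   # assumed 5 ms storage round trip per request
    NUM_IOS = 200

    def read_block(_):
        time.sleep(IO_LATENCY)   # stand-in for one read IO

    start = time.time()
    for i in range(NUM_IOS):     # synchronous: one IO, wait, next IO
        read_block(i)
    sync_time = time.time() - start

    start = time.time()
    with ThreadPoolExecutor(max_workers=16) as pool:   # asynchronous: 16 IOs in flight
        list(pool.map(read_block, range(NUM_IOS)))
    async_time = time.time() - start

    print(f"synchronous: {sync_time:.2f}s  asynchronous: {async_time:.2f}s")
    # With per-IO latency dominating, throughput scales roughly with the
    # number of outstanding requests - that is the HotAdd/NFS/snapshot advantage.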

Direct SAN has the advantage that you do not have to read through the hypervisor, which can reduce the overhead on the hosts and, in some cases, increase speed if the data path to the hypervisor is the bottleneck.

So it depends on the situation.
Regarding the actual speed: our software is able to handle multiple GB/s of processing if the source, target and network are fast enough.
Check that the data path between Veeam and the Direct SAN storage is set up correctly. Sometimes customers transport data over the slow management network instead of over the backup network.
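
A quick way to verify which local interface the proxy actually uses to reach the storage (a rough sketch; the IP address below is a placeholder for your iSCSI portal, 3260 is the default iSCSI port):

    # Show which local address (and therefore NIC/subnet) the OS picks
    # to reach the iSCSI target.
    import socket

    TARGET = ("192.168.50.10", 3260)   # hypothetical iSCSI portal

    with socket.create_connection(TARGET, timeout=5) as s:
        local_ip, _ = s.getsockname()
        print(f"Traffic to {TARGET[0]} leaves via local IP {local_ip}")
    # If this prints an address on the management subnet instead of the
    # dedicated backup/iSCSI subnet, the data path needs to be corrected.
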
Petr gave you some good tips to troubleshoot the source read side (as this was identified as the bottleneck).