Host-based backup of VMware vSphere VMs.
Mehnock
Influencer
Posts: 20
Liked: 4 times
Joined: Oct 27, 2021 12:10 pm
Full Name: Christopher Navarro
Contact:

Is this performance expected?

Post by Mehnock »

Hi,

I am new to Veeam Backup & Replication. I have a VM replication job that is running at 3 MB/s. Is this expected for my setup (multipath 10 Gbps SAN)?

My setup:
ESXi 7 hosts
10 Gbps storage network
100 Mbps management network
TrueNAS iSCSI VMFS datastore (all-flash) for running VMs
TrueNAS iSCSI VMFS datastore for replicated VMs
Ubuntu 20 Server as the backup proxy
All machines have a 100 Mbps NIC for management and two 10 Gbps NICs for accessing storage (iSCSI extents published on two VLANs)
The Veeam backup server is connected only to the management network, not to the storage network.
Gostev
Chief Product Officer
Posts: 31804
Liked: 7298 times
Joined: Jan 01, 2006 1:01 am
Location: Baar, Switzerland
Contact:

Re: Is this performance expected?

Post by Gostev »

Hi, yes, I think so. From your performance numbers it seems you're using the NBD transport for the target host, and NBD always uses the management network, which is 100 Mbps in your case. Thanks!

Re: Is this performance expected?

Post by Mehnock »

This is a VM replication job. The backup proxy is set to the direct storage access method, so I expected the data to go from one iSCSI extent to another, all within the 10 Gbps SAN.

What do I need to change in my config to make that happen?

Re: Is this performance expected?

Post by Gostev »

Direct SAN access transport mode is not compatible with a replica target, as the data does not go into a VMDK.

Using the hot add transport should make the initial replication faster for you; this requires a backup proxy VM on the target host.
But if I remember correctly, incremental runs MUST use NBD with your vSphere version due to some recent changes in vSphere.

Re: Is this performance expected?

Post by Mehnock »

Gostev, I may not be understanding this right and I appreciate your input and your patience.

According to this article (https://helpcenter.veeam.com/docs/backu ... ml?ver=110), direct storage access can be used for replication jobs (jobs that clone the original VM into another datastore, i.e. data is copied from one VMDK to a new one inside the replicated VM's folder).

So the data should stay within the SAN (at 10 Gbps). The article also states that the extent must be mounted to the backup proxy and be visible in Disk Management (this only applies to a Windows-based backup proxy).

But I'm using a Linux-based backup proxy. How does this work on Linux?

Re: Is this performance expected?

Post by Mehnock »

BTW, I'm using licensed vSphere Standard hosts and vCenter Standard.

Re: Is this performance expected?

Post by Gostev »

Do you use thick-provisioned disks for the replicated VMs? I assumed thin-provisioned disks, which the Direct SAN transport mode is not compatible with in principle.
As for your scenario, a Linux-based proxy is no different from a Windows-based backup proxy in V11 or later.

Re: Is this performance expected?

Post by Mehnock »

I'm pretty sure the disks are thin provisioned, but I can convert them. Though it may not be worth converting, since it seems only the first replication run would use direct storage access mode.

So if Linux works the same from v11 on, are there instructions for mounting the VMFS datastores read-only as on Windows, or does the proxy handle this automatically?

Re: Is this performance expected?

Post by Mehnock »

I should put the management network on a 10 Gb VLAN too, then.

Re: Is this performance expected?

Post by Gostev »

Indeed, that should be the easiest fix, and it will benefit both initial and incremental replication runs.
Mehnock wrote: Oct 27, 2021 4:43 pm: are there instructions for mounting the vmfs datastores read-only as in Windows or is this done automatically by the Proxy?
This is not something you need to worry about even for Windows proxies when using Veeam, as we take care of it automatically when provisioning a backup proxy.

Re: Is this performance expected?

Post by Mehnock »

So I only need the backup server and the proxies on this 10 Gb management VLAN, correct? Not the ESX hosts.

Also, to clarify my understanding: NBD mode means the data travels from the VMDKs to the proxy, then to the backup server, then back to the proxy, and finally to the clone VMDKs, correct?

Re: Is this performance expected?

Post by Gostev »

Please review the Processing Modes section of the sticky FAQ topic.

Re: Is this performance expected?

Post by Mehnock »

I did, and I still had the same questions. If both the source and destination VMDKs are mounted on the same proxy, I don't understand why the data has to be sent to the backup server and then back to the proxy. Maybe in a future update you can give more processing responsibility to the proxy.
soncscy
Veteran
Posts: 643
Liked: 312 times
Joined: Aug 04, 2019 2:57 pm
Full Name: Harvey
Contact:

Re: Is this performance expected?

Post by soncscy »

Mehnock wrote: Oct 27, 2021 9:11 pm: So I only need the backup server and the proxies on this 10 Gb management VLAN, correct? Not the ESX hosts.

Also, to clarify my understanding: NBD mode means the data travels from the VMDKs to the proxy, then to the backup server, then back to the proxy, and finally to the clone VMDKs, correct?
It depends, but probably not.

Break your environment up into proxies (which retrieve data from production VMware) and repositories/repository gateways.

Proxies read data and pass it to the repository (gateway), which writes it to the target storage.

If your proxy is the same machine as your repository (gateway), the data is transferred via shared memory on that server.
If the proxy is a different machine from the repository (gateway), then the network between the proxy and the repository (gateway) is used.

For replicas, the same idea applies: if both the source and target proxy are the same machine, the data goes through shared memory. If they are different machines (and you likely want this for more distributed setups), it goes over the network.
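As a rough sketch of that rule (illustrative only — this is not Veeam's actual code, and the function and proxy names are made up):

```python
def replication_data_path(source_proxy: str, target_proxy: str) -> str:
    # When one machine holds both the source and target proxy roles,
    # data moves through shared memory on that machine; otherwise it
    # crosses the network between the two proxies.
    return "shared memory" if source_proxy == target_proxy else "network"

print(replication_data_path("proxy01", "proxy01"))  # shared memory
print(replication_data_path("proxy01", "proxy02"))  # network
```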

Re: Is this performance expected?

Post by Mehnock »

Yeah, that's what I thought, but then my original question comes back. I have only one site, as described above. My proxy is connected via 10 Gb to the source and destination datastores. I would expect the data to flow from the source VMDK through memory to the destination VMDK (the log shows the same proxy was used for the source and destination disks), but the data rate is 3-5 MB/s. Is this rate expected if the network is 10 Gb and the data flows in memory through the proxy?

Re: Is this performance expected?

Post by soncscy »

What's the bottleneck?

It sounds like the traffic is going through that 100 Mbit management interface. You might need to edit the hosts file to force it through the 10 Gbit network, or just put the management interface on 10 Gbit.
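One common form of that hosts edit (a sketch only — the hostnames and addresses below are placeholders, not values from this thread) is to pin the ESXi hosts' names to their 10 Gbit addresses in the proxy's /etc/hosts, so that NBD connections resolve to the fast network:

```
# /etc/hosts on the backup proxy (placeholder names and addresses)
# Resolve the ESXi hosts to their 10 Gbit interfaces instead of
# the 100 Mbit management addresses.
10.10.0.11   esxi01.example.local esxi01
10.10.0.12   esxi02.example.local esxi02
```

Note this only helps if the ESXi hosts actually have a vmkernel port listening on the 10 Gbit network; otherwise moving the management interface to 10 Gbit is the simpler fix.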

Re: Is this performance expected?

Post by Mehnock »

This was the first run for a never-replicated server; the others run incrementally at 27 MB/s, while this one was very slow at 3 MB/s.

10/26/2021 9:35:45 AM :: Queued for processing at 10/26/2021 9:35:45 AM
10/26/2021 9:35:55 AM :: Required backup infrastructure resources have been assigned
10/26/2021 9:44:58 AM :: VM processing started at 10/26/2021 9:44:58 AM
10/26/2021 9:44:58 AM :: VM size: 1000 GB (844.8 GB used)
10/26/2021 9:45:00 AM :: Discovering replica VM
10/26/2021 9:45:00 AM :: Getting VM info from vSphere
10/26/2021 9:45:08 AM :: Inventorying guest system
10/26/2021 9:45:14 AM :: Preparing guest for hot backup
10/26/2021 9:46:05 AM :: Releasing guest
10/26/2021 9:46:05 AM :: Creating VM snapshot
10/26/2021 9:46:05 AM :: Getting list of guest file system local users
10/26/2021 9:46:09 AM :: Processing configuration
10/26/2021 9:46:53 AM :: Creating helper snapshot
10/26/2021 9:47:08 AM :: Using source proxy xxx.xxx.xxx.44 for disk Hard disk 1 [nbd]
10/26/2021 9:47:08 AM :: Using source proxy xxx.xxx.xxx.44 for disk Hard disk 2 [nbd]
10/26/2021 9:47:08 AM :: Using target proxy xxx.xxx.xxx.44 for disk Hard disk 1 [nbd]
10/26/2021 9:47:08 AM :: Using target proxy xxx.xxx.xxx.44 for disk Hard disk 2 [nbd]
10/26/2021 9:47:09 AM :: Hard disk 1 (200 GB) 147.5 GB read at 3 MB/s [CBT]
10/26/2021 9:47:09 AM :: Hard disk 2 (800 GB) 630 GB read at 4 MB/s [CBT]
10/28/2021 6:42:38 AM :: Removing VM snapshot
10/28/2021 6:51:19 AM :: Deleting helper snapshot
10/28/2021 6:52:30 AM :: Deleted file blocks skipped: 44.8 GB
10/28/2021 6:52:32 AM :: Finalizing
10/28/2021 6:52:38 AM :: Busy: Source 51% > Proxy 0% > Network 49% > Target 99%
10/28/2021 6:52:38 AM :: Primary bottleneck: Target
10/28/2021 6:52:38 AM :: Network traffic verification detected no corrupted blocks
10/28/2021 6:52:38 AM :: Processing finished at 10/28/2021 6:52:38 AM
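For what it's worth, the figures in that log are internally consistent with a 100 Mbit path. A quick back-of-the-envelope check, using only the timestamps and disk sizes from the log above:

```python
from datetime import datetime

# Timestamps from the job log above: disk processing began at 9:47:09
# on 10/26 and snapshot removal started at 6:42:38 on 10/28.
start = datetime(2021, 10, 26, 9, 47, 9)
end = datetime(2021, 10, 28, 6, 42, 38)
elapsed = (end - start).total_seconds()

# Data read per the log: 147.5 GB + 630 GB, converted to MB.
data_mb = (147.5 + 630) * 1024
rate = data_mb / elapsed
print(f"{rate:.1f} MB/s")

# A 100 Mbps link tops out around 12.5 MB/s before protocol overhead;
# with NBD the same management NIC carries both the read from the
# source host and the write to the target host, so an aggregate of
# roughly 5 MB/s across both disks fits a saturated 100 Mbit path.
```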

Re: Is this performance expected?

Post by soncscy »

The bottleneck there is the target, so it's the connection from the proxy => target datastore. Since it's NBD, that means the vmkernel port is the slow link here, so look at which NICs are being used.
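Reading the bottleneck line is mechanical, by the way: each stage reports how busy it was, and the busiest stage is the bottleneck. A small illustration using the "Busy" line from the log above:

```python
import re

# The "Busy" line from the job log above.
busy = "Source 51% > Proxy 0% > Network 49% > Target 99%"

# Map each stage name to its busy percentage and pick the highest.
stages = {name: int(pct) for name, pct in re.findall(r"(\w+) (\d+)%", busy)}
bottleneck = max(stages, key=stages.get)
print(bottleneck, stages[bottleneck])  # Target 99
```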

Re: Is this performance expected?

Post by Mehnock »

It's the 10 Gb NIC that is in use.

I have the production and backup VMFS datastores on TrueNAS boxes, shared via iSCSI on two separate VLANs; the proxy also has two 10 Gb cards connected to the same VLANs. The production storage is all-flash, the destination is 7200 RPM spinning drives.

It makes sense to me that the target is the bottleneck, since it's the slowest device, but 3 MB/s seems too slow. It looks like the initial replication went through the management network at 100 Mbps when it should have stayed within the proxy.

I just started another job, and this time all the VMs have a replica, so the job is incremental, and it's also going through the management interface. Data is copied from the SAN at 10 Gb through the proxy, then goes to the backup server at 100 Mb, comes back to the proxy at 100 Mb, and finally goes to the backup storage at 10 Gb.

I'm going to put the backup server on a 10 Gb link, but that shouldn't be necessary, as the proxy should copy from one VMDK to the other without sending the data to the backup server.

Re: Is this performance expected?

Post by Gostev »

With replication jobs the data is sent directly from the source proxy to the target proxy, so it passes through the backup server in only one case: when the backup server itself carries the source or target proxy role.

Re: Is this performance expected?

Post by Mehnock »

soncscy wrote: Oct 28, 2021 1:12 pm The bottleneck there is target, so it's the connection from the proxy => target datastore. since it's NBD, that would mean it's the vmkernel port going slow here, so look at which NICs are being used.
Wait a minute... are you saying the ESX host is in play here? I thought the proxy connected to the iSCSI storage directly, so the host server was not involved.

Re: Is this performance expected?

Post by Mehnock »

The log shows the following; is the proxy connecting to the VMDKs via the VMware host?
10/28/2021 8:58:02 AM :: Using source proxy xxx.xxx.xxx.42 for disk Hard disk 1 [nbd]
10/28/2021 8:58:02 AM :: Using target proxy xxx.xxx.xxx.42 for disk Hard disk 1 [nbd]
10/28/2021 8:58:04 AM :: Hard disk 1 (100 GB) 8.1 GB read at 13 MB/s [CBT]

Re: Is this performance expected?

Post by Gostev » 2 people like this post

Mehnock wrote: Oct 28, 2021 1:43 pm I thought the Proxy would connect to the ISCSI storage directly so the host server was not involved.
So you still didn't read the sticky FAQ topic... The data path is explained in the very first question of the NBD transport section.

Re: Is this performance expected?

Post by soncscy »

It's as Anton says: replicas are a direct connection, writing to VMDKs on the datastore. It's a proxy => host connection, as I wrote.