Host-based backup of VMware vSphere VMs.
efschu2
Lurker
Posts: 2
Liked: 1 time
Joined: Jun 08, 2022 12:53 pm
Full Name: efschu
Contact:

Linux Backup Proxy Multipath Direct storage access transport mode

Post by efschu2 » 1 person likes this post

I'm using a Debian backup proxy, but veeamagent is using the /dev/sd* disks for direct storage access transport mode (and therefore only a single path) instead of the /dev/mapper/* or /dev/dm-* devices from the configured and working multipath setup.

Is this a limitation of Veeam, or did I miss something?

Running version is 11.0.0.837
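
For reference, this is roughly how I checked which block devices are actually being read during a job (a quick sketch; the process name veeamagent and the device patterns below match my setup):

Code: Select all

# While a backup job runs, list the block devices the data mover process holds open
sudo lsof -p "$(pgrep -d, veeamagent)" 2>/dev/null | grep -E '/dev/(sd|dm-)'

# Watch per-device read throughput to see which path actually carries the I/O
iostat -dk 5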

Regards

Matthias
Mildur
Product Manager
Posts: 9716
Liked: 2565 times
Joined: May 13, 2017 4:51 pm
Full Name: Fabian K.
Location: Switzerland
Contact:

Re: Linux Backup Proxy Multipath Direct storage access transport mode

Post by Mildur »

Hi efschu

What sort of storage protocol is used in your environment?
I assume it's iSCSI?

Thanks
Fabian

Update:
You should upgrade your V11 VBR server to V11a with the most current patch as soon as possible. You are running a build with critical security vulnerabilities.
https://www.veeam.com/kb4245
Product Management Analyst @ Veeam Software
efschu2
Lurker
Posts: 2
Liked: 1 time
Joined: Jun 08, 2022 12:53 pm
Full Name: efschu
Contact:

Re: Linux Backup Proxy Multipath Direct storage access transport mode

Post by efschu2 »

I'm using iSER (iSCSI Extensions for RDMA) connected to a SCST target and a FUJITSU DX100 connected via FC.
Mildur
Product Manager
Posts: 9716
Liked: 2565 times
Joined: May 13, 2017 4:51 pm
Full Name: Fabian K.
Location: Switzerland
Contact:

Re: Linux Backup Proxy Multipath Direct storage access transport mode

Post by Mildur »

Hi efschu

I've talked to the team.
Multipath for iSCSI and FC will be used, but it depends on the correct multipath configuration on your Linux proxy server.
If you think that multipath is correctly configured, I recommend opening a support case so the debug logs can be checked.
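
As a starting point, a quick sanity check on the proxy could look like this (generic commands; which settings are "correct" depends on the storage vendor's multipathing recommendations):

Code: Select all

# Show the multipath topology: each LUN should list all expected paths as "active ready running"
sudo multipath -ll

# Show the configuration multipathd actually loaded (defaults plus vendor device sections)
sudo multipathd show config

# For iSCSI, verify the sessions behind those paths (typically one session per portal/interface)
sudo iscsiadm -m session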

Thanks
Fabian
Product Management Analyst @ Veeam Software
JaySt
Service Provider
Posts: 453
Liked: 86 times
Joined: Jun 09, 2015 7:08 pm
Full Name: JaySt
Contact:

Re: Linux Backup Proxy Multipath Direct storage access transport mode

Post by JaySt »

I'm running into some issues regarding this as well. Can you elaborate on what "correct multipath configuration" would look like for a Linux proxy server?
Multipathing is working fine at the moment, but I see some unexplainable behavior. I tested the setup with a single datastore containing a single VM, and everything went well. Then I added 10 more datastores for the proxy to read. VMs on these new ones all fail because, according to Veeam, no proxy is able to access the datastores. I rescanned everything I could find and rebooted everything, but no luck.
I noticed the VBR console history pane shows some storage discovery warnings mentioning LUNs that no proxy would be able to access. Very weird, because the proxy should have access.

I still suspect multipathing is the problem here, with mapper devices being used, but that would not explain why I can back up the single test datastore just fine.

Can I debug this a bit further myself somehow, without contacting Veeam support (for now)? Any tips?
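
In case it helps, these are the kinds of checks I've been running on the proxy side so far (a rough sketch, nothing Veeam-specific):

Code: Select all

# Rescan all active iSCSI sessions so newly mapped datastore LUNs show up
sudo iscsiadm -m session --rescan

# Rescan the SCSI hosts as well (also covers FC HBAs)
for h in /sys/class/scsi_host/host*/scan; do echo "- - -" | sudo tee "$h" >/dev/null; done

# List the LUNs and block devices the OS can actually see
lsscsi
lsblk -o NAME,SIZE,TYPE,WWN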
Veeam Certified Engineer
JaySt
Service Provider
Posts: 453
Liked: 86 times
Joined: Jun 09, 2015 7:08 pm
Full Name: JaySt
Contact:

Re: Linux Backup Proxy Multipath Direct storage access transport mode

Post by JaySt »

To add: I now noticed the failures occur after a second array is introduced as an iSCSI portal/target and sessions are active to both of them. I can back up VMs on one of the arrays, so it seems I've narrowed it down a bit, but I'm still confused: with an identical initiator setup for both arrays, I've seen backups succeed from both, but not while both are serving iSCSI sessions to the proxy.

Could it be that some scanning logic is not behaving properly when multiple targets/portals (arrays) are connected to the same initiator?
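
For context, this is how I'm checking which array each device node belongs to when both portals have active sessions (just standard open-iscsi commands, nothing Veeam-specific):

Code: Select all

# One line per iSCSI session: target IQN and portal address, so both arrays should show up
sudo iscsiadm -m session

# Detailed view including which sdX devices are attached to which target/portal
sudo iscsiadm -m session -P 3 | grep -E 'Target:|Current Portal:|Attached scsi disk'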
Veeam Certified Engineer
JaySt
Service Provider
Posts: 453
Liked: 86 times
Joined: Jun 09, 2015 7:08 pm
Full Name: JaySt
Contact:

Re: Linux Backup Proxy Multipath Direct storage access transport mode

Post by JaySt »

OK, I couldn't let it go. I got it to work by explicitly selecting the datastores I was trying to back up (coming from both arrays) under the properties of the VMware proxy role of the physical machine doing the backup. I selected both datastores as the ones this proxy has access to, instead of the default, and after that Veeam got things right.

Before doing this, Veeam was not able to properly detect one of the datastores as being connected to the proxy. The datastore was properly connected and visible on the OS side, but I could see Veeam's check for proxy datastore access fail when a job was started on one of the datastores. This alternated between the datastores, so the behavior was not consistent. Only after selecting the datastores as connected datastores for the specific proxy under the proxy settings was Veeam able to consistently detect the SCSI IDs of both connected datastores.
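
For what it's worth, this is how I compared what the OS sees against the SCSI IDs Veeam reports (the device name is just an example; on non-Debian distributions scsi_id may live under /usr/lib/udev):

Code: Select all

# WWID of one of the path devices; this should match the datastore's naa.* identifier in vSphere
sudo /lib/udev/scsi_id -g -u -d /dev/sdb

# The same WWID appears as the first column of the multipath topology
sudo multipath -ll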

Not sure about which parts of this work as designed and which parts don't.
Veeam Certified Engineer
Mildur
Product Manager
Posts: 9716
Liked: 2565 times
Joined: May 13, 2017 4:51 pm
Full Name: Fabian K.
Location: Switzerland
Contact:

Re: Linux Backup Proxy Multipath Direct storage access transport mode

Post by Mildur »

Hi Jay

Thanks for your testing.
Can you elaborate on what "correct multipath configuration" would look like for a linux proxy server?
That depends on your system. I would follow the recommendations of the storage appliance vendor on how to configure multipathing for their system.

I got it to work by explicitly selecting the datastores i was trying to backup (coming from both arrays) under properties of the VMware proxy role of the physical machine doing the backup
That sounds wrong. If you would like to know the reason for this behavior, please open a support case and let me know the case number for our reference. The logs should tell our support team what happened in the background.

Thanks
Fabian
Product Management Analyst @ Veeam Software
HannesK
Product Manager
Posts: 14759
Liked: 3044 times
Joined: Sep 01, 2014 11:46 am
Full Name: Hannes Kasparick
Location: Austria
Contact:

Re: Linux Backup Proxy Multipath Direct storage access transport mode

Post by HannesK »

Hello,
sorry for being late to this thread, but there is a limitation on the VMware side with multipathing (MPIO). The release notes for V11a / V12 say that
Linux-based backup proxies with configured multipath do not work in DirectSAN
For path failover, multipathing can actually work. "Can" means that there are a few Linux distributions (Red Hat, CentOS, SUSE) that VDDK supports. Load balancing is never supported by VMware VDDK. That's something we cannot influence.

Best regards,
Hannes
JaySt
Service Provider
Posts: 453
Liked: 86 times
Joined: Jun 09, 2015 7:08 pm
Full Name: JaySt
Contact:

Re: Linux Backup Proxy Multipath Direct storage access transport mode

Post by JaySt »

OK, not sure if I ever picked up those lines before.
However, it (using DirectSAN and iSCSI + multipathd) seems to work fine now on my Ubuntu installation.
The way it's phrased in the release notes as "do not work" is a bit off, right? The line reads as if it cannot work in any way. It needs some rephrasing, and at least some kind of mention in the Help Center docs, in a location like https://helpcenter.veeam.com/docs/backu ... ml?ver=120
Veeam Certified Engineer
HannesK
Product Manager
Posts: 14759
Liked: 3044 times
Joined: Sep 01, 2014 11:46 am
Full Name: Hannes Kasparick
Location: Austria
Contact:

Re: Linux Backup Proxy Multipath Direct storage access transport mode

Post by HannesK »

Yes, we are updating the sentence.
However, it (using directsan and iscsi+multipathd) seems to work fine now on my ubuntu installation.
Do you mean that you have full load balancing, or that it works for backup?
JaySt
Service Provider
Posts: 453
Liked: 86 times
Joined: Jun 09, 2015 7:08 pm
Full Name: JaySt
Contact:

Re: Linux Backup Proxy Multipath Direct storage access transport mode

Post by JaySt »

I have it working fine for backup with the recommended multipath config file for Nimble storage. Mpath sees 4 paths for each volume, two active (ALUA). The policy seems to do some kind of round robin across the active ones.
Veeam Certified Engineer
ArturE
Veeam Software
Posts: 7
Liked: 4 times
Joined: Jan 26, 2023 2:30 pm
Contact:

Re: Linux Backup Proxy Multipath Direct storage access transport mode

Post by ArturE »

Hi Jay!

I work as a QA Engineer for Veeam, and I have been researching this topic over the past couple of weeks. It would be nice to have some feedback from a real-world setup, so I want to ask for your help with some information. Mainly, to check iostat on your Ubuntu backup proxy during a backup job (using DirectSAN) and see whether the data is read in a round-robin fashion through multiple device nodes. So, for example:

If multipath -ll outputs these paths:

Code: Select all

3642a9530bbh12a75527f430s002bd822 dm-2 MySANLun
size=50G features='0' hwhandler='1 alua' wp=rw
`+ policy='round-robin 0' prio=50 status=active
  |- 36:0:0:1 sdi 8:128 active ready running
  |- 34:0:0:1 sdg 8:96  active ready running
  |- 38:0:0:1 sdk 8:160 active ready running
  |- 33:0:0:1 sdf 8:80  active ready running
Then you should check iostat and see how the majority of the kB_read is distributed: evenly across the sdX device nodes (sdi, sdg, sdk, sdf in my example above), or through a single device node (e.g. sdg).
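
Something along these lines should be enough to capture it during a running job (device names from the example above):

Code: Select all

# Per-device read statistics every 60 seconds; compare the kB_read column across the four path devices
iostat -dk 60 sdi sdg sdk sdf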

Thanks,
Artur
JaySt
Service Provider
Posts: 453
Liked: 86 times
Joined: Jun 09, 2015 7:08 pm
Full Name: JaySt
Contact:

Re: Linux Backup Proxy Multipath Direct storage access transport mode

Post by JaySt » 1 person likes this post

Hi Artur,

I'll try to get some of that info! Will report back, hopefully sometime this week.
Veeam Certified Engineer
JaySt
Service Provider
Posts: 453
Liked: 86 times
Joined: Jun 09, 2015 7:08 pm
Full Name: JaySt
Contact:

Re: Linux Backup Proxy Multipath Direct storage access transport mode

Post by JaySt »

Code: Select all


# example of the multipath -ll output for one of the Nimble volumes with 4 active paths.

size=8.0T features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
`-+- policy='round-robin 0' prio=50 status=active
  |- 4:0:0:0  sde       8:64   active ready running
  |- 1:0:0:0  sdb       8:16   active ready running
  |- 3:0:0:0  sdbh      67:176 active ready running
  `- 2:0:0:0  sdbi      67:192 active ready running 
  
#multipath.conf file contains the following for the Nimble devices

devices {
        device {
                vendor "Nimble"
                product "Server"
                #path_grouping_policy group_by_prio
                prio "alua"
                hardware_handler "1 alua"
                path_selector "round-robin 0"
                #path_selector "service-time 0"
                #path_selector "queue-length 0"
                path_checker tur
                no_path_retry 30
                failback immediate
                fast_io_fail_tmo 5
                dev_loss_tmo infinity
                rr_min_io_rq 1
                rr_weight uniform
        }
}
I've got a script running for 24 hours that logs the iostat output every minute for all 4 sdX devices of one of the Nimble datastores presented to the proxy. Will check back tomorrow.
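
The logging loop is roughly this (a quick sketch; the device names are the ones from the multipath output above):

Code: Select all

#!/bin/bash
# Log iostat for the four path devices of one Nimble volume, once per minute for 24 hours
DEVICES="sde sdb sdbh sdbi"
LOG=/var/tmp/mpio-iostat.log
for i in $(seq 1 1440); do
    date >> "$LOG"
    iostat -dk $DEVICES >> "$LOG"
    sleep 60
done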
Veeam Certified Engineer
JaySt
Service Provider
Posts: 453
Liked: 86 times
Joined: Jun 09, 2015 7:08 pm
Full Name: JaySt
Contact:

Re: Linux Backup Proxy Multipath Direct storage access transport mode

Post by JaySt »

Got the output; I can send it in a DM perhaps. iostat was run on the 4 devices every minute for 24 hours. What's unexpected is that during backup it only seems to use a single device out of the 4 that make up the multipath device. So even though the policy is round-robin and the device-mapper device has 4 active paths/devices, a single device is used for reading.
Veeam Certified Engineer
ArturE
Veeam Software
Posts: 7
Liked: 4 times
Joined: Jan 26, 2023 2:30 pm
Contact:

Re: Linux Backup Proxy Multipath Direct storage access transport mode

Post by ArturE »

Hi Jay,

I appreciate you coming back with this info! This behavior is not really unexpected - you got the same results as I did during my investigation. Our VMware backup jobs using DirectSAN on a Linux proxy leverage the SAN transport mode that VMware's virtual disk library (VDDK) implements. VDDK itself selects which device node to use; it's not very transparent about it though, and even though the library comes with some advanced configuration parameters, I did not get it to use the load balancing feature of MPIO.

It's worth noting that VMware also mentions this information in their documentation (for iSCSI and for FCoE):
You cannot use virtual-machine multipathing software to perform I/O load balancing to a single physical LUN
The way it is phrased suggests that load balancing might be possible on a physical Linux proxy, but I have found that, even in this case, it is still using the VDDK library and therefore MPIO during DirectSAN is subject to the same limitations: path failover is functional, but load balancing is not.

Thanks again for the confirmation from your infrastructure!

Best regards,
Artur
JaySt
Service Provider
Posts: 453
Liked: 86 times
Joined: Jun 09, 2015 7:08 pm
Full Name: JaySt
Contact:

Re: Linux Backup Proxy Multipath Direct storage access transport mode

Post by JaySt »

Hi Artur,

Thanks for this information. Hmm, OK, so VDDK is operating one level lower than we'd probably like to see.
I do wonder: in my case, I have 4 active paths due to the architecture of Nimble (there are no "non-optimized paths" to the standby controller in the case of iSCSI, only fully active optimized paths to the active controller).
However, other arrays can of course present 2 active paths and 2 standby/non-optimized paths, which will all end up under a single multipath device as well (though the output from multipath -ll is different in that case).
Do you think the logic of VDDK is smart enough to select an underlying device that is an active path, and not one of the non-optimized ones?
I must say, I've not seen issues with this in the past on arrays with this architecture and MPIO in use (Windows in that case). So I think it's either smart enough, or it gets its information at the SCSI level and avoids the non-optimized paths.
Veeam Certified Engineer
ArturE
Veeam Software
Posts: 7
Liked: 4 times
Joined: Jan 26, 2023 2:30 pm
Contact:

Re: Linux Backup Proxy Multipath Direct storage access transport mode

Post by ArturE »

Hi Jay,

For SAN transport to work, the storage needs to be presented to the ESXi host as a VMFS datastore connected by iSCSI or FCoE in the first place. The ESXi host does detect which paths are Active/Standby/Disabled/Dead - you can check this information in the vSphere client on the ESXi host > Configure > Storage Devices > Paths. It also has an automatic path selection policy; from what I gathered the default setting is agreed between VMware and the storage vendor, and it is usually documented by the latter.
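
For reference, the same path information can also be pulled from the ESXi shell, roughly like this (the device identifier below is just a placeholder):

Code: Select all

# List the paths and their states (active/standby/dead) for a given LUN on the ESXi host
esxcli storage core path list -d naa.xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

# Show which path selection policy (PSP) the host applies to each device
esxcli storage nmp device list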

So, the VDDK library probably uses the information available to the ESXi host when selecting a path for I/O. But I'm mostly inferring this information from my tests, so take it with a grain of salt. As I mentioned before, VDDK is not very transparent and all of this does fall a bit outside the scope of Veeam's DirectSAN & Linux proxies.

Best regards,
Artur
JaySt
Service Provider
Posts: 453
Liked: 86 times
Joined: Jun 09, 2015 7:08 pm
Full Name: JaySt
Contact:

Re: Linux Backup Proxy Multipath Direct storage access transport mode

Post by JaySt »

I did know about the ESXi host PSP aspects, but VDDK communicating with other parts (hosts) of the vSphere environment to determine which path to use is a new way of thinking for me. Honestly, though, I don't think this is the case, because end-to-end path connectivity can differ a lot between an ESXi host and the DirectSAN proxy (number of paths, ports used on the array, etc.), and I doubt VDDK can somehow map that to make a decision on which path to use.
Maybe one of my next projects will shed some light on this. Will keep it in mind.
Veeam Certified Engineer
maol
Service Provider
Posts: 14
Liked: 1 time
Joined: Feb 28, 2018 7:45 am
Full Name: Lorenzo Mao
Contact:

Re: Linux Backup Proxy Multipath Direct storage access transport mode

Post by maol »

Hello guys,
did any of you get an official response from Veeam?
I've opened Case #06157031 to have support check on that.
I've searched all around the documentation and best practices, but multipathing is not even mentioned...
Lorenzo
HannesK
Product Manager
Posts: 14759
Liked: 3044 times
Joined: Sep 01, 2014 11:46 am
Full Name: Hannes Kasparick
Location: Austria
Contact:

Re: Linux Backup Proxy Multipath Direct storage access transport mode

Post by HannesK »

Hello,
the answer above from ArturE is the answer. If you like, I can ask support to send that answer to you (if support escalated to R&D, they would get the same answer from Artur internally and then pass it back to you).
I've searched all around documentation and best practices but multipathing is not even mentioned....
which makes sense according to the information provided above :-)

Best regards,
Hannes
maol
Service Provider
Posts: 14
Liked: 1 time
Joined: Feb 28, 2018 7:45 am
Full Name: Lorenzo Mao
Contact:

Re: Linux Backup Proxy Multipath Direct storage access transport mode

Post by maol »

Thanks Hannes,

I've read ArturE's answer and yes, I think it is right, but he's saying "probably"; I would like R&D to investigate further with VMware to clarify the logic used behind path selection...
So, the VDDK library probably uses the information available to the ESXi host when selecting a path for I/O. But I'm mostly inferring this information from my tests, so take it with a grain of salt. As I mentioned before, VDDK is not very transparent and all of this does fall a bit outside the scope of Veeam's DirectSAN & Linux proxies.
and, as it says that load balancing is not supported, I would like to raise a feature request here!
For path-failover, multipathing can actually work. "Can" means, that there are a few Linux distributions (Red Hat, Centos, SUSE) that VDDK supports. Load balancing is never supported by VMware VDDK. That's something we cannot influence.
thanks
Lorenzo
nt25
Novice
Posts: 4
Liked: never
Joined: Aug 15, 2023 10:19 pm
Contact:

Re: Linux Backup Proxy Multipath Direct storage access transport mode

Post by nt25 »

Hi, I have been searching about this and found that Veritas NetBackup can do multipathing by configuring a variable passed to the VixDiskLib_InitEx function. Is there any chance Veeam could have something like this?

I have added these lines to the files /opt/veeam/transport/vddk/vmc_config.ini, /opt/veeam/transport/vddk_6_7/vmc_config.ini, and /opt/veeam/transport/vddk_7_0/vmc_config.ini:

vixDiskLib.transport.san.blacklist=all
vixDiskLib.transport.san.whitelist=/dev/dm-8,/dev/dm-9 (these are the device nodes assigned to the iSCSI volume)

The proxy is still using a single /dev/sdX device; perhaps I'm touching the wrong files.
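
In case the device node naming is the issue, this is how the dm-* nodes, the /dev/mapper names, and the WWIDs relate to each other on my proxy (just standard device-mapper commands; the output names will obviously differ per system):

Code: Select all

# Multipath topology: WWID, dm-* node and underlying sdX paths for each volume
sudo multipath -ll

# The symlinks show which dm-* node each /dev/mapper entry points to
ls -l /dev/mapper/

# Compact device-mapper listing with name, open count and UUID (multipath devices use a "mpath-" prefix)
sudo dmsetup info -c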

Regards
JaySt
Service Provider
Posts: 453
Liked: 86 times
Joined: Jun 09, 2015 7:08 pm
Full Name: JaySt
Contact:

Re: Linux Backup Proxy Multipath Direct storage access transport mode

Post by JaySt »

Just curious: what's your main reason for having it use a dm-# device instead of one of the available /dev/sdX devices? I can think of some reasons, but I'm curious about your motivation for getting it to use the dm-# devices.
Veeam Certified Engineer
nt25
Novice
Posts: 4
Liked: never
Joined: Aug 15, 2023 10:19 pm
Contact:

Re: Linux Backup Proxy Multipath Direct storage access transport mode

Post by nt25 »

Well, maybe I'm writing nonsense, but I think these dm-* nodes are the multipath devices. I also tried to use the /dev/mapper/ path, but the device names there were the UUIDs.

I changed the whitelist variable to /dev/sda, expecting it to fail, but the backup just worked as always.
JaySt
Service Provider
Posts: 453
Liked: 86 times
Joined: Jun 09, 2015 7:08 pm
Full Name: JaySt
Contact:

Re: Linux Backup Proxy Multipath Direct storage access transport mode

Post by JaySt »

You're correct, those dm devices are the multipath devices. But what would be the main reason to use them when backup is the sole use case?
Veeam Certified Engineer
nt25
Novice
Posts: 4
Liked: never
Joined: Aug 15, 2023 10:19 pm
Contact:

Re: Linux Backup Proxy Multipath Direct storage access transport mode

Post by nt25 »

Because I want to increase throughput and reduce backup times.
JaySt
Service Provider
Posts: 453
Liked: 86 times
Joined: Jun 09, 2015 7:08 pm
Full Name: JaySt
Contact:

Re: Linux Backup Proxy Multipath Direct storage access transport mode

Post by JaySt »

OK. Yes, that's a valid one, and a big disadvantage of VDDK not playing nice with multipathing. Scaling throughput can become a pain at a certain point.
Veeam Certified Engineer
PavelHeidrich
Lurker
Posts: 1
Liked: never
Joined: Aug 20, 2023 9:21 pm
Full Name: Pavel Heidrich
Contact:

Re: Linux Backup Proxy Multipath Direct storage access transport mode

Post by PavelHeidrich »

Hello, I have come to the very same findings in our customer's environment while investigating why their backup speeds are limited to 1 GB/s. They are backing up from storage snapshots on a Pure Storage all-flash array via a Linux proxy using iSCSI. Maximum throughput should be limited by the 4x 10GbE ports, but the proxy is only using one /dev/sdX device instead of the properly configured multipath device.

Code: Select all

vmpx3:~$ sudo multipath -ll
3624a9370b21b8095b7bb4bc60001b4ad dm-1 PURE,FlashArray
size=1.0G features='0' hwhandler='1 alua' wp=rw
`-+- policy='service-time 0' prio=50 status=active
  |- 35:0:0:1 sdd 8:48 active ready running
  |- 38:0:0:1 sdf 8:80 active ready running
  |- 37:0:0:1 sde 8:64 active ready running
  `- 36:0:0:1 sdc 8:32 active ready running

Code: Select all

multipath.conf
defaults {
    user_friendly_names    yes
    polling_interval       10
}

blacklist {
    device {
        vendor ".*"
        product ".*"
    }
}

blacklist_exceptions {
    device {
        vendor "NVME"
        product "Pure Storage FlashArray"
    }
    device {
        vendor "PURE"
        product "FlashArray"
    }
}

devices {
    device {
        vendor                   "NVME"
        product                  "Pure Storage FlashArray"
        path_selector            "queue-length 0"
        path_grouping_policy     group_by_prio
        prio                     ana
        failback                 immediate
        fast_io_fail_tmo         10
        user_friendly_names      no
        no_path_retry            0
        features                 0
        dev_loss_tmo             60
    }
    device {
        vendor                   "PURE"
        product                  "FlashArray"
        path_selector            "service-time 0"
        hardware_handler         "1 alua"
        path_grouping_policy     group_by_prio
        prio                     alua
        failback                 immediate
        path_checker             tur
        fast_io_fail_tmo         10
        user_friendly_names      no
        no_path_retry            0
        features                 0
        dev_loss_tmo             600
    }
}
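
For what it's worth, the single-device behavior is easy to confirm during a running job with something like this (device names taken from the multipath output above):

Code: Select all

# Extended per-device statistics every 5 seconds; during the DirectSAN backup only one of the
# four path devices shows significant read throughput (rkB/s)
iostat -xk 5 sdc sdd sde sdf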