coolsport00
Veeam Legend
Posts: 81
Liked: 14 times
Joined: Sep 11, 2012 12:00 pm
Full Name: Shane Williford
Location: Missouri, USA
Contact:

Specifically How to Configure Ubuntu Linux for BfSS or DirectSAN using iSCSI

Post by coolsport00 »

I came across the following forum thread while searching the Web on how to explicitly configure Linux to use BfSS or DirectSAN as there isn't any official/supported config document from Veeam on how to do so:
vmware-vsphere-f24/linux-backup-proxy-m ... 81187.html
... I was going to add this comment there, but the above post was more about how multipath does/doesn't work when using iSCSI with Linux configured for DirectSAN mode, and not really about how to configure it. So I created this post.
That is a FANTASTIC thread btw! Really appreciate all the input shared, especially by @hannesk and @arturE for their PM/QA input.

I'm looking for a definitive and supported configuration for Linux to use Direct SAN (Storage) and BfSS. I'm looking to implement Ubuntu Linux Proxies in my newly-created VBR environment using iSCSI. The User Guide has virtually no guidance on how to get this done for DirectSAN (Storage). Sure, there are some requirements and limitations noted in a few spots, but the actual Linux configuration isn't there. I was mostly going to implement BfSS on Linux as I have in my current/old environment (which currently uses Windows). But I was still very curious how to configure a Linux server & Nimble to use DirectSAN, as there's no doc anywhere on how to do this. Honestly, there isn't one for Windows either. Anyway, I kinda wanted to see which method on Linux, BfSS or DirectSAN, would perform better. When I configured Windows 5-6yrs ago or so, I had to find a few blog posts on the Web on how to configure Windows to use DirectSAN/BfSS. It would be nice to have a supported Veeam document on how to configure BfSS & DirectSAN on both Windows and Linux. I understand that with Linux there are different distributions (thus some differences in package names and package managers), and there are many storage vendors (with different configurations), but the guidance for each storage vendor doesn't have to be explicitly detailed and can instead be high-level (i.e. "configure your storage Volumes/LUNs to do xyz", etc.).

My main questions for DirectSAN: 1. How do I configure the production VM array? In my case, I have a Nimble. Specifically, what Access (aside from the Linux server IQN) do I give each datastore Volume? My guess is I configure the Linux server access to use both "Volume & Snapshot"? For BfSS you only need to configure "Snapshot". I tested BfSS with "Snapshot" access and that works; and it does appear multipathing works as well. 2. On the Linux server, for DirectSAN do I just run a 'target discovery' cmd only, as is needed for BfSS, or do I also need to perform a target "login" operation? Also, Hannes, in the above thread, said DirectSAN isn't really supported with physical Linux servers? Or maybe more specifically... multipathing? Should that be added to the User Guide? Or referenced in a KB and linked to from the User Guide? I think that's kind of a big deal IMO.

And all this may even be for naught. Maybe I don't need to implement the complexity of a phys Linux server with an iSCSI connection to my SAN? I just thought I'd get better performance than Windows BfSS, as well as a little less maintenance overhead and probably a bit better security footprint than Windows? Maybe using Linux VM Proxies (i.e. hotadd) is good enough? I understand which to use will warrant the good old "it depends on your environment" statement. :) My environment is not big (250 or so VMs backing up), and "standard" change rate, etc. It appears from the info @JaySt shared in that forum post I linked to above that hotadd may be the way to go? Maybe I can just add both phys and VM proxies, configure a job to do BfSS for a few days, then reconfigure the same job to use hotadd for a few days, then compare the results of each method. I don't really have a test environment to do this, but maybe I can test it out in my old environment first.

Just looking: 1. for Linux config info for DirectSAN; 2. for any other thoughts I may be missing.
Thanks!
Shane Williford
Systems Architect

Veeam Legend | Veeam Architect (VMCA) | VUG KC Leader
VMware VCAP/VCP | VMware vExpert 2011-22
Twitter: @coolsport00
JaySt
Service Provider
Posts: 415
Liked: 75 times
Joined: Jun 09, 2015 7:08 pm
Full Name: JaySt
Contact:

Re: Specifically How to Configure Ubuntu Linux for BfSS or DirectSAN using iSCSI

Post by JaySt »

Hi Shane! Nice blog post about VHR and iSCSI btw ;) A lot of those steps apply to setting up DirectSAN/BfSS, don't you think?

Yes "snapshot" access is ok i think for Nimble when doing BfSS. for DirectSAN, "Volume Only" is ok, as it needs to read the exact LUN and not the snapshot.
Yes you need to do a login as well when using DirectSAN. Also some auto-login (after boot) offcourse.
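For what it's worth, a sketch of those discovery/login/auto-login steps with open-iscsi on Ubuntu (the portal IP and IQN below are placeholders, not real Nimble values — substitute your array's discovery address and the discovered target name):

```shell
# Discover targets on the array's iSCSI discovery address (placeholder IP)
sudo iscsiadm -m discovery -t sendtargets -p 192.168.50.10:3260

# Log in to the discovered target (placeholder IQN)
sudo iscsiadm -m node -T iqn.2007-11.com.nimblestorage:example-target \
  -p 192.168.50.10:3260 --login

# Make the session re-establish automatically after a reboot
sudo iscsiadm -m node -T iqn.2007-11.com.nimblestorage:example-target \
  -p 192.168.50.10:3260 -o update -n node.startup -v automatic
```

For BfSS the discovery step alone is enough, since Veeam logs in to the snapshot target itself during the job.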

And yes, it all depends on your requirements and infrastructure capabilities when choosing between data transport (hotadd, directsan, bfss).
From my own experience, I love the simplicity/reliability/flexibility of hotadd (in environments sized like the one you described), to be honest.
So if your environment is OK with handling open snapshots for a while, the backup process will be fine with hotadd, and at the same time you don't have to worry about all the SAN configuration (and the risk of doing things wrong there... especially in the case of DirectSAN).
Also, bandwidth availability between the repositories and the hotadd proxies is an important consideration as well. It's not always a path that's suitable for high traffic load... (think routed paths, firewalls, etc.).

BfSS of course has some bigger advantages when you look at everything that can be done with it, when implemented properly.
BfSS can benefit from Veeam's own path usage optimization when reading the snapshots, as it reads the snapshot of the LUN and Veeam uses MPIO devices (dm-devices) for that, instead of VDDK reading the sdX devices as in the case of DirectSAN. This is what I got from the previous topic. That's probably what Hannes meant with "not really supported"; just the fact that VDDK does not do it efficiently.
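A quick way to see the dm-device vs. per-path sdX distinction on the proxy itself (just a sketch; device and map names will vary per environment):

```shell
# List multipath maps; a properly multipathed LUN shows one dm- device
# with several underlying paths (e.g. sdb and sdc) grouped under it
sudo multipath -ll

# Compare against the raw block devices; the same LUN seen over two
# iSCSI paths appears twice here (sdb, sdc) unless multipath groups it
lsblk -o NAME,SIZE,TYPE,MODEL
```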

Of all the data transport options, DirectSAN is probably the one I least prefer nowadays.

Perhaps there are some specific parts of the iSCSI configuration you're looking for? Looking at your latest blog, you've got a lot of things covered; what do you think is missing?
Veeam Certified Engineer

Re: Specifically How to Configure Ubuntu Linux for BfSS or DirectSAN using iSCSI

Post by coolsport00 » 1 person likes this post

Hi Jay -
Hey...thanks for the reply and thanks for reading my post! :)

Yep, pretty much (almost) all those steps for the Repo apply to the Proxy as well, minus the need to partition and format the Volume (I won't share my one-time horror story on that with my Windows Repo/Proxy combo box). 😂

I think the main things are the couple of 'little' last configs required for Proxies, which you answered (thank you) and which matched what I thought was needed -> Volume access on the array, and then also needing to do the 'login' operation within Linux. There isn't anything on the Web on those 2 things... at least that I could find. I understand arrays are different, but how "Volume access" is configured with other storage vendors, as it is on Nimble, could still be stated. That's a high-level statement; how to specifically do so is dependent on the vendor, of course. I just wish Veeam had some kind of documentation on it, and also on DirectSAN for Windows. One of the reasons I created that Repo blog was because there isn't really anything "free-flowing" and detailed ;) I think most of what's needed is in the Guide, but it's spread out in 3-4 different places, whether it be Repo or Proxy. It would be nice for Veeam to have a more detailed and, again, 'free-flowing' type Guide. But then again... maybe that's what those of us in the Community are for... to come up with posts just like that. hahaha And, I kinda just lied... there are a couple of nice blogs on configuring the VHR from start to finish. Hannes actually wrote 1 of them on the Veeam blog site. Though, those specific array/iSCSI configs aren't mentioned. Yes... also configuring the fstab file for continued storage mounting after reboots makes sense for DirectSAN as well.
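On the fstab piece, a sketch of what such an entry can look like for an iSCSI-backed volume (the UUID, mount point, and filesystem are placeholders; `_netdev` delays mounting until the network and iSCSI session are up, and `nofail` keeps an unreachable LUN from blocking boot):

```
# /etc/fstab — example entry for an iSCSI-backed volume (placeholder values)
UUID=0a1b2c3d-0000-0000-0000-000000000000  /mnt/veeamrepo  xfs  defaults,_netdev,nofail  0 0
```

Note this applies to volumes you actually mount (e.g. a repository); a DirectSAN proxy reads the VMFS LUNs as raw block devices and shouldn't mount them.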

I think I'll probably stay with BfSS because 2 of my jobs run every 30min. BfSS allows me to do that, as it takes a real quick VM snap so the Volume can then be snapped, and Veeam backs up my VMs from that Volume snap. Hotadd might interrupt my environment a bit longer than I'd like there... thanks for the reminder. I do use hotadd for Replication jobs, as well as for the immutable backups I run overnight to a server I'm using that is locally populated with disks. Hotadd tends to be fine for those. At times, with hotadd I can get anywhere from 1GB-3GB on a Replication job or 2.

Though DirectSAN does work for Linux Proxies, with multipathing not really working, is using DirectSAN really something folks would choose to do? i.e. it's a SPOF. Agreed with what was said in the other post that some wording really needs to be added to the Guide.

Thanks again for all the input Jay.
