Agent-based backup of Windows, Linux, Mac, AIX and Solaris machines.
dhayes16
Service Provider
Posts: 192
Liked: 21 times
Joined: Feb 12, 2019 2:31 pm
Full Name: Dave Hayes
Contact:

V11 and Linux

Post by dhayes16 »

So we do a lot with Linux, and the secure Linux immutability solution looks to be a game changer for us. Each of our customer sites has a properly sized Windows server with ReFS running as a BDR that handles the backup and replication functions but also sends data off site to our Cloud Connect partner. We are also testing immutability via S3 Object Lock with object storage.

So we are trying to understand how the Linux option might fit into this scenario. I assume we would still need the Windows servers for the DR portion (spinning up instant VMs, etc.), but we are trying to understand how to leverage this new Linux functionality. Would we have another device on-prem running this secure Linux solution with XFS storage? Or could we move from a Windows server to pure Linux, but then we cannot spin up VMs for DR? We have thought about wiping the on-prem BDR, loading Linux as the host OS, formatting for XFS, and virtualizing the Windows host on the Linux box for DR. Just bouncing ideas around. We are excited to see this secure Linux solution in V11 and want to find out how to leverage it.
HannesK
Product Manager
Posts: 14840
Liked: 3086 times
Joined: Sep 01, 2014 11:46 am
Full Name: Hannes Kasparick
Location: Austria
Contact:

Re: V11 and Linux

Post by HannesK »

Hello,

The hardened repository in V11 runs on Linux, yes. The backup server and other roles still run on Windows. The hardened repository runs only the repository role; you cannot install the proxy role on a hardened Linux repository, as the proxy requires the Veeam Data Mover to run with root privileges.

The idea of the hardened repository is that this server runs only the Veeam repository role, so it has a very small attack surface. Running tons of services on it (like virtualization) would be the opposite of the idea :-)
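For reference, preparing the storage on such a box is simple; a minimal sketch, assuming a dedicated block device at /dev/sdb1 (device and mount point are just examples):

    # reflink and crc enabled so Veeam can use XFS fast clone (block clone)
    mkfs.xfs -b size=4096 -m reflink=1,crc=1 /dev/sdb1
    mkdir -p /mnt/veeam-repo
    mount /dev/sdb1 /mnt/veeam-repo

Plus an /etc/fstab entry so the mount survives reboots.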

Best regards,
Hannes
dhayes16
Service Provider
Posts: 192
Liked: 21 times
Joined: Feb 12, 2019 2:31 pm
Full Name: Dave Hayes
Contact:

Re: V11 and Linux

Post by dhayes16 »

Thanks for the feedback. I appreciate it very much.
Gostev
Chief Product Officer
Posts: 31814
Liked: 7302 times
Joined: Jan 01, 2006 1:01 am
Location: Baar, Switzerland
Contact:

Re: V11 and Linux

Post by Gostev »

Reposting @tsightler's thoughts on this from the internal forums:

As I know this is going to be an issue for customers looking for an all-in-one, appliance-based approach, I've been testing a few things around this. The easiest and lightest-weight solution, with the best performance, is to just run an LXC container on the hardened repo. An LXC container is a bit of a cross between a VM and a container: it's very lightweight and uses process isolation, but it is also very easy to use. Just set up a bridge network, fire up an Ubuntu image (a single command), and configure SSH in the container like normal. The container is isolated from the host, so you can even give a user full root access as needed without compromising host security.

So far I've only tested NBD/NBDSSL mode, but I've not been able to find any issues with this method. Performance is comparable to running a separate, VM-based NBD proxy, and I suspect this may be the only mode that works if I want to keep things as secure as possible, but regardless, that could be OK in many cases. A physical, hardened Linux repo running an LXC-based Linux proxy seems as secure as you can get while combining both features on the same hardware. However, there have been examples of container escapes in the past, typically due to kernel bugs or issues within the container management framework, so it's not a 100% secure wall between the two (is it ever really?).

Another option I've looked at is to simply use KVM to run a fully isolated VM on the Linux repo. This has more upfront setup since you have to install an OS first, but it's still pretty easy overall and provides a higher level of isolation from the underlying OS. In this scenario, NFS and iSCSI-based Direct SAN should definitely work, and you could probably even get FC to work via PCI passthrough (technically this is possible with LXC as well), but I don't have hardware to attempt this configuration.
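For anyone who wants to try the LXC container variant described above, the setup really is just a few commands; a minimal sketch, assuming LXD is installed and the host already has a bridge named br0 (instance and device names are just examples):

    # create the container and attach it to the existing host bridge
    lxc init ubuntu:20.04 proxy1
    lxc config device add proxy1 eth0 nic nictype=bridged parent=br0 name=eth0
    lxc start proxy1
    # install SSH inside the container so Veeam can manage it like any other Linux box
    lxc exec proxy1 -- bash -c "apt-get update && apt-get install -y openssh-server"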
dhayes16
Service Provider
Posts: 192
Liked: 21 times
Joined: Feb 12, 2019 2:31 pm
Full Name: Dave Hayes
Contact:

Re: V11 and Linux

Post by dhayes16 »

Thank you. This was really what we were hoping to accomplish with an appliance-type setup. I was thinking of installing Ubuntu on the host, spinning up KVM on that host, installing a Windows Server VM under KVM to handle the B&R and DR functionality, and then setting up the Veeam secure Linux repository on the host and exposing it to the Windows VM. I know it is asking a lot of the host hardware and is likely outside the scope of Veeam support, but it would be cool to have a nice self-contained appliance that fully supports immutability on-prem. Ideally it would be even better if B&R ran natively on Linux and we could spin up KVM- or Xen-based VMs for DR and save on the Microsoft licensing, but that is likely not in the cards.
Thanks for your insight.
tsightler
VP, Product Management
Posts: 6035
Liked: 2860 times
Joined: Jun 05, 2009 12:57 pm
Full Name: Tom Sightler
Contact:

Re: V11 and Linux

Post by tsightler » 2 people like this post

Just to clarify the above: I realized I had a bit of incorrect info in my previous post. I'm actually currently testing an LXC/LXD VM rather than an LXC container. Basically, install the hardened repo on Ubuntu, then use LXC to create an Ubuntu VM for the proxy. This is working very well. Technically you can build a Windows image for an LXC/LXD VM as well, but it seems a little too involved to be a primary method. However, I'm sure a Windows VM running on KVM would work just fine, although I'm currently testing Linux proxies, since you can always just use the Veeam server itself for the mount/vPower NFS server.

I'm finding that using an LXC/LXD VM with Ubuntu is working very well as a proxy, and all transport modes seem to work fine. I'm planning to write it up over the coming weeks.
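If anyone wants to reproduce the setup, the VM variant really is a single command; a minimal sketch (image, instance name and resource limits are just examples):

    # --vm makes LXD create a real KVM-backed virtual machine instead of a container
    lxc launch ubuntu:20.04 veeam-proxy --vm -c limits.cpu=4 -c limits.memory=8GiB

From there, configure SSH in the guest and add it to the backup server as a Linux proxy like any other machine.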

I've been thinking about combining the use of LXC VMs even for the repo to provide even more security, since it integrates nicely with ZFS snapshots, but I haven't started testing this yet. The idea is that you would install Ubuntu but not install any Veeam components directly on the host at all; rather, you would configure a ZFS pool and LXC to use this pool, then fire up two Ubuntu VMs, one for the repo and one for the proxy, and use LXC snapshots to protect the repo for another layer of protection. A hardened Linux repo using XFS block clone, running on top of a ZFS pool protected by ZFS snapshots, on a host running no Veeam components at all seems like a pretty solid solution to me. Just an idea right now, but I'm starting to build it in the lab for my own testing.
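Roughly, the layering I have in mind would look like this (a sketch only, since I haven't built it yet; pool, dataset and instance names are illustrative):

    # ZFS pool on the host, with LXD using a dataset on it for its storage
    zpool create tank mirror /dev/sdb /dev/sdc
    zfs create tank/lxd
    lxc storage create tank-lxd zfs source=tank/lxd
    # one VM for the repo, one for the proxy, both living on the ZFS-backed pool
    lxc launch ubuntu:20.04 repo1 --vm -s tank-lxd
    lxc launch ubuntu:20.04 proxy1 --vm -s tank-lxd
    # point-in-time snapshots of the repo VM as the extra layer of protection
    lxc snapshot repo1 snap1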
jeremybsmith
Service Provider
Posts: 4
Liked: 2 times
Joined: Feb 25, 2021 10:38 pm
Full Name: Jeremy B. Smith
Contact:

Re: V11 and Linux

Post by jeremybsmith »

tsightler wrote: Dec 18, 2020 11:08 pm Just to clarify the above: I realized I had a bit of incorrect info in my previous post. I'm actually currently testing an LXC/LXD VM rather than an LXC container. Basically, install the hardened repo on Ubuntu, then use LXC to create an Ubuntu VM for the proxy. This is working very well...
My team has been using a regular LXD container as a repo (with a ZFS ZVOL on the bare metal, formatted with XFS [for reflink support] and mounted within the container) for a few months now.
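In case it's useful, the wiring looks roughly like this (a sketch from memory rather than our exact config; pool, volume and container names, sizes and paths are examples):

    # sparse zvol on the bare-metal pool, formatted with the same reflink-enabled XFS
    # used for a plain hardened repo, so Veeam can use fast clone
    zfs create -s -V 10T tank/veeamrepo
    mkfs.xfs -b size=4096 -m reflink=1,crc=1 /dev/zvol/tank/veeamrepo
    # mount on the host and hand the path to the LXD container as a disk device
    mkdir -p /mnt/veeamrepo
    mount /dev/zvol/tank/veeamrepo /mnt/veeamrepo
    lxc config device add repo1 backups disk source=/mnt/veeamrepo path=/backups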

Ideally, ZFS will soon get reflink support (see https://github.com/openzfs/zfs/issues/405) and eliminate the need for the additional ZVOL + XFS layer just for fast clone support. Other than that it has been working well (although there may be room for performance tweaking to work optimally with Veeam), so I'd be interested in hearing your experiences/recommendations with that sort of setup.
Snapshots of the XFS ZVOL are great and all, but not nearly as useful as they would be on native ZFS, where we could take advantage of easily accessing the snapshots (and sharing them via Samba, for example).

We based our setup on your Docker example (https://github.com/VeeamHub/veeam-docker), modified for LXD, so if there are any changes to how you'd recommend setting up the ZFS ZVOL/XFS/etc. for optimal use with Veeam, I'd be interested in seeing that too.

Thanks!
ferrus
Veeam ProPartner
Posts: 300
Liked: 44 times
Joined: Dec 03, 2015 3:41 pm
Location: UK
Contact:

Re: V11 and Linux

Post by ferrus » 1 person likes this post

tsightler wrote: Dec 18, 2020 11:08 pm Just to clarify the above: I realized I had a bit of incorrect info in my previous post. I'm actually currently testing an LXC/LXD VM rather than an LXC container... A hardened Linux repo using XFS block clone, running on top of a ZFS pool protected by ZFS snapshots, on a host running no Veeam components at all seems like a pretty solid solution to me.
Very interested in this work.

We're currently evaluating several on-premises air-gapping solutions, which we'd like to roll out quite soon.
We've been awaiting further details on the Veeam hardened repository, and it has several interesting features over other solutions.
Firstly, having the primary backup jobs immutable would save massively on additional storage. I can also see benefits in having the first copy of the data protected, as in the event of a ransomware attack, recent Backup Copies of affected primary backups could also be damaged.

We have four physical proxy/repository servers, each with over 50 TB of direct-attached storage, but the limitation of having no proxy services on them would reduce our backup proxy capacity by 80%.
So having the ability to run the proxy in a container on the same server would be ideal and would create a perfect air-gap solution for our infrastructure.

Would the container have access to the direct storage, or can that be prevented?
Could a proxy role installed in a container be supported by Veeam? Presumably from a technical perspective it's similar to being installed within a VM.

I haven't done much with ZFS, although that sounds interesting too.
ferrus
Veeam ProPartner
Posts: 300
Liked: 44 times
Joined: Dec 03, 2015 3:41 pm
Location: UK
Contact:

Re: V11 and Linux

Post by ferrus »

tsightler wrote: Dec 18, 2020 11:08 pm I'm actually currently testing an LXC/LXD VM vs an LXC container.
I'm finding that using an LXC/LXD VM with Ubuntu is working very well as a proxy, and all transport modes seem to work fine. I'm planning to write it up over the coming weeks.

Was there a particular reason for choosing an LXD VM instead of a container for your testing?
Perhaps for security/isolation from the host system?

I'd have expected easier hardware access for the transport modes with a container.
tsightler
VP, Product Management
Posts: 6035
Liked: 2860 times
Joined: Jun 05, 2009 12:57 pm
Full Name: Tom Sightler
Contact:

Re: V11 and Linux

Post by tsightler »

Because it was the easiest way to set up something that supported almost all of the transport modes (NBD, Direct NFS, Direct SAN and Backup from Storage Snapshots, the latter two for iSCSI only) without any special configuration of the host.

Veeam requires root privileges for the proxy. This isn't too bad with an LXC container, since the container by default runs with privileged mode = false, so processes running as root in the container do not run as root on the host. I believe this would work with NBD mode, but it is unlikely to work with any other mode.

The other modes require more low-level permissions to the underlying block devices: Direct NFS needs to be able to mount/unmount NFS shares, BfSS requires scanning iSCSI and attaching and detaching LUNs, and Direct SAN would need access to the raw block devices attached to the host or via some iSCSI persistent connection. I'm not even sure iSCSI is possible at this point with an LXC container; I know there are some ways to get it to work with Docker, but with a lot of limitations. Mounting NFS is certainly possible, but I believe it requires privileged mode = true and NFS mounts to be explicitly enabled for the container. Not catastrophic, but as the idea is to be as isolated as possible from the underlying host, it didn't leave me with a good feeling.
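For reference, I believe the container would have to be loosened roughly like this to even attempt NFS (an assumption on my part, I have not validated the exact rules):

    # privileged mode plus an apparmor exception for NFS mounts; exact rules may vary
    lxc config set proxy1 security.privileged true
    lxc config set proxy1 raw.apparmor 'mount fstype=nfs*, mount fstype=rpc_pipefs,'
    lxc restart proxy1

Which is exactly the kind of weakening of the isolation I'd rather avoid on a hardened repo host.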

After thinking about all of that, simply using a VM instead of a container seemed far easier and safer but, most importantly, supportable, and being a supportable solution was a critical factor for me. Note that I'm far from an LXC expert; this was really my first foray into it, so feel free to prove me wrong on any of the above points, it won't hurt my feelings one bit.
ferrus
Veeam ProPartner
Posts: 300
Liked: 44 times
Joined: Dec 03, 2015 3:41 pm
Location: UK
Contact:

Re: V11 and Linux

Post by ferrus »

No problem. This is (almost) my first deployment of LXC/LXD, so I'm still unsure of the benefits of each.
If this works, we'd be very interested in replacing our current Windows proxy/repo physical servers with it, but initially I have an existing Linux repository at our secondary DR site for testing.
Hope to have some results to report soon.

One question: do you know if FC Direct SAN is an option as a transport mode?
We can deploy any number of UCS NICs and HBAs to the physical servers if the current ones can't be shared, but I have no experience of passing them through to a VM.

Direct SAN over FC still seems to beat our 10GbE LAN for performance (though I'm not sure whether the new NBD multithreading feature could tip the scales towards NBD).
Gostev
Chief Product Officer
Posts: 31814
Liked: 7302 times
Joined: Jan 01, 2006 1:01 am
Location: Baar, Switzerland
Contact:

Re: V11 and Linux

Post by Gostev »

Yes, Direct SAN over FC is supported by Linux backup proxy.
tsightler
VP, Product Management
Posts: 6035
Liked: 2860 times
Joined: Jun 05, 2009 12:57 pm
Full Name: Tom Sightler
Contact:

Re: V11 and Linux

Post by tsightler »

It's unlikely NBD multithreading will make the difference; NBD still has many limitations due to the sheer number of paths the data must pass through, both at the physical and logical layers.

And yes, I said iSCSI because that doesn't require any complex PCI passthrough setup; those modes "just work". I tried to do some passthrough with LXC but had pretty limited success: 99% of the documentation was about GPU passthrough, and my attempts to pass through other adapters failed. I've since done more research and think I wasn't getting the kernel parameters exactly right (there are several ways to identify kernel parameters for vfio passthrough). Unfortunately I no longer have access to a good test system, but I believe it should work, so I'll be interested in your results.
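If you do go down the passthrough route, the usual starting point (a sketch only, not something I've verified end-to-end on UCS) is to enable the IOMMU and bind the HBA to vfio-pci via kernel parameters:

    # find the HBA's vendor:device ID first, e.g. with: lspci -nn | grep -i fibre
    # then in /etc/default/grub (use amd_iommu=on on AMD hosts):
    GRUB_CMDLINE_LINUX_DEFAULT="intel_iommu=on iommu=pt vfio-pci.ids=<vendor>:<device>"
    update-grub        # rebuild the grub config, then reboot
    # after the reboot, confirm vfio-pci has claimed the device
    lspci -nnk -d <vendor>:<device>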
ferrus
Veeam ProPartner
Posts: 300
Liked: 44 times
Joined: Dec 03, 2015 3:41 pm
Location: UK
Contact:

Re: V11 and Linux

Post by ferrus »

Had some time to test this. I've had good success with a simple LXD VM over NBD.
The performance is reasonable over 10GbE, but I'm not sure how far it would scale at our main site.
I do wonder how much better the performance of a simple LXD container would be for standard NBD, without the VM virtualization overhead.

I haven't managed to find anything in official documentation about FC support in LXD containers/VMs. Even searching QEMU articles outside of LXD yields very little.
Perhaps there's more for Docker, but there just doesn't seem to be enough to give confidence in trying it in production.
I reached out to a developer, and his best recommendation (for a container) was to mount the LUNs on the host and pass the disks through in the container config.
This, I believe, would be incompatible with Direct SAN in Veeam, and would be more like backing up local disks rather than SAN LUNs.

That leaves KVM.

There's a fair bit of documentation around PCI/HBA passthrough and NPIV/vHBA presentation in KVM and libvirt. It appears to be commonly used and supported, and it gives better performance than LXD VMs.
We use Cisco UCS in the datacentres, so there seems little point in NPIV when we could just create and pass through additional HBAs (similar to additional NICs instead of bridges).

I'll test with that, but it seems the choice might be between a simple NBD deployment in a container, or KVM with PCI passthrough for Direct SAN access.
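For the KVM/libvirt side, the passthrough itself looks straightforward; a minimal sketch (the PCI address and guest name are placeholders; the real address comes from lspci on the host):

    # hba.xml (the PCI address below is an example):
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <source>
        <address domain='0x0000' bus='0x81' slot='0x00' function='0x0'/>
      </source>
    </hostdev>

    # attach the HBA to the proxy guest persistently
    virsh attach-device veeam-proxy hba.xml --config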
ferrus
Veeam ProPartner
Posts: 300
Liked: 44 times
Joined: Dec 03, 2015 3:41 pm
Location: UK
Contact:

Re: V11 and Linux

Post by ferrus »

One question that occurred to me was regarding the presentation of LUNs to the Linux proxy.
When we first set up Veeam several years ago, there were concerns about the risk of data loss from the proxy OS writing to the VMware LUNs.
Our SAN doesn't have the facility to map LUNs to a host read-only, but there was logic in Veeam to address this, and there were recommendations for DiskPart commands to prevent accidental writes.

I haven't had to consider this for many years, but with Linux Direct SAN access now available, I haven't seen any equivalent discussion or documentation.
Are there any equivalent recommendations for the presentation of VMware disks to the proxy, or is this again handled within the Veeam services?
HannesK
Product Manager
Posts: 14840
Liked: 3086 times
Joined: Sep 01, 2014 11:46 am
Full Name: Hannes Kasparick
Location: Austria
Contact:

Re: V11 and Linux

Post by HannesK »

Linux never had "automount" (Windows also disabled it many years ago). That's probably why nobody has talked about this for Linux.

Veeam (or any other backup software vendor) cannot prevent administrators (or root, in the case of Linux) from mounting or formatting a volume.
ferrus
Veeam ProPartner
Posts: 300
Liked: 44 times
Joined: Dec 03, 2015 3:41 pm
Location: UK
Contact:

Re: V11 and Linux

Post by ferrus » 1 person likes this post

After 6 weeks of testing, I finally have a working test system.

There seemed to be little chance of getting FC to work with LXD containers, and little documentation for LXD VMs, so I concentrated on native KVM VMs.

Our infrastructure is based on Cisco UCS, so I used the recommended VM-FEX integration with KVM, where the NIC/HBA hardware can be passed directly through to the proxy VM guest without any access from the host, for extra security.
After many issues with MTU, XML syntax and getting the correct PCI address, this finally worked over NBD.

Presenting the vHBAs and Fibre Channel LUNs to the proxy VM was comparatively easier, but there's a bug in the kernel-packaged FNIC driver that would crash the Linux proxy after a short amount of use.
Bizarrely, Cisco provides Ubuntu drivers for the ENIC and SNIC cards, but only RHEL/CentOS and SLES drivers for the FNIC. So I had to remove the recommended Ubuntu 20.04 installation and reinstall with RHEL 8.3.

After updating the FNIC drivers in both the host and guest, I've managed to take a full set of Immutable XFS backups on the host Linux repo, via the isolated KVM VM Linux proxy, over FC Direct Storage Access.
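(For anyone repeating this, a quick sanity check that immutability is really being applied is to look for the immutable attribute on the backup files; the repo path below is just an example.)

    # files still inside the immutability window should show the 'i' flag
    lsattr /mnt/veeam-repo/BackupJob1/*.vbk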

From our perspective, this is the best solution for our production system. It allows the initial daily backups taken to DAS storage to be the protected immutable backups, with subsequent archive and offsite Backup Copies taken from those.
In the event of a ransomware attack, the immutable backups would also be hosted on the quickest and closest platform for a restore.

Does this sound like a supportable design from a Veeam perspective? (I can't see anywhere that it breaks Veeam requirements, as Linux proxies started out as VM-only.)
I plan to keep one hardware Windows-based Veeam proxy, and perhaps another VM. Is there a list of features that are still outstanding for Linux proxies?
HannesK
Product Manager
Posts: 14840
Liked: 3086 times
Joined: Sep 01, 2014 11:46 am
Full Name: Hannes Kasparick
Location: Austria
Contact:

Re: V11 and Linux

Post by HannesK »

Hello,
thanks for sharing your experience with the community.

In the end, Veeam uses what the operating system presents to the software. If that works stably, then the design is good (I remember VM-FEX from 6 years ago or so :D). Sure, we don't test such scenarios, but if the hardware vendors have such solutions, then it's fine for us.
ferrus wrote: Is there a list of features that are still outstanding for Linux proxies?
Backup from Storage Snapshots from NFS doesn't affect you, so the functionality is in general the same (assuming you are talking only about the VMware backup proxy and not about the general proxy role that can also back up NAS and Windows Agents from storage snapshots; that "Backup Proxy" role is only available on Windows today).

Best regards,
Hannes