Host-based backup of VMware vSphere VMs.
DaveWatkins
Veteran
Posts: 370
Liked: 97 times
Joined: Dec 13, 2015 11:33 pm

NVMe controller support for Veeam HotAdd proxies

Post by DaveWatkins »

Is there any intention to add NVMe controller support for Veeam HotAdd proxies?

It seems that as soon as you remove the SCSI controller, the proxies can no longer hot-add the disks to the proxy VM and so fail over to network mode. With VMware suggesting NVMe controllers show lower CPU usage than the paravirtual SCSI controller on flash arrays, this seems like something that would be useful for a backup proxy potentially moving quite a lot of data.
DonZoomik
Service Provider
Posts: 368
Liked: 120 times
Joined: Nov 25, 2016 1:56 pm
Full Name: Mihkel Soomere

Re: NVMe controller support for Veeam HotAdd proxies

Post by DonZoomik »

VMware's NVMe controller implements an old specification version that supports neither namespace hot-add nor size changes. You can add disks and change their sizes, but the VM doesn't notice until it restarts. Linux kernels 4.9+ (I think, or was it 4.14...) have a manual rescan option, but on Windows you're just stuck until restart.
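
For what it's worth, a minimal sketch of that manual rescan on a kernel that supports it (the controller name nvme0 is an assumption; needs root):

    # Minimal sketch of a manual NVMe namespace rescan on Linux, assuming a
    # kernel new enough to expose the rescan_controller sysfs attribute.
    # "nvme0" is an assumed controller name; run as root.
    from pathlib import Path

    ctrl = Path("/sys/class/nvme/nvme0/rescan_controller")
    if ctrl.exists():
        ctrl.write_text("1")  # any write triggers a namespace rescan
        print("rescan triggered for nvme0")
    else:
        # nvme-cli equivalent (where available): nvme ns-rescan /dev/nvme0
        print("no rescan_controller attribute; kernel too old?")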
Also, IMHO the CPU differences are minuscule. For testing I configured two VMs with the same load-balanced workload, one with PVSCSI and the other with NVMe. There was a difference, but on the order of 0.2%. Granted, the workload was not very I/O intensive and the backend AFA SAN is still (i)SCSI based, but don't expect too much.
Andreas Neufert
VP, Product Management
Posts: 6747
Liked: 1408 times
Joined: May 04, 2011 8:36 am
Full Name: Andreas Neufert
Location: Germany

Re: NVMe controller support for Veeam HotAdd proxies

Post by Andreas Neufert »

In general, you should use the same SCSI controller type on the HotAdd proxies as on the original VMs.
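
If you want to verify what a proxy and its source VMs actually use, here is a rough pyVmomi sketch that lists a VM's disk controller types (the vCenter address, credentials, and VM name below are placeholders, not anything Veeam-specific):

    # Rough pyVmomi sketch: list a VM's disk controller types so you can
    # compare the HotAdd proxy against the source VMs.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()  # lab only; verify certs in production
    si = SmartConnect(host="vcenter.example.com",
                      user="administrator@vsphere.local",
                      pwd="...", sslContext=ctx)

    def controller_types(vm_name):
        """Return the SCSI/NVMe controller classes configured on a VM."""
        view = si.content.viewManager.CreateContainerView(
            si.content.rootFolder, [vim.VirtualMachine], True)
        vm = next(v for v in view.view if v.name == vm_name)
        view.Destroy()
        return [type(dev).__name__ for dev in vm.config.hardware.device
                if isinstance(dev, (vim.vm.device.VirtualSCSIController,
                                    vim.vm.device.VirtualNVMEController))]

    print(controller_types("veeam-proxy-01"))
    # e.g. ['vim.vm.device.ParaVirtualSCSIController']
    Disconnect(si)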
DaveWatkins
Veteran
Posts: 370
Liked: 97 times
Joined: Dec 13, 2015 11:33 pm

Re: NVMe controller support for Veeam HotAdd proxies

Post by DaveWatkins »

DonZoomik wrote: Jan 22, 2020 8:50 am VMware's NVMe controller implements an old specification version that supports neither namespace hot-add nor size changes. You can add disks and change their sizes, but the VM doesn't notice until it restarts. Linux kernels 4.9+ (I think, or was it 4.14...) have a manual rescan option, but on Windows you're just stuck until restart.
For reference:

https://kb.vmware.com/s/article/2147574

which seems like quite the gaping hole. A _lot_ of VM operations in mature systems are disk expansions, in my experience. That ends this little experiment, then.
HannesK
Product Manager
Posts: 14314
Liked: 2890 times
Joined: Sep 01, 2014 11:46 am
Full Name: Hannes Kasparick
Location: Austria

Re: NVMe controller support for Veeam HotAdd proxies

Post by HannesK »

Yeah, I also removed most NVMe controllers from my lab because I have mostly Windows... but back to the point of HotAdd: unexpectedly, it worked for me (with the V10 beta).

Windows Server 2019 with only one NVMe controller installed.

I also agree that it has no significant performance impact for backup. If I had to move a lot of data, I'd personally prefer direct SAN backup anyway :-)
DaveWatkins
Veteran
Posts: 370
Liked: 97 times
Joined: Dec 13, 2015 11:33 pm

Re: NVMe controller support for Veeam HotAdd proxies

Post by DaveWatkins »

It doesn't appear to work in 9.5, so it has seemingly been updated in v10. But given the limitations (why would VMware even release it like that?!), it's not something we'll ever use.
DonZoomik
Service Provider
Posts: 368
Liked: 120 times
Joined: Nov 25, 2016 1:56 pm
Full Name: Mihkel Soomere

Re: NVMe controller support for Veeam HotAdd proxies

Post by DonZoomik »

Actually, I have a cluster using almost exclusively NVMe-based VMs. It was a migration project from Proxmox, and the VM sysadmins decided to go bleeding-edge when migrating, just when 6.7 had been released and NVMe+UNMAP became a thing. It's Linux-only, so you can work around the problems: as I said, you can force newer kernels to rescan controllers (see the sketch above), but it's not intuitive at all.
However, the Linux NVMe driver is quite bad, especially when combined with UNMAP (PVSCSI and UNMAP have their own can of worms, but let's not go there...), and we've hit many bugs. Nothing catastrophic, but the occasional database crash, kernel panic, or system freeze/deadlock due to a kernel bug is just annoying. Still, it's not *that* bad that it would justify reconfiguring every VM.