Sorry to necro this @tsightler
Wondering if you have a recommendation on LVM specifically, as to which of these you would do. The context is iSCSI LUNs presented to a hardened Veeam repo (rough sketch of both options below):
1. Present the iSCSI target, use LVM at the whole-disk level, and handle expansions by presenting more LUNs.
2. Present the iSCSI target, use LVM at the partition level, and expand LUNs and add partitions as needed.
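Something like this, just as a sketch (device names like /dev/sdb, /dev/sdc and the vg_veeam/lv_repo names are made-up placeholders):

```
# Option 1: whole-disk PVs; expansion = present another LUN
pvcreate /dev/sdb                        # new iSCSI LUN used raw as a PV
vgcreate vg_veeam /dev/sdb
lvcreate -n lv_repo -l 100%FREE vg_veeam
# later: present a second LUN and fold it into the VG
pvcreate /dev/sdc
vgextend vg_veeam /dev/sdc

# Option 2: PVs on partitions; expansion = grow the existing LUN in place
parted -s /dev/sdb mklabel gpt mkpart primary 0% 100%
pvcreate /dev/sdb1
vgcreate vg_veeam /dev/sdb1
lvcreate -n lv_repo -l 100%FREE vg_veeam
# later: after growing the LUN on the array side
echo 1 > /sys/block/sdb/device/rescan    # have Linux re-read the LUN size
parted -s /dev/sdb resizepart 1 100%     # grow the partition
pvresize /dev/sdb1                       # grow the PV into the new space
```

In both cases the last mile is the same: lvextend the LV, then xfs_growfs the filesystem.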
I know there's no single right answer; I'm on the fence and wondered if you have an opinion on it.
I'm leaning towards option 2, as it can technically cover both scenarios.
There are also LUN limits to consider, both at the VMware level and at the SAN/DAS level (is a direct-attached iSCSI SAN a DAS? haha).
Perhaps I have made my own mind up in writing this.
tsightler wrote (Mar 09, 2021, 4:15 pm):
You can use any distribution you are comfortable with, although it is highly recommended to use one of the distributions on the supported list, as all other distros are supported in experimental status only, meaning they haven't been specifically tested by our QA. Basically, best practice = use the supported, QA-tested distro that you are the most experienced and comfortable with. But remember that support for OS issues comes from the OS vendor, so if you want to have someone to call when you have problems, you should use a distro with available support.
While it's true that Ubuntu 20.04 uses the 5.4 kernel, which has some additional optimizations around XFS, testing to this point has shown only minimal difference for most real-world Veeam use cases. So unless you are planning to do something extreme, like keeping a large number of synthetic GFS points, there's probably not much difference overall, and both will likely meet your performance requirements.
The Linux implementation of XFS is limited to a maximum block size of 4K. (The XFS on-disk format technically allows larger block sizes, but the current Linux implementation caps the block size at the system page size, which is 4K on x86-64.) This should be no issue: several vendors state explicit support for filesystems up to 1PB at this block size, performance seems fine even with the smaller blocks, and smaller blocks actually give higher space efficiency than larger ones. However, the 4K filesystem block size has no impact on the RAID stripe size recommendation for Veeam backups. Veeam still writes in large blocks; the filesystem block size is just how granularly the filesystem tracks block allocation, nothing more.
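For what it's worth, this is easy to verify on the box itself. A quick check (the LV path and mount point are placeholders; the mkfs flags are the ones the hardened repo write-ups commonly show for enabling XFS fast clone):

```
getconf PAGE_SIZE                  # 4096 on x86-64, the ceiling for the XFS block size
mkfs.xfs -b size=4096 -m reflink=1,crc=1 /dev/vg_veeam/lv_repo
                                   # 4K blocks, reflink enabled for Veeam fast clone
xfs_info /mnt/veeam | grep bsize   # confirm bsize=4096 on the mounted filesystem
```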
Using LVM is also completely up to you as the admin; it works fine either way, and from Veeam there's no specific best practice here beyond normal Linux best practice. If you have a need to manage expansion of disks via LVM, use LVM; if you have a storage system that can expand existing LUNs and has no specific limitations, then you probably don't really need it. Personally, I always use LVM regardless, mostly because I'm super comfortable with it and it adds almost no overhead. If you use LVM and never need it, pretty much nothing is lost; but if you need LVM and didn't use it to start, well, things are not quite so easy.
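To illustrate the "nothing is lost" point: with LVM already in place, an online grow is a couple of commands, while XFS sitting directly on a partition can never span a second LUN after the fact (VG/LV and device names are placeholders):

```
# With LVM: fold a newly presented LUN into the VG and grow online
pvcreate /dev/sdc
vgextend vg_veeam /dev/sdc
lvextend -r -l +100%FREE vg_veeam/lv_repo   # -r also runs xfs_growfs
# note: XFS must be mounted to grow, and it can never be shrunk

# Without LVM: XFS directly on /dev/sdb1 can only grow within that one
# device; there is no way to span a second LUN after the fact
```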
There are several terrific community blog posts on setting up a Linux hardened repo; I would read these and adapt them to your use case:
https://www.starwindsoftware.com/blog/v ... ory-part-1
https://www.jdwallace.com/post/hardened-linux-repo