Hi, I've been reading for hours now about how to build a repo with XFS. I'm really happy that Veeam is integrating Linux into its product!
The VBPs (Veeam Best Practices) are not very detailed on XFS.
So it would also help others if you could put together a small best practice for XFS deployment.
We will use v11 on a hardware machine with a RAID controller.
Which OS do you recommend?
We normally use Debian 10, but I saw some info that Ubuntu 20 should be faster because of a newer kernel with performance optimizations for XFS?
If Ubuntu is faster, then I'll take it. (I don't want to use a backported kernel on Debian.)
Should I also use a 256 KB block size on the RAID volume, as suggested in combination with ReFS?
The XFS block size is documented as 4 KB (mkfs.xfs -b size=4096 -m reflink=1,crc=1 /dev/sda).
Should I use XFS on top of LVM?
I hope I've covered all the questions that are needed.
I also saw the webinar with Docker, ZFS, and XFS, but that's way too many technologies in one backup solution for me.
-
- Influencer
- Posts: 11
- Liked: 1 time
- Joined: May 21, 2015 9:01 am
-
- VP, Product Management
- Posts: 6035
- Liked: 2860 times
- Joined: Jun 05, 2009 12:57 pm
- Full Name: Tom Sightler
- Contact:
Re: v11 and XFS Best Practise
You can use any distribution you are comfortable with, although it is highly recommended to use one of the distributions on the supported list; all other distros are supported in experimental status only, meaning they haven't been specifically tested by our QA. Basically, best practice = use the supported, QA-tested distro that you are the most experienced and comfortable with. But remember that support for OS issues comes from the OS vendor, so if you want to have someone to call when you have problems, you should use a distro with available support.
While it's true that Ubuntu 20.04 uses the 5.4 kernel, which has some additional optimizations around XFS, testing to this point has shown only a minimal difference for most real-world Veeam use cases. So unless you are planning to do something extreme, like keeping a large number of synthetic GFS points, there's probably not much difference overall and both will likely meet your performance requirements.
The Linux implementation of XFS is limited to a maximum block size of 4K (the XFS filesystem's on-disk structure technically allows larger block sizes, but the current XFS implementation on Linux limits the maximum block size to the system architecture's page size, which is 4K on x86-64). This should be no issue: several vendors state explicit support for filesystems up to 1PB in size using this block size, performance seems fine even with the smaller blocks, and smaller blocks give higher space efficiency than larger ones. However, the 4K filesystem block size has no impact on the RAID stripe size recommendation for use with Veeam backups. Veeam still writes in large blocks; the filesystem block size is just how granularly the filesystem tracks block allocation, nothing more.
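To make the above concrete, a minimal sketch of formatting the repository volume as described. This assumes /dev/sdb stands in for your RAID volume (adjust the device name before running anything; mkfs.xfs destroys existing data on the target):

```shell
# CAUTION: destructive. /dev/sdb is a placeholder for your RAID volume.
# 4K block size, CRC metadata, and reflink enabled (reflink is what
# Veeam's fast clone / block cloning relies on). Recent mkfs.xfs versions
# already default to bsize=4096 and crc=1, so only reflink=1 is strictly
# non-default on older distros.
mkfs.xfs -b size=4096 -m reflink=1,crc=1 /dev/sdb

# Verify the result: look for "bsize=4096" and "reflink=1" in the output.
xfs_info /dev/sdb
```

Note that reflink=1 requires crc=1, so the pair shown here is internally consistent.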
Using LVM is also completely up to you as the admin, as it works fine either way; from Veeam's side there's no specific best practice here beyond normal Linux best practice. If you need to manage disk expansion via LVM, use LVM; if you have a storage system that can expand existing LUNs and has no specific limitations, then you probably don't really need LVM. Personally, I always use LVM regardless, mostly because I'm super comfortable with it and it adds almost no overhead. If you use LVM and never need it, pretty much nothing is lost; but if you need LVM and didn't use it from the start, well, things are not quite so easy.
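For anyone going the LVM route, a sketch of the usual layering under the stated assumptions (device name /dev/sdb, volume group vg_backup, logical volume lv_repo, and mount point /mnt/repo are all placeholders):

```shell
# CAUTION: destructive; all names below are placeholders for your setup.
# Layer LVM under XFS so the repository can be grown later:
pvcreate /dev/sdb
vgcreate vg_backup /dev/sdb
lvcreate -n lv_repo -l 100%FREE vg_backup
mkfs.xfs -b size=4096 -m reflink=1,crc=1 /dev/vg_backup/lv_repo
mount /dev/vg_backup/lv_repo /mnt/repo

# Later, when the storage grows (e.g. the LUN behind /dev/sdb is expanded):
# pvresize /dev/sdb
# lvextend -l +100%FREE vg_backup/lv_repo
# xfs_growfs /mnt/repo     # XFS grows online, while mounted
```

The design point being illustrated: XFS cannot be shrunk, so the LVM layer is what buys you flexible growth later without rebuilding the repository.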
There are several terrific community blog posts regarding setting up a Linux hardened repo, I would read these and adapt to your use case.
https://www.starwindsoftware.com/blog/v ... ory-part-1
https://www.jdwallace.com/post/hardened-linux-repo
Re: v11 and XFS Best Practise
Hi
Thanks a lot for your inputs!