-
- Enthusiast
- Posts: 30
- Liked: 2 times
- Joined: Oct 02, 2019 5:52 pm
- Full Name: Al
- Location: Minnesota
- Contact:
Linux Hardened Repository block size
I am new to Linux and am working on setting up a Linux Hardened Repository to move my backups to. I have Ubuntu 22.04 installed and am at the point where I want to set up the data volume where the backup repository itself will live. In the past, with a Windows repository, I would format the volume with ReFS and a 64K block size. Is it correct to use XFS and format it with 64K here? If so, what are the correct commands to do this?
This is what the disk looks like currently:
Model: HPE LOGICAL VOLUME (scsi)
Disk /dev/sdb: 101TB
Sector size (logical/physical): 512B/4096B
Partition Table: unknown
-
- Product Manager
- Posts: 10278
- Liked: 2746 times
- Joined: May 13, 2017 4:51 pm
- Full Name: Fabian K.
- Location: Switzerland
- Contact:
Re: Linux Hardened Repository block size
Hi Buffalo
Use 4KB as documented in our userguide:
https://helpcenter.veeam.com/docs/backu ... positories
Code: Select all
mkfs.xfs -b size=4096 -m reflink=1,crc=1 /dev/sda1
Best,
Fabian
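To answer the "correct commands" part of the question without touching a real disk, here is a minimal sketch of the recommended 4 KiB format, run against a sparse image file instead of /dev/sdb1 (the image path and size are assumptions; on the actual repository you would point mkfs.xfs at the partition):

```shell
# Demonstrate the recommended 4 KiB XFS format safely on a sparse image file.
# On the real repository host you would run mkfs.xfs against /dev/sdb1 instead.
img=$(mktemp /tmp/xfs-demo.XXXXXX)
truncate -s 1G "$img"       # sparse 1 GiB file, consumes almost no real space

if command -v mkfs.xfs >/dev/null 2>&1; then
    # reflink=1 enables XFS block cloning (Veeam fast clone), crc=1 enables
    # metadata checksums (required for reflink).
    mkfs.xfs -q -f -b size=4096 -m reflink=1,crc=1 "$img"
    # Read the block size back from the superblock to confirm it.
    xfs_db -c 'sb 0' -c 'p blocksize' "$img"
else
    echo "mkfs.xfs not installed (xfsprogs package)"
fi
rm -f "$img"
```

The same mkfs.xfs invocation from the user guide works unchanged on the real partition; only the target differs.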
Product Management Analyst @ Veeam Software
-
- Veeam Software
- Posts: 164
- Liked: 38 times
- Joined: Jul 28, 2022 12:57 pm
- Contact:
Re: Linux Hardened Repository block size
Hello,
4K is suitable for most usage; I would only go to 64K if you have a specific workload on this repo, like huge servers (10 TB+).
Bertrand / TAM EMEA
-
- Product Manager
- Posts: 10278
- Liked: 2746 times
- Joined: May 13, 2017 4:51 pm
- Full Name: Fabian K.
- Location: Switzerland
- Contact:
Re: Linux Hardened Repository block size
We don't support 64k. 4k is the maximum.
Best,
Fabian
Product Management Analyst @ Veeam Software
-
- Product Manager
- Posts: 10278
- Liked: 2746 times
- Joined: May 13, 2017 4:51 pm
- Full Name: Fabian K.
- Location: Switzerland
- Contact:
Re: Linux Hardened Repository block size
A short update:
We will check the 4k limitation from our user guide with QA and report back.
Best,
Fabian
Product Management Analyst @ Veeam Software
-
- Service Provider
- Posts: 130
- Liked: 27 times
- Joined: Apr 01, 2016 5:36 pm
- Full Name: Olivier
- Contact:
Re: Linux Hardened Repository block size
Hello,
mkfs.xfs command's manual page gives us a hint, particularly the option related to block size:
-b block_size_options
This option specifies the fundamental block size of the filesystem. The valid block_size_options are: log=value or size=value, and only one can be supplied. The block size is specified either as a base two logarithm value with log=, or in bytes with size=. The default value is 4096 bytes (4 KiB), the minimum is 512, and the maximum is 65536 (64 KiB). XFS on Linux currently only supports pagesize or smaller blocks.
The term "pagesize", often referred to as "PAGE_SIZE", is integral to the system's memory management. It is the size of a single memory page; the standard size on x86 architecture is 4 KiB. You can check the system's page size with the command:
Code: Select all
getconf PAGE_SIZE
Setting a block size larger than the page size will leave you with a filesystem that fails to mount.
Hope it helps.
Oli
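Following Oli's point, the constraint is easy to check before formatting. A small pre-flight sketch (the 4096 value is the block size recommended in this thread):

```shell
# Verify the planned XFS block size against the kernel page size before
# running mkfs.xfs; XFS on Linux only supports pagesize or smaller blocks.
block_size=4096                    # value recommended in this thread
page_size=$(getconf PAGE_SIZE)     # typically 4096 on x86_64

if [ "$block_size" -le "$page_size" ]; then
    echo "ok: block size ${block_size} <= page size ${page_size}, mountable"
else
    echo "warning: block size ${block_size} > page size ${page_size}, will not mount"
fi
```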
-
- Product Manager
- Posts: 10278
- Liked: 2746 times
- Joined: May 13, 2017 4:51 pm
- Full Name: Fabian K.
- Location: Switzerland
- Contact:
Re: Linux Hardened Repository block size
Hi Oli
Thank you for providing an explanation.
I also got confirmation last week from our QA team. The maximum supported block size for the XFS filesystem is 4K.
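For anyone who wants to double-check what an already-formatted repository volume uses, the block size can be read back from a mounted filesystem (the mount point below is a placeholder; with xfsprogs installed, `xfs_info <mount point>` shows the same value as bsize=):

```shell
# Report the fundamental block size of a mounted filesystem.
# Replace / with your repository mount point, e.g. /mnt/backup (placeholder).
mount_point=/
stat -f -c 'block size: %S' "$mount_point"
```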
Best,
Fabian
Product Management Analyst @ Veeam Software
-
- Enthusiast
- Posts: 30
- Liked: 2 times
- Joined: Oct 02, 2019 5:52 pm
- Full Name: Al
- Location: Minnesota
- Contact:
Re: Linux Hardened Repository block size
Thanks for extended analysis and details!
-
- Novice
- Posts: 6
- Liked: never
- Joined: Jun 09, 2016 1:45 am
- Full Name: Josh
Re: Linux Hardened Repository block size
I'm just setting this up for the first time too. Can anyone please check the config and give some feedback? Thank you in advance.


-
- Influencer
- Posts: 17
- Liked: 3 times
- Joined: Jan 25, 2019 2:35 pm
- Contact:
[MERGED] XFS deduplication tunning (duperemove)
Hello,
I'm looking to run duperemove on an XFS repository created with fast clone (mkfs.xfs -b size=4096 -m reflink=1,crc=1 /dev/sda1), as explained here: https://helpcenter.veeam.com/docs/backu ... ml?ver=120
I have some doubts about the block size value.
duperemove -hdr -b 512K --hashfile=/root/SDA1COPY-files.hash /SDA1COPY
In accordance with some research, 512K should be a good value. Can any guru confirm?
Thank you for sharing your knowledge.
Regards,
Eric
Leading Technology
-
- Product Manager
- Posts: 10278
- Liked: 2746 times
- Joined: May 13, 2017 4:51 pm
- Full Name: Fabian K.
- Location: Switzerland
- Contact:
Re: Linux Hardened Repository block size
Hi Eric
I moved your topic to a similar one.
Maximum supported block size is 4KB. You can't go higher (assuming you meant 512KB).
Best,
Fabian
Product Management Analyst @ Veeam Software
-
- Influencer
- Posts: 17
- Liked: 3 times
- Joined: Jan 25, 2019 2:35 pm
- Contact:
Re: Linux Hardened Repository block size
Hello Mildur,
Thank you for your reply.
In accordance with this post: post361622.html, I have observed very good performance with 512 KB (thanks to "low" fragmentation).
I'm also evaluating VDO compression and deduplication. I will post results soon.
Leading Technology
-
- Veeam Software
- Posts: 6188
- Liked: 1978 times
- Joined: Jul 26, 2009 3:39 pm
- Full Name: Luca Dell'Oca
- Location: Varese, Italy
- Contact:
Re: Linux Hardened Repository block size
I had some feedback regarding VDO from a customer who tested it last summer. Maybe things have changed, but at that time they decided to move on without it because, while the space savings were indeed great, they came at the expense of significant operational overhead.
Quoting them anonymously: "you have to calculate the virtual size of the repo yourself at the point of creation based upon best guess as to the dedupe ratio you expect to see. In their testing this then meant there were scenarios where you could fill the filesystem whilst the underlying VDO disk had space remaining, or filling the VDO whilst the filesystem reported free space available. As the expected dedupe ratio could change as new and different VMs are added, it would mean re-calculating the ratio frequently. The process to then change the logical volume size then needed to be carried out, whilst increasing filesystem sizes could be done, reducing them based on a smaller dedupe ratio couldn’t be done simply. So we decided VDO would have too much of a management overhead in a changing production environment to be useful."
Again, things may have changed in these last 6 months, but better be safe.
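The sizing problem the customer describes can at least be made explicit. A sketch of the arithmetic only; the 2:1 ratio is purely an assumed guess, and the lvcreate flags in the comment follow the LVM-VDO documentation, so verify them against your distro before use:

```shell
# Estimate the VDO logical (virtual) size from physical capacity and an
# *assumed* dedupe ratio. The ratio is a guess that must be revisited as
# the workload changes -- exactly the overhead described in the post above.
physical_gib=102400        # ~100 TiB of real disk
expected_ratio=2           # assumption: 2:1 dedupe savings

logical_gib=$((physical_gib * expected_ratio))
echo "logical size: ${logical_gib} GiB"
# e.g. lvcreate --type vdo -L ${physical_gib}G -V ${logical_gib}G -n vdo0 vg_backup
```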
Luca Dell'Oca
Principal EMEA Cloud Architect @ Veeam Software
@dellock6
https://www.virtualtothecore.com/
vExpert 2011 -> 2022
Veeam VMCE #1
-
- Influencer
- Posts: 17
- Liked: 3 times
- Joined: Jan 25, 2019 2:35 pm
- Contact:
Re: Linux Hardened Repository block size
I got good deduplication ratio with this parameters:
duperemove -drh -b 64K --hashfile=/SSD/COPY-1-b64.hash --dedupe-options=partial /COPY-1/
I'm looking for very long-term retention, so I prioritized deduplication over restoration speed. If you need to keep good restoration speed, use "-b 512K" as explained above.
If immutability is set, you have to remove it first: "chattr -RV -i /COPY-1/"
I'm looking for a script to restore immutability (immutability settings are stored in a hidden .veeam.xxx.lock file in the job's directory).
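The remove-dedupe-restore sequence above can be wrapped in a script. This is a sketch only, dry-run by default: the repo path and hash-file location are assumptions, and blindly re-applying +i recursively ignores the per-file expiry recorded in Veeam's .veeam.xxx.lock files, so treat the last step as a placeholder rather than a real immutability restore:

```shell
# Hypothetical wrapper around the sequence above. With DRY_RUN=1 (the
# default) it only prints the commands; set DRY_RUN=0 to execute (needs root).
dedupe_repo() {
    repo=$1
    run() {
        if [ "${DRY_RUN:-1}" = 1 ]; then echo "would run: $*"; else "$@"; fi
    }
    run chattr -R -i "$repo"                   # lift immutability
    run duperemove -drh -b 64K --hashfile=/root/repo.hash \
        --dedupe-options=partial "$repo"       # dedupe pass from the post above
    run chattr -R +i "$repo"                   # crude re-apply; see caveat above
}

dedupe_repo /COPY-1/
```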
Leading Technology
-
- Influencer
- Posts: 17
- Liked: 3 times
- Joined: Jan 25, 2019 2:35 pm
- Contact:
Re: Linux Hardened Repository block size
Quick feedback after a few months
VDO offers a better deduplication ratio than duperemove.
The ratio will depend on your environment, but in a mixed lab (hundreds of TB, mixed data) it is about 2x for VDO vs duperemove.
Hope it helps
Leading Technology