Interesting. I found:
https://linux-xfs.oss.sgi.narkive.com/z ... ent-per-tb
I ran this on our ~490 TB filesystem, filled with 515 TB of data:
xfs_repair -n -vv -m 1 /dev/mapper/veeamxfs-xfslv
Phase 1 - find and verify superblock...
- reporting progress in intervals of 15 minutes
- max_mem = 1024, icount = 41728, imem = 163, dblock = 129018783744, dmem = 62997453
Required memory for repair is greater that the maximum specified
with the -m option. Please increase it to at least 61569.
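As far as I can tell (the units are not spelled out in the output itself), dmem and imem are reported in KiB, and the -m value is in megabytes per the man page, so both numbers line up at roughly 60 GiB:

echo $((62997453 / 1024 / 1024))   # dmem in KiB -> 60 (GiB)
echo $((61569 / 1024))             # suggested -m value in MB -> 60 (GiB)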
According to xfs.org:
"The numbers reported by xfs_repair are the absolute minimum required and approximate at that; more RAM than this may be required to complete successfully. Also, if you only give xfs_repair the minimum required RAM, it will be slow; for best repair performance, the more RAM you can give it the better."
So it needs roughly 60 GB as an absolute minimum. The question is: will that number grow with the amount of block-cloned data, or does it depend purely on what percentage of the filesystem is used?
We were planning to use an old server with 128 GB of RAM for our 620 TB production repo, which has much more block cloning, so if the requirement depends on the amount of block-cloned data we will need to bump that up.
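Back-of-the-envelope, and purely my own inference from those numbers rather than anything documented: the reported values match dmem = dblock / 2048 and imem = icount / 256 (both in KiB) exactly, which would mean the estimate depends on filesystem size and inode count, not on how much of the data is block cloned:

echo $((129018783744 / 2048))                        # -> 62997453, matches dmem
echo $((41728 / 256))                                # -> 163, matches imem
echo $((620 * 10**12 / 4096 / 2048 / 1024 / 1024))   # 620 TB at 4 KiB blocks -> ~70 GiB

If that holds, 128 GB would clear the minimum for the 620 TB repo with some headroom, though per the xfs.org note above, extra RAM mainly buys repair speed.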
We will now test it without -m 1, so that it actually performs the full check.