-
- Veeam Legend
- Posts: 1199
- Liked: 415 times
- Joined: Dec 17, 2015 7:17 am
- Contact:
ReFS vs. XFS - a small real world synthetic performance comparison
Last week @tsightler asked me if I had any real-world performance comparisons between ReFS and XFS. While our ReFS and XFS systems are not entirely identical, the good thing is that we have some backups whose source and backup copy destination sit on quite similar storage. So at least we can compare synthetic performance!
What we have:
- One ReFS repo server with 5 ReFS volumes (about 200-280 TB per volume)
- Multiple XFS repos; the biggest is one server with a single 843 TB XFS volume
We checked the synthetic full times for two kinds of backups:
The backup with the most incremental changes (only 5 VMs in this job):
ReFS Synthetic times (last 3 runs): 41:13, 54:44, 72:15
XFS Synthetic times (last 3 runs): 10:31, 11:36, 18:28
The backup with the most objects (1,831 VMs in one job):
ReFS Synthetic times 1831 VM: 222:12, 223:13, 197:37
XFS Synthetic times 1831 VM: 47:12, 48:27, 68:51
The second backup job uses ALL of our ReFS backend storage. ReFS can use ~320 rotating disks, XFS only ~160, so XFS is at a severe disadvantage here.
Looking at it like this, XFS is extremely impressive! It gets even more impressive when we look at the server specs:
ReFS: Dell R7525 with the largest 64-core AMD CPUs available (2x AMD EPYC™ 7H12), 1 TB RAM, Windows Server 2022 (the Veeam backup server is also installed, but this should not impact the system much)
XFS: An old PowerEdge R730 with 8 cores total and 128 GB RAM, Ubuntu 20.04
For us, XFS is the future for all block repositories - we have never had any issue with it whatsoever!
Markus
-
- Chief Product Officer
- Posts: 31753
- Liked: 7258 times
- Joined: Jan 01, 2006 1:01 am
- Location: Baar, Switzerland
- Contact:
Re: ReFS vs. XFS - a small real world synthetic performance comparison
XFS is doing pretty well for a 20th-century file system that turns 25 years old next year... oh wait, maybe that's the actual reason why?
-
- Veeam Legend
- Posts: 1199
- Liked: 415 times
- Joined: Dec 17, 2015 7:17 am
- Contact:
Re: ReFS vs. XFS - a small real world synthetic performance comparison
But isn't XFS reflink a rather new feature (2017 or so)?
-
- Chief Product Officer
- Posts: 31753
- Liked: 7258 times
- Joined: Jan 01, 2006 1:01 am
- Location: Baar, Switzerland
- Contact:
Re: ReFS vs. XFS - a small real world synthetic performance comparison
Indeed; however, as both of us remember, almost all of the ReFS struggles were elsewhere: block cloning merely triggered/highlighted various architectural issues and bugs, sometimes even outside ReFS. The worst issue, which also took the longest to solve, was actually with file system metadata handling by the OS, if I remember correctly.
So those 20 years of prior polish certainly helped XFS to adopt reflink quickly and painlessly. Maturity is practically an unfair advantage one can't buy... Veeam really enjoys the same benefits now: our code has seen it all in those 1M+ environments and is thus ready for pretty much anything!
-
- Novice
- Posts: 6
- Liked: never
- Joined: May 21, 2023 4:36 pm
- Contact:
Re: ReFS vs. XFS - a small real world synthetic performance comparison
Obviously a newish Veeam customer here, but is that a function of Veeam code or truly just the file system? Not a huge fan of ReFS after the past few years, but I've got the systems I've got. It seems like Veeam is much more focused on other arenas. I do understand the business case.
-
- Product Manager
- Posts: 9793
- Liked: 2585 times
- Joined: May 13, 2017 4:51 pm
- Full Name: Fabian K.
- Location: Switzerland
- Contact:
Re: ReFS vs. XFS - a small real world synthetic performance comparison
It is an API/feature the file system provides.
Description for ReFS:
https://learn.microsoft.com/en-us/windo ... ck-cloning
Veeam can leverage that API for different operations:
https://helpcenter.veeam.com/docs/backu ... ml?ver=120
Best,
Fabian
Product Management Analyst @ Veeam Software
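For anyone curious what that API looks like on the Linux side: on XFS, the counterpart of ReFS block cloning is the reflink clone ioctl (FICLONE / FICLONERANGE). Below is a minimal Python sketch that clones a whole file via FICLONE, assuming a Linux host and an XFS volume created with reflink enabled; the file paths are made up for illustration, and Veeam clones individual block ranges rather than whole files, but the underlying mechanism is the same.

```python
import fcntl
import os

# Linux FICLONE ioctl: clone all data blocks of the source file into the
# destination file. This is what `cp --reflink=always` uses under the hood.
FICLONE = 0x40049409

def reflink_clone(src_path: str, dst_path: str) -> None:
    """Create a space-efficient clone of src_path at dst_path.

    Both files must be on the same reflink-capable file system (e.g. XFS
    created with `mkfs.xfs -m reflink=1`). Only metadata is written; data
    blocks stay shared until either copy is modified.
    """
    src_fd = os.open(src_path, os.O_RDONLY)
    try:
        dst_fd = os.open(dst_path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o600)
        try:
            fcntl.ioctl(dst_fd, FICLONE, src_fd)
        finally:
            os.close(dst_fd)
    finally:
        os.close(src_fd)

if __name__ == "__main__":
    # Hypothetical paths; point them at a reflink-capable mount to try it out.
    reflink_clone("/backups/full.vbk", "/backups/clone.vbk")
```

This is why a synthetic full on a reflink-capable repository is mostly a metadata operation: the new full references existing blocks instead of rewriting them.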
-
- Enthusiast
- Posts: 57
- Liked: 5 times
- Joined: Jun 25, 2018 3:41 am
- Contact:
Re: ReFS vs. XFS - a small real world synthetic performance comparison
How big are these 5 VMs? Your synthetic times seem very long for just 5 VMs.
mkretzer wrote: ↑May 18, 2023 2:07 pm
We checked the synthetic time for two kinds of backups:
Backup with most incremental changes (only 5 VM in this job):
ReFS Synthetic times (last 3 runs): 41:13, 54:44, 72:15
XFS Synthetic times (last 3 runs): 10:31, 11:36, 18:28
Backup with most elements (1831 VM in one Job):
ReFS Synthetic times 1831 VM: 222:12, 223:13, 197:37
XFS Synthetic times 1831 VM: 47:12, 48:27, 68:51
The second backup job uses ALL our ReFS backend Storages. ReFS can use ~320 Rotating Disks, XFS only ~160, so it is at a severe disadvantage here.
-
- Influencer
- Posts: 11
- Liked: 4 times
- Joined: Feb 06, 2023 3:55 pm
- Contact:
Re: ReFS vs. XFS - a small real world synthetic performance comparison
I noticed something fairly similar with regards to performance.
Recently I needed to copy a backup chain from ReFS to XFS, and the chain (now on the XFS volume) then needed to be upgraded. This was around a 6 TB chain, and on XFS the upgrade took around 10 minutes or so.
I did the same recently from ReFS to another ReFS volume (again the chain needed to be upgraded); this time the chain was only 3 TB, yet it took over an hour.
-
- Veeam Legend
- Posts: 1199
- Liked: 415 times
- Joined: Dec 17, 2015 7:17 am
- Contact:
Re: ReFS vs. XFS - a small real world synthetic performance comparison
23 TB, but that's not the problem. The change rate is ~2.7 TB per day, so after 7 days in theory ~19 TB of that could have changed (most likely many of these changes hit the same locations, but the synthetic operation still has to go through all of that change information).
-
- Veeam Software
- Posts: 143
- Liked: 38 times
- Joined: Jul 28, 2022 12:57 pm
- Contact:
Re: ReFS vs. XFS - a small real world synthetic performance comparison
Hello,
Some numbers from the field, from production environments on V12 CP2 with an XFS SOBR (4 extents / 2 Apollo GEN4510):
Backup jobs, High Priority, Synthetic Full
900 VMs on an all-flash array (Hitachi)
Duration: ~2h
Processed: ~130 TB
Read (CBT): ~3 TB
Transferred: ~1 TB
Low bottleneck on proxy
Numbers seem OK from my POV.
Bertrand / TAM EMEA
-
- Service Provider
- Posts: 234
- Liked: 40 times
- Joined: Mar 08, 2010 4:05 pm
- Full Name: John Borhek
- Contact:
Re: ReFS vs. XFS - a small real world synthetic performance comparison
I can subjectively/observationally confirm these results. Across many different jobs and servers, XFS always seems to run much faster - plus it can be immutable!
-JB
John Borhek, Solutions Architect
https://vmsources.com
-
- Enthusiast
- Posts: 54
- Liked: 18 times
- Joined: Feb 02, 2015 1:51 pm
- Contact:
Re: ReFS vs. XFS - a small real world synthetic performance comparison
As someone who didn't want to learn how to run Windows Server without a GUI on the SAC releases just to get the latest ReFS patches, I'm glad I went the XFS way - especially seeing these performance numbers. Our incrementals even show "source" as the bottleneck, and that source is a Dorado 5000 V6 all-flash system, which laughs at almost anything I've thrown at it so far...
The plus side: I can sell immutability to the higher-ups.
The minus side: you can't have immutability and the proxy or tape service on the same hardware. A bummer if the box was originally sized for Windows (see above)...
-
- Enthusiast
- Posts: 57
- Liked: 5 times
- Joined: Jun 25, 2018 3:41 am
- Contact:
Re: ReFS vs. XFS - a small real world synthetic performance comparison
I should look into XFS more.
bct44 wrote: ↑May 22, 2023 8:32 am
Hello,
Some numbers from the field on production environments in V12 CP2 on xfs SOBR (4 extents/ 2 Apollo GEN4510):
Backup jobs, High Priority, Synthetic Full
900 VMs on full flash array (hitachi)
Duration 2h
Processed 130TB~
Read (cbt): 3TB~
Transferred: 1TB~
Low bottleneck on Proxy
Numbers seem ok from my POV
I checked one of my ReFS synthetics from last month and it seems OK; I've been trying to make it faster:
Processed: 48.4TB
Read: 2.7TB
Transferred: 1.4TB
Synthetic full backup created successfully [fast clone] in 39:38, with my target (flash array) apparently being the bottleneck.
-
- Influencer
- Posts: 23
- Liked: 4 times
- Joined: Apr 16, 2015 11:25 am
- Full Name: Hauke Ihnen
- Contact:
Re: ReFS vs. XFS - a small real world synthetic performance comparison
We moved from ReFS to XFS too.
We now get much higher performance during backups and restores, especially on synthetic fulls; it uses much less disk space - and it's rock solid. Fire and forget - we have never had issues with XFS.
On the hardware side there is no need for expensive gear; old server hardware is good to go. Not even much memory is required.
A big downside of ReFS was the performance drop over the months: at the beginning it was fast too, but every week it dropped more and more.
-
- Novice
- Posts: 9
- Liked: 7 times
- Joined: Aug 18, 2016 6:16 pm
- Full Name: Bert
- Contact:
Re: ReFS vs. XFS - a small real world synthetic performance comparison
I'll stack one more confirmation for XFS on the pile. We switched for the immutability, but both the performance and the storage savings have been impressive. I thought the performance gain came from increasing the disk count in the new server, but reading this thread, the performance we've seen makes a lot more sense. The really nice part for us is the efficient space savings: with ReFS, synthetic fulls always seemed to take up more space than they should. XFS doesn't appear to struggle with that problem.
-
- Lurker
- Posts: 2
- Liked: never
- Joined: Feb 15, 2023 9:41 am
- Contact:
Re: ReFS vs. XFS - a small real world synthetic performance comparison
Did you uninstall Windows Defender and disable other file system filter drivers before running the benchmarks?
-
- Veeam Legend
- Posts: 1199
- Liked: 415 times
- Joined: Dec 17, 2015 7:17 am
- Contact:
Re: ReFS vs. XFS - a small real world synthetic performance comparison
We have AV on all our systems (Linux and Windows), but not Defender.
-
- Veeam Legend
- Posts: 403
- Liked: 231 times
- Joined: Apr 11, 2023 1:18 pm
- Full Name: Tyler Jurgens
- Contact:
Re: ReFS vs. XFS - a small real world synthetic performance comparison
How were the ReFS and XFS volumes built? (RAID 5/6/10, other?)
Tyler Jurgens
Veeam Legend x3 | vExpert ** | VMCE | VCP 2020 | Tanzu Vanguard | VUG Canada Leader | VMUG Calgary Leader
Blog: https://explosive.cloud
Twitter: @Tyler_Jurgens BlueSky: @explosive.cloud
-
- Influencer
- Posts: 16
- Liked: 8 times
- Joined: Apr 26, 2021 3:18 pm
- Contact:
Re: ReFS vs. XFS - a small real world synthetic performance comparison
I'm curious about this as well. We're looking to migrate from NTFS to XFS and are currently using a RAID 5 array.
-
- Veeam Legend
- Posts: 1199
- Liked: 415 times
- Joined: Dec 17, 2015 7:17 am
- Contact:
Re: ReFS vs. XFS - a small real world synthetic performance comparison
We use RAID 6 (14+2) exclusively. The Windows system uses dynamic volumes to aggregate the RAID volumes into multiple large file systems.
Linux is much more flexible: LVM takes the RAID volumes and puts them all into one big file system. As an optimization we always stripe over 4 RAID sets (-i 4), which increases performance further but still makes it possible to replace the RAID sets without downtime (I believe we have to replace a minimum of 4 RAID sets at a time because of the striping).
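For readers who want to reproduce this kind of layout, here is a rough sketch (not the poster's actual commands) that aggregates four hardware RAID volumes with LVM, stripes the logical volume over all four (the "-i 4" mentioned above), and formats it with the XFS options commonly recommended for Veeam repositories (reflink enabled, 4 KB blocks). The device names, volume group, and mount point are placeholders.

```python
import subprocess

# Placeholder devices: in the setup described above these would be the four
# hardware RAID 6 (14+2) virtual disks presented by the controller.
RAID_DEVICES = ["/dev/sdb", "/dev/sdc", "/dev/sdd", "/dev/sde"]
VG, LV, MOUNT = "vg_backup", "lv_backup", "/mnt/backup"

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# 1. Put all RAID volumes into one LVM volume group.
run(["pvcreate", *RAID_DEVICES])
run(["vgcreate", VG, *RAID_DEVICES])

# 2. One logical volume striped over the 4 RAID sets ("-i 4" as above).
run(["lvcreate", "-i", "4", "-l", "100%FREE", "-n", LV, VG])

# 3. XFS with reflink enabled (needed for fast clone) and 4 KB block size.
run(["mkfs.xfs", "-b", "size=4096", "-m", "reflink=1,crc=1", f"/dev/{VG}/{LV}"])

# 4. Mount it (add a matching /etc/fstab entry to make it persistent).
run(["mkdir", "-p", MOUNT])
run(["mount", f"/dev/{VG}/{LV}", MOUNT])
```

The stripe count passed with -i should match the number of underlying RAID sets, which lines up with the replacement constraint mentioned above.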
-
- Influencer
- Posts: 23
- Liked: 4 times
- Joined: Apr 16, 2015 11:25 am
- Full Name: Hauke Ihnen
- Contact:
Re: ReFS vs. XFS - a small real world synthetic performance comparison
For us: all backup storage devices use RAID 6 with WD Gold hard disks. No LVM or partitions - XFS directly on the RAID volume. KISS.
tjurgens-s2d wrote: ↑May 24, 2023 5:15 pm
How were the ReFS and XFS volumes built? (RAID 5/6/10, other?)
-
- Novice
- Posts: 6
- Liked: never
- Joined: May 21, 2023 4:36 pm
- Contact:
Re: ReFS vs. XFS - a small real world synthetic performance comparison
Then shouldn't ReFS be better with those newer ReFS APIs? 100 TB RAID 10 on spinning rust here BTW, nothing crazy.
Mildur wrote: ↑May 21, 2023 7:17 pm
It is an API/feature the file system provides.
Description for ReFS:
https://learn.microsoft.com/en-us/windo ... ck-cloning
Veeam can leverage that API for different operations:
https://helpcenter.veeam.com/docs/backu ... ml?ver=120
Best,
Fabian
-
- Veeam Legend
- Posts: 403
- Liked: 231 times
- Joined: Apr 11, 2023 1:18 pm
- Full Name: Tyler Jurgens
- Contact:
Re: ReFS vs. XFS - a small real world synthetic performance comparison
I was curious what the OP was using, because different RAID levels can have a drastic impact on performance.
When we were testing a new block storage device setup (all else being equal), we found hardware (LSI) RAID 10 gave the best performance in both reads and writes compared to RAID 60. As a service provider ingesting backups from many customers, we needed performance to be as good as possible, even if the redundancy isn't as good as RAID 60.
XFS did outperform ReFS in our cases as well. Additionally, if you aren't following Microsoft's hardware compatibility lists for OS + RAID controller combinations, you can have a bad time(TM) with ReFS. We chose to move away from ReFS for multiple reasons; getting the added ability to use immutability was a huge win as well.
Tyler Jurgens
Veeam Legend x3 | vExpert ** | VMCE | VCP 2020 | Tanzu Vanguard | VUG Canada Leader | VMUG Calgary Leader
Blog: https://explosive.cloud
Twitter: @Tyler_Jurgens BlueSky: @explosive.cloud
-
- Service Provider
- Posts: 37
- Liked: 12 times
- Joined: May 19, 2021 1:40 pm
- Full Name: Francis Brodeur
- Contact:
Re: ReFS vs. XFS - a small real world synthetic performance comparison
XFS for the win!! In our experience, ReFS is only good for small customers with less than 10 TB and is not suited for large deployments.
-
- Novice
- Posts: 4
- Liked: never
- Joined: Feb 07, 2022 10:14 pm
- Contact:
Re: ReFS vs. XFS - a small real world synthetic performance comparison
To all those using ReFS: what erasure coding/RAID were you using? Write performance with ReFS on parity is well known to be quite poor (nearly an order of magnitude worse), and Microsoft pretty much recommends not using it for anything other than archives. They recommend 2x or 3x mirrors for production (or mirror-accelerated parity).
I'm about to set up a new system and am on the fence about XFS or ReFS. My initial XFS PoC was disappointing, but this thread has me reconsidering. The poor ReFS performance many are seeing would make a lot more sense if it's on parity, though.
-
- Veeam Legend
- Posts: 1199
- Liked: 415 times
- Joined: Dec 17, 2015 7:17 am
- Contact:
Re: ReFS vs. XFS - a small real world synthetic performance comparison
As stated previously, RAID 6(0) on all systems. XFS should be just as "bad" as ReFS in that regard - but it really isn't...
-
- Veeam ProPartner
- Posts: 59
- Liked: 40 times
- Joined: Jan 08, 2013 4:26 pm
- Full Name: Falk
- Location: Germany
- Contact:
Re: ReFS vs. XFS - a small real world synthetic performance comparison
My experience with RAID 6 (60) is not so bad.
tjurgens-s2d wrote: ↑May 25, 2023 2:58 pm
I was curious what the OP was using, because different RAID levels can have a drastic impact on performance.
When we were testing new block storage device setup (all else being equal), we found hardware (LSI) RAID 10 to be the best performance in both reads/writes compared to a RAID 60. As a service provider ingesting backups from many customers, we needed the performance to be as good as possible, even if the redundancy isn't as good as a RAID 60.
XFS did outperform ReFS in our cases as well. Additionally, if you aren't following Microsoft's hardware ACLs for OS + RAID controller compatibility, you can have a bad time(TM) when using ReFS. We chose to move away from ReFS for multiple reasons. Getting the added ability to utilize immutability was a huge win as well.
With current controllers (e.g. MegaRAID 95xx) and a battery-backed cache installed, write performance is also at about 80% of the RAID 0 level.
Without the battery-backed cache, write performance is of course very poor.
-
- Expert
- Posts: 214
- Liked: 60 times
- Joined: Feb 18, 2013 10:45 am
- Full Name: Stan G
- Contact:
Re: ReFS vs. XFS - a small real world synthetic performance comparison
Do you have any source saying ReFS should only be used for archives?
contrebombarde wrote: ↑May 29, 2023 4:50 am
... and Microsoft pretty much recommends not to use it other than for archives. ...
Because AFAIK they recommend it for Exchange databases, and I've read mixed opinions on it for Hyper-V virtual machine storage.
-
- Veeam ProPartner
- Posts: 59
- Liked: 40 times
- Joined: Jan 08, 2013 4:26 pm
- Full Name: Falk
- Location: Germany
- Contact:
Re: ReFS vs. XFS - a small real world synthetic performance comparison
ReFS is the recommended file system for Hyper-V, and in HCI environments with S2D it is the only supported file system.
ReFS with Hyper-V works perfectly, and I often see much better performance than with NTFS.
For backups, however, we have the comparison between ReFS and XFS, where XFS performs significantly better.
-
- Veeam Legend
- Posts: 403
- Liked: 231 times
- Joined: Apr 11, 2023 1:18 pm
- Full Name: Tyler Jurgens
- Contact:
Re: ReFS vs. XFS - a small real world synthetic performance comparison
SkyDiver79 wrote: ↑May 30, 2023 10:41 am
My experience with RAID 6 (60) is not so bad.
With current controllers (e.g. MegaRAID 95xx) and installed battery cache, write performance is also at about 80% of the RAID 0 level.
Without battery cache, the write performance is of course very poor.
We ran through a gamut of tests (which I no longer have access to since I have changed jobs) using the methods outlined here:
https://www.veeam.com/kb2014
The LSI "RAID 10" performed significantly faster.
In actuality it was a RAID 100. LSI can only do 8 RAID spans, but we had 36 disks in the enclosure. A RAID 10 has only 2 disks in each span, hence if we went pure RAID 10 we couldn't consume as many disks as we wanted to for a single logical disk (8x2 = 16, which leaves us either creating yet another logical disk, or doing something different, see below).
However, the way we made it work with the LSI card (98xx series) was to create 8 spans of 4 disks each. Essentially LSI would create two raid 10's within that 4 disk span, and then stripe that across all the disk spans (Eg: RAID 100). 4 hot spares per enclosure. Linux Hardened Repositories. We ran through tests using more disks per span and fewer spans as well, but the fewer disks per span, the better.
We could see performance of nearly 3 Gbps using 7200 RPM spinning disk. Unfortunately I don't remember the exact numbers for reads/writes, but it was in orders of magnitude faster than RAID 60 when we tested using the same methods outlined in that KB above.
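For anyone who wants to run a similar comparison, here is a rough, illustrative stand-in (not the exact methodology of the KB linked above): a short Python probe that measures sequential write throughput using direct I/O, so the RAID set is measured rather than the page cache. The mount point and sizes are placeholders; run it on an otherwise idle repository and compare the MiB/s across candidate RAID layouts.

```python
import mmap
import os
import time

PATH = "/mnt/backup/throughput-test.bin"   # hypothetical repository mount
BLOCK_SIZE = 4 * 1024 * 1024               # 4 MiB per write
TOTAL_BYTES = 8 * 1024 ** 3                # write 8 GiB in total

# An anonymous mmap is page-aligned, which O_DIRECT requires for its buffers.
buf = mmap.mmap(-1, BLOCK_SIZE)
buf.write(os.urandom(BLOCK_SIZE))

fd = os.open(PATH, os.O_WRONLY | os.O_CREAT | os.O_DIRECT, 0o600)
try:
    start = time.monotonic()
    written = 0
    while written < TOTAL_BYTES:
        written += os.write(fd, buf)   # sequential, cache-bypassing writes
    os.fsync(fd)
    elapsed = time.monotonic() - start
finally:
    os.close(fd)
    os.unlink(PATH)

print(f"sequential write: {written / elapsed / 1024 ** 2:.0f} MiB/s")
```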
Tyler Jurgens
Veeam Legend x3 | vExpert ** | VMCE | VCP 2020 | Tanzu Vanguard | VUG Canada Leader | VMUG Calgary Leader
Blog: https://explosive.cloud
Twitter: @Tyler_Jurgens BlueSky: @explosive.cloud