-
- Service Provider
- Posts: 171
- Liked: 13 times
- Joined: Jun 29, 2013 12:14 pm
- Full Name: Peter Enoch
- Contact:
REFS or NTFS?
Hi all,
Again we've got issues with ReFS.
We're running Windows Server 2016 with the latest Windows Updates. About one week ago the server (Dell PowerEdge R730XD) got a PSU error and rebooted.
After this, one of the two ReFS drives is shown as a RAW device (70 TB). I've tried many different recovery tools, but so far no luck. If you can recommend any tools, please write.
This is one of many times we've had issues with ReFS: lots of problems with the ReFS driver in the past, and now this.
I also have issues with using ReFS because it's currently impossible to recreate the block-cloned copy from another repository that has good data.
So I'm thinking there are two options for me: Windows 2016 with NTFS and dedupe enabled, or Windows 2019 with NTFS/dedupe or ReFS/dedupe. If ReFS with dedupe (Windows 2019) cannot use Fast Clone (block cloning), then NTFS with dedupe should be the best option?
Is Windows 2019 supported when Update 4 is released?
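For anyone who wants to script a quick check of what Windows actually detects on such a volume, here is a minimal Python sketch (the drive letter is an example). A volume that Disk Management shows as RAW typically fails GetVolumeInformationW with ERROR_UNRECOGNIZED_VOLUME; this only diagnoses the state, it does not repair anything.

import ctypes

kernel32 = ctypes.WinDLL("kernel32", use_last_error=True)
ERROR_UNRECOGNIZED_VOLUME = 1005  # the usual error for a "RAW" volume

def detect_filesystem(root: str) -> str:
    """Return the file system name Windows sees on `root`, e.g. 'ReFS' or 'NTFS'."""
    fs_name = ctypes.create_unicode_buffer(64)
    ok = kernel32.GetVolumeInformationW(
        root,               # root path, e.g. "D:\\"
        None, 0,            # volume label buffer (not needed here)
        None, None, None,   # serial number, max component length, FS flags
        fs_name, len(fs_name),
    )
    if ok:
        return fs_name.value
    if ctypes.get_last_error() == ERROR_UNRECOGNIZED_VOLUME:
        return "RAW (file system not recognized)"
    raise ctypes.WinError(ctypes.get_last_error())

if __name__ == "__main__":
    print(detect_filesystem("D:\\"))  # example drive letter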
-
- Veteran
- Posts: 3077
- Liked: 455 times
- Joined: Aug 07, 2018 3:11 pm
- Full Name: Fedor Maslov
- Contact:
Re: REFS or NTFS?
Hi Peter,
Currently I cannot recommend anything regarding the ReFS situation, but I will definitely check with the teams what a possible solution could be. Regarding Windows Server 2019: yes, it will be supported by B&R 9.5 Update 4.
Thanks
-
- Service Provider
- Posts: 171
- Liked: 13 times
- Joined: Jun 29, 2013 12:14 pm
- Full Name: Peter Enoch
- Contact:
Re: REFS or NTFS?
Thanks, looking forward to what the teams have to say.
If anyone else has a recommended way to recover the ReFS drive from RAW to a "normal" state, I'll be very happy.
-
- Enthusiast
- Posts: 52
- Liked: never
- Joined: Oct 28, 2015 9:36 pm
- Full Name: Joe Brancaleone
- Contact:
Re: REFS or NTFS?
I don't have any info on ReFS, but regarding NTFS with dedupe enabled: that caused a huge issue in one of our repositories with a large backup file (albeit on Windows 2012 R2 -- the full backup could not be merged successfully, IIRC, so a new backup chain had to be created). I don't know whether that limitation is resolved in Windows 2016/2019, but if anyone else knows...
-
- Enthusiast
- Posts: 52
- Liked: 2 times
- Joined: Sep 20, 2010 4:39 am
- Full Name: David Reimers
- Contact:
Re: REFS or NTFS?
My 2c worth, based on recent experience with ReFS. I'd been advised by a partner organisation that ReFS is now 'their standard filesystem for Veeam deployments', so I advised one of my clients to do the same when they were recently building a new Veeam server (50+ TB storage, Windows 2016).
Initially we found a noticeable improvement in backup speeds (write operations, fast clone natively enabled, reverse-incremental jobs, ESXi 6.5 U1, etc.). Comparable sites showed similar improvements.
Here's where it gets interesting. The client still uses Arcserve to back up VBK files to tape (as well as various ad hoc filesystem backups of other Windows servers). He does this because of the file-level restore issues with Veeam (known issues, documented elsewhere on the forums).
His initial backup speeds were very fast, 12+ GB/min, as expected for LTO7. All was running well until, several weeks later, he noticed that backup speeds had dropped to 6 GB/min - except for one Veeam backup chain which, after the end-of-week run, went back to the full 12 GB/min, but only for the initial run.
After some fruitless engagement with both Arcserve and Veeam support, the client asked me to look at the problem. Some initial research pointed to ReFS read speeds potentially degrading over time. We proved this was the case by moving an existing backup chain (i.e. VBK + VRBs) to a new NTFS-formatted volume on a new server and backing it up with the same LTO7 drive (re-attached to the new server). We got the full 12 GB/min.
I did some further research and discovered that, due to the way ReFS works, the underlying on-disk storage of the VBK files becomes fragmented, resulting in poor sequential read performance: ReFS moves the pointers rather than the blocks when doing reverse-incremental merge operations, so the blocks are no longer sequential as they were when first written.
The client bit the bullet and reformatted the ReFS volume as NTFS, and performance is now back to normal.
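If you want to verify this kind of fragmentation yourself, a rough Python sketch along these lines should work. Run it elevated; the path is hypothetical, and it assumes fsutil prints one "VCN: ... LCN: ..." line per extent, which is how recent Windows builds format it. A VBK spread over many thousands of small extents will read far slower sequentially than one held in a few large extents.

# Rough sketch: count the extents fsutil reports for a backup file.
# Many small extents = heavy fragmentation, which is what hurts
# sequential read speed after fast-clone / merge operations.
import subprocess

def count_extents(path: str) -> int:
    out = subprocess.run(
        ["fsutil", "file", "queryextents", path],
        capture_output=True, text=True, check=True,
    ).stdout
    # Assumes one "VCN: ... Clusters: ... LCN: ..." line per extent.
    return sum(1 for line in out.splitlines() if "VCN" in line.upper())

print(count_extents(r"E:\Backups\job.vbk"))  # hypothetical path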
-
- Expert
- Posts: 127
- Liked: 29 times
- Joined: Oct 10, 2014 2:06 pm
- Contact:
Re: REFS or NTFS?
We are in the process of moving back to NTFS for the very same reason. ReFS is just too unpredictable. We also saw very slow restore speeds over time because of block cloning, though we only restore every now and then, since we have a stable environment. But the worst issue is that we had ReFS corruption, and then a 3 TB VBK file got flagged as unavailable to the namespace: the file was not accessible anymore, the backup chain was corrupted, there are no tools available to fix ReFS issues, and the 3 TB was still marked as in use on the disk even though the file was no longer accessible.
https://social.technet.microsoft.com/Fo ... b379b1fd8b
https://social.technet.microsoft.com/Fo ... ba3992448a
MS has practically no documentation on ReFS troubleshooting, only on how good it should be. We'll move back to NTFS, which is much more mature. If we run into space issues we could use dedupe, and we have much more control that way - for example, dedupe only files that are over a week old, to keep the most recent backups quick.
No ReFS for us anymore until it matures!
-
- Expert
- Posts: 127
- Liked: 29 times
- Joined: Oct 10, 2014 2:06 pm
- Contact:
Re: REFS or NTFS?
I can't edit my post above, so I'm posting a new one. I want to add that we do NOT use Storage Spaces Direct, just a 'stand-alone' ReFS volume. I fully understand how a mirrored S2D setup, or one with one or more parities, can help repair corruption, but while 'everyone' says storage is cheap, for us as a very small company it isn't. So we run RAID6 on some proper hardware for our main and remote backup repositories.
Having said that, there are plenty of posts on Technet and all around the web about how stable S2D is and how many issues it has caused people. I understand it improves (or at least is supposed to) with each iteration, but so far I won't prefer any Microsoft software storage solution over our SANs, which are actually designed for the task.
-
- Expert
- Posts: 176
- Liked: 30 times
- Joined: Jul 26, 2018 8:04 pm
- Full Name: Eugene V
- Contact:
Re: REFS or NTFS?
DavidReimers wrote: ↑Jan 07, 2019 4:35 am
I did some further research and discovered that, due to the way ReFS works, the underlying on-disk storage of the VBK files becomes fragmented, resulting in poor sequential read performance: ReFS moves the pointers rather than the blocks when doing reverse-incremental merge operations, so the blocks are no longer sequential as they were when first written.
That's very interesting - what RAID level was used for this repository? This sounds very much like the documented performance of Data Domain and other dedupe appliances, which are hurt by random reads.
https://helpcenter.veeam.com/docs/backu ... tml?ver=95
Feature request: accelerated restore of fragmented ReFS volumes?
Dell EMC Data Domain storage systems are optimized for sequential I/O operations. However, data blocks of VM disks in backup files are stored not sequentially, but in random order. If data blocks of VM disks are read at random, the restore performance from backups on Dell EMC Data Domain degrades.
To accelerate the restore process, Veeam Backup & Replication creates a map of data blocks in backup files. It uses the created map to read data blocks of VM disks from backup files sequentially, as they reside on disk. Veeam Backup & Replication writes data blocks to target in the order in which they come from the target Veeam Data Mover, restoring several VM disks in parallel.
This accelerated restore mechanism is enabled by default, and is used for the entire VM restore scenario.
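To illustrate the idea, here is a sketch of the technique the quote describes - sort the block map by physical offset, read the backup file in one sequential sweep, and scatter the writes to their logical positions on the target. This is only an illustration; the structures and names are invented, not Veeam's actual implementation.

from dataclasses import dataclass
from typing import BinaryIO

@dataclass
class Block:
    logical_offset: int   # where the block belongs in the restored VM disk
    physical_offset: int  # where it actually sits inside the backup file
    length: int

def sequential_restore(blocks: list[Block], backup: BinaryIO, target: BinaryIO) -> None:
    # Read in physical (on-disk) order: the repository sees one sequential
    # sweep, even though the VM disk's logical layout is effectively random.
    for b in sorted(blocks, key=lambda b: b.physical_offset):
        backup.seek(b.physical_offset)
        data = backup.read(b.length)
        target.seek(b.logical_offset)  # scatter-write to the proper place
        target.write(data)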
-
- Expert
- Posts: 127
- Liked: 29 times
- Joined: Oct 10, 2014 2:06 pm
- Contact:
Re: REFS or NTFS?
I don't see how you'd want Veeam to improve that; a side effect of either deduplication or ReFS block cloning is that files that would have been sequential are now spread out over your physical disks, which means random I/O for your disks. That's not an issue with Veeam per se, just a side effect of the technology used. I guess the only thing to do is accelerate your storage with SSDs, which is quite costly (at least for our company).
Having said that, this is one of the reasons we ordered a 120 TB storage machine with relatively cheap SATA drives last week. We will revert to NTFS and keep things really sequential as long as we have enough space. And if not, we'll turn on dedupe, but only for files older than x days or weeks, so the most common restores are still quick. For us that's a lot cheaper than SSD storage (you'd need pretty much an all-flash array to keep dedupe / block cloning quick) AND we have the additional benefit of NTFS being much more mature, less error-prone, and having much more tooling available IF anything still goes wrong.
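For what it's worth, Windows Server Data Deduplication exposes exactly that knob. A minimal sketch, assuming the Deduplication feature is installed and using an example volume letter (Python driving the documented PowerShell cmdlets from an elevated prompt):

import subprocess

subprocess.run([
    "powershell", "-NoProfile", "-Command",
    # Enable dedupe on E: and skip files younger than 7 days,
    # so the most recent backups stay sequential on disk.
    "Enable-DedupVolume -Volume 'E:' -UsageType Default; "
    "Set-DedupVolume -Volume 'E:' -MinimumFileAgeDays 7",
], check=True)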
-
- Influencer
- Posts: 15
- Liked: 5 times
- Joined: Feb 29, 2016 5:16 pm
- Full Name: Daniel Farrelly
- Contact:
Re: REFS or NTFS?
We use all-flash RAID10 NTFS for our primary backup repos, and spinning-disk RAID5 ReFS for backup copies. Synthetic fulls routinely saturate the 10 GbE network.
-
- Expert
- Posts: 127
- Liked: 29 times
- Joined: Oct 10, 2014 2:06 pm
- Contact:
Re: REFS or NTFS?
Well, then you are very lucky your company has that money to spend! I certainly want to, but we just don't have it.
Off-topic, but are you sure it's RAID5? That's pretty dangerous on its own on big arrays with today's disk sizes.
-
- Enthusiast
- Posts: 64
- Liked: 10 times
- Joined: May 15, 2014 3:29 pm
- Full Name: Peter Yasuda
- Contact:
Re: REFS or NTFS?
RGijsen wrote: ↑Jan 07, 2019 1:37 pm
I don't see how you'd want Veeam to improve that; a side effect of either deduplication or ReFS block cloning is that files that would have been sequential are now spread out over your physical disks, which means random I/O for your disks. That's not an issue with Veeam per se, just a side effect of the technology used. I guess the only thing to do is accelerate your storage with SSDs, which is quite costly (at least for our company).
Compacting the full backup file might be an option: https://helpcenter.veeam.com/docs/backu ... tml?ver=95
There are limitations, and the article doesn't address ReFS specifically, but if it doesn't use block cloning, then it would defrag the full backup file. Maybe someone here knows whether block cloning is used?
-
- Expert
- Posts: 176
- Liked: 30 times
- Joined: Jul 26, 2018 8:04 pm
- Full Name: Eugene V
- Contact:
Re: REFS or NTFS?
I am not a filesystems engineer or a data-structures/algorithms expert, but the Data Domain description sounds good to me, assuming it is not already happening for fragmented ReFS repositories:
To accelerate the restore process, Veeam Backup & Replication creates a map of data blocks in backup files. It uses the created map to read data blocks of VM disks from backup files sequentially, as they reside on disk.
So reading sequentially as the blocks physically sit on disk, rather than sequentially as reported by the file (which is random in the fragmented scenario), sounds pretty good to me.
-
- Product Manager
- Posts: 6551
- Liked: 765 times
- Joined: May 19, 2015 1:46 pm
- Contact:
Re: REFS or NTFS?
Maybe someone here knows whether block cloning is used?
Yes, compact operations can leverage Fast Clone:
<...> Operations with Fast Clone
Veeam Backup & Replication leverages Fast Clone for the following synthetic operations:
In backup jobs:
<...>
Compact of full backup file <...>
Thanks
-
- Enthusiast
- Posts: 52
- Liked: 2 times
- Joined: Sep 20, 2010 4:39 am
- Full Name: David Reimers
- Contact:
Re: REFS or NTFS?
evilaedmin wrote: ↑Jan 07, 2019 1:16 pm
That's very interesting - what RAID level was used for this repository? This sounds very much like the documented performance of Data Domain and other dedupe appliances, which are hurt by random reads.
RAID6 with a pretty hefty HPE RAID controller (1 GB flash-backed cache) and 6 TB drives.
-
- Enthusiast
- Posts: 52
- Liked: 2 times
- Joined: Sep 20, 2010 4:39 am
- Full Name: David Reimers
- Contact:
Re: REFS or NTFS?
To accelerate the restore process, Veeam Backup & Replication creates a map of data blocks in backup files. It uses the created map to read data blocks of VM disks from backup files sequentially, as they reside on disk. Veeam Backup & Replication writes data blocks to target in the order in which they come from the target Veeam Data Mover, restoring several VM disks in parallel.
Which explains why the problem doesn't affect Veeam but does affect other applications reading the same data. In my customer's case, he's using Arcserve to back up the VBK files, so he's the one hitting the poor random-read performance.
-
- Veteran
- Posts: 391
- Liked: 56 times
- Joined: Feb 03, 2017 2:34 pm
- Full Name: MikeO
- Contact:
-
- Influencer
- Posts: 13
- Liked: 4 times
- Joined: Nov 01, 2018 8:32 pm
- Contact:
Re: REFS or NTFS?
If you want to use deduplication, either with ReFS or NTFS, you have to limit your volume size to 64 TB.
So far we are successfully using ReFS with dedupe on Windows 2019, with no issues whatsoever.
-
- Product Manager
- Posts: 6551
- Liked: 765 times
- Joined: May 19, 2015 1:46 pm
- Contact:
Re: REFS or NTFS?
If you want to use deduplication, either with ReFS or NTFS, you have to limit your volume size to 64 TB
...and you should also keep an eye on the size of your backup files:
Deduplication is not supported on:
System or boot volumes
<...>
Files approaching or larger than 1 TB in size.
<...>
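A trivial sketch of that check: walk a repository and flag any files approaching the 1 TB mark from the quoted limitation. The threshold and repository path are examples.

import os

LIMIT = 1 * 1024**4  # 1 TiB

def flag_large_files(repo: str, threshold: float = 0.9) -> None:
    """Print backup files at or above `threshold` of the 1 TiB dedupe limit."""
    for dirpath, _, names in os.walk(repo):
        for name in names:
            full = os.path.join(dirpath, name)
            size = os.path.getsize(full)
            if size >= threshold * LIMIT:
                print(f"{full}: {size / 1024**4:.2f} TiB - may dedupe poorly")

flag_large_files(r"E:\Backups")  # hypothetical repository path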
-
- Influencer
- Posts: 13
- Liked: 4 times
- Joined: Nov 01, 2018 8:32 pm
- Contact:
Re: REFS or NTFS?
We have oversized backup files (7.4 TB) that deduped okay on ReFS (3.42 TB on disk), so it doesn't seem to be an issue. However, deduping files >1 TB may be slower than files <1 TB, and the dedupe ratio may be lower.
-
- Veteran
- Posts: 528
- Liked: 144 times
- Joined: Aug 20, 2015 9:30 pm
- Contact:
Re: REFS or NTFS?
If the file is over 1TB, only the first 1TB of data within the file gets deduplicated.
-
- Service Provider
- Posts: 372
- Liked: 120 times
- Joined: Nov 25, 2016 1:56 pm
- Full Name: Mihkel Soomere
- Contact:
Re: REFS or NTFS?
This is incorrect: since WS2016 the first 4 TB get deduplicated (unlimited in 2012 and 2012 R2).
The limit may have been raised in WS2019 (ReFS only?) if Frenchyaz's numbers are correct. Please check the properties of your backup files: are any of them larger than 4 TB? In that case, the size on disk should be smaller than (size minus 4 TB).
I haven't had time to test this myself yet.
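A small Python sketch of that check, assuming GetCompressedFileSizeW reflects the allocated size (it does for compressed and sparse files, and in practice Explorer's "Size on disk" for deduplicated files comes from the same place); the path is hypothetical:

import ctypes
import os
from ctypes import wintypes

kernel32 = ctypes.WinDLL("kernel32", use_last_error=True)
kernel32.GetCompressedFileSizeW.restype = wintypes.DWORD

def size_on_disk(path: str) -> int:
    """Allocated size on disk, combining the high and low DWORDs."""
    high = wintypes.DWORD(0)
    low = kernel32.GetCompressedFileSizeW(path, ctypes.byref(high))
    if low == 0xFFFFFFFF and ctypes.get_last_error() != 0:
        raise ctypes.WinError(ctypes.get_last_error())
    return (high.value << 32) | low

path = r"E:\Backups\job.vbk"  # hypothetical backup file path
logical = os.path.getsize(path)
print(f"size: {logical / 1024**4:.2f} TiB, "
      f"on disk: {size_on_disk(path) / 1024**4:.2f} TiB")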