-
- Expert
- Posts: 158
- Liked: 8 times
- Joined: Jul 23, 2011 12:35 am
ReFS for repository, best practice?
I started reading through several very long threads on this forum about using ReFS for a Veeam repository, but I had a hard time gleaning current best practice. I have a Nimble CS300 array that I will use to present a LUN to our physical proxy server, which is running Windows Server 2016 build 1607.
- When I go to mount and format this LUN on the Windows server, what cluster size should I choose?
- We currently back up to a LUN presented to this same Windows 2016 server, but that LUN is formatted NTFS. Are there different settings within Veeam I should be using if the target is ReFS as opposed to NTFS?
- Lastly, is ReFS ready for production? Stable?
-
- Product Manager
- Posts: 8044
- Liked: 1263 times
- Joined: Feb 08, 2013 3:08 pm
- Full Name: Mike Resseler
- Location: Belgium
- Contact:
Re: ReFS for repository, best practice?
Hi HendersonD,
1. We advise 64K as the cluster size.
2. No. Once you create a repository on that ReFS LUN, Veeam will detect it and use it where possible.
3. Yes and no. Yes: with all patches applied, enough RAM/CPU on the server, and not petabytes of data. Microsoft is still working on some fixes that are yet to come (no idea when), but the major issues seem to be gone.
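To see one reason behind the 64K recommendation: ReFS tracks files in clusters, and a larger cluster means far fewer clusters (and far less metadata) to manage for multi-hundred-gigabyte backup files. A minimal sketch, with a hypothetical 500 GB file size chosen purely for illustration:

```python
def clusters_needed(file_size_bytes: int, cluster_bytes: int) -> int:
    """Whole clusters the file system must allocate (ceiling division)."""
    return -(-file_size_bytes // cluster_bytes)

backup_file = 500 * 1024**3  # hypothetical 500 GB full backup file

for cluster_kb in (4, 64):
    n = clusters_needed(backup_file, cluster_kb * 1024)
    print(f"{cluster_kb}K clusters: {n:,} clusters to track")
```

At 4K clusters the file spans 131,072,000 clusters; at 64K it is 8,192,000, a 16x reduction in bookkeeping for operations like block clone.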
-
- Expert
- Posts: 158
- Liked: 8 times
- Joined: Jul 23, 2011 12:35 am
Re: ReFS for repository, best practice?
We back up a total of 5.5TB, so the amount is not huge. Our new physical proxy server has two CPUs with 16 cores each running at 2.6GHz, 64GB RAM, and a 10Gb network connection. It sounds like we are fine here.
We use the following settings for our current backup job for Windows VMs to an NTFS formatted volume. When we go to ReFS, none of these should change, correct?
- Forever forward incremental with a 30 day retention
- Backup file health check once a month
- Remove deleted items after 14 days and defragment/compact full backup once a month
- Compression level optimal and Storage Optimization set to local target
- Changed block tracking enabled
- Backup from storage snapshots enabled since we have a Nimble array
- Application aware processing is enabled
-
- Chief Product Officer
- Posts: 31460
- Liked: 6648 times
- Joined: Jan 01, 2006 1:01 am
- Location: Baar, Switzerland
- Contact:
Re: ReFS for repository, best practice?
You don't need any changes. But you will have more flexibility around backup modes, for example enabling periodic synthetic fulls will not require extra disk space with ReFS. Which in turn removes the need to do periodic defragment/compact (as synthetic full does it implicitly).
Pretty much the only concern when switching to ReFS is the amount of RAM on the backup repository server, which you have plenty for your data size.
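Gostev's point about synthetic fulls costing no extra space can be put in rough numbers. A simplified model (the 500 GB figure is an assumption, not measured Veeam behavior): on NTFS a synthetic full is a real second copy of the data, while on ReFS unchanged blocks are referenced via block clone, so only metadata is written.

```python
GB = 1024**3
full_size = 500 * GB  # hypothetical size of one full backup

# NTFS: the synthetic full physically re-writes every block.
ntfs_extra = full_size

# ReFS: data blocks already on disk are block-cloned, not copied,
# so the new full consumes essentially zero extra data space.
refs_extra = 0

print(f"NTFS extra space per synthetic full: {ntfs_extra / GB:.0f} GB")
print(f"ReFS extra space per synthetic full: ~{refs_extra} GB (metadata only)")
```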
-
- Expert
- Posts: 158
- Liked: 8 times
- Joined: Jul 23, 2011 12:35 am
Re: ReFS for repository, best practice?
So it sounds like I should turn off defragment/compact full backup entirely and do a synthetic full instead. If I have a 30-day retention, how often should I do a synthetic full? Once a week?
-
- Chief Product Officer
- Posts: 31460
- Liked: 6648 times
- Joined: Jan 01, 2006 1:01 am
- Location: Baar, Switzerland
- Contact:
Re: ReFS for repository, best practice?
Most users do it once a week on the weekend.
-
- Expert
- Posts: 158
- Liked: 8 times
- Joined: Jul 23, 2011 12:35 am
Re: ReFS for repository, best practice?
Perfect, thanks for the help. The only trouble we will have is storage space. I need to create a new LUN on our Nimble array, connect it to our physical proxy server, format the volume as ReFS, and point our backup jobs to it. Unfortunately, right now I do not have enough space on the Nimble array to create a volume of the size I need. The current NTFS volume will eventually go away entirely, but during the transition I need both LUNs in place.
-
- Enthusiast
- Posts: 34
- Liked: 3 times
- Joined: Jan 13, 2015 4:31 am
- Full Name: Jeffrey Michael James
- Location: Texas Tech Univ. TOSM Computer Center, 8th Street & Boston Avenue, Lubbock, TX 79409-3051
- Contact:
Re: ReFS for repository, best practice?
OK, no need to defrag & compact when using ReFS & synthetics. What about storage-level corruption guard? I am running 64K-cluster ReFS on Dell FC storage. My repos are all SOBR, and this is our built-out Windows & Linux agent backup system. I thought I remember reading that ReFS does self-healing of corruption, like storage-level corruption guard. Is there a need for health checks when running ReFS?
Jeff J
Jeff M
Data Center Operations
Technology Operations & Systems Management
Texas Tech University System
jeff.james@ttu.edu
-
- Product Manager
- Posts: 8044
- Liked: 1263 times
- Joined: Feb 08, 2013 3:08 pm
- Full Name: Mike Resseler
- Location: Belgium
- Contact:
Re: ReFS for repository, best practice?
Jeff,
ReFS will indeed find corruption, but self-healing depends on how it is deployed: it can only self-heal in a Storage Spaces or Storage Spaces Direct scenario, so keep that in mind. This is another reason why we always work with the 3-2-1 rule. Make sure the primary backups located on your ReFS repository are also copied somewhere else.
-
- Expert
- Posts: 158
- Liked: 8 times
- Joined: Jul 23, 2011 12:35 am
Re: ReFS for repository, best practice?
I have a Nimble array that I will use to present a LUN to my physical Veeam proxy server via iSCSI. This will be a mapped drive on the proxy server, which runs Windows Server 2016. Will "Storage-level corruption guard" work under ReFS? In my backup jobs right now, using NTFS, I have "Perform backup files health check" enabled on a monthly schedule.
-
- Veteran
- Posts: 404
- Liked: 106 times
- Joined: Jan 30, 2017 9:23 am
- Full Name: Ed Gummett
- Location: Manchester, United Kingdom
- Contact:
Re: ReFS for repository, best practice?
One more thing to bear in mind if you’re using SAN storage for repos: unmap isn’t available for ReFS. So your volume will become thick-provisioned over time and you will not be able to reclaim unused space.
Ed Gummett (VMCA)
Senior Specialist Solutions Architect, Storage Technologies, AWS
(Senior Systems Engineer, Veeam Software, 2018-2021)
-
- Expert
- Posts: 158
- Liked: 8 times
- Joined: Jul 23, 2011 12:35 am
Re: ReFS for repository, best practice?
Without unmap, is ReFS even a good choice for repos? We keep 30 days' worth of backups in Veeam, but it sounds like over a 6-8 month period we could actually fill the repo if the space used by backups that are long gone (older than 30 days) is never freed up.
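For what it's worth, the distinction Ed draws is between the array's view and the file system's view: without unmap the SAN never learns that deleted backup files freed their clusters, so the thin-provisioned LUN's array-side allocation ratchets up toward the full LUN size, but ReFS itself still reuses the freed space, so the repository does not fill from Veeam's perspective. A toy model of that behavior (all numbers are made up):

```python
def simulate(months: int, monthly_churn_tb: float,
             steady_state_tb: float, lun_tb: float):
    """File-system usage stays at the retention steady state;
    array-side allocation only ever grows (no unmap)."""
    fs_used = steady_state_tb
    array_alloc = steady_state_tb
    for _ in range(months):
        # New writes keep landing on clusters the array has not seen
        # before, until the whole LUN has been touched at least once.
        array_alloc = min(lun_tb, array_alloc + monthly_churn_tb)
    return fs_used, array_alloc

fs, arr = simulate(months=8, monthly_churn_tb=1.0,
                   steady_state_tb=6.0, lun_tb=10.0)
print(f"file system used: {fs} TB, array allocated: {arr} TB")
```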
-
- Service Provider
- Posts: 4
- Liked: 6 times
- Joined: Apr 05, 2017 9:34 pm
- Full Name: Ken Barhite
- Contact:
Re: ReFS for repository, best practice?
Should the compression level be set to "Dedupe friendly"? I thought I read somewhere that this is recommended for ReFS.
-
- Product Manager
- Posts: 8044
- Liked: 1263 times
- Joined: Feb 08, 2013 3:08 pm
- Full Name: Mike Resseler
- Location: Belgium
- Contact:
Re: ReFS for repository, best practice?
@Ken,
I actually see no reason to use Dedupe Friendly when you land your backups on a ReFS repository. The recommendation would be to create a full backup there first, then incrementals with synthetic fulls in between (as Gostev said, once a week is common).
@HendersonD: I misread something yesterday. We do advise leaving storage-level corruption guard on. It might look like overkill, but hey... they are your backup files. Only when you run it on S2D or classic Storage Spaces do I think it is not needed.
-
- Expert
- Posts: 158
- Liked: 8 times
- Joined: Jul 23, 2011 12:35 am
Re: ReFS for repository, best practice?
I have just about everything configured on my ReFS repository and the new backup jobs that point to it. I just need help with two last settings:
- Under repository settings, should I enable "Use per-VM backup files"? I have a Nimble storage array where I have carved out a LUN. This LUN is presented to my physical proxy server running Windows Server 2016. After formatting it as ReFS with a 64K block size, it shows as the F: drive on this server. There are two iSCSI 10Gb connections from the proxy server to this volume.
- On the backup job under Storage > Advanced > Storage tab is the compression level setting. The choices are None, Dedupe-friendly, Optimal (recommended), High, and Extreme. Which one should I choose? I am not using the dedup built into Windows Server 2016 on our proxy server.
-
- Product Manager
- Posts: 8044
- Liked: 1263 times
- Joined: Feb 08, 2013 3:08 pm
- Full Name: Mike Resseler
- Location: Belgium
- Contact:
Re: ReFS for repository, best practice?
1. You can choose. What will happen: If you have a job with (as an example) 5 servers in it, with per-VM backup files you will have 5 separate (smaller) backup files in the repository, + the chain of incrementals for each of them. Without it, you get one big file + the chain of incrementals for all of them.
2. You can go for Optimal. I believe that is the best. Most settings are "auto-configured" because Veeam will see this repository as a ReFS repository
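Mike's per-VM example can be pictured as file layouts on the repository (the VM names below are hypothetical; .vbk denotes a full backup file and .vib an incremental):

```python
vms = ["dc01", "sql01", "fs01", "web01", "app01"]  # hypothetical 5-VM job

# Per-VM backup files: one smaller chain per VM, which also allows
# the chains to be written in parallel.
per_vm = {vm: [f"{vm}.vbk", f"{vm}-incr1.vib", f"{vm}-incr2.vib"]
          for vm in vms}

# Single-file mode: one big full plus one incremental chain shared
# by every VM in the job.
single = ["job.vbk", "job-incr1.vib", "job-incr2.vib"]

print(sum(len(chain) for chain in per_vm.values()),
      "files per-VM vs", len(single), "files combined")
```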
-
- Service Provider
- Posts: 24
- Liked: never
- Joined: Jan 11, 2012 4:22 pm
- Full Name: Alex
- Contact:
[MERGED] REFS Block Clone - recommended job compression settings
Hi - I do apologize if this has been answered already, but having done an admittedly quick search nothing obvious jumped out.
Essentially, if you're using REFS Block Clone to achieve space savings in a Backup Copy Job, are there any recommended settings for the compression type?
For example, if we choose 'Extreme' or 'High' compression in order to maximize free disk space, will the REFS block clone still be able to match up unchanged blocks when the Weekly/Monthly long term archive points are created?
Or are we required to choose 'none' or 'dedupe-friendly' compression levels as per the guidance for Using Windows 2012/2016 Dedupe?
-
- Veeam Software
- Posts: 21069
- Liked: 2115 times
- Joined: Jul 11, 2011 10:22 am
- Full Name: Alexander Fogelson
- Contact:
Re: ReFS for repository, best practice?
Compression level doesn't affect block clone ReFS capabilities.
-
- Influencer
- Posts: 22
- Liked: 3 times
- Joined: Aug 06, 2019 2:02 am
- Full Name: Nathan Shaw
- Contact:
Re: ReFS for repository, best practice?
Old thread, I know, but does this comment mean that forever forward incremental (no active full, no periodic synthetic) will lead to fragmentation, and that the merge that happens when you reach your retention limit doesn't create a new file, and thus can become fragmented?

Gostev wrote: ↑Aug 07, 2018 3:54 pm
You don't need any changes. But you will have more flexibility around backup modes, for example enabling periodic synthetic fulls will not require extra disk space with ReFS. Which in turn removes the need to do periodic defragment/compact (as synthetic full does it implicitly).
-
- Veeam Software
- Posts: 21069
- Liked: 2115 times
- Joined: Jul 11, 2011 10:22 am
- Full Name: Alexander Fogelson
- Contact:
Re: ReFS for repository, best practice?
Yes, in the case of NTFS, merges into a single full result in its fragmentation.