-
- Influencer
- Posts: 13
- Liked: 1 time
- Joined: Jan 17, 2019 10:09 am
- Full Name: Fabien ROUSSILLON
- Location: Paris, FRANCE
- Contact:
ReFS and Per-VM Backup Files
Hello,
I am looking to improve backup and restore performance in VBR.
I would like to know if I can use the Per-VM Backup Files option on a 50 TB ReFS disk.
I only use ReFS for the Fast Clone technology, not for deduplication, since Veeam already does that.
I have about 160 VMs. What is the better choice?
Fabien
-
- Chief Product Officer
- Posts: 31816
- Liked: 7302 times
- Joined: Jan 01, 2006 1:01 am
- Location: Baar, Switzerland
- Contact:
Re: ReFS and Per-VM Backup Files
Hello, per-VM is always a better choice from a performance perspective, regardless of backup storage. Thanks!
-
- Influencer
- Posts: 13
- Liked: 1 time
- Joined: Jan 17, 2019 10:09 am
- Full Name: Fabien ROUSSILLON
- Location: Paris, FRANCE
- Contact:
Re: ReFS and Per-VM Backup Files
Thank you for your answer.
Do I need to set a limit on concurrent tasks if I enable this option?
Should I run a full backup afterwards?
-
- Veeam Software
- Posts: 21139
- Liked: 2141 times
- Joined: Jul 11, 2011 10:22 am
- Full Name: Alexander Fogelson
- Contact:
Re: ReFS and Per-VM Backup Files
Hi Fabien, the max concurrent tasks limit is not affected by this setting. Yes, active full is required for the setting to take effect.
-
- Influencer
- Posts: 13
- Liked: 1 time
- Joined: Jan 17, 2019 10:09 am
- Full Name: Fabien ROUSSILLON
- Location: Paris, FRANCE
- Contact:
Re: ReFS and Per-VM Backup Files
Hello, so if I have 160 VMs with 30 restore points, I will have 4800 files in my backup folder?
Do you think this will work well given that my disk is formatted as ReFS with 64 KB clusters, not 4 KB? This won't impact synthetic full backups with Fast Clone?
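For what it's worth, the file count above follows from simple arithmetic (a quick sketch, assuming one backup file per VM per restore point with per-VM chains):

```python
# Estimated number of backup files in a per-VM repository:
# one file per VM per restore point (assumed).
vms = 160
restore_points = 30
files = vms * restore_points
print(files)  # 4800
```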
-
- Chief Product Officer
- Posts: 31816
- Liked: 7302 times
- Joined: Jan 01, 2006 1:01 am
- Location: Baar, Switzerland
- Contact:
Re: ReFS and Per-VM Backup Files
Correct.
64KB is the recommended block size.
-
- Novice
- Posts: 7
- Liked: 3 times
- Joined: Sep 22, 2014 7:51 am
- Full Name: Neil V. Bell
- Contact:
Re: ReFS and Per-VM Backup Files
Hi all,
nanii-60, one thing you mentioned was not using ReFS for dedupe because Veeam already does it: my understanding is that a per-VM backup won't dedupe as efficiently, because it can only dedupe within each VM backup file. It can't dedupe across VMs, and I also don't believe it will dedupe across previous files, even for the same VM. Can someone confirm whether this is the case?
That said, probably still the correct choice. I have two layers of backup repositories and jobs:
Primary backups: a month's worth of forever-incremental daily backups, stored in RAID10 64 KB ReFS repositories (with per-VM repo settings, although they probably don't need to be) that benefit from great Fast Clone speeds when running file merges.
Secondary "archive" backups: my month-end and year-end points (plus the most recent 7 days, just because I like to play it safe with an extra set of the most useful backups), stored in RAID6 per-VM repositories (yes, I would prefer more RAID10, but the cost efficiencies are hard to ignore), run as backup copies from my primary backups and stored at a separate site. Most of these repositories are NTFS with dedupe enabled (and the efficiencies are amazing), but I have to suffer some VERY long file merges. A few are ReFS, for the jobs that matter most and suffer most from merge times impacting the next jobs, but there I lose the dedupe efficiency of keeping almost identical month-ends and year-ends. The NTFS dedupe is why I moved to per-VM: the file sizes are smaller and can dedupe faster. I am eagerly awaiting final confirmation (from Veeam, because I trust them more than MS) that Server 2019 ReFS can really do reliable block clone AND dedupe, but I'm not holding my breath, as this has been a long, slow and disappointing road so far.
The one thing I wish we could have is a system that could utilise block clone on my backup-copy month-ends and year-ends, so that I would effectively get the great space savings of synthetic fulls whilst still keeping my primary and archive backups separate.
Kind regards,
Neil
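The dedupe-scope point raised above can be illustrated with a toy sketch (the block contents and counts here are hypothetical, and this is not Veeam's or Windows' actual dedupe logic): deduplicating within each per-VM file independently stores more blocks than deduplicating across all files at once.

```python
# Toy illustration of dedupe scope: two hypothetical per-VM backup
# files that share common OS blocks "a" and "b".
def unique_blocks(files):
    """Count distinct blocks across all files (global dedupe scope)."""
    seen = set()
    for blocks in files:
        seen.update(blocks)
    return len(seen)

vm1 = ["a", "b", "c", "d"]
vm2 = ["a", "b", "e", "f"]

# Per-file scope: each backup file is deduplicated on its own.
per_vm_total = len(set(vm1)) + len(set(vm2))  # 4 + 4 = 8 blocks stored
# Global scope: shared blocks are stored only once.
global_total = unique_blocks([vm1, vm2])      # 6 blocks stored

print(per_vm_total, global_total)  # 8 6
```

The shared blocks "a" and "b" are stored twice under per-file dedupe but only once under volume-wide dedupe, which is why a filesystem-level deduplicator can outperform in-file dedupe for similar VMs.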
-
- Veeam Software
- Posts: 21139
- Liked: 2141 times
- Joined: Jul 11, 2011 10:22 am
- Full Name: Alexander Fogelson
- Contact:
Re: ReFS and Per-VM Backup Files
Your considerations are correct. Moreover, Veeam B&R uses large block sizes, so its dedupe cannot compare with the Windows native one.