- Novice
- Posts: 7
- Liked: 1 time
- Joined: Aug 31, 2016 1:30 pm
- Full Name: Gary Martin
Transition to ReFS Repositories
Hi,
I am looking at moving my Veeam backups to a new Windows 2016 repository server that I have created. I have a few barriers to a straight migration (copy the existing files to the new repository and get the benefit from new backups after a full).
The barrier is that the current repository is using deduplication, and the backup target I am using doesn't have enough free space to hydrate the backups onto a ReFS datastore (no block-clone magic until it actually runs the next full). This means I need to transition from the current repository to the new one.
To me, the best way of achieving that would be to use Scale-out repositories (my target is already scale-out: with deduplication I was using one repository for fulls with deduplication enabled and another for incrementals without deduplication). I would just add my new ReFS repository to the Scale-out group and change the placement policy to "Data locality".
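Roughly what I have in mind, via the Veeam PowerShell snap-in (just a sketch; I'm recalling the cmdlet and parameter names from memory, and the repository names are placeholders, so treat the details as assumptions and check them against the PowerShell reference):

```powershell
# Sketch only: repository names are placeholders and the cmdlet/parameter
# names should be verified against the Veeam B&R PowerShell reference.
Add-PSSnapin VeeamPSSnapin

# The existing scale-out repository that currently holds the dedup extents
$sobr = Get-VBRBackupRepository -ScaleOut -Name "Main SOBR"

# Add the new ReFS repository as an additional extent
Add-VBRRepositoryExtent -Repository $sobr -Extent "ReFS-Repo-01"

# Data locality keeps the fulls and increments of a chain on the same extent,
# which is what ReFS block cloning needs
Set-VBRScaleOutBackupRepository -Repository $sobr -PolicyType DataLocality
```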
The problem now is, how do I get it to stop using the old repositories for new data? I am guessing that once free space on the new repository drops below what is available on the old ones, it will start to place data there again. Once all the configured recovery points are on the new repository I want to retire the old ones, but I need to get there first.
Any suggestions? I have thought about leaving them in maintenance mode, but I have other members of the business who need to be able to recover (which requires the repository to be online). I have also thought about creating a dummy proxy with no backups assigned and changing the old repositories to have an affinity to that.
If there is a better way, please share.
Thanks
Gary
- Product Manager
- Posts: 8191
- Liked: 1322 times
- Joined: Feb 08, 2013 3:08 pm
- Full Name: Mike Resseler
- Location: Belgium
Re: Transition to ReFS Repositories
Gary,
Is it a Windows deduplication repository?
Mike
- Novice
- Posts: 7
- Liked: 1 time
- Joined: Aug 31, 2016 1:30 pm
- Full Name: Gary Martin
Re: Transition to ReFS Repositories
Hi Mike,
Yes, it is Windows (2012 R2).
Gary
- Product Manager
- Posts: 8191
- Liked: 1322 times
- Joined: Feb 08, 2013 3:08 pm
- Full Name: Mike Resseler
- Location: Belgium
Re: Transition to ReFS Repositories
Hey Gary,
There is a cmdlet called Expand-DedupFile that lets you expand optimized files one by one in a controlled manner and then move them off to the new ReFS repository. It might be worth working with that rather than jumping through many hoops. Is that something that might help you do it in a controlled manner?
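Something along these lines should work; the paths below are just placeholders, so adapt them to your repository layout:

```powershell
# Example only: expand the optimized backup files one at a time and move
# each one to the new ReFS repository before touching the next, so the
# dedup volume never has to hold more than one rehydrated file at once.
# Paths are placeholders for your environment.
$files = Get-ChildItem "D:\Backups\Job1" -Filter *.vbk

foreach ($file in $files) {
    Expand-DedupFile -Path $file.FullName                  # rehydrate to full size
    Move-Item $file.FullName "\\refs-repo\Backups\Job1\"   # free the space again
}
```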
Mike
- Novice
- Posts: 7
- Liked: 1 time
- Joined: Aug 31, 2016 1:30 pm
- Full Name: Gary Martin
Re: Transition to ReFS Repositories
Hi Mike,
I can't go that way because of space limitations on my backup target. I don't have the space to move the expanded files (even one by one), because I won't get the advantage of ReFS or deduplication on those files. The idea is to retire the deduplicated repository as the restore points roll off and get the new backups running from ReFS (where a new full is required before the advantages of ReFS and synthetic fulls can be realised).
- VeeaMVP
- Posts: 6166
- Liked: 1971 times
- Joined: Jul 26, 2009 3:39 pm
- Full Name: Luca Dell'Oca
- Location: Varese, Italy
Re: Transition to ReFS Repositories
Hi Gary,
A possible workaround is to put the old node into maintenance mode only for the time it takes for one run of the backup job to be executed. This will force that job to create a new active full on the new array (if you have set the proper flag), and since you have data locality in place (the suggested configuration to leverage ReFS), from that point onwards the chain will be created on the ReFS volume. When retention kicks in, it will delete the expired restore points from the old node, and at that point you'll be able to decommission it.
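If you want to script it, a rough sketch along these lines could do it (job and extent names are placeholders, and it's worth double-checking the exact cmdlet names against the PowerShell reference for your B&R version):

```powershell
# Rough sketch: job and extent names are placeholders, and the cmdlet names
# should be verified against the PowerShell reference for your B&R version.
Add-PSSnapin VeeamPSSnapin

$sobr   = Get-VBRBackupRepository -ScaleOut -Name "Main SOBR"
$extent = Get-VBRRepositoryExtent -Repository $sobr | Where-Object { $_.Name -eq "Old-Dedup-Repo" }

# Put the old extent into maintenance mode so the job cannot place new data on it
Enable-VBRRepositoryExtentMaintenanceMode -Extent $extent

# Run the job with an active full so the new chain starts on the ReFS extent
Start-VBRJob -Job (Get-VBRJob -Name "Backup Job 1") -FullBackup

# Bring the old extent back online so its existing restore points stay available for restores
Disable-VBRRepositoryExtentMaintenanceMode -Extent $extent
```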
Luca Dell'Oca
Principal EMEA Cloud Architect @ Veeam Software
@dellock6
https://www.virtualtothecore.com/
vExpert 2011 -> 2022
Veeam VMCE #1