We deployed our offsite backups to Azure and added the Blob capacity tier storage when Update 4 was released. The Azure servers we deployed for the performance tier (lowest-cost model) are significantly undersized for the IO load of the file merges, which take 4-6 days to complete. Obviously, this delays data being sent offsite while the jobs are running. It has been decided to deploy large physical storage arrays to our DR site and add them to the AZURE_SOBR. We will then retire the Azure extent servers and transfer the files to the new extents. We want to maintain the restore point chains for the data currently offsite and in the BLOB.
My concern is with the procedure to implement this. We are using a SOBR with the Data Locality placement policy. Once the new extents are added, Veeam would attempt to keep restore point files on the same extent, but could create new fulls on other extents. That would create network IO to copy data down from Azure, since block cloning would not be available across extents. Also, Veeam would not stop writing new files to the Azure extents until they are in maintenance mode, and we all know that files on extents in maintenance mode are not visible for ANY operations. I have posted a question to support about a read-only mode for extents, but they say this is not possible. (Feature request!) Putting an extent into read-only mode and letting the normal GFS deletion/BLOB migration processes slowly remove the files from each extent would be my preferred method. It just doesn't work that way today.
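To make the placement concern concrete, here is a minimal Python sketch of the Data Locality behaviour as I understand it (my own simplified model, not Veeam's actual placement code): an existing chain stays on its extent while that extent is writable; a new full, or a chain whose home extent is unavailable, lands wherever there is the most free space, which in our case means copying over the network with no block cloning.

```python
# Simplified model of SOBR extent selection under the Data Locality policy.
# This is an illustrative sketch only -- not Veeam's actual placement logic.
from dataclasses import dataclass

@dataclass
class Extent:
    name: str
    free_tb: float
    maintenance: bool = False

def pick_extent(extents, chain_extent=None):
    """Pick a target extent for the next restore point of a backup chain."""
    writable = [e for e in extents if not e.maintenance]
    if not writable:
        raise RuntimeError("no writable extents available")
    # Data locality: keep the chain on its current extent while it is writable.
    for e in writable:
        if e.name == chain_extent:
            return e
    # Otherwise (new full, or home extent unavailable): most free space wins.
    return max(writable, key=lambda e: e.free_tb)

extents = [
    Extent("azure-1", 5.0), Extent("azure-2", 8.0),
    Extent("array-1", 300.0), Extent("array-2", 300.0),
]
print(pick_extent(extents, "azure-1").name)   # azure-1 -> chain stays put
extents[0].maintenance = True                 # retire azure-1
print(pick_extent(extents, "azure-1").name)   # array-1 -> new full, no block clone
```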
I was wondering if anyone has experience with this scenario? Do I put the extents I want to retire into maintenance mode, run new active fulls, and then evacuate the extents to move the remaining file chains? Do I take significant downtime and migrate all extents (120 TB @ 30 MB/s ≈ 46 days) from Azure before running jobs again? Do I let Veeam just work and figure it out on its own? Do I just start a new SOBR but lose the connection to all previous backups (not preferred)?
Current AZURE_SOBR is 3 Azure servers (low disk rate configuration) with 60 TB each, plus the BLOB capacity tier.
Adding 4 physical array extents of 300 TB each.
10 Gb network available; concerned about internal routing of traffic.
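For rough planning, here is a minimal Python sketch of the evacuation window (only the 30 MB/s single-stream rate comes from our environment; the parallel-evacuation and link-utilisation scenarios are assumptions, not measurements):

```python
# Back-of-the-envelope evacuation-time estimate (decimal units: 1 TB = 10^6 MB).
# Only the 30 MB/s single-stream rate is observed; the other scenarios are
# hypothetical what-ifs.

def days_to_copy(total_tb, throughput_mb_s):
    seconds = total_tb * 1_000_000 / throughput_mb_s
    return seconds / 86_400

data_tb = 120  # data currently on the three Azure performance extents

scenarios = [
    ("single stream @ 30 MB/s", 30),             # ~46 days
    ("3 extents evacuated in parallel", 3 * 30), # ~15 days
    ("25% of the 10 Gb link (~312 MB/s)", 312),  # ~4.5 days
]
for label, rate in scenarios:
    print(f"{label}: {days_to_copy(data_tb, rate):.1f} days")
```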
Any help would be appreciated.
Joel
- Lurker
- Posts: 2
- Liked: never
- Joined: Jul 23, 2019 9:11 pm
- Full Name: Joel Loveless

- Product Manager
- Posts: 14824
- Liked: 3075 times
- Joined: Sep 01, 2014 11:46 am
- Full Name: Hannes Kasparick
- Location: Austria
Re: Moving AZURE performance tier to physical storage
Hello,
and welcome to the forums.
Just because I'm interested: do you have the bad merge performance even with ReFS?
Your request about the read-only repository was heard and we are planning it already. That would work exactly for your use case.
The easiest way today is probably to start a new scale out repository. You are not losing any data if you just keep the old VMs & data until retention is over.
Best regards,
Hannes
- Lurker
- Posts: 2
- Liked: never
- Joined: Jul 23, 2019 9:11 pm
- Full Name: Joel Loveless
Re: Moving AZURE performance tier to physical storage
My Azure servers were built on Server 2012, so the disk IO is high without ReFS block cloning available. I heard today that the next release will include a function for read-only SOBR extents. I sure could use it now for this migration.
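For context on the merge cost, here is a minimal sketch of why the missing ReFS block cloning hurts (the increment size and throughput are hypothetical example values, not my actual job statistics; with block cloning the merge is essentially a metadata operation):

```python
# Rough model of a forward-incremental merge: without block cloning the oldest
# increment must be physically read and written into the full backup file;
# with ReFS block cloning the same merge is (approximately) metadata-only.
# Increment size and throughput below are hypothetical example values.

def merge_days(increment_tb, throughput_mb_s):
    moved_mb = increment_tb * 1_000_000 * 2   # read the increment + write into the full
    return moved_mb / throughput_mb_s / 86_400

increment_tb = 3    # hypothetical daily increment per job
rate_mb_s = 30      # low Azure disk rate; random IO makes real merges slower still

print(f"NTFS merge (no block clone): ~{merge_days(increment_tb, rate_mb_s):.1f} days")
print("ReFS merge (block clone):    minutes (metadata update only)")
```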
- Product Manager
- Posts: 14824
- Liked: 3075 times
- Joined: Sep 01, 2014 11:46 am
- Full Name: Hannes Kasparick
- Location: Austria
Re: Moving AZURE performance tier to physical storage
Yes, that's what I mentioned: "I heard today that the next release will include a function for read-only SOBR extents."
So if you have time to wait for the next release... (no, I don't have an exact answer on when the release will be).