Comprehensive data protection for all workloads
salyerma
Influencer
Posts: 19
Liked: 1 time
Joined: Apr 06, 2017 7:43 pm
Full Name: Mark Salyer

SOBR unavailable during Evacuation?

Post by salyerma »

I am trying to migrate several TB of archives from a Data Domain to a new StoreOnce. The only method I have found so far is the evacuation feature of a SOBR. Going by a post on these forums, I was under the impression that I could add both the Data Domain and the StoreOnce to a SOBR, put the Data Domain extent into maintenance mode, and then evacuate it to the StoreOnce. The problem is, it is taking forever. Like weeks. That alone would be no problem, but then I found that the *entire* SOBR was unavailable during the evacuation.

I stopped the evacuation and my backup copy jobs started working again. I am running v10.

Is this a new feature of v10? I cannot be without my backups for 45-60 days.
Gostev
Chief Product Officer
Posts: 31816
Liked: 7302 times
Joined: Jan 01, 2006 1:01 am
Location: Baar, Switzerland

Re: SOBR unavailable during Evacuation?

Post by Gostev »

No, there's no such feature in v10 to make the entire SOBR unavailable for the duration of an evacuation.
salyerma
Influencer
Posts: 19
Liked: 1 time
Joined: Apr 06, 2017 7:43 pm
Full Name: Mark Salyer

Re: SOBR unavailable during Evacuation?

Post by salyerma »

I have a support ticket open. They said I could not use the SOBR while an evacuation is in progress. Is that not true?
Gostev
Chief Product Officer
Posts: 31816
Liked: 7302 times
Joined: Jan 01, 2006 1:01 am
Location: Baar, Switzerland

Re: SOBR unavailable during Evacuation?

Post by Gostev »

I know for sure that we did not introduce any such feature in v10, because I personally worked through the list of all changes when creating the What's New document.

But I've checked the documentation now, and I don't see this limitation documented there either. Moreover, the step-by-step process in the User Guide specifically recommends stopping and disabling only those jobs which target the evacuated extent, but not other jobs.
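
For what it's worth, a minimal PowerShell sketch of that User Guide recommendation, assuming the affected jobs can be identified by their target repository name ("My-SOBR" is a placeholder): it disables only those jobs and leaves everything else running.

Code:

# Placeholder name of the repository whose extent is being evacuated.
$repoName = "My-SOBR"

# Find only the jobs that target that repository...
$affectedJobs = Get-VBRJob | Where-Object {
    $_.GetTargetRepository().Name -eq $repoName
}

# ...and disable just those; all other jobs keep running.
$affectedJobs | Disable-VBRJob

# Re-enable them once the evacuation is finished:
# $affectedJobs | Enable-VBRJob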
DonZoomik
Service Provider
Posts: 372
Liked: 120 times
Joined: Nov 25, 2016 1:56 pm
Full Name: Mihkel Soomere

Re: SOBR unavailable during Evacuation?

Post by DonZoomik »

It blocks any job that has an active chain on the repository being evacuated. IMHO that makes the evacuate function a bit useless. It would be far more useful if the SOBR stayed available enough for chains to at least continue on other extents. Currently it just blocks all access, so realistically you have to run an active full to get through the evacuation.
Every evacuation I've done has been a manual data copy with data resyncs; that way you keep data available and can do the copies in manageable chunks between sessions. With dedupe appliances that's probably not possible.
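
For reference, a one-line sketch of that active full workaround (the job name is a placeholder):

Code:

# Kick off an active full so a new chain starts on the remaining extents.
Start-VBRJob -Job (Get-VBRJob -Name "Backup Job 1") -FullBackup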
Gostev
Chief Product Officer
Posts: 31816
Liked: 7302 times
Joined: Jan 01, 2006 1:01 am
Location: Baar, Switzerland
Contact:

Re: SOBR unavailable during Evacuation?

Post by Gostev »

Right, that's more like it, this makes sense.

And sure, it would be nice to have, but it would be pretty hard to orchestrate... the evacuation process is totally separate from jobs, and having them work alongside one another could result in a required file being removed from the extent just as a job needs to read it. That's a lot of extra logic to write and a lot of bugs to work through. And considering how rarely evacuations happen, it's really hard to justify having developers work on these enhancements.

However, in many cases a better option will be to Seal the extent instead, and just let the backups on it expire and get removed by the retention policy.
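
If your release exposes sealed mode in PowerShell, a sketch of the seal-and-expire approach looks like this; the cmdlet name below is my recollection of the Veeam PowerShell reference, so verify it against your version (repository names are placeholders).

Code:

# Get the SOBR and the extent to retire (placeholder names).
$sobr   = Get-VBRScaleOutBackupRepository -Name "My-SOBR"
$extent = Get-VBRRepositoryExtent -Repository $sobr | Where-Object { $_.Name -eq "Old-Extent" }

# Seal the extent: existing backups stay restorable and age out under
# the retention policy, but no new backups land on it.
# NOTE: cmdlet name is an assumption -- verify in your PS reference.
Enable-VBRRepositoryExtentSealedMode -Extent $extent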
FedericoV
Technology Partner
Posts: 36
Liked: 38 times
Joined: Aug 21, 2017 3:27 pm
Full Name: Federico Venier

Re: SOBR unavailable during Evacuation?

Post by FedericoV » 2 people like this post

I guess I have a solution.
  • You do not need to stop your production backup job
  • You do not need additional capacity
  • You can do the migration in one single run
  • You have a simple way of deleting expired restore points from the migrated data
I have tested it on a small config and it works. If you contact me, I can provide more details.

PART-1
1) Create a new Catalyst Store "CS1" on the destination StoreOnce
2) Create a new Veeam Backup Repository "BR1-CS1" on the new StoreOnce
3) Create a second BR, "BR2-CS1", on the same CS. Having 2 BRs inside the same CS provides global deduplication across the 2 BRs.
Please note that even though VBR lets you do this in the GUI, Veeam suggests using the CLI, otherwise there might be internal issues (Veeam: please update the GUI process to do the same job as the CLI).
4) Open a PowerShell session and run the following commands:

Code:

# Register the second repository on the same Catalyst Store
# (replace the StoreOnce names and the example credentials with your own).
Add-VBRBackupRepository -Name "BR2-CS1" -Folder "storeonce://your-SO-short-name:CS1@/Sub-1" -Type HPStoreOnceIntegration -StoreOnceServerName "your-SO-FQDN" -UserName veeam -Password veeam

# Retrieve the repository just created (no stray leading space in the name).
$repository = Get-VBRBackupRepository -Name "BR2-CS1"

# Lift the default rate and concurrency limits (values are examples).
Set-VBRBackupRepository -Repository $repository -DataRateLimit 123 -LimitDataRate:$false -MaxConcurrentJobs 12 -LimitConcurrentJobs:$false
5) Edit your existing job pointing to the Data Domain, "Job-DD", and stop any scheduled activity
6) Clone it to "Job-SO", set its repository to BR1-CS1, and enable the scheduling (see the sketch below)
This is your new production job. On its first execution it will run a full backup, so consider that workload in your planning.
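
A minimal PowerShell sketch of steps 5-6, assuming the job names above; the Copy-VBRJob usage is per my reading of the PowerShell reference, so verify it on your version.

Code:

# Step 5: stop scheduled activity on the old job targeting the Data Domain.
$oldJob = Get-VBRJob -Name "Job-DD"
Disable-VBRJob -Job $oldJob

# Step 6: clone it to the new StoreOnce repository and make it production.
# (Copy-VBRJob parameters per my reading of the PS reference -- verify first.)
$newRepo = Get-VBRBackupRepository -Name "BR1-CS1"
$newJob  = Copy-VBRJob -Job $oldJob -Name "Job-SO" -Repository $newRepo
Enable-VBRJob -Job $newJob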

PART-2
Now that your production backups are running to the new storage, we need to perform the migration.

7) Edit BR-DD and set "Limit maximum concurrent tasks to:" a reasonable value (e.g. 10), to avoid the migration putting too much stress on your production (sketch below).
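
A sketch of step 7, reusing the concurrency parameters from the code block above:

Code:

# Cap concurrent tasks on the source repository so the evacuation
# does not starve production (10 is an example value).
$ddRepo = Get-VBRBackupRepository -Name "BR-DD"
Set-VBRBackupRepository -Repository $ddRepo -LimitConcurrentJobs -MaxConcurrentJobs 10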

8) Create a new SOBR, "migration-SOBR" (sketch below)
Add to the SOBR the old BR pointing to the Data Domain. Let's call it "BR-DD"
Add to the SOBR the new BR2-CS1 pointing to the StoreOnce.
NOTE: VBR will automatically update Job-DD to use the new "migration-SOBR"
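
A sketch of step 8, assuming the names above; the placement policy is an example choice.

Code:

# Build the migration SOBR from the old and new repositories.
$extents = Get-VBRBackupRepository -Name "BR-DD", "BR2-CS1"
Add-VBRScaleOutBackupRepository -Name "migration-SOBR" -Extent $extents -PolicyType DataLocality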

9) Put BR-DD in maintenance mode and then evacuate it (sketch below)
This will not impact your production backups, because production runs on the cloned job.
NOTE: the migrated files, as well as the new backups, are written to the same StoreOnce Catalyst Store and therefore get global deduplication
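
A sketch of the maintenance-mode part of step 9; the cmdlet name is my recollection of the PowerShell reference, so verify it, and in v10 the evacuation itself is started from the console (right-click the extent > Evacuate backups).

Code:

# Put the Data Domain extent of the migration SOBR into maintenance mode.
# NOTE: cmdlet name per my recollection -- verify in your PS reference.
$sobr   = Get-VBRScaleOutBackupRepository -Name "migration-SOBR"
$extent = Get-VBRRepositoryExtent -Repository $sobr | Where-Object { $_.Name -eq "BR-DD" }
Enable-VBRRepositoryExtentMaintenanceMode -Extent $extent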

PART-3
Your production backups are running,
Your migration is complete.
Now you have to expire the old RPs as they go out of retention (it took me a while to find a way).

10) In vCenter, create an empty VM folder, "Empty-folder"
11) Edit the old Job-DD, remove all the VMs, and add "Empty-folder"
12) From time to time, edit Job-DD and reduce the number of RPs in its configuration,
then run the job manually. The job will not back anything up because its VM list is empty, but it will execute its housekeeping and delete the RPs that are no longer within retention. Yes, even if the VM list is empty, the job still remembers its past.
Remember to do the same with GFS if you have it. I have tested GFS as well: I reduced the weekly RPs and, at the next run, the job deleted the expired ones as expected.
Yes, it still requires some manual activity, but I guess it is doable (a sketch of step 12 follows).
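
A sketch of step 12 using the job-options pattern; the RetainCycles property path is my recollection of the job options object, so verify it on your version.

Code:

# Reduce the retention of the old job, then run it so housekeeping
# deletes restore points that fall out of retention.
$job     = Get-VBRJob -Name "Job-DD"
$options = Get-VBRJobOptions -Job $job

# Lower the number of restore points to keep (7 is an example value).
# NOTE: property path per my recollection -- verify on your version.
$options.BackupStorageOptions.RetainCycles = 7
Set-VBRJobOptions -Job $job -Options $options

# Run the job: the empty VM list means no backup, only retention housekeeping.
Start-VBRJob -Job $job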

Most customers do not do a migration; they wait until all the RPs expire from the old storage. Sometimes this is not possible because of longer retention. For those cases, I hope the above process is a valid solution.

Long text... thanks for reading it.
Ping me for more info: Federico.venier@hpe.com

Thanks
Federico
