Discussions related to using object storage as a backup target.
JacksonWG
Novice
Posts: 5
Liked: never
Joined: Jul 12, 2016 4:23 pm
Full Name: Jackson Blommaert
Contact:

Merging Split Capacity Tier Repos

Post by JacksonWG »

I'm posting here as a last resort, as the support case I've had open since September has caused more problems than it has solved. Case #05677774

I originally opened a ticket to validate my process for migrating data from one S3 repo to another (new one provides object lock, old one did not). Following the advice of the agent, I replaced the old S3 repo with the new one in my SOBR configuration.

The old S3 repo disappeared entirely from VBR and could not be re-added (or could not be rescanned; I don't remember which). Any restore points that existed exclusively on the old S3 repo were still showing as available, but that went away once I removed any trace of those backups from the configuration and rescanned all my available repos.

I was told the only way I'd be able to get access to the restore points in the old S3 bucket was to set up a new VBR instance and add the repo there. I did that, and now I have ~200-300 of the ~1500 restore points that I want offloaded into the new bucket, but I have the following problems:

* Because this has gone unresolved for so long, I no longer care about the daily, weekly or monthly restore points
* When I download the backups, copy them over, and rescan with my original VBR server, the rescan does not recognize any of them
* The support agent suggested I use the Import Backup tool, manually, for hundreds of backups. The backups are also on an NFS share, and Import only works for servers managed by Veeam

The final solution the support agent offered was to manually upload the restore points to my S3 repo and manually download them when we need them, without using VBR. This more than doubles my cloud storage, as there are many non-unique objects that would be stored in duplicate. It also means I'd have to give my lower-level support staff direct access to the S3 bucket, which I'd really like to avoid. I'd also lose a lot of the assurances about restore point state and integrity that Veeam lets me give management.
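For reference, this is roughly what that manual workflow would look like. It's only a sketch using boto3, with placeholder bucket/path names and an arbitrary 30-day retention; it is not Veeam-aware in any way, every .vbk just goes up as one opaque object:

# Rough sketch of the manual upload workaround (placeholder names only).
from datetime import datetime, timedelta, timezone
from pathlib import Path

import boto3

s3 = boto3.client("s3")
BUCKET = "new-object-lock-bucket"           # placeholder bucket name
SOURCE = Path("/mnt/nfs/exported-backups")  # placeholder NFS export path

for vbk in SOURCE.glob("*.vbk"):
    s3.upload_file(
        str(vbk),
        BUCKET,
        f"manual-offload/{vbk.name}",
        ExtraArgs={
            # Retention would also have to be set per object by hand.
            "ObjectLockMode": "COMPLIANCE",
            "ObjectLockRetainUntilDate": datetime.now(timezone.utc) + timedelta(days=30),
        },
    )

Every restore point goes up as a full, opaque file, which is exactly where the duplicated data (and the doubled storage bill) comes from, and anyone who needs to restore ends up needing raw bucket credentials.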

I think my best solution is to figure out how to generate a .VBM file for the handful of .VBKs I care about, so the rescan can recognize, import, and then offload them, but when I asked about this in my case, the suggestion was ignored. I did find a PowerShell command for generating .VBMs, but it does not have any public-facing documentation.

Any help anyone can provide would be appreciated.
HannesK
Product Manager
Posts: 14322
Liked: 2890 times
Joined: Sep 01, 2014 11:46 am
Full Name: Hannes Kasparick
Location: Austria
Contact:

Re: Merging Split Capacity Tier Repos

Post by HannesK »

Hello,
I removed your support contract number and replaced it with the support case number.
for migrating data from one S3 repo to another (new one provides object lock, old one did not)
That requires downloading and uploading the data again via VBR. There is no direct way to migrate, as the user guide states; downloading with VBR and uploading with VBR is the only supported way. The second thing is that versioning / object lock has to be enabled before the first data is moved to that bucket, and Veeam has to be the one that moves the data into that bucket.
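If you want to verify both settings before VBR moves the first byte, they are visible through the normal S3 API. A minimal sketch with boto3, assuming a placeholder bucket name:

# Check that versioning and object lock are enabled on the new bucket
# before any data is offloaded to it (placeholder bucket name).
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")
BUCKET = "new-object-lock-bucket"  # placeholder

versioning = s3.get_bucket_versioning(Bucket=BUCKET)
print("Versioning:", versioning.get("Status", "Disabled"))

try:
    lock = s3.get_object_lock_configuration(Bucket=BUCKET)
    print("Object lock:", lock["ObjectLockConfiguration"]["ObjectLockEnabled"])
except ClientError:
    print("Object lock: not configured")

Both should report "Enabled" before the first offload, per the requirement above.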
I did find a PowerShell command for generating .VBMs but it does not have any public facing documentation
Correct, that tool has no public documentation because it's not an official tool. It can still be used, and questions about it can be asked in the PowerShell forum. The script should do the job of creating VBM files that can then be imported. I remember its usage as straightforward, so if anything is unclear, I suggest posting the problem directly in the PowerShell forum.

Best regards,
Hannes