I'm under the impression that "only unique blocks are offloaded" applies to the entire 1:1 SOBR-to-S3-bucket relationship? Or is it limited to within a single backup job?
Example: I have VM A in a backup job, and 1 TB has been offloaded to the object storage bucket. Now, as a test, I create an entirely new backup job and add the very same VM. Will Veeam still scan the index and the SOBR and know that nothing needs to be uploaded (other than tiny changes), or is this still "within job only"? I'm thinking of a case where someone mistakenly puts the same VM in more than one job and we end up paying for wasted space.
If it's within-job only, then I guess I can still have several performance tier extents running jobs and taking fulls when needed, but as long as the backup job itself stays put on my Veeam server, only unique blocks will upload? So, for example, if I got completely new storage and had to start over, I would just make sure the backup job itself stays put and only adjust the "repository" within that job (instead of making an entirely new job)?
David Dunworthy (Expert)
Chief Product Officer (Baar, Switzerland)
Re: sobr storage refreshes and such
There's no global dedupe; offload is forever-incremental within the same backup chain.
If you replace the storage behind the Performance Tier, a SOBR rescan will be required. This process first downloads stubs of the missing backup files from the Capacity Tier; with those in place, the job can continue. No changes to the backup job should be required, since the target SOBR remains the same.
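To make the dedupe scope concrete, here is a minimal sketch (not Veeam's actual code; all names are hypothetical) of the behavior described above: each backup chain keeps its own index of offloaded blocks, so a second job protecting the same VM re-uploads identical blocks because there is no cross-job dedupe.

```python
class BackupChain:
    """Tracks which block hashes this chain has already offloaded.

    Illustrative only: the per-chain index models 'only unique blocks
    are offloaded', scoped to one backup chain, not the whole bucket.
    """
    def __init__(self):
        self.offloaded = set()  # block hashes already in object storage

    def offload(self, blocks):
        """Upload only blocks this chain hasn't offloaded yet; return how many."""
        new = [b for b in blocks if b not in self.offloaded]
        self.offloaded.update(new)
        return len(new)

vm_blocks = ["b1", "b2", "b3"]  # the same VM's data

job_a = BackupChain()
job_b = BackupChain()  # a second job containing the very same VM

print(job_a.offload(vm_blocks))          # 3 -- first offload uploads everything
print(job_a.offload(vm_blocks + ["b4"])) # 1 -- only the changed block
print(job_b.offload(vm_blocks))          # 3 -- re-uploaded: no cross-job dedupe
```

The takeaway matches the answer above: within one chain, incremental runs upload only changes; a duplicate job pays for its own copy.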
David Dunworthy (Expert)
Thank you!
One more clarification: if a SOBR decides to place another full backup on a different extent, since that is still part of the same backup job, the dedupe will still apply and unchanged blocks won't go to object storage, correct?
I know that's true when everything is on the same extent, but I wondered about the cross-extent case.
Chief Product Officer (Baar, Switzerland)
Correct. Keep in mind that at a higher level, SOBR is still a single repository, even if it uses a number of different storage devices behind it. The whole goal of creating SOBR was to virtualize those multiple storage devices into a single storage pool, so that customers don't have to think about how to spread their backups optimally across all the storage devices they have, balance job sizes to ensure each job fits its designated storage device, and so on.
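The "single storage pool" idea above can be sketched as follows. This is a hypothetical illustration, not Veeam's placement logic (Veeam offers its own placement policies); it simply shows that jobs write to one logical repository while the pool picks an extent, here by most free space.

```python
class Extent:
    """One physical storage device behind the pool."""
    def __init__(self, name, free_gb):
        self.name = name
        self.free_gb = free_gb

class ScaleOutRepo:
    """A single logical repository virtualizing several extents."""
    def __init__(self, extents):
        self.extents = extents

    def place(self, size_gb):
        """Place a backup file on the extent with the most free space."""
        best = max(self.extents, key=lambda e: e.free_gb)
        if best.free_gb < size_gb:
            raise RuntimeError("pool is full")
        best.free_gb -= size_gb
        return best.name

sobr = ScaleOutRepo([Extent("ext1", 500), Extent("ext2", 800)])
print(sobr.place(300))  # ext2 -- most free space at the time
print(sobr.place(300))  # ext1 -- the pool rebalances automatically
```

The job only ever addresses `sobr`; which extent a given full lands on is the pool's decision, which is why cross-extent placement doesn't change the job-level dedupe behavior discussed above.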
David Dunworthy (Expert)
Excellent, thanks again.