-
mamosorre84
- Veeam Vanguard
- Posts: 383
- Liked: 41 times
- Joined: Oct 24, 2016 3:56 pm
- Full Name: Marco Sorrentino
- Location: Ancona - Italy
- Contact:
SOBR Capacity Imported + Map Backup
Hi,
I have an object storage bucket imported as a capacity tier (it was part of a SOBR on an old VBR server).
I want to map all the backups in this repo to a dummy job in order to make them orphaned and apply background retention.
The problem is that I don't see that repo in the list under "map backup", so I suppose I cannot use a capacity tier for this task.
Do I have to put it in a SOBR? Can I create a new repo/performance tier and add both to a new SOBR?
Regards
Marco S.
-
Mildur
- Product Manager
- Posts: 11716
- Liked: 3295 times
- Joined: May 13, 2017 4:51 pm
- Full Name: Fabian K.
- Location: Switzerland
- Contact:
Re: SOBR Capacity Imported + Map Backup
Hi Marco
Performance Tier and Capacity Tier have different data structures on object storage and are not compatible with each other.
Therefore you cannot use the map option.
--> Performance Tier
--> Capacity Tier
To apply background retention, the Capacity Tier bucket must be part of a SOBR.
Keep in mind that the backups still have to be "mapped" to a backup job, or background retention will not remove them from the capacity tier.
Background Retention - Considerations
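To make the "must be mapped" behavior concrete, here is a toy Python model (my own sketch, not Veeam's actual implementation) of why backups that are not mapped to any job are skipped by retention:

```python
from datetime import datetime, timedelta

# Toy model, NOT Veeam's implementation: background retention can only
# prune restore points of backups that are mapped to a job, because the
# job supplies the retention policy; unmapped (imported) backups are skipped.

def background_retention(backups, now):
    """Return the restore points each backup would keep."""
    kept = {}
    for name, info in backups.items():
        points = sorted(info["restore_points"])
        if info["mapped_job"] is None:
            kept[name] = points  # no job -> no policy -> nothing removed
            continue
        cutoff = now - timedelta(days=info["retention_days"])
        kept[name] = [p for p in points if p >= cutoff]
    return kept

now = datetime(2025, 1, 31)
backups = {
    "imported-vm": {  # imported from the old VBR server, never mapped
        "mapped_job": None,
        "retention_days": 7,
        "restore_points": [now - timedelta(days=d) for d in (1, 10, 30)],
    },
    "mapped-vm": {    # mapped to a dummy job with 7-day retention
        "mapped_job": "dummy-job",
        "retention_days": 7,
        "restore_points": [now - timedelta(days=d) for d in (1, 10, 30)],
    },
}
result = background_retention(backups, now)
print(len(result["imported-vm"]))  # 3 - nothing removed without a job
print(len(result["mapped-vm"]))    # 1 - only the 1-day-old point survives
```

The dummy-job trick in this thread exists exactly to flip a backup from the first case into the second.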
"[For backups stored in the capacity tier] Background retention job does not delete capacity tier copies of backup data directly. However, if background retention removes local copies of backups, they may also be marked for removal on the capacity tier. In such a case, cleanup during the next SOBR offloading session will remove them from the capacity tier."
Best,
Fabian
Product Management Analyst @ Veeam Software
-
mamosorre84
- Veeam Vanguard
- Posts: 383
- Liked: 41 times
- Joined: Oct 24, 2016 3:56 pm
- Full Name: Marco Sorrentino
- Location: Ancona - Italy
- Contact:
Re: SOBR Capacity Imported + Map Backup
Hi Fabian,
thank you for your fast reply!
The problem is that I don't have the original performance tier, so I guess that trick won't work anyway, right?
Marco S.
-
Mildur
- Product Manager
- Posts: 11716
- Liked: 3295 times
- Joined: May 13, 2017 4:51 pm
- Full Name: Fabian K.
- Location: Switzerland
- Contact:
Re: SOBR Capacity Imported + Map Backup
Hi Marco
I haven't tested it myself, but can you try:
1.) Right-click the capacity tier object storage and detach the repository. That will remove the metadata of the imported backups from the configuration database.
2.) Create a new repository for your performance tier (it doesn't need backup storage; no backup files will be stored there).
3.) Create a SOBR with the new performance tier and the old capacity tier. The Capacity Tier step will inform you that there are existing backups. Select Ok to import them.
4.) Create a dummy backup job with the necessary retention and map the backups to it.
Best,
Fabian
Product Management Analyst @ Veeam Software
-
mamosorre84
- Veeam Vanguard
- Posts: 383
- Liked: 41 times
- Joined: Oct 24, 2016 3:56 pm
- Full Name: Marco Sorrentino
- Location: Ancona - Italy
- Contact:
Re: SOBR Capacity Imported + Map Backup
Hi Fabian,
the trick works, but only partially: now I see the SOBR in the job wizard, but when I try to map a backup I get an error regarding the chain format (they use the "standard" per-VM chain).
Is it possible to upgrade the format for imported backups? I don't see the option when I select a job.
PS: the old VBR was v12, the new one is a v13 VSA.
Thank you for your patience
Marco S.
-
Mildur
- Product Manager
- Posts: 11716
- Liked: 3295 times
- Joined: May 13, 2017 4:51 pm
- Full Name: Fabian K.
- Location: Switzerland
- Contact:
Re: SOBR Capacity Imported + Map Backup
Hi Marco,
Unfortunately, that is no longer possible, because v13 does not allow you to create new jobs in the old format (legacy per-machine backup format with a single metadata file).
A downgrade, or reconnecting the backup server to this bucket, is also not recommended, because the repository may already contain metadata in the v13 format.
I see the following options:
- Delete the entire bucket at once
- Wait until you no longer need the backups, then delete the bucket
- Download the backups back to the performance tier. Make sure the repository supports Fast Clone, so multiple full backups will not occupy the full space. After that, you can manually remove older chains from disk over time. Keep in mind that a full download will generate API calls and egress traffic, which can be expensive if you use a cloud service with API/egress fees.
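To get a feel for that download cost before committing to the third option, a back-of-the-envelope calculation helps. The sketch below uses placeholder prices and an assumed ~1 MB object size; these are NOT any specific provider's actual rates, so substitute your provider's pricing:

```python
# Rough egress + GET-request cost for pulling backups out of a bucket.
# All rates below are placeholder assumptions, not real provider pricing.

GB = 1024 ** 3

def download_cost(total_bytes, object_size_bytes,
                  egress_per_gb=0.09, get_per_1000=0.0004):
    """Estimate the cost of downloading total_bytes from object storage."""
    egress = (total_bytes / GB) * egress_per_gb
    get_requests = total_bytes / object_size_bytes  # one GET per object
    requests = (get_requests / 1000) * get_per_1000
    return round(egress + requests, 2)

# Example: 10 TB of offloaded backups stored as 1 MB objects
cost = download_cost(10 * 1024 * GB, 1024 ** 2)
print(cost)  # 925.79 with these placeholder rates
```

Even with made-up numbers, the point stands: egress dominates, and the bill scales linearly with the amount of data you pull back down.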
Best,
Fabian
Product Management Analyst @ Veeam Software