Using object storage as a backup target
- Posts: 1
- Liked: never
- Joined: Oct 25, 2021 10:45 pm
- Full Name: chris kilkenny
I've been struggling to find good information on a process, if one exists, to move a VM with local and object storage on a scale-out repository from one backup job to another. I have a client who wants to pull a VM out of a backup job and create a new backup job with a different retention policy. The current job has roughly 30 VMs on it, backing up to a SOBR locally and also offloading to Azure object storage. Veeam seems to keep VMs tied to a backup chain and backup job, so moving them between jobs always seems more complicated than expected. Has anyone had success doing this, and what steps did you follow? Thanks
- Veeam Software
- Posts: 3868
- Liked: 1249 times
- Joined: May 13, 2017 4:51 pm
- Full Name: Fabian K.
- Location: St. Gallen, Switzerland
I have read somewhere here in the forums that moving or deleting restore points in the performance tier will lead to unexpected issues in the offloading process.
If you have configured "per machine backup files", you could move the restore points to a new folder in the performance tier, but I would ask Veeam support whether it is OK to do so, and whether there is a process to follow. If you have multiple extents, for example, you need to have the metadata file on each extent.
Moving "files" to a new job in the capacity tier is not possible. Veeam will upload the entire VM again under the new job.
Product Management Analyst @ Veeam Software
- Product Manager
- Posts: 19694
- Liked: 2095 times
- Joined: Oct 26, 2012 3:28 pm
- Full Name: Vladimir Eremin
The described scenario is not possible at the moment: you cannot add a VM to a new job and reuse the data the original job stores in the Performance and Capacity Tiers.
We are aware of this requirement (moving VMs between existing jobs and repositories) and already have it tracked internally.