- Posts: 5
- Liked: never
- Joined: Feb 18, 2020 6:18 pm
We currently run 9.5 U4 with a SOBR and an on-site StorageGRID (S3) that the SOBR tiers to. We are going to add Wasabi to our environment, using a VTL, for our off-site copies.
We plan to use forever forward incremental, but I think I am overthinking most of this. So:
We plan to keep one month on site with a minimum of 90 days off site (Wasabi) using forever forward. The forever forward job is set to one full per month with daily incrementals; is there a way to keep the same format in Wasabi? I read that I would need to run a virtual full, but then I would have multiple fulls in the cloud.
Current forever forward backup job is 10 TB:
- Day 1: full
- Days 2-30/31: incrementals
So if it does the initial full, then daily incrementals and a virtual full each week, would that come to 40 TB in our cloud?
If so, does anyone have recommendations/suggestions on how to achieve this with just 1 full and daily incrementals in the cloud?
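The arithmetic behind the question can be sketched roughly as follows. The 10 TB full size comes from the thread; the 5% daily change rate is an assumption for illustration only, not a figure from the poster's environment:

```python
# Back-of-the-envelope cloud storage estimate for the scenario above.
FULL_TB = 10.0        # size of one full backup (from the thread)
CHANGE_RATE = 0.05    # assumed fraction of data that changes per day
WEEKS = 4             # roughly one month of weekly virtual fulls

# If every weekly virtual full were uploaded as an independent full:
fulls_only = WEEKS * FULL_TB
print(f"four independent fulls: {fulls_only:.0f} TB")   # -> 40 TB

# Forever forward incremental: one full plus a daily increment.
days = WEEKS * 7
forever_forward = FULL_TB + days * FULL_TB * CHANGE_RATE
print(f"forever forward: {forever_forward:.0f} TB")     # -> 24 TB
```

This is only a sizing sketch; actual consumption depends on compression, deduplication, and the real change rate.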
Thanks in advance
- Veeam Software
- Posts: 166
- Liked: 89 times
- Joined: Jul 24, 2018 8:38 pm
- Full Name: Stephen Firmes
To achieve what you are looking for, you should consider creating a backup copy job that targets the object storage repository.
If/when you upgrade to v10, you can take advantage of our new Copy Mode, which copies backup files to object storage as soon as the backup file is created in the performance tier.
This is where you can find a more detailed explanation of the Copy Mode:
https://helpcenter.veeam.com/docs/backu ... ml?ver=100
This is a snippet from that link:
"Once the backup (or backup copy) job is complete, Veeam Backup & Replication initiates a new copy session which simply extracts data blocks and metadata from each new backup file (.vbk, .vib, .vrb) created on any of the extents of your scale-out backup repository and copies these blocks to object storage, thereby making an identical replica of your backup data."
- Product Manager
- Posts: 20108
- Liked: 2210 times
- Joined: Oct 26, 2012 3:28 pm
- Full Name: Vladimir Eremin
"To achieve what you are looking for you should consider creating a backup copy job which targets the object storage repository."
Actually, you cannot target a backup copy job at an object storage repository directly. What you can do is:
- Create Scale-Out Backup Repository that has object storage repository as its Capacity Tier
- Enable copy policy
- Point the backup copy job at the Scale-Out Backup Repository
This way, you will have the exact same backup set both locally and in object storage.
"So if it does the initial full then daily incrementals and a virtual full each week, that would be 40TB in our cloud?"
No: the Capacity Tier works in a ReFS-repository manner, so already transferred blocks will not be transferred again.
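The behaviour described above can be illustrated with a toy model: blocks already present in object storage are skipped on later uploads, so a synthetic full mostly reuses blocks the cloud already holds. The hashes and block names here are an illustration only, not Veeam's actual metadata structures:

```python
# Toy model of Capacity Tier copy behaviour: only blocks the cloud
# has not seen before are uploaded.
import hashlib

cloud = set()  # hashes of blocks already in object storage

def upload(backup_file_blocks):
    """Upload only unseen blocks; return how many were actually sent."""
    sent = 0
    for block in backup_file_blocks:
        digest = hashlib.sha256(block).hexdigest()
        if digest not in cloud:
            cloud.add(digest)
            sent += 1
    return sent

full = [b"block-%d" % i for i in range(100)]   # initial full: 100 blocks
print(upload(full))                            # all 100 blocks are sent

# A later synthetic full reuses 95 unchanged blocks; only 5 are new.
synthetic_full = full[:95] + [b"changed-%d" % i for i in range(5)]
print(upload(synthetic_full))                  # only the 5 new blocks are sent
```

This is why a weekly virtual full does not multiply cloud consumption the way independent fulls would: the unchanged blocks are already there.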