Discussions related to using object storage as a backup target.
steri
Influencer
Posts: 24
Liked: 3 times
Joined: Aug 06, 2015 4:34 pm
Contact:

Scale Out Repository with Object Storage

Post by steri »

Hi @ll,

I have a question about designing a Scale-Out Repo and hope an expert can help me understand this.

My requirements are:
1. Backups (daily incrementals and weekly synthetic fulls) on local on-prem storage with a retention of 14 days only.
2. A copy of these backup files to hot/cool object storage, plus an additional copy to a local dedup storage.
3. Additional retention on the object storage and the local dedup storage of 4 weekly and 24 monthly backups; yearly backups should be placed directly on archive-type object storage.

So I have the requirement to store daily incremental and weekly synthetic full backups on local storage for 14 days. I understand this is my scale-out repo performance tier with data locality.
After an incremental or full is created, it must be copied to object storage. I can set this in the Scale-Out Repo wizard with the capacity tier and the option "copy backups as soon as they are created".
In the backup job I will set the scale-out repo as the primary target with a retention of 14 days, and add a secondary copy job to the dedup storage with a retention of 14 days as well, plus additional GFS of 4 weekly and 24 monthly.
That would cover requirements 1 and 2.
Is this correct?

For requirement 3, I have no idea yet how to configure this.

The backup files must also be synced from the local dedup appliance to a cloud appliance. I can do this with the dedup appliance itself, but can I configure this through Veeam instead of using the dedup appliance?

Please help me with any ideas, and thanks in advance.

Regards Christian
HannesK
Product Manager
Posts: 14301
Liked: 2879 times
Joined: Sep 01, 2014 11:46 am
Full Name: Hannes Kasparick
Location: Austria
Contact:

Re: Scale Out Repository with Object Storage

Post by HannesK »

Hello,
with scale-out repositories, the retention with the "copy" setting is the same on the performance tier and the capacity tier. As of today, there is no way to change that. A copy job is not needed to copy data to object storage; it's just a checkbox.

To have different retention settings for object storage, you need to wait for V12 (planned for 2022). There you will again be able to use backup copy jobs to apply a 14 days / 4 weekly / 24 monthly retention directly to object storage.

If your dedupe appliance can create a 1:1 copy in the cloud, then you could use that functionality today together with the backup copy job, yes. Just make sure that 100% of the data is available on-prem on the dedupe appliance. Another alternative is keeping the full retention on the performance tier as well (directly in the backup job); then the copy on object storage would have the same retention.

Data locality is irrelevant from a retention perspective. But if you use a ReFS / XFS filesystem (recommended), then data locality is strongly recommended so that fast clone can be used.
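(As a side note on why that matters: on XFS, fast clone is built on reflinks, which let a synthetic full share data blocks with the previous full instead of rewriting them. A minimal illustration outside of Veeam, using dummy file names, assuming a reflink-capable filesystem:)

```shell
# Minimal reflink demo with dummy data and hypothetical file names.
# On XFS created with reflink support (mkfs.xfs -m reflink=1), the copy
# below shares data blocks with the source, which is why a synthetic
# full can be built almost instantly without consuming extra space.
echo "backup data" > weekly-full.vbk

# --reflink=auto uses block cloning where the filesystem supports it
# and silently falls back to a regular copy where it does not.
cp --reflink=auto weekly-full.vbk synthetic-full.vbk

cat synthetic-full.vbk
```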

Best regards,
Hannes
