Hi
I have just had a requirement from the business to implement ransomware protection for our backups. Currently the backups are configured in Veeam 10 to use a SOBR with forever-incremental retention, keeping 30 restore points on the performance tier of our local on-prem SAN. As all backups are currently kept on-prem and only for 30 days, this leaves a big gap for us.
We back up around 200 servers and the current backup size is around 60 TB with dedupe. I am looking to change this to GFS, keeping a monthly full for 12 months, a weekly full for 4 weeks, and 30 restore points with weekly synthetic fulls. I assume this will let me recover from any day in the last 4 weeks and then monthly after that for 1 year. Rough calculations suggest this will be around 250–300 TB of data on-prem, which we have the capacity for. However, it doesn't feel optimal to offload all of this to a capacity tier on AWS S3 Infrequent Access immutable storage.
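As a rough sanity check on that estimate, the GFS retention arithmetic can be sketched as below. Only the 60 TB full size comes from the post; the daily change rate and the block-clone savings factor (how much of each synthetic full is shared with the previous one via ReFS/XFS fast clone) are illustrative assumptions, not measured values:

```python
def estimate_gfs_tb(full_tb, daily_change_tb, daily_points,
                    weekly_fulls, monthly_fulls, clone_savings=0.0):
    """Back-of-envelope on-disk size for a GFS retention scheme, in TB."""
    # Active chain: one full plus the daily incremental restore points.
    active_chain = full_tb + daily_points * daily_change_tb
    # Each retained GFS full nominally costs a full backup, reduced by
    # whatever fraction of blocks fast clone shares with the prior full.
    gfs_full_cost = full_tb * (1 - clone_savings)
    return active_chain + (weekly_fulls + monthly_fulls) * gfs_full_cost

# Illustrative numbers: 60 TB full, ~2% daily change (1.2 TB/day),
# 30 restore points, 4 weeklies, 12 monthlies, 80% block-clone savings.
print(estimate_gfs_tb(60, 1.2, 30, 4, 12, clone_savings=0.8))  # 288.0
```

With those assumed inputs the result lands at 288 TB, inside the 250–300 TB range above; without fast-clone savings (`clone_savings=0`) the same scheme would cost well over 1 PB, which is why the ReFS/XFS point later in the thread matters.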
Am I missing something, or is there a better way to configure this? Since this is to satisfy an audit for ransomware protection and off-site backups, is there a way of keeping one year's worth of data on the performance tier (on-prem) but only 30 days on the AWS S3 storage to minimise the cost? And to reduce copy time and cost as well, would we be better off with forever incremental instead of GFS?
Are there any recommendations or best practices for this?
Thanks
Rich
- Richard Limb, joined Jun 24, 2021
- Hannes Kasparick, Product Manager, Austria, joined Sep 01, 2014
Re: Capacity Tier best practice query
Hello,
and welcome to the forums.
AWS S3 Infrequent Access storage is probably the most expensive option, yes. I covered that in the sticky forum FAQ about API costs: post338749.html#p338749
I would go with v11 and a Hardened Repository on-prem (post402811.html#p402811 and https://www.veeam.com/blog/hardened-rep ... iance.html - for the audit). To optimise cloud costs, I would look at alternative cloud providers. I see no way to configure 30 days in the capacity tier while maintaining 1 year in the performance tier.
Best regards,
Hannes
PS: I also assume that you use ReFS / XFS on-prem to save costs: https://www.veeam.com/blog/advanced-ref ... suite.html