Hi all,
Apologies if this is covered somewhere; I've been reading lots of articles and forum posts but just can't figure it out.
We currently have Veeam B&R backing up our (mostly virtual) on-prem servers to multiple QNAP NAS devices. Auditors have told us we need an air-gapped solution to protect against ransomware, so we're looking to add S3 storage from Backblaze.
Due to the cost of S3 storage, and because we already have multiple QNAPs, we only want to keep fairly recent backups on S3. We do want GFS, keeping at least 1 year, possibly more, but we want the older restore points on the QNAPs only, not on S3. I'd been looking at daily backups for 2 weeks, keeping Sunday synthetic fulls for a further 8 weeks, then the first Sunday of each month for a year.
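To sanity-check that scheme, I knocked together a quick Python sketch of which restore points would still exist on a given day. It's only my own model of the retention maths (the cutoffs and the rough 31-day month are my assumptions, not Veeam's actual pruning logic):

from datetime import date, timedelta

# My own toy model of the proposed scheme; not Veeam's pruning logic.
DAILY_DAYS = 14        # keep every daily point for 2 weeks
WEEKLY_WEEKS = 8       # keep Sunday fulls for a further 8 weeks
MONTHLY_MONTHS = 12    # keep first-Sunday-of-month fulls for a year

def is_first_sunday(d):
    # Monday == 0, so Sunday == 6; the first Sunday falls on day 1..7
    return d.weekday() == 6 and d.day <= 7

def keep(point, today):
    age = (today - point).days
    if age <= DAILY_DAYS:
        return True                                     # short-term daily
    if point.weekday() == 6 and age <= DAILY_DAYS + WEEKLY_WEEKS * 7:
        return True                                     # weekly GFS (Sundays)
    if is_first_sunday(point) and age <= MONTHLY_MONTHS * 31:
        return True                                     # monthly GFS (rough month)
    return False

today = date(2020, 6, 1)
points = [today - timedelta(days=n) for n in range(400)]
kept = [p for p in points if keep(p, today)]
print(len(kept), "of", len(points), "restore points retained")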
At the moment I've been given no figure for how long the copies on S3 need to stay immutable, but a bit of reading suggests 90 days is the maximum you can set in the GUI. I don't think we'd want to keep anything on S3 beyond that anyway.
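As a back-of-envelope check on how long an object could actually stay locked: from what I've read, Veeam adds a "block generation" margin on top of the configured immutability (10 days, if I've understood the docs; treat that figure as my assumption):

# The 10-day block generation margin is my reading of the docs, not gospel.
configured_immutability_days = 90   # the GUI maximum I've seen mentioned
block_generation_days = 10          # extra margin Veeam reportedly adds
print("worst case lock:", configured_immutability_days + block_generation_days, "days")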
As far as I can tell, the only way to copy the data to S3 is to use a scale-out backup repository (SOBR). I've tested that with no issues, but the settings look like they're designed for the S3 object storage to hold the longer-term data, with the option to move backups to the capacity tier once they reach a certain age. That's the opposite of what I'm trying to achieve, where the older backups are deleted from S3 but kept on-prem.
The only way I can think of to do it is to set up a backup copy job with different retention settings, but that would need a SOBR with its own on-prem storage, meaning we'd have to keep two copies on-prem. At a push I could maybe dedicate one QNAP at our primary site to this, but I'd prefer to avoid that if possible. Or maybe a single short-term job to the SOBR, with a copy job to a QNAP, but I think that still needs two on-prem copies.
Am I missing something obvious? Is the whole plan of keeping older backups on-prem rather than in S3 flawed?
James Allcock (Influencer)
Hannes Kasparick (Product Manager, Veeam)
Re: How to Implement GFS with S3
Hello,
you are asking for the opposite of what the software is designed to do. You could probably do it somehow with helper jobs (you mentioned them), but I recommend using the software the way we designed it.
If you only want to keep 14 daily backups, then 14 days of immutability is the maximum that will work without giving you error messages every day (when the job tries to delete the oldest restore point).
I assume you believe a GFS restore point takes up the full amount of space. Storing data in object storage is probably much cheaper than you think: it's incremental forever. See the sticky FAQ: post338749.html#p338749
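To put rough numbers on the incremental forever point, here is a sketch. Every figure below is invented for illustration (1 TB full, 2% daily change); real change rates, dedup and compression vary a lot:

# Rough comparison: "every GFS point is a full" vs. forever incremental.
# All numbers are made up for illustration only.
FULL_TB = 1.0          # size of one full backup
CHANGE_PER_DAY = 0.02  # assume 2% of blocks change per day

# (days since the previous retained point, number of such points)
schedule = [(1, 14), (7, 8), (30, 12)]   # dailies, weeklies, monthlies

points = sum(n for _, n in schedule)
naive_tb = points * FULL_TB              # if every point were a full copy
incremental_tb = FULL_TB + sum(
    min(FULL_TB, gap * CHANGE_PER_DAY * FULL_TB) * n for gap, n in schedule
)

print(points, "restore points")
print("as independent fulls:  ", round(naive_tb, 1), "TB")
print("as forever incremental:", round(incremental_tb, 1), "TB")

In this toy model a year of GFS costs closer to 10 TB than 34 TB, because each retained point only adds the blocks that changed since the previous one.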
I don't know how many years you store your backups, but Azure Archive tier or AWS Glacier Deep Archive could also turn out to be the cheapest option. In general you are right that Backblaze is much cheaper than AWS / Azure.
Best regards,
Hannes
James Allcock (Influencer)
Re: How to Implement GFS with S3
Thanks Hannes, that all makes perfect sense.
The powers that be decided to purchase more QNAP devices recently, so I think I'm just trying to justify their existence. I'll be able to use this information to explain to management why we have (hopefully only slightly) higher costs for cloud storage.
I did know about forever incremental, but I think I was thrown by seeing that immutability would increase costs. I'm only just getting to grips with ReFS and changed blocks on-prem. Up to now, our on-prem synthetic fulls have each taken the full amount of disk space. I'd been thinking cloud storage would do the same, but I realise now that isn't the case.
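For anyone else who was confused the way I was, this is how I now picture block reuse, with made-up numbers (a simplification of both ReFS block cloning and object storage offload):

# Toy model of block reuse: a synthetic full is mostly pointers to blocks
# that already exist; only changed blocks consume new space. All numbers
# are invented for illustration.
FULL_GB = 1000
CHANGED_GB_PER_WEEK = 50   # new blocks added by each weekly synthetic full

weeks = 10
logical_gb = weeks * FULL_GB                               # what the files claim
physical_gb = FULL_GB + (weeks - 1) * CHANGED_GB_PER_WEEK  # what disk/S3 holds

print("logical size of", weeks, "fulls:", logical_gb, "GB")
print("actually allocated:", physical_gb, "GB")            # 1450, not 10000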