We've recently hopped on board the S3 Object Lock train, and we're running into the fact that S3 backup repositories can't be used without SOBR. In our current configuration, we have a Linux XFS server as our primary backup storage, non-SOBR (currently, but it could easily be converted). Using XFS block cloning (fast clone), we actually converted to "all restore points are fulls" when we moved to v11, as our previous configuration (forward incremental with transform and rollback conversion) was deprecated.
So far, all the methods we've seen that Veeam supports for using S3 come with downsides:
1) Backup copy job to SOBR (S3 capacity tier) - We need 2x our local backup space to hold the "active" backup chain on the SOBR (even if we set the capacity tier "move" setting to 0 days, it will never move the last remaining restore point).
2) Primary backup job to SOBR (S3 capacity tier) - This works somewhat better, but we still end up moving every backup point older than X days (whatever is set) to S3. We only want to move some backups (e.g. 1 per week). Also, this defeats the point of using Object Lock as anti-ransomware protection, as we're only protecting our oldest backups.
Is there any chance that more options for using S3 repositories are coming soon? Something like a system that could copy GFS-tagged restore points to the capacity tier? Or do we basically need to roll our own using PowerShell here?
- Novice
- Posts: 6
- Liked: 1 time
- Joined: Apr 24, 2012 6:50 pm
- Full Name: Will Turner
- Veteran
- Posts: 643
- Liked: 312 times
- Joined: Aug 04, 2019 2:57 pm
- Full Name: Harvey
Re: Using SOBR, copy only GFS restore points to capacity?
Why not just use the Copy option for your short-term chain? I get that you want some space reclamation locally, but with XFS and reflinks you should have ideal space savings.
I don't quite get your second point -- with reflinks and GFS, you'll end up with a copy of the short-term backups immediately, and once they age out, they'll be moved. With reflinks, you're already maximizing the space savings for a given chain, so what's the difficulty?
If you just want to move some backups from the inactive chain, this can be done manually: https://helpcenter.veeam.com/docs/backu ... ml?ver=110
There's even a powershell option for it:
https://helpcenter.veeam.com/docs/backu ... ml?ver=110
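If it helps, here's a minimal sketch of how you might pick out the restore points to move, assuming the v11 PowerShell module and a placeholder backup name; I'm deliberately leaving out the actual move/offload call, since that's the cmdlet documented in the reference above and you should match it to your version:

# Minimal sketch: list full restore points older than 7 days for one backup.
# 'Primary Backup Job' is a placeholder; check property names like Type/CreationTime
# against the PowerShell reference for your version.
Import-Module Veeam.Backup.PowerShell

$backup = Get-VBRBackup -Name 'Primary Backup Job'
Get-VBRRestorePoint -Backup $backup |
    Where-Object { $_.CreationTime -lt (Get-Date).AddDays(-7) -and $_.Type -eq 'Full' } |
    Select-Object Name, CreationTime, Type
# Feed the resulting restore points into the move option documented at the link above.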
As for the GFS tagging question, you can do this today; I see the default behavior as the set-and-forget style, while a PowerShell script for managing backup life-cycles is for power users. For at least a few of my clients, we have a life-cycle script that tracks critical VMs, ensures they get to tape and then to Capacity Tier, and generates a report accordingly. If you need that kind of granularity, I think it's best to roll your own here. Personally, building up a workflow for such a script is pretty simple; with a little PowerShell experience it's a decent afternoon to get a skeleton structure in place, and you can build up the prettiness later.
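To give a rough idea of the shape, here's a hedged skeleton of that kind of life-cycle script (not our production code); the critical-VM list, paths, and the tape/Capacity Tier checks are placeholders you would fill in from the PowerShell reference for your Veeam version:

# Hedged skeleton of a backup life-cycle report script; paths and checks are placeholders.
Import-Module Veeam.Backup.PowerShell   # v11+ module name; older versions used the VeeamPSSnapin snap-in

$criticalVms = Get-Content 'C:\Scripts\critical-vms.txt'   # assumed input: one VM name per line
$report = foreach ($vm in $criticalVms) {
    # Most recent restore point for this VM across all backups
    $latest = Get-VBRRestorePoint -Name $vm |
              Sort-Object CreationTime -Descending |
              Select-Object -First 1

    [pscustomobject]@{
        VM             = $vm
        LatestLocalRP  = $(if ($latest) { $latest.CreationTime } else { 'none' })
        OnTape         = 'TODO'   # placeholder: check your tape backups here
        OnCapacityTier = 'TODO'   # placeholder: check the SOBR capacity extent here
    }
}
$report | Export-Csv -NoTypeInformation -Path 'C:\Reports\backup-lifecycle.csv'

From there it's mostly plumbing: schedule it, alert on anything missing, and make the report pretty.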
If I remember correctly, the VeeamON presentation touched on some improvements for S3.
- Novice
- Posts: 6
- Liked: 1 time
- Joined: Apr 24, 2012 6:50 pm
- Full Name: Will Turner
Re: Using SOBR, copy only GFS restore points to capacity?
Thanks for the info. We'd be fine using PowerShell automation for this if Veeam still handled the old restore point cleanup. Does anyone know if that's the case?
Also, I'm only seeing PowerShell cmdlets for moving data, nothing for copying. Maybe I'm just searching poorly here?
- Product Manager
- Posts: 20415
- Liked: 2302 times
- Joined: Oct 26, 2012 3:28 pm
- Full Name: Vladimir Eremin
Re: Using SOBR, copy only GFS restore points to capacity?
I'm not sure whether I completely got your request, so can you elaborate a bit on why a Scale-Out Backup Repository with the copy policy configured does not meet your requirements? Storage consumption on the Performance Tier should not be a problem here, since you're using an XFS repository; the same applies to the Capacity Tier - sure, the additional restore points (besides GFS) will be copied to object storage, but they should not occupy a lot of space thanks to the forever-incremental nature of the Capacity Tier. Thanks!