-
- Service Provider
- Posts: 5
- Liked: 1 time
- Joined: Sep 06, 2019 11:30 am
- Full Name: Martin Veng
- Contact:
Feature Request: rebalancing hardened repository
We have a Veeam issue with rebalancing a Scale-Out Backup Repository with hardened (immutable) extents.
Our setup:
Veeam Backup & Replication v12 (SOBR, Data Locality mode)
12 Linux hardened extents with immutability enabled, and backup jobs with long GFS points enabled.
Several extents are nearly full; others (newer) have plenty of free space.
Problem:
We need to rebalance backup data to relieve the full extents.
“Rebalance” does not move immutable restore points, so little or no data is moved in our case.
“Evacuate Backups” (by putting an extent in Maintenance Mode) copies immutable chains instead of moving them, and it only helps if the extent is fully evacuated. Partial evacuation is not possible, and in this case, we do not want to fully empty the extent.
I request that Veeam find a solution for this.
What options do we see at the moment?
Move VMs from one job into a new job. The new job will write its data to the extents with the most free space. The old data will remain on the original extent, and it will not be possible to delete it from Veeam because it is immutable due to the GFS retention (e.g., 1 year). We would then have to manually remove the immutable flag on the files, which is risky manual work and not a practical way to rebalance data. And if the repository is a Veeam LHR, this is not an option.
- Veeam does not have a tool or script to do this.
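For reference, the manual step mentioned above — clearing the immutable attribute on a Linux hardened repository — would be done with the standard e2fsprogs tools. This is a hedged sketch only: it assumes root shell access on the repository host and a hypothetical extent path `/mnt/extent01/backups`, and it is exactly the risky operation the post warns against, since it defeats the immutability protection.

```shell
#!/bin/sh
# List backup files that still carry the immutable ('i') attribute.
# lsattr prints the attribute flags first, then the file name;
# NF==2 skips lsattr's per-directory header lines.
EXTENT=/mnt/extent01/backups   # hypothetical extent path
lsattr -R "$EXTENT" 2>/dev/null | awk 'NF==2 && $1 ~ /i/ {print $2}'

# Clearing the flag must then be done per file, as root, on the repo host:
#   chattr -i "$EXTENT/JobName/VM.vbk"
# Only after that can the file be deleted outside of Veeam.
```

Note that this has to run locally on the hardened repository itself (that is the whole point of the design), and doing it routinely for rebalancing undermines the protection the repository exists to provide.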
-
- Chief Product Officer
- Posts: 32410
- Liked: 7775 times
- Joined: Jan 01, 2006 1:01 am
- Location: Baar, Switzerland
- Contact:
Re: Feature Request: rebalancing hardened repository
Well, backups on hardened repositories are called immutable for a reason: you cannot do anything with them without local access to the hardened repository, which stops remote attackers in their tracks.
As soon as we add a function to the product that makes backups not immutable for whatever "good" purpose there is, malicious actors with access to the backup server will be able to trigger the same API directly also for "bad" purposes. Therefore unfortunately there will never be a built-in capability in the product to do this.
-
- Service Provider
- Posts: 5
- Liked: 1 time
- Joined: Sep 06, 2019 11:30 am
- Full Name: Martin Veng
- Contact:
Re: Feature Request: rebalancing hardened repository
I understand the security concern around immutability.
But today immutability also applies to GFS restore points, which in our case are kept for up to 10 years. This makes rebalancing almost impossible.
Maybe there could be an option where immutability only applies to the last X days (e.g. 14). Then during rebalancing Veeam would just duplicate those recent restore points, and clean them up automatically once the immutability window expires after e.g. 14 days.
This way immutability is still fully enforced where it matters, but long-term GFS data would not block practical repository management.
-
- Service Provider
- Posts: 58
- Liked: 33 times
- Joined: Nov 23, 2018 12:23 am
- Full Name: Dion Norman
- Contact:
Re: Feature Request: rebalancing hardened repository
We worked around the forced 'GFS backups are made immutable for the entire duration of their retention policy' behavior on a hardened repository by adding a capacity tier to the SOBR and setting it to only move after 999 days (no copy). This made the GFS backups follow only the hardened repository's x-day immutability setting instead of their full lifetime.
We don't keep primary local GFS longer than 2.7 years, so the 999-day move setting on the capacity tier works for us, with the S3 bucket just staying empty. Not sure if setting it via PowerShell would allow a larger number of days (3650+) for the move setting?
Ultimately though it would be highly welcomed to have an additional setting on the hardened repository settings dialog to toggle enabling immutability for the GFS backup lifetime, and if disabled have the backups follow the recent backups immutability period set for the repository. It could be enabled by default, but let us turn it off if required to avoid needing workarounds.
-
- Certified Trainer
- Posts: 1026
- Liked: 448 times
- Joined: Jul 23, 2012 8:16 am
- Full Name: Preben Berg
- Contact:
Re: Feature Request: rebalancing hardened repository
@Gostev If we want to pursue this as a workaround, could you comment on whether this behaviour is even intentional? It almost seems like a bug to me.
[edit] I think it is explained here, so might be OK?