-
- Lurker
- Posts: 2
- Liked: never
- Joined: Feb 12, 2019 2:20 am
- Contact:
Veeam backup keeps hitting storage limit
Can Veeam Backup be set up to work this way?
Set an external NAS as the backup location, configured to store 30 days' worth of backups.
Set a hard limit on the storage space, e.g. the NAS has 10 TB, so set Veeam to use 9 TB.
If Veeam can successfully store 30 days' worth of backups, great. If there isn't enough space, remove the oldest backups until there is enough room to keep backing up. So I've set it to 30 days, but say only 25 days fit; keep deleting the oldest days until another daily backup fits within the 9 TB. Obviously it should send warnings that it had to delete backups and couldn't keep all 30 days.
-
- Product Manager
- Posts: 8191
- Liked: 1322 times
- Joined: Feb 08, 2013 3:08 pm
- Full Name: Mike Resseler
- Location: Belgium
- Contact:
Re: Veeam backup keeps hitting storage limit
Hi Matt,
First: Welcome to the forums!
Second: You can use an external NAS as the backup location and then configure one or multiple jobs with a retention of 30 restore points (assuming one restore point per day). We do not have the possibility to set a hard limit on the storage space. However, in your example, one possibility would be to use the SOBR (Scale-out Backup Repository) feature and move data out to a capacity tier (object storage) when your NAS hits the limit, although that might not be what you actually want.
To be honest, I am a bit scared of the approach you propose. In the end, you have probably agreed on an "SLA" with your management, and if you need to keep 30 days, your solution could get you into trouble when there are only 20 days on disk and someone needs a file from 25 days ago.
-
- Product Manager
- Posts: 14839
- Liked: 3086 times
- Joined: Sep 01, 2014 11:46 am
- Full Name: Hannes Kasparick
- Location: Austria
- Contact:
Re: Veeam backup keeps hitting storage limit
Hello,
and welcome to the forums.
A quota for backups like you suggest is not possible out of the box, because it would lead to unpredictable retention, which could, for example, violate compliance rules. Second, we would need to know the backup size in advance, which is technically not possible because we cannot predict data reduction rates.
We send warnings by default (at 10% free space) when backup space is getting low. You could raise the warning threshold to be informed earlier (General Options -> Notifications).
You might be able to build something like that with some PowerShell scripting: reduce the number of restore points for a reverse incremental or forever forward incremental backup chain depending on a scripted "free space" check, and you should be able to achieve something similar.
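A minimal sketch of that idea, assuming the Veeam Backup & Replication PowerShell module is available on the backup server. The job name, drive letter, and thresholds below are placeholders, and the retention property names can differ between product versions, so treat it as a starting point rather than a finished script:

# Sketch only: shrink a job's retention when the repository volume runs low on space.
# Assumes the Veeam Backup & Replication PowerShell module (v11+); older versions
# use the VeeamPSSnapin and some property names differ.
Import-Module Veeam.Backup.PowerShell

$jobName      = "NAS Daily Backup"   # placeholder job name
$repoDrive    = "E"                  # placeholder: drive letter of the repository volume
$minFreeBytes = 1TB                  # e.g. keep 1 TB of a 10 TB NAS free
$minPoints    = 7                    # never shrink retention below this many points

$freeBytes = (Get-PSDrive -Name $repoDrive).Free
$job       = Get-VBRJob -Name $jobName
$options   = Get-VBRJobOptions -Job $job

# RetainCycles is the points-based retention setting (assumption: your job uses
# restore-point retention; newer versions also offer days-based retention).
if ($freeBytes -lt $minFreeBytes -and $options.BackupStorageOptions.RetainCycles -gt $minPoints) {
    $options.BackupStorageOptions.RetainCycles -= 1
    Set-VBRJobOptions -Job $job -Options $options
    Write-Warning "Free space below threshold; retention for '$jobName' reduced to $($options.BackupStorageOptions.RetainCycles) points."
}

Scheduled shortly before the job runs, that gives roughly the behavior described above, at the cost of exactly the unpredictable retention mentioned earlier: the oldest points are only merged or deleted when the job next applies its (now reduced) retention.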
Best regards,
Hannes
-
- Lurker
- Posts: 2
- Liked: never
- Joined: Feb 12, 2019 2:20 am
- Contact:
Re: Veeam backup keeps hitting storage limit
Thanks for the replies. From a data safety/compliance standpoint, I don't agree that when storage space runs out it's better for the backups to error out and just stop working than to produce a warning, reduce the retention, and keep working, but thanks for letting us know Veeam Backup can't do that.
-
- Product Manager
- Posts: 14839
- Liked: 3086 times
- Joined: Sep 01, 2014 11:46 am
- Full Name: Hannes Kasparick
- Location: Austria
- Contact:
Re: Veeam backup keeps hitting storage limit
Hmm, maybe something was lost in translation.
"it's better for the backups to error and just stop working": that is exactly what we do. If there is no space left, the job fails. We don't reduce restore points. We send warnings beforehand so that the customer has a chance to fix it earlier.
-
- Lurker
- Posts: 2
- Liked: never
- Joined: Apr 13, 2023 6:30 pm
- Full Name: Tony
- Contact:
Re: Veeam backup keeps hitting storage limit
Hi,
I waited 4 years, lol, hoping to get a better answer than Matt did above back in 2019!
Did we make any progress on this, Veeam?
I am coming from Windows Server Backup, which is 15 years old and can save 500 backups of an entire server to a 2 TB external hard drive forever without ever running out of space or needing any maintenance.
I am running Veeam Agent for Microsoft Windows 6.0 on more than 10 servers, and today 3 of them just decided to say they were out of space and notify me. I opened a ticket and did a screen share with a first-level tech to look at the issue; he is reaching out to a higher-level tech and we are waiting for their response.
It doesn't look promising! I would be amazed if there is no configuration scenario in Veeam that, like an NVR, consumes the older backups in order to keep running and provide a constant number of backups based on the size of each backup and the size of the destination.
You just can't stop and say "format", then start and stop and say "format" over and over; there is no such thing as infinite destination space in small to medium scenarios! Not realistic!
Best,
Ton
-
- Veeam Software
- Posts: 2123
- Liked: 513 times
- Joined: Jun 28, 2016 12:12 pm
- Contact:
Re: Veeam backup keeps hitting storage limit
Hi @TonyAtlaz,
Maybe I'm not quite getting what your full request is, because Veeam's retention should handle this just fine. It's been a while since I worked with Windows Server Backup, but if I remember right, since Server 2008 it has used more or less the same retention algorithm that Veeam does. Where I guess the difference lies is that it estimates the size of the forthcoming backup and applies retention before the backup runs if there isn't enough space on the target storage.
That behavior (applying retention prior to the actual backup) is not possible with Veeam right now, but from my perspective such a method makes guaranteeing your SLAs and required RPO difficult. Similarly, I imagine it gets more complicated once you consider GFS, which as far as I know Windows Server Backup doesn't really support.
Your comment on "format, start and stop" is not clear to me; can you elaborate? A format shouldn't be required just because you hit disk capacity; typically in support we would recommend removing the newest few backups manually, rescanning the repository, and then adjusting retention accordingly. With Veeam ONE or PowerShell/REST API reporting, capacity planning is fairly straightforward: set the retention on the job according to your capacity needs and then just set and forget (barring any unexpectedly larger backups, which should probably be reported on and investigated, since I'd want to understand where the extra data came from).
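As a rough illustration of the PowerShell reporting mentioned above, something like the sketch below lists free space per standard repository. This is a sketch only: Get-VBRBackupRepository is a real cmdlet, but the CachedFreeSpace/CachedTotalSpace property path is an assumption that varies between versions (older builds expose it under $_.Info instead), and scale-out repositories need Get-VBRScaleOutBackupRepository:

# Sketch: per-repository capacity overview, run on the backup server.
Import-Module Veeam.Backup.PowerShell

Get-VBRBackupRepository | ForEach-Object {
    $container = $_.GetContainer()   # assumption: v11+ object model
    $totalTB   = [math]::Round($container.CachedTotalSpace.InBytes / 1TB, 2)
    $freeTB    = [math]::Round($container.CachedFreeSpace.InBytes  / 1TB, 2)
    $freePct   = if ($totalTB -gt 0) { [math]::Round(100 * $freeTB / $totalTB, 1) } else { 0 }
    [pscustomobject]@{
        Repository = $_.Name
        TotalTB    = $totalTB
        FreeTB     = $freeTB
        FreePct    = $freePct
    }
} | Sort-Object FreePct | Format-Table -AutoSize

Feeding that into whatever monitoring you already run gives you an early warning well before the default low-space threshold mentioned earlier in the thread.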
I guess it might just be different perspectives on backup management here, but incremental backup sizes are usually fairly predictable, and anything that falls outside of that is something I'd want to investigate to understand what introduced the significant changes. For me, once I understand the capacity planning required for the workloads I need to protect, I'd want a scheme that I set up once, knowing what my RPO is, rather than a process that removes restore points before the retention period I defined is up. There's a little extra work in the planning stage, I suppose, but I can rest easier knowing that the RPO I'm aiming for will be met instead of wondering whether a point within my defined RPO will be there or not.
David Domask | Product Management: Principal Analyst
-
- Service Provider
- Posts: 442
- Liked: 80 times
- Joined: Apr 29, 2022 2:41 pm
- Full Name: Tim
- Contact:
Re: Veeam backup keeps hitting storage limit
I can say this is also an issue we have as a service provider.
My scenario is that sometimes the available space fills up, so backups simply stop working.
For many customers, it's preferable to know there's a recent backup than to know for sure there's a backup that's at least X days old. That is, they'd rather know at any time that they can recover recent data than know they can definitely go back 30 days, or however long they have configured. So from that perspective it makes perfect sense to me to at least have an option to delete old backup versions to free up space for a new backup as necessary, rather than the current behavior of just failing and waiting for human intervention.
Most of our cases are backups to a Cloud Connect repository, so to solve the problem we usually just increase the customer's allocated quota so the backups can continue, then contact them to determine whether they want to keep the increased quota and pay more, or reduce their restore points. Still, this requires human intervention for what seems like an easily avoidable task. Plenty of other backup software provides an option to just keep as many versions as space allows, but there seems to be no good way to configure a comparable option in Veeam that I've found anywhere.