Expert | Posts: 127 | Liked: 29 times | Joined: Oct 10, 2014 2:06 pm
Feature request: Per VM backup files with Standard licenses
Hi,
Over the last few months we've run into issues with our main site repository, as well as our remote backup repository, which is used by our copy jobs. We've had some really bad luck: power outages resulting in ReFS going berserk, a hard disk failing in the remote NAS, which for some reason resulted in a few flipped bits and thus backup file corruption, and some other neat stuff. All these corruptions left 2 or 3 (I don't remember exactly) VMs unable to back up because of 'key not found' errors. In the end the advice was to either run an active full, OR delete these specific VMs from the backup files. It seems it's not possible to delete only the backups from the time of the corruption; the only option is to delete ALL backups for a VM.
But both are terrible options for us. A full backup copy takes days, and for our file server backup even weeks to complete on that line. That means days or weeks of NO 'redundancy' in backups.
Deleting the corrupted / unavailable VMs completely from the backups turns out NOT to free up the disk space. So we then ran into disk space issues after the new sync, which of course meant active fulls for those VMs. Result: yet another repository issue because of full disks.
In our main repository we had ReFS issues, again resulting in having to either delete a VM completely from the backup or do an active full. Active full just didn't fit here either (new backup storage hardware has been ordered in the meantime, so the space issue will be covered), and deleting those VMs breaches the SLA we have towards customers.
All these issues could have been much less intrusive to our backups and SLA if we had 'per VM backup files' available. We could have deleted the corrupted backup files, resynced the repository and gone on from there, missing only the corrupted restore points, which were unusable anyway. But our only option was to delete ALL backups for that VM, so it had to be synced all over again.
But alas, we have Veeam Standard licenses. Being the small company we are, we have a limited budget, and we found pretty much nothing in Enterprise that could justify the price for us. In Standard, per-VM backup files are not an option.
But I think they should be. In the end, Veeam is about securing our data. Not all companies have big budgets to spend, and I guess there are lots of people with small lines to a remote site. Corruption just occurs, either because of bad hardware, bad software (ReFS corruptions anyone?) or just bad luck when those are combined. Veeam Backup customers should be able to recover from that as quickly as possible. Because I am not able to use 'per VM backup files', I'm missing about two weeks of copies on my remote repository, just because 2 VMs from that copy job had to be completely resynced. In our main site we have had to remove the affected VM completely, which in the end still did not even free up the space, so the new backup almost filled that array as well.
Veeam always tells us to have remote backups, you know, the 3-2-1 rule. It would help our company tremendously if Veeam enabled the per-VM option in the Standard license. In the end, Veeam should allow and support us to make backups as good and efficient as possible, and secure our data as such. I've just had three cases where our Standard licenses forced us to lose backup functionality. I feel that should never have been the case. Even if we are a small company, our customers' data is no less important than a large enterprise's data.
Hence the feature request: enable per-VM backup files in Veeam Standard.
Veteran | Posts: 7328 | Liked: 781 times | Joined: May 21, 2014 11:03 am | Full Name: Nikita Shestakov | Location: Prague
Re: Feature request: Per VM backup files with Standard licenses
Hello,
thanks for the feature request and the explanation.
A couple of tips.
Quote: "In the end the advice was to either run an active full, OR delete these specific VMs from the backup files. It seems it's not possible to delete only the backups from the time of the corruption; the only option is to delete ALL backups for a VM."
It's recommended to perform an active full periodically, especially if you don't perform recovery verification.
It's not possible to delete a single VM's backup data from a backup that contains several VMs. If you want to stop backing up a VM, remove it from the source in the job configuration and add it to another job; that way you don't need to create an active full for all VMs. The corrupted backup data will then be removed according to the retention policy of the original job.
As for the request: it was not my decision to make per-VM backup files an Enterprise feature, but the logic seems clear to me. If you are short on budget and have a rather small infrastructure, you can create more backup jobs, up to one job per VM. By the way, how many VMs do you have? For enterprises the feature is essential, as they have thousands of VMs.
In any case your feedback is heard and taken into account.
Expert | Posts: 127 | Liked: 29 times | Joined: Oct 10, 2014 2:06 pm
Re: Feature request: Per VM backup files with Standard licenses
Ah yes, the active full on ReFS question. See post237724.html#p237724; I've never had an answer there. Active full is actually my preferred way, but it kills (at least it did) the ReFS magic, as it does not do block cloning. We just don't have enough space for that.
We have only about 30 VMs, but we are also severely limited on storage. As said, I've just ordered a 120TB storage server, which keeps me out of trouble for quite a while. Still, I'm not sure you actually understood what I mean.
Quote: "It's not possible to delete a single VM's backup data from a backup that contains several VMs. If you want to stop backing up a VM, remove it from the source in the job configuration and add it to another job; that way you don't need to create an active full for all VMs. The corrupted backup data will then be removed according to the retention policy of the original job."
I DO want to back up the VM. So I DON'T want to remove it. I ran into 'keyset not found' issues twice for a few VMs. Support told me that if we had per-VM backups, we could have deleted the corrupted VIB files, resynced the repository, and run a backup; it would do an incremental from the point of the last working backup for that VM. But I can't do that now: if I delete a backup file on disk, I delete restore points for ALL VMs. And I can't delete just the corrupted restore points for a given VM, only ALL restore points for that VM. Either way, when we run into this 'keyset not found' issue, I MUST delete either all backups for the affected VM, or restore points for ALL VMs up to the point where the corruption for that single VM occurred, if I still want to back up that same VM. I'm sure you can agree neither is a good starting point. Having backup files per VM would have avoided that. It actually feels a bit like crippling our data security 'on purpose' because we don't have the budget or need for Enterprise features. Being able to properly back up a VM after issues while keeping the backups of the other VMs, OR keeping the restore points that are actually fine for a VM, should not even be considered a feature; it should be standard. Having to do an active full of ALL VMs in a job when ONE VM has an issue is not a clever idea either, in my view. That's just a workaround, not a fix.
As said, we still want to back up that specific VM, so removing it from the backup is not an option either.
While it is possible to create a job for each VM, I'm sure you can agree that's really cumbersome, even if we have only 30 or so VMs.
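Maybe a small toy example makes clearer what I'm asking for. This is pure illustration (made-up VM names and dates, NOT Veeam's real storage format): with per-job files, one restore point is one file shared by all VMs in the job, so dropping a corrupted file costs every VM that point; with per-VM files only the affected VM's chain shrinks.

```python
# Toy model only: NOT Veeam's real storage format, just an illustration of why
# per-VM backup files limit the blast radius of one corrupted file.

vms = ["exchange01", "fileserver01", "sql01"]          # made-up VM names
points = ["2019-01-10", "2019-01-11", "2019-01-12"]    # three restore points

# Per-job layout: one file per restore point, shared by ALL VMs in the job.
per_job = {p: set(vms) for p in points}
# Per-VM layout: an independent chain of files for each VM.
per_vm = {vm: set(points) for vm in vms}

bad_vm, bad_point = "fileserver01", "2019-01-11"       # the corrupted increment

# Per-job files: the only way to drop the corrupted data is to drop the whole
# file, which removes that restore point for every VM in the job.
affected = per_job.pop(bad_point)
print("per-job: point", bad_point, "lost for", sorted(affected))

# Per-VM files: only the affected VM's chain loses the point.
per_vm[bad_vm].discard(bad_point)
print("per-VM:", bad_vm, "keeps", sorted(per_vm[bad_vm]))
print("per-VM: exchange01 keeps", sorted(per_vm["exchange01"]))
```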
Veteran | Posts: 7328 | Liked: 781 times | Joined: May 21, 2014 11:03 am | Full Name: Nikita Shestakov | Location: Prague
Re: Feature request: Per VM backup files with Standard licenses
Quote: "Support told me that if we had per-VM backups, we could have deleted the corrupted VIB files, resynced the repository, and run a backup; it would do an incremental from the point of the last working backup for that VM. But I can't do that now: if I delete a backup file on disk, I delete restore points for ALL VMs."
The support team is correct. What I meant is that instead of deleting the corrupted backups, you can start a new chain for the affected VMs. The existing healthy backups will be kept until they are deleted by retention. You are right, that's a workaround.
I understand that it's much more convenient to manage 3 jobs instead of 30; however, 30 VMs is not a huge number.
Let's see what other people think of that.
Expert | Posts: 127 | Liked: 29 times | Joined: Oct 10, 2014 2:06 pm
Re: Feature request: Per VM backup files with Standard licenses
Quote: "you can start a new chain for the affected VMs. The existing healthy backups will be kept until they are deleted by retention."
How?
Veteran | Posts: 7328 | Liked: 781 times | Joined: May 21, 2014 11:03 am | Full Name: Nikita Shestakov | Location: Prague
Re: Feature request: Per VM backup files with Standard licenses
Create a new backup job or add the VM to another job. It will require an active full of just that one VM.
Expert | Posts: 127 | Liked: 29 times | Joined: Oct 10, 2014 2:06 pm
Re: Feature request: Per VM backup files with Standard licenses
If I did that, and afterwards enabled the VM in the original job again (as we have jobs for specific VMs, like an Exchange job that holds the Exchange servers), would that work? If so, I wonder why support didn't mention that, but instead told me to do an active full of the whole job.
Veteran | Posts: 7328 | Liked: 781 times | Joined: May 21, 2014 11:03 am | Full Name: Nikita Shestakov | Location: Prague
Re: Feature request: Per VM backup files with Standard licenses
Once the corrupted data has been deleted by retention from the initial job's chain, you can add the VM back and it will be backed up with the other VMs.
What backup method are you using, forever forward incremental?
Expert | Posts: 127 | Liked: 29 times | Joined: Oct 10, 2014 2:06 pm
Re: Feature request: Per VM backup files with Standard licenses
Yeah, that's what I thought, and yet again that's not a fix but a workaround, a double one even. So it turns out that if we have Standard licenses without per-VM backup files and one VM goes berserk, your backup plan as you use it in your company cannot be used anymore. Waiting for retention before being able to 'properly' back up a VM in its intended job? That's really not acceptable.
Just off the top of my head I see several options:
- Per VM backup files
- Allow us to delete (or flag for deletion) specific restore points for a specific VM. In the properties of a backup job, an option could be added to flag specific points. A new job could then pick up from the last proper chain. A bit like deleting per-VM files, but from another angle.
Another thing related to this, which I mentioned here post307223.html#p307223, is still valid here. I have to stress again (sorry) that we are a small company and storage IS expensive for us. We don't have petabytes of storage on our remote store. Hence, when something like this happens, we should be able to reclaim the space used by deleted VMs without waiting for the retention time (which could be a LONG time), especially if we more or less MUST do an active full to get things fixed again. I know deleted VMs get removed with new fulls, either active or synthetic, but all that data in the previous files, including the previous fulls, keeps wasting space. Also, sometimes we actually need to delete data (because a customer left us, for example), and according to the GDPR and AVG laws over here, I actually MUST delete data, not just 'flag it as deleted' while the data is still there.
So related to this is a third suggestion:
- Allow 'Defragment and compact' on jobs where active fulls are enabled. It really does make sense for us.
Thanks for looking at this in a serious manner, and I really hope it makes Veeam more viable for small companies like us, who can't follow the 'big enterprise' way that Veeam mostly advocates.
Enthusiast | Posts: 25 | Liked: 2 times | Joined: Apr 25, 2017 1:58 am
Re: Feature request: Per VM backup files with Standard licenses
I'm not sure why per-VM files isn't the only option available. Why does anyone want large per-job files? Yes, per-VM files can be simulated by having one VM per backup job, but that just makes the backup admin's life miserable. An unhappy admin is not likely to recommend the software.
Expert | Posts: 127 | Liked: 29 times | Joined: Oct 10, 2014 2:06 pm
Re: Feature request: Per VM backup files with Standard licenses
Well, as much as I'd like to use per-VM files, there's one thing I'd miss in terms of efficiency, and that's the inline dedupe. Inline dedupe only works at the backup file level (obviously: if you referenced a block in another file and that file got deleted, your backup file would be corrupt). But even though we're always short on disk space, I'd rather have a more reliable backup. So yeah, I think you are right: the only good way is to allow per-VM backups in all Veeam licenses.
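Just to illustrate what I mean by file-level dedupe, here is a generic toy sketch (my own simplification, not Veeam's actual block size, hashing or container format): duplicate blocks inside one backup file are stored once and referenced by hash, and because every reference stays inside the same file, each file remains self-contained and can be deleted or restored on its own.

```python
# Minimal illustration of inline dedupe scoped to a single backup file.
# Simplified and hypothetical: real products use different block sizes,
# hashing schemes and container formats.

import hashlib

BLOCK_SIZE = 4096  # illustrative block size

def dedupe_stream(data: bytes):
    """Split data into blocks; store each unique block once and keep an
    ordered list of hashes so the stream can be rebuilt from this one
    container alone (no references into other files)."""
    blocks = {}   # hash -> block bytes (stored once)
    order = []    # sequence of hashes describing the original stream
    for i in range(0, len(data), BLOCK_SIZE):
        block = data[i:i + BLOCK_SIZE]
        h = hashlib.sha256(block).hexdigest()
        blocks.setdefault(h, block)
        order.append(h)
    return blocks, order

def restore_stream(blocks, order) -> bytes:
    return b"".join(blocks[h] for h in order)

if __name__ == "__main__":
    data = b"A" * 8192 + b"B" * 4096 + b"A" * 4096   # repeated content dedupes well
    blocks, order = dedupe_stream(data)
    print(f"{len(order)} blocks written, {len(blocks)} unique blocks stored")
    assert restore_stream(blocks, order) == data
```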
Veteran | Posts: 7328 | Liked: 781 times | Joined: May 21, 2014 11:03 am | Full Name: Nikita Shestakov | Location: Prague
Re: Feature request: Per VM backup files with Standard licenses
Quote: "I'm not sure why per-VM files isn't the only option available. Why does anyone want large per-job files? Yes, per-VM files can be simulated by having one VM per backup job, but that just makes the backup admin's life miserable. An unhappy admin is not likely to recommend the software."
In some cases it's easier to deal with one backup file than with the several files the per-VM option produces.
And as RGijsen mentioned, deduplication and compression help decrease the size of the backups. However, if you have a dedupe appliance, that's not a benefit.
Enthusiast | Posts: 38 | Liked: 13 times | Joined: Mar 22, 2013 10:35 am
Re: Feature request: Per VM backup files with Standard licenses
Veeam is a tool, and as with all your other equipment you have to use the tool correctly, based on the design you made, which balances pros and cons.
In your case that would be:
- GDPR and customer data -> one backup job per customer. If you need it per VM -> one backup job per VM.
- Active fulls: personally I always schedule one every three months, with four stacked sets (the 1st in Jan/Apr/..., the 2nd in Feb/May/..., etc.) to minimize the capacity impact; see the sketch at the end of this post. If your WAN cannot handle it, then it is up to you to choose whether not to implement active fulls, or to create a realistic procedure. The latter is most often an external USB drive, a local active full and a seeded copy. Having an environment that could never support an active full is just wrong and will eventually come back to bite you, as it did.
- If your capacity does not allow standard operations to reach a new stable condition, then yes, you must implement workarounds, perhaps multiple and you may have to do some serious out-of-the-box thinking.
Budgets, tools and choices, written down in a design. In an ideal world you can influence all three; in reality most likely only the last two, and worst case only the last one.
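To show what I mean by staggering, here is a little bookkeeping sketch. This is just my own interpretation with made-up job names, not anything Veeam configures for you: each job gets an active full every three months (so four fulls per year), but the jobs are split into offset groups so only one group writes new fulls in any given month.

```python
# Sketch of staggered quarterly active fulls (my interpretation, made-up job
# names): every job gets an active full every 3 months, but the groups are
# offset so only a fraction of the jobs write a new full in any single month.

import calendar

jobs = [f"backup-job-{n:02d}" for n in range(1, 10)]   # 9 hypothetical jobs

# Round-robin the jobs into 3 groups: group 0 -> Jan/Apr/Jul/Oct,
# group 1 -> Feb/May/Aug/Nov, group 2 -> Mar/Jun/Sep/Dec.
groups = {g: [j for i, j in enumerate(jobs) if i % 3 == g] for g in range(3)}

def active_full_jobs(month: int):
    """Jobs whose quarterly active full falls in the given month (1-12)."""
    return groups[(month - 1) % 3]

for month in range(1, 13):
    print(f"{calendar.month_abbr[month]}: {', '.join(active_full_jobs(month))}")
```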
Lurker | Posts: 1 | Liked: 2 times | Joined: Jan 15, 2019 10:58 pm | Full Name: CSFG IT Admin
Re: Feature request: Per VM backup files with Standard licenses
Duuuuude! You are describing the exact same problem I had, three times. I called support and got the same responses: "oh, looks like you'll need to do another active full", "oh, you don't have a second disk backup? 3-2-1, bro", "oh, you don't have an entire half a backup repo just sitting around in case you have to start a new chain from scratch?" Please, at any time I have about 10% spare capacity in my repos. Yeah, I've got space in prod, but now I'm getting upset.
Honestly though, we got an incredible deal for our measly 4 sockets (we are academic), but even that was hard to squeeze out of our budget. And you know what: besides this pretty big oops (where 2 VMs out of 30 corrupt one incremental and ruin the entire chain), if Veeam had cost us 5 times what we paid it would still have been worth it. I have used Tivoli and Backup Exec in huge (500+ servers/VMs) deployments as a contractor, and I remember literally thinking to myself that if a company this big puts up with garbage like this, I guess this is (at least close to) as good as it gets...
Anyway, I don't know if it's too late or not, but I wanted to say I fixed it: move the corrupt incremental and the metadata file out of the folder, then rescan the repo (or manually import it?). It'll warn you that the metadata is missing and will do a full scan to rebuild it. You'll lose all the other VMs for that day, but at least you aren't messing with a whole new full from scratch.
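In case it helps anyone, here's roughly what that file shuffle looks like as a script. All paths and file names below are made up for the example, and the rescan itself is still done from the Veeam console afterwards; I just move the files to a quarantine folder instead of deleting them, so I can always put them back.

```python
# Rough sketch of the manual fix described above: move the corrupted increment
# and the backup metadata file out of the repository folder, then rescan the
# repository from the Veeam console so it rebuilds the metadata.
# All paths and file names below are hypothetical examples.

import shutil
from pathlib import Path

repo = Path(r"D:\Backups\Exchange Job")        # hypothetical repository folder
quarantine = Path(r"D:\Backups\_quarantine")   # keep the files instead of deleting them
quarantine.mkdir(parents=True, exist_ok=True)

to_move = [
    repo / "Exchange Job2019-01-11T220015.vib",   # the corrupted increment
    repo / "Exchange Job.vbm",                    # the job metadata file
]

for f in to_move:
    if f.exists():
        print(f"moving {f.name} -> {quarantine}")
        shutil.move(str(f), str(quarantine / f.name))
    else:
        print(f"not found, skipping: {f}")

# After this, rescan the repository in the Veeam console; it will warn that the
# metadata is missing and rebuild it, losing only the moved restore point.
```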
What frustrates me though is this no-per-VM-file business on Standard. I love you guys, but it literally feels like a big shaft. I get all the other Enterprise features; they are all super cool things I wish I could have, but I can make do without them, and they are advanced enough that I understand why they cost extra. But the per-VM files thing? It seems like more effort to put it all in one big file than to chop it up, but then again I'm not a dev.
Andrei
Veteran | Posts: 7328 | Liked: 781 times | Joined: May 21, 2014 11:03 am | Full Name: Nikita Shestakov | Location: Prague
Re: Feature request: Per VM backup files with Standard licenses
@Thomas, all correct. I can only add my 2 cents: if an active full is not an option because of WAN bandwidth, go for SureBackup verification.
@Andrei, thanks for the kind words, your feature request is taken into account.
Expert | Posts: 127 | Liked: 29 times | Joined: Oct 10, 2014 2:06 pm
Re: Feature request: Per VM backup files with Standard licenses
And yet again, after replacing our remote storage, we ran into the same 'key not found' error, for the same VM even. Support still tells me my storage is not working correctly, and I understand that, but I'm finding it hard to believe by now. And again, no way out but to run an active full for another 10 days.
Guys, please really consider this feature; it's basically a requirement for being able to back up properly. All of Thomas' considerations are fine and well, but we just can't all do that. No capacity, neither WAN nor storage.
Quote: "@Thomas, all correct. I can only add my 2 cents: if an active full is not an option because of WAN bandwidth, go for SureBackup verification."
That must be another Enterprise-only feature, as we don't have it. And yet again, that's a great feature, and I understand some features even SHOULD be licensed, but not a feature that I need to make reliable backups in the first place.