-
- Enthusiast
- Posts: 41
- Liked: 9 times
- Joined: Aug 09, 2018 3:22 pm
- Full Name: Stuart
- Location: UK
- Contact:
Large MSCS Cluster backup bloating since V12
Hi forum peeps!
I currently have an issue where an Active Full backup of a Microsoft MSCS cluster with the Veeam Agent has been bloating since the upgrade to B&R V12 and Agent V6 (support case 06013388).
The cluster has 95TB of data on 155TB of provisioned disks. The Veeam job report states it processed 94.4TB with 71.2TB transferred; however, the backup file on the ReFS repo is 258TB (typically we see 72TB on disk). We had a couple of tickets years ago to get everything set correctly to stop it bloating, so I wonder what regression has taken place or which settings have been reset.
Knock-on effects are that it has used most of the spare space on our repos, so I cannot risk another bloated backup filling things up. I also need to get this data air-gapped, so I have had to stop the main backup job running until the tape backup is complete (yes, I can clone the job and do a new active full, but I have no confidence this won't bloat).
Has anyone else seen this issue?
I also welcome hearing how others are backing up large file clusters (we will hopefully be getting on-prem block storage next year).
Thanks
Stu
-
- Product Manager
- Posts: 14759
- Liked: 3044 times
- Joined: Sep 01, 2014 11:46 am
- Full Name: Hannes Kasparick
- Location: Austria
- Contact:
Re: Large MSCS Cluster backup bloating since V12
Hello,
that sounds odd and I asked support to escalate the case.
Best regards,
Hannes
-
- Enthusiast
- Posts: 41
- Liked: 9 times
- Joined: Aug 09, 2018 3:22 pm
- Full Name: Stuart
- Location: UK
- Contact:
Re: Large MSCS Cluster backup bloating since V12
Thanks @HannesK.
Apologies for putting in a second, slightly off-topic question ("I also welcome how others are backing up large file clusters"), which can be ignored; I'll start a new topic for that at a later date (I can't seem to edit the original post).
Thanks, Stu.
-
- Product Manager
- Posts: 14759
- Liked: 3044 times
- Joined: Sep 01, 2014 11:46 am
- Full Name: Hannes Kasparick
- Location: Austria
- Contact:
Re: Large MSCS Cluster backup bloating since V12
Hello,
no worries, I skipped that question because the way you do it is fine and the problem should be fixed.
There were other challenges with clusters and agent-based backup that are solved in V12 (backup copy jobs bloated the backup size). Some customers also do NAS backup, but costs usually prevent that.
Best regards,
Hannes
PS: yes, one can only edit a few minutes after posting
-
- Enthusiast
- Posts: 41
- Liked: 9 times
- Joined: Aug 09, 2018 3:22 pm
- Full Name: Stuart
- Location: UK
- Contact:
Re: Large MSCS Cluster backup bloating since V12
Thanks for that info, Hannes. My previous ticket (Case 05911589) was about a copy job of this job bloating on V11, with the copy of 72TB bloating to a projected 384TB (it was 33% complete when I stopped it and the file was already 128TB). A copy to an immutable repo is a quick way for us to get a virtual air gap in addition to a tape backup. I was basically told this was expected due to "undeduplication", but my point about how the copy could be larger than the original source data was never satisfactorily addressed, so it is really good to hear this may now be fixed.
Thanks, Stu.
-
- Enthusiast
- Posts: 41
- Liked: 9 times
- Joined: Aug 09, 2018 3:22 pm
- Full Name: Stuart
- Location: UK
- Contact:
Re: Large MSCS Cluster backup bloating since V12
I've been asked by support to create an undocumented registry key on the repository servers (which was not required before V12, when backups were fine). I'm testing this on a smaller MSCS backup job to see if it works. I ran this smaller job last night and it also resulted in a bloated backup (25TB became 59TB), so the results should show whether these changes will work for the larger job (or at least give me some hope).
-
- Enthusiast
- Posts: 34
- Liked: 9 times
- Joined: Nov 23, 2011 11:18 pm
- Full Name: Cristianno Cumer
- Contact:
[MERGED] Windows Cluster backup - backup used twice the size of the disks
Hello,
I'm backing up a Windows Failover Cluster with the Veeam Agent. I have noticed that the actual backup size is double the space used by the shared disks, as if the backup agent is performing the backup twice, once for each node. Is this the expected behaviour?
I have also opened an SR (06040961) regarding this issue, but I was wondering if I'm the only one with this kind of problem.
-
- Product Manager
- Posts: 14759
- Liked: 3044 times
- Joined: Sep 01, 2014 11:46 am
- Full Name: Hannes Kasparick
- Location: Austria
- Contact:
Re: Large MSCS Cluster backup bloating since V12
Hello,
no, that's not expected. I merged your question to a similar topic.
Backups should never double the space. Backup copy jobs doubled the space until V11 and should not do that in V12 anymore.
Best regards,
Hannes
-
- Enthusiast
- Posts: 41
- Liked: 9 times
- Joined: Aug 09, 2018 3:22 pm
- Full Name: Stuart
- Location: UK
- Contact:
Re: Large MSCS Cluster backup bloating since V12
An update... The registry entry supplied by support did not fix my issue; it did, however, reduce it somewhat, from 258TB to 167TB. I still do not know the root cause.
What fixed it for me was applying the v12 patch version 12.0.0.1420 P20230412 to B&R and the Windows Agents Upgraded to 6.0.2.1090, which resulted in my normal backup size of 72TB. After the patch, I also recreated the backup job from scratch, which may or may not have been required.
I have today submitted logs from the job after the patch, so hopefully support will be able to spot what caused this issue.
-
- Enthusiast
- Posts: 34
- Liked: 9 times
- Joined: Nov 23, 2011 11:18 pm
- Full Name: Cristianno Cumer
- Contact:
Re: Large MSCS Cluster backup bloating since V12
Hi Stupots,
Is it still working for you? I have applied the patch, but my issue persists.
Kind regards
Cristiano
-
- Enthusiast
- Posts: 41
- Liked: 9 times
- Joined: Aug 09, 2018 3:22 pm
- Full Name: Stuart
- Location: UK
- Contact:
Re: Large MSCS Cluster backup bloating since V12
It's still OK at the moment.
You could try recreating the backup job. I did that off my own bat, just in case there was something hidden in the v11 job that was causing an issue with v12. There is nothing I can find to suggest that is the case, but it is worth trying as it only takes a few minutes. I didn't recreate the Cluster Application Group, just the job.
The ticket is still open, so I still don't know the root cause. Good luck.
-
- Enthusiast
- Posts: 34
- Liked: 9 times
- Joined: Nov 23, 2011 11:18 pm
- Full Name: Cristianno Cumer
- Contact:
Re: Large MSCS Cluster backup bloating since V12
Hi, by re-creating do you mean running an active full, or creating a totally new backup? I guess the second one. I will keep this in mind; meanwhile I will wait for further feedback from support. Generating another 70TB of duplicated data could put my repository under stress...
Thanks!
-
- Enthusiast
- Posts: 34
- Liked: 9 times
- Joined: Nov 23, 2011 11:18 pm
- Full Name: Cristianno Cumer
- Contact:
Re: Large MSCS Cluster backup bloating since V12
Update:
I got a call from Veeam Support. The base issue is that after a given number of blocks, deduplication no longer takes effect. There are two workarounds:
- increase the limit (to a supported value)
- increase the block size of the backup
On the other hand, I got confirmation that backup copy jobs will suffer bloat anyway.
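To get a feel for why an index limit bites at this scale, here is some rough back-of-the-envelope arithmetic. The block sizes below are illustrative assumptions, not Veeam's actual defaults, and the point is only the order of magnitude:

```shell
# Rough arithmetic behind the two workarounds. The block sizes are
# illustrative assumptions, not Veeam's actual defaults or limits.
file_size_tb=80
for block_size_mb in 1 4; do
    # number of blocks the dedupe index would have to track
    blocks=$(( file_size_tb * 1024 * 1024 / block_size_mb ))
    echo "${block_size_mb} MB blocks -> ${blocks} dedupe index entries"
done
# Larger blocks mean fewer index entries, so a fixed index limit is hit
# later (at the cost of coarser deduplication).
```

At tens of millions of entries it is at least plausible that an internal index limit gets exceeded, which would match the "after a given amount of blocks" explanation above.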
-
- Product Manager
- Posts: 14759
- Liked: 3044 times
- Joined: Sep 01, 2014 11:46 am
- Full Name: Hannes Kasparick
- Location: Austria
- Contact:
Re: Large MSCS Cluster backup bloating since V12
Hello,
yes, I remember limits around block size.
backup job
backup copy job
Best regards,
Hannes
EDIT: that is correct for the network traffic. On the repository side, it will be deduplicated. Something is wrong if backup copy jobs double the backup size; that's V11 behavior, not V12.
-
- Enthusiast
- Posts: 34
- Liked: 9 times
- Joined: Nov 23, 2011 11:18 pm
- Full Name: Cristianno Cumer
- Contact:
Re: Large MSCS Cluster backup bloating since V12
@HannesK, yes, you are right; the bloating is also resolved for the copy jobs in v12, both with FS-based and object-based repositories.
-
- Product Manager
- Posts: 14759
- Liked: 3044 times
- Joined: Sep 01, 2014 11:46 am
- Full Name: Hannes Kasparick
- Location: Austria
- Contact:
Re: Large MSCS Cluster backup bloating since V12
it was also solved for virtual synthetic fulls for tape. If you see something else, I need the support case number, please.
-
- Enthusiast
- Posts: 41
- Liked: 9 times
- Joined: Aug 09, 2018 3:22 pm
- Full Name: Stuart
- Location: UK
- Contact:
Re: Large MSCS Cluster backup bloating since V12
We got our second monthly full in the bag yesterday and it is once again a normal size; I fed this info back to our Veeam senior support specialist. They also let me know yesterday that they have found the root cause and that it will be fixed in the next release, but there will not be a hot-fix available to resolve the issue before then. They were not able to give me more details about the cause than that.
As I stated above, I suspect a combination of upgrading to the latest patch level and recreating the job again fixed things for us.
-
- Enthusiast
- Posts: 41
- Liked: 9 times
- Joined: Aug 09, 2018 3:22 pm
- Full Name: Stuart
- Location: UK
- Contact:
Re: Large MSCS Cluster backup bloating since V12
I have just opened a ticket (06122073) for the copy still bloating (the main monthly active full is fine). It looked to be going well: it got to 99% when viewed in the "Jobs > Backup" tab and was about the correct size on disk of around 70TB... Later I noticed it was still stuck at 99%. I then looked in the "Last 24 Hours > Running" tab, which showed 99% for the main job but 27% in the details. Bearing in mind the copy backup file had reached 81.6TB by then, that would mean a copy backup size of 302TB at 100% and 8-9 days to complete (it had already been running for 39 hours for 27% processed!).
@HannesK would it be best for me to open a new thread to discuss this?
Thanks, Stu.
-
- Product Manager
- Posts: 14759
- Liked: 3044 times
- Joined: Sep 01, 2014 11:46 am
- Full Name: Hannes Kasparick
- Location: Austria
- Contact:
Re: Large MSCS Cluster backup bloating since V12
Hello,
the new support case is fine here. Backup copy jobs in V12 must not bloat backup files at the target. If yours does something else, then support needs to investigate and escalate to R&D if needed.
The only thing that is bloating is network traffic (documented here).
Best regards,
Hannes
-
- Enthusiast
- Posts: 41
- Liked: 9 times
- Joined: Aug 09, 2018 3:22 pm
- Full Name: Stuart
- Location: UK
- Contact:
Re: Large MSCS Cluster backup bloating since V12
Thanks, they said it appears to be "fully related" to the copy job, so have passed it through to the VBR team.
A bit more info on my job below, which I aborted at 27% (based on that rate, it would have grown to 302TB and taken about 145 hours).
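For what it's worth, the 302TB and roughly 145-hour figures are just linear extrapolation from the point where the job was aborted (81.6TB on disk and 39 hours elapsed at 27% processed):

```shell
# Linear extrapolation from the aborted copy job, using the figures
# quoted in the post: 81.6 TB and 39 h elapsed at 27% processed.
awk 'BEGIN {
    size  = 81.6 / 0.27   # projected final size, TB
    hours = 39   / 0.27   # projected total runtime, h
    printf "~%.0f TB, ~%.0f h\n", size, hours
}'
```

This assumes the job would keep growing at a constant rate, which is a simplification, but it shows the scale of the problem.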
-
- Product Manager
- Posts: 14759
- Liked: 3044 times
- Joined: Sep 01, 2014 11:46 am
- Full Name: Hannes Kasparick
- Location: Austria
- Contact:
Re: Large MSCS Cluster backup bloating since V12
it looks wrong to me, but support needs to find the root cause.
point is: if backups bloat in V12 with backup copy jobs, then it's a bug (not a feature)
backup copy jobs don't care whether the backup job was an active/synthetic full or an incremental backup. Backup copy jobs only copy the data that is needed and create an incremental restore point on the backup copy target (except when GFS is configured on the backup copy job).
-
- Enthusiast
- Posts: 41
- Liked: 9 times
- Joined: Aug 09, 2018 3:22 pm
- Full Name: Stuart
- Location: UK
- Contact:
Re: Large MSCS Cluster backup bloating since V12
The bloated copy backup issue has not been resolved.
Support told me "Giving the size of the file, deduplication may become less effective or even not applicable and with a 80tb file, finding identical blocks becomes increasingly unlikely. In your situation, an Active full backup will be needed to address this issue".
I am running the copy job directly after the active full... I'm struggling to get my head around why an active full can be 72TB while the copy backup was estimated at 300TB (when there is only 95TB of data on the source cluster).
Can we please officially call this a bug? Or can someone please write me a dummies' guide to copy backups?
-
- Product Manager
- Posts: 14759
- Liked: 3044 times
- Joined: Sep 01, 2014 11:46 am
- Full Name: Hannes Kasparick
- Location: Austria
- Contact:
Re: Large MSCS Cluster backup bloating since V12
Hello,
please keep the case open. There is no bug confirmation or solution in the case. I have asked support to check, because the engineer's answer makes no sense to me right now.
Best regards,
Hannes
-
- Service Provider
- Posts: 442
- Liked: 79 times
- Joined: Apr 29, 2022 2:41 pm
- Full Name: Tim
- Contact:
Re: Large MSCS Cluster backup bloating since V12
It may be unrelated, but I recently had a case where the backup file being fragmented on the repository caused both significant bloating of the incremental backup files (about a 30%-50% increase) and a very significant increase in data transferred to check for changes (about 40x). This was with a simple Mac Agent backup to a Cloud Connect repository. The issue was determined to be a flaw in the Mac Agent itself, but it makes me wonder if your case could similarly be related to your backup file or files being significantly fragmented on the repository.
The end solution was just to redo a full backup periodically as needed until the Mac Agent gets fixed, but based on the explanation, defragmenting the file could also have solved the issue.
I'm not entirely sure what fragmentation would end up looking like with such a large file on a deduplicated file system, but it could be worth at least checking the level of fragmentation of the affected file on the repository to see if it seems high.
Also, this could be entirely unrelated, but I thought I'd suggest it even if it's unlikely, since the issue has been going on for a while now. I've been watching this from the beginning, at first thinking it might be related to some of our file bloating issues, but it turned out not to be.
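On a Linux repo, one quick way to check fragmentation is `filefrag` (from e2fsprogs; it reports extents on XFS too). A sketch, using a throwaway temp file for illustration; on a real repository you would point it at the .vbk file instead:

```shell
# Count extents of a file as a rough fragmentation measure. A huge
# extent count relative to file size suggests heavy fragmentation.
# Temp file used here only so the snippet is self-contained.
f=$(mktemp)
dd if=/dev/zero of="$f" bs=1M count=8 status=none
if command -v filefrag >/dev/null 2>&1; then
    filefrag "$f"    # prints "<path>: N extents found"
else
    echo "filefrag not installed (package e2fsprogs)"
fi
rm -f "$f"
```

Note that on ReFS with block cloning, a high extent count can be normal for synthetic fulls, so treat the number as a hint rather than proof.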
-
- Enthusiast
- Posts: 41
- Liked: 9 times
- Joined: Aug 09, 2018 3:22 pm
- Full Name: Stuart
- Location: UK
- Contact:
Re: Large MSCS Cluster backup bloating since V12
This has been resolved. Support asked if I'd added an undocumented entry to a config file on the Linux repo server; of course I hadn't, as it was undocumented... "DedupeIndexLimit".
Creating a VeeamAgentConfig file in /etc/ and assigning a value to DedupeIndexLimit fixed my issue. After an initial 72TB full backup copy, which took 6.5 days, it now creates a "full" 72TB backup in about 20 minutes using XFS block-clone cleverness.
I have deliberately not documented the value, as it can negatively affect your backup server if it doesn't have sufficient memory (ours has 512GB). Feel free to raise a support case and quote my case number 06122073 as a potential fix if your copy backup is bloating over and above your standard full backup.
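For anyone landing here later, the change is just a one-line entry in a config file on the Linux repo server. A sketch only: the actual limit value is deliberately omitted (get it from support, case 06122073), VALUE_FROM_SUPPORT is a placeholder, and the plain key=value format is my assumption. The snippet writes to a temp dir so it is safe to run; on the repo server the file is /etc/VeeamAgentConfig:

```shell
# Sketch: the DedupeIndexLimit value is intentionally undocumented, so
# VALUE_FROM_SUPPORT is a placeholder. Writing to a temp dir here; on
# the repository server the target path would be /etc/VeeamAgentConfig.
dir=$(mktemp -d)
printf '%s\n' 'DedupeIndexLimit=VALUE_FROM_SUPPORT' > "$dir/VeeamAgentConfig"
cat "$dir/VeeamAgentConfig"
```

As noted above, a higher index limit means more memory used on the repo server, so size it against available RAM before applying.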
-
- Product Manager
- Posts: 14759
- Liked: 3044 times
- Joined: Sep 01, 2014 11:46 am
- Full Name: Hannes Kasparick
- Location: Austria
- Contact:
Re: Large MSCS Cluster backup bloating since V12
Hello,
thanks for your patience. I also discussed the workarounds and the root cause with the team and we need to improve the software to avoid such things in future.
Best regards,
Hannes