Hey guys - I have a Veeam VM running 9.5.1038 on Windows Server 2016 with 6 vCPU and 16GB of RAM. I back up about 20 VMs. One VM alone is about 12-14TB; the rest total ~800GB or so, mostly small Linux machines and appliances.
Everything runs great until Saturday, when the job does a synthetic full. The repository storage is very slow - a RAID5 of three 8TB SATA disks - and doesn't have much to offer in the way of IOPS. I'm going to be revising the storage soon, but in the meantime, during the synthetic full the VM hangs: CPU at 100%, can't RDP to it, and so on.
The repository is ReFS on Server 2016 with a 64k block/cluster size. I'm not certain whether ReFS plays any part in this, but I could convert to NTFS if that would be more straightforward. I guess my only other option is to drop synthetic fulls, run active fulls every so often, and accept that they'll take ~24 hours.
Any thoughts?
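(In case the exact layout matters to anyone: this is roughly how the repository volume's file system and cluster size can be checked from PowerShell. R: is just a placeholder for whatever drive letter your repository uses.)

    # Confirm the repository volume is ReFS with a 64k cluster size
    Get-Volume -DriveLetter R |
        Select-Object DriveLetter, FileSystemType, AllocationUnitSize, Size
    # AllocationUnitSize should report 65536 for a 64k cluster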
- 5mall5nail5 (Influencer)
- Tom Sightler (VP, Product Management)
Re: Veeam VM throws up during large Synthetic Full
It's very likely that ReFS is at least part of your issue, as this is a classic symptom. Do you have all of the latest Windows updates installed? More memory might also help mitigate it; with ReFS a good rule of thumb is 1GB of RAM for every 1TB of storage. NTFS is certainly more stable and doesn't have the RAM issue, but without ReFS block cloning a synthetic full has to physically read and rewrite every block, so on storage this slow it might take even longer than an active full.
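If it helps, here is a rough sanity check you can run from PowerShell to compare installed RAM against total ReFS capacity and see how current the box is on updates (just a sketch, adjust for your environment):

    # Rule of thumb: ~1GB of RAM per 1TB of ReFS-formatted capacity
    $ramGB  = [math]::Round((Get-CimInstance Win32_ComputerSystem).TotalPhysicalMemory / 1GB)
    $refsTB = [math]::Round((Get-Volume | Where-Object FileSystemType -eq 'ReFS' |
               Measure-Object -Property Size -Sum).Sum / 1TB, 1)
    "RAM: $ramGB GB, ReFS capacity: $refsTB TB"

    # Most recently installed Windows updates
    Get-HotFix | Sort-Object InstalledOn -Descending |
        Select-Object -First 5 -Property HotFixID, InstalledOn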
- JP (Enthusiast)
Re: Veeam VM throws up during large Synthetic Full
I've been having this problem sporadically for quite some time as well, and I also noticed it seems to be triggered by synthetic fulls. I opened a case on it and didn't get any resolution, but a few things I tried seemed to help. First, I disabled Windows Defender and the problem went away for a while, but then came back. Most recently, even after a hard reset of the server, CPU would go back to 100% as soon as the Veeam services started. I disabled the services, installed the latest Server 2016 cumulative updates, and was then able to start the Veeam services again. However, this did not permanently resolve the problem; it has since happened again. The next thing I'm going to try is disabling inline deduplication, since this thread mentions it as a workaround for a similar issue, for which there may already be a hotfix. I have a new case open [02362914] and I'm also tracking this thread, in which a couple of users have mentioned behavior similar to what we are seeing.
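For anyone who ends up doing the same dance, this is roughly the sequence I mean for parking the Veeam services around the patching (the Veeam* name wildcard and the repository path are assumptions, so check your own environment first):

    # Stop and disable the Veeam services before installing the cumulative update
    Get-Service -Name Veeam* | Stop-Service -Force
    Get-Service -Name Veeam* | ForEach-Object { Set-Service -Name $_.Name -StartupType Disabled }

    # ...install the Server 2016 cumulative update and reboot...

    # Re-enable and start them afterwards
    Get-Service -Name Veeam* | ForEach-Object { Set-Service -Name $_.Name -StartupType Automatic }
    Get-Service -Name Veeam* | Start-Service

    # A softer alternative to turning Defender off entirely: exclude the repository path
    Add-MpPreference -ExclusionPath 'D:\Backups'   # hypothetical repository path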
- Enthusiast
Re: Veeam VM throws up during large Synthetic Full
I was having this issue sporadically, primarily with a backup job of several VMs totalling ~2TB.
I had opened a ticket, but there was no definitive answer; rebooting the backup server (also Server 2016, with 256GB RAM) seemed to clear the problem, so I closed the ticket.
After a while, though, incremental backup times crept upward and synthetic fulls sometimes took half a day.
At the end of last week I also noticed batches of Warning event 37 (Kernel-Processor-Power) on my hosts and the backup server. Reading up on it, I saw suggestions that the Windows Power Plan may be the culprit and can cause unusual and unexpected performance issues on Server 2016 and Windows 10. Sure enough, my Hyper-V hosts and backup server were all running the Balanced power plan with Processor Power Management Minimum and Maximum Processor State set to 100%. We don't tax our servers that much, but I switched to High Performance with the Minimum Processor State set to 50% and the Maximum at 100%. Since then, incrementals and synthetic fulls have been MUCH faster (incrementals that took over an hour are down to ~6 minutes).
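For anyone who would rather make the same change from an elevated prompt instead of the Power Options GUI, this should be roughly equivalent (SCHEME_MIN is the built-in alias for the High performance plan, and these lines only touch the AC settings; verify the aliases on your own box with powercfg /aliases):

    # Check for the Kernel-Processor-Power event 37 warnings
    Get-WinEvent -FilterHashtable @{
        LogName = 'System'; Id = 37
        ProviderName = 'Microsoft-Windows-Kernel-Processor-Power'
    } -MaxEvents 10

    # Switch to High performance and set minimum/maximum processor state to 50%/100%
    powercfg /setactive SCHEME_MIN
    powercfg /setacvalueindex SCHEME_MIN SUB_PROCESSOR PROCTHROTTLEMIN 50
    powercfg /setacvalueindex SCHEME_MIN SUB_PROCESSOR PROCTHROTTLEMAX 100
    powercfg /setactive SCHEME_MIN
    powercfg /getactivescheme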
I want to see it run like this for a month or so before I'm convinced that was the solution, but the change was pretty dramatic, so I wanted to throw it out there.