-
- Veteran
- Posts: 262
- Liked: never
- Joined: Jul 21, 2009 3:19 pm
- Full Name: Darhl
- Location: Pacific Northwest
- Contact:
Memory usage on backup proxy server
Over the last year or so, I've been paying attention to the memory usage on my Veeam server, which is a VM. I remember reading somewhere that it will take as much RAM as you're willing to give it, but lately it has been using up all the RAM, making the server nearly unresponsive. I've also noticed that it uses the Windows swap file a lot. When I saw how much it was paging, I decided to add more RAM and bumped it from 8 GiB to 12 GiB. It still gobbles up all the RAM, making the machine crawl during backups, and it still pages a ton.
I also have jobs that sometimes slow to a painful crawl; in fact, jobs from last night that typically finish by 5:30am or so are still running. Although I think that may be related to the new Drobo B800i that I put in as a disk target for the backups. It seems like it can take a single write stream OK, but throw multiple write streams at it and performance dies for some reason.
Screenshot of it being relatively unresponsive:
Screenshot of poor performance on a specific job:
Screenshot of typical performance on the same job:
Screenshot showing Memory usage:
Screenshot showing Pagefile usage:
For every expert there is an equal and opposite expert - Arthur C Clarke's Fourth Law
-
- Chief Product Officer
- Posts: 31812
- Liked: 7302 times
- Joined: Jan 01, 2006 1:01 am
- Location: Baar, Switzerland
- Contact:
Re: Memory performance
Could we see the Processes tab with processes sorted by memory usage (to understand which processes eat so much RAM), as well as the Performance tab for physical memory stats?
-
- Veteran
- Posts: 262
- Liked: never
- Joined: Jul 21, 2009 3:19 pm
- Full Name: Darhl
- Location: Pacific Northwest
- Contact:
Re: Memory performance
The jobs are finally catching up, so there are only two running right now and memory use is down to 50%. What's strange to me is that none of the processes show that much apparent memory usage, yet something is gobbling up the memory, and when I look at the memory graphs, it's always tied to when Veeam is processing backup jobs.
For every expert there is an equal and opposite expert - Arthur C Clarke's Fourth Law
-
- Chief Product Officer
- Posts: 31812
- Liked: 7302 times
- Joined: Jan 01, 2006 1:01 am
- Location: Baar, Switzerland
- Contact:
Re: Memory performance
So, it looks like you have 6 GB of physical memory available... I cannot really see any memory usage problems above. There is still 6 GB to go before the system will start to swap. All Veeam processes look good as well, using only about 500 MB collectively.
-
- Veteran
- Posts: 262
- Liked: never
- Joined: Jul 21, 2009 3:19 pm
- Full Name: Darhl
- Location: Pacific Northwest
- Contact:
Re: Memory performance
That's how it appears. I'll take screenshots tonight when more jobs are running and memory is capped out.
Does Veeam use up all the available memory even though it doesn't really show that in Task Manager?
For every expert there is an equal and opposite expert - Arthur C Clarke's Fourth Law
-
- Chief Product Officer
- Posts: 31812
- Liked: 7302 times
- Joined: Jan 01, 2006 1:01 am
- Location: Baar, Switzerland
- Contact:
Re: Memory performance
withanh wrote: Does Veeam use up all the available memory even though it doesn't really show that in Task Manager?
Sounds like a conspiracy theory! No, the only way to get memory from the OS is to request it within a process. The OS then grants and tracks the memory pool of each process it runs, and you can see the readings in Task Manager.
Software can sometimes "leak" memory because of bugs (it keeps requesting memory but never releases it). In that case, you will see the memory usage of the specific process keep inflating. This is why I asked for a Task Manager processes screenshot sorted by memory usage.
-
- Veteran
- Posts: 262
- Liked: never
- Joined: Jul 21, 2009 3:19 pm
- Full Name: Darhl
- Location: Pacific Northwest
- Contact:
Re: Memory performance
It is a conspiracy!
I'll take some screenshots when Veeam is finished later today and shows low memory usage, then tonight when Veeam is cranking on backup jobs with memory maxed out.
For every expert there is an equal and opposite expert - Arthur C Clarke's Fourth Law
-
- Chief Product Officer
- Posts: 31812
- Liked: 7302 times
- Joined: Jan 01, 2006 1:01 am
- Location: Baar, Switzerland
- Contact:
Re: Memory performance
What uses up all physical memory is the system cache (see "Cached" memory). This is how Windows memory management works, and it makes perfect sense - why not put all the physical memory the system has to use at all times? But of course, as soon as the OS needs more memory to give to a process, it simply takes it back from the system cache. So it is perfectly normal to have no "Free" memory on a Windows computer that has been running for a while - no matter how much physical RAM you throw at it.
-
- Veteran
- Posts: 262
- Liked: never
- Joined: Jul 21, 2009 3:19 pm
- Full Name: Darhl
- Location: Pacific Northwest
- Contact:
Re: Memory performance
Understood, but this machine was only booted a week ago. Also, last night with 4 Veeam jobs running concurrently the memory usage was at 99%; now with two jobs running it is at 50%. In fact, in the very first screenshot (I wish I had grabbed all of Task Manager in it) the memory was at 99%.
For every expert there is an equal and opposite expert - Arthur C Clarke's Fourth Law
-
- Chief Product Officer
- Posts: 31812
- Liked: 7302 times
- Joined: Jan 01, 2006 1:01 am
- Location: Baar, Switzerland
- Contact:
Re: Memory performance
withanh wrote: Understood, but this machine was only booted a week ago.
You can fill up your system cache in less than a minute after a reboot; just do some I/O (copy different files back and forth) and Windows will cache them.
-
- Veteran
- Posts: 262
- Liked: never
- Joined: Jul 21, 2009 3:19 pm
- Full Name: Darhl
- Location: Pacific Northwest
- Contact:
Re: Memory performance
OK, I can get on board with that, but can you then explain why the memory was at 99%, and then after the Veeam jobs finished it's at 50%?
For every expert there is an equal and opposite expert - Arthur C Clarke's Fourth Law
-
- Veteran
- Posts: 262
- Liked: never
- Joined: Jul 21, 2009 3:19 pm
- Full Name: Darhl
- Location: Pacific Northwest
- Contact:
Re: Memory performance
For every expert there is an equal and opposite expert - Arthur C Clarke's Fourth Law
-
- Veteran
- Posts: 262
- Liked: never
- Joined: Jul 21, 2009 3:19 pm
- Full Name: Darhl
- Location: Pacific Northwest
- Contact:
Re: Memory performance
Looks like the forum is trimming off the right edge of the screenshots. If you click on the screenshot it will open the Flickr image and you can see the memory %. This is true for the first screenshot as well where it shows 99% memory used up.
h
*Edited the screenshot link so it shows the whole picture*
For every expert there is an equal and opposite expert - Arthur C Clarke's Fourth Law
-
- Veteran
- Posts: 262
- Liked: never
- Joined: Jul 21, 2009 3:19 pm
- Full Name: Darhl
- Location: Pacific Northwest
- Contact:
Re: Memory performance
I remain convinced it is related somehow to Veeam. I just launched a backup job and the memory usage climbed way up again.
For every expert there is an equal and opposite expert - Arthur C Clarke's Fourth Law
-
- Chief Product Officer
- Posts: 31812
- Liked: 7302 times
- Joined: Jan 01, 2006 1:01 am
- Location: Baar, Switzerland
- Contact:
Re: Memory performance
Of course it is related. Starting each backup job spawns additional Veeam agent processes, and those need memory (you can see them in Task Manager). Additionally, heavy write I/O further increases the memory pressure on the system (cached writes), which in turn increases "Cached" memory and decreases "Available" memory compared to when no jobs are running.
But again, the interesting part is the Processes tab sorted by memory usage, to see the real memory consumption numbers per process. The other metrics mean very little; they just give you an idea of how efficiently Windows is putting all the physical memory available on the system to use... and nothing more.
-
- Veteran
- Posts: 262
- Liked: never
- Joined: Jul 21, 2009 3:19 pm
- Full Name: Darhl
- Location: Pacific Northwest
- Contact:
Re: Memory performance
Sure, I saw that, and the Veeam processes show that they're using ~200 MiB to ~300 MiB per process. So you're saying all the other used-up memory is in the cache?
Do you know of any tools/utilities that can show me how much RAM the cache is using? Or, even more importantly, control it so the OS still has some room to breathe and doesn't get crippled by the cache chewing up all the available RAM?
For every expert there is an equal and opposite expert - Arthur C Clarke's Fourth Law
-
- VP, Product Management
- Posts: 6035
- Liked: 2860 times
- Joined: Jun 05, 2009 12:57 pm
- Full Name: Tom Sightler
- Contact:
Re: Memory performance
The memory displayed by default in the Windows process list is the "Private Working Set" for the process, which is not very useful in determining the amount of memory a process is actually using, since it does not include any "shared memory". Shared memory mostly consists of memory-mapped I/O, but this can vary from application to application (for example, when using virtualization products like VMware Workstation or VirtualBox, the VMs themselves normally live in shared memory).
To monitor Veeam's memory usage, I would suggest adding the "Working Set (Memory)" and "Peak Working Set (Memory)" columns to your Task Manager. The "Working Set" will include memory that is currently being used for memory-mapped I/O. The "Peak Working Set" will typically be much higher; this peak is usually hit at the very start of the vStorage API process.
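If you want the same view from a script rather than Task Manager columns, here is a minimal sketch (my own illustration, not part of Tom's suggestion) that lists processes sorted by working set. It assumes the third-party psutil package is installed; on Windows, psutil reports the working set as rss and the commit charge as vms.
Code: Select all
# Hedged sketch: list processes sorted by working set, roughly the same
# information as the "Working Set (Memory)" column in Task Manager.
# Requires the third-party psutil package (pip install psutil).
import psutil

procs = []
for p in psutil.process_iter(['name']):
    try:
        mem = p.memory_info()
        # On Windows psutil maps the working set to 'rss' and the commit
        # charge (pagefile-backed memory) to 'vms'; both are in bytes.
        procs.append((mem.rss, mem.vms, p.info['name'] or '?'))
    except (psutil.NoSuchProcess, psutil.AccessDenied):
        continue

for rss, vms, name in sorted(procs, reverse=True)[:15]:
    print(f"{name:<30} working set {rss / 2**20:8.1f} MiB   commit {vms / 2**20:8.1f} MiB")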
-
- Veteran
- Posts: 262
- Liked: never
- Joined: Jul 21, 2009 3:19 pm
- Full Name: Darhl
- Location: Pacific Northwest
- Contact:
Re: Memory performance
Thanks Tom, that's a great suggestion. Is the Peak Working Set just that (a peak), or does it ebb and flow? And to calculate total memory used, would I add Working Set and Private Working Set?
Guess I'll need to do some Googling on this
h
For every expert there is an equal and opposite expert - Arthur C Clarke's Fourth Law
-
- VP, Product Management
- Posts: 6035
- Liked: 2860 times
- Joined: Jun 05, 2009 12:57 pm
- Full Name: Tom Sightler
- Contact:
Re: Memory performance
withanh wrote: Do you know of any tools/utilities that can show me how much RAM the cache is using? Or even more importantly control it so the OS still has some room to breathe and doesn't get crippled by the cache chewing up all the available RAM.
Bring up PerfMon and monitor the following:
Cache\Dirty Pages
Cache\Dirty Page Threshold
Cache\Lazy Write Pages/sec
This will help show whether the problem is simply writing more data than your target can keep up with. If you are writing data faster than your target can accept, Windows will cache these "dirty pages" in memory up to the "Dirty Page Threshold", at which point it will block writes. I've seen issues on large-memory machines where the "Dirty Page Threshold" is simply too large; I think the default is half of system memory, maybe larger. There is a registry entry (SystemCacheDirtyPageThreshold) that can limit it (there was a time when this was almost required on Windows 2003 Server with large amounts of memory, as simply copying a large file could bring the server to its knees), but I don't know if it still applies to Windows 2008 and newer.
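As one possible way to capture those counters over a job run without clicking through PerfMon, a small sketch like the one below (my own illustration) drives the built-in typeperf utility from Python. The counter paths are the English names listed above; localized systems may use different names.
Code: Select all
# Hedged sketch: sample the cache counters named above with the built-in
# Windows "typeperf" utility and dump them as CSV for later graphing.
import subprocess

counters = [
    r"\Cache\Dirty Pages",
    r"\Cache\Dirty Page Threshold",
    r"\Cache\Lazy Write Pages/sec",
]

# One sample per second, 60 samples (adjust to cover a backup window).
result = subprocess.run(
    ["typeperf", *counters, "-si", "1", "-sc", "60"],
    capture_output=True, text=True, check=True,
)
print(result.stdout)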
-
- VP, Product Management
- Posts: 6035
- Liked: 2860 times
- Joined: Jun 05, 2009 12:57 pm
- Full Name: Tom Sightler
- Contact:
Re: Memory performance
BTW, does this happen to be Windows 2008 (as opposed to Windows 2008R2)? I know Windows 2008 64-bit suffered from some of the same issues. Heck, Microsoft even has a service to help address it:
http://www.microsoft.com/download/en/de ... en&id=9258
Microsoft originally claimed that, due to changes in Windows 7/2008R2 memory management, this service would no longer be required; however, they recently (August 2011) updated their statement to say there are still cases where it might be needed for Windows 2008R2 RTM/SP1, but I don't think they've made the updated tool available (it says to contact Microsoft support).
http://blogs.technet.com/b/yongrhee/arc ... rvice.aspx
-
- Veteran
- Posts: 262
- Liked: never
- Joined: Jul 21, 2009 3:19 pm
- Full Name: Darhl
- Location: Pacific Northwest
- Contact:
Re: Memory performance
It is 2008R2. I'll read the TechNet blog.
Thanks again!!
For every expert there is an equal and opposite expert - Arthur C Clarke's Fourth Law
-
- VP, Product Management
- Posts: 6035
- Liked: 2860 times
- Joined: Jun 05, 2009 12:57 pm
- Full Name: Tom Sightler
- Contact:
Re: Memory performance
I thought that it was, based on the screenshot, but just wanted to be sure. You should probably monitor the parameters and see if that's the problem first, but I'm guessing that it is. The 64-bit versions of Windows have been notorious for this issue as long as they have existed. Basically, they will simply eat all of your memory for read/write cache, even to the detriment of actual system I/O. This has been far less prevalent with 2008R2, but I found several software companies claiming as recently as late 2010 that it was still an issue and linking to 3rd-party tools, and now Microsoft itself has admitted that it can still be an issue under "certain circumstances".
My guess is they probably did something dumb/easy, like limit the read cache on a "per-process" basis, which would have addressed the most common issue (you could literally kill a server/Vista machine just by copying a single, very large file, especially when copying from a fast disk to a slower disk). If they limited it "per-process", then having multiple processes streaming I/O could still be an issue. It's just a guess at this point, but it gives you something to go on.
Also, I didn't answer your question regarding the peak working set. The "Peak Working Set" is the absolute maximum amount of memory that the process has used at any one point in time. Typically for Veeam we see this during the call to the vStorage API to "mount" the volume, and it usually returns quickly to a fairly steady state. The "Working Set" should be the total amount of memory that the process is using at that time. The "Private Working Set" is a subset of the "Working Set": it's the part that is unique to that particular process, while the Working Set also includes memory that could potentially be shared with other processes.
It's actually very difficult to determine the actual amount of memory that a process is using because, if two programs use the same DLLs and the same APIs, the memory is shared between them. That's why Windows defaults to displaying the "Private Working Set": it's the only memory that's truly dedicated to that process. But just because memory "can" be shared doesn't mean that another process is actually using it, and in the case of memory-mapped I/O, it's rarely shared, although it could be.
So if you have two processes that show as follows:
So if you have two processes that show as follows:
Code: Select all
Name Working Set Private Working Set
aprocess.exe 143MB 76MB
aprocess.exe 128MB 63MB
How much memory would these processes actually be using? Well, obviously the private working set totals 139MB, but what about the total working set? Without digging into the processes and determining how much is actually shared, how much is shareable but unique, and how much is memory-mapped I/O, it's impossible to figure out. A reasonable percentage of it is probably .dll files that are already loaded in Windows memory, so that really would be shared, and at least some of the .dll files are probably shared between the two processes, so it's not fair to count them twice; but then some of it is likely to be mapped I/O unique to the process.
When it's all said and done, I seriously doubt that the Veeam processes are directly the cause of your memory issues. I just wanted to point out that you really have to dig into Windows pretty hard to figure out how much memory is actually being used, and the numbers displayed by default in Task Manager do not include "shared" memory, so there are a lot of places for used memory to hide.
-
- Veteran
- Posts: 262
- Liked: never
- Joined: Jul 21, 2009 3:19 pm
- Full Name: Darhl
- Location: Pacific Northwest
- Contact:
Re: Memory performance
Looks like Dirty Pages is sitting right at the threshold. These graphs are hard to see because they just started building, but you can see at the bottom of each graph that the threshold is 393k and Dirty Pages is 392.9k. Lazy writes are at 11k.
You can click the image to get the full size from Flickr.
For every expert there is an equal and opposite expert - Arthur C Clarke's Fourth Law
-
- Chief Product Officer
- Posts: 31812
- Liked: 7302 times
- Joined: Jan 01, 2006 1:01 am
- Location: Baar, Switzerland
- Contact:
Re: Memory performance
withanh wrote: Do you know of any tools/utilities that can show me how much RAM the cache is using?
This information is displayed right there on the Performance tab in Task Manager:
Cached = current size of system cache
Available = part of system cache that can be immediately freed up (aka "standby list" of memory pages)
Cached minus Available = pending writes, this data must be written to disk before memory can be freed up (aka "modified list" of memory pages)
On the last screenshot on the previous page (with the job running), you have 6 GB of pending writes sitting in the cache (10 GB - 4 GB).
And on the previous screenshot (with no job running), there are no pending writes at all (Cached = Available - Free, meaning the cache contains only standby pages).
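For anyone who wants the same arithmetic spelled out, here is a tiny sketch using the figures mentioned above (the numbers are illustrative, taken from the screenshots being discussed).
Code: Select all
# Minimal sketch of the arithmetic described above (illustrative numbers).
cached_gb = 10      # "Cached": current size of the system cache
available_gb = 4    # "Available": pages that can be freed up immediately

pending_writes_gb = cached_gb - available_gb   # modified pages awaiting flush
print(f"Pending writes held in cache: {pending_writes_gb} GB")   # -> 6 GB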
-
- Veteran
- Posts: 262
- Liked: never
- Joined: Jul 21, 2009 3:19 pm
- Full Name: Darhl
- Location: Pacific Northwest
- Contact:
Re: Memory performance
True, but that doesn't give any indication of what is filling up the cache, nor are there any provisions for controlling how large the cache can grow.
For every expert there is an equal and opposite expert - Arthur C Clarke's Fourth Law
-
- Chief Product Officer
- Posts: 31812
- Liked: 7302 times
- Joined: Jan 01, 2006 1:01 am
- Location: Baar, Switzerland
- Contact:
Re: Memory performance
The data that the backup job writes to disk is what fills up the system cache.
-
- Veteran
- Posts: 262
- Liked: never
- Joined: Jul 21, 2009 3:19 pm
- Full Name: Darhl
- Location: Pacific Northwest
- Contact:
Re: Memory performance
That's what I assume as well. Unfortunately, my boss wants facts, and while we can sit here all day and say that this must be what's filling up the cache, I have to be able to show him (i.e. prove to him) that it is Veeam that's filling up the cache, and I have to be able to control it so it doesn't chew up all the system RAM and bring the server to its knees.
For every expert there is an equal and opposite expert - Arthur C Clarke's Fourth Law
-
- Chief Product Officer
- Posts: 31812
- Liked: 7302 times
- Joined: Jan 01, 2006 1:01 am
- Location: Baar, Switzerland
- Contact:
Re: Memory performance
In theory, this should be easy to prove. Create a large VMDK, fill it completely with some recognizable pattern (for example, FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF), create a new VM and attach the VMDK to it, create a job with that VM, disable compression and dedupe for the job, start the job, wait some time for the cache to grow, then create a complete memory dump and see what is in that memory. Just don't ask me for details on how to actually do all this (as I have no idea)
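If anyone ever tries this, the last step could look something like the sketch below (my own illustration, not Anton's procedure): scan the raw memory dump for the fill pattern and report how much of the captured memory contains it. The dump path and the pattern length are assumptions made purely for illustration.
Code: Select all
# Hedged sketch: count occurrences of the fill pattern in a raw memory dump.
# DUMP_PATH is a hypothetical location for the complete memory dump.
PATTERN = b"\xFF" * 64              # the recognizable fill pattern
DUMP_PATH = r"C:\temp\MEMORY.DMP"   # hypothetical dump location
CHUNK = 16 * 1024 * 1024            # read 16 MiB at a time

hits = 0
with open(DUMP_PATH, "rb") as dump:
    while True:
        chunk = dump.read(CHUNK)
        if not chunk:
            break
        # Matches spanning chunk boundaries are ignored for simplicity.
        hits += chunk.count(PATTERN)

print(f"Pattern found {hits} times (~{hits * len(PATTERN) / 2**20:.1f} MiB of matching data)")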
-
- Veteran
- Posts: 262
- Liked: never
- Joined: Jul 21, 2009 3:19 pm
- Full Name: Darhl
- Location: Pacific Northwest
- Contact:
Re: Memory performance
I hear ya Anton! I think Tom's suggestions are a good start. I may just have to disappoint my boss :-p
For every expert there is an equal and opposite expert - Arthur C Clarke's Fourth Law
-
- VP, Product Management
- Posts: 6035
- Liked: 2860 times
- Joined: Jun 05, 2009 12:57 pm
- Full Name: Tom Sightler
- Contact:
Re: Memory performance
Well, I don't understand what exactly you are wanting to "prove". It's quite easy to "prove" that the Veeam processes are not using the memory; the monitoring above shows that 1.5GB of data is being held by the "lazy write" cache, and that's when running just a single job. If you want to monitor the entire "cache" on Windows, then you need to monitor the following under the "Memory" section in PerfMon:
Cache Bytes
Modified Page List Bytes
Standby Cache Core Bytes
Standby Cache Normal Priority Bytes
Standby Cache Reserve Bytes
In general, adding all of these numbers together will give the same value as "Physical Memory (MB) - Cached" in Task Manager.
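As a rough way to check that claim, a sketch along these lines (my own illustration) takes one sample of the counters listed above with the built-in typeperf utility and adds them up. The exact CSV layout of typeperf output may need adjusting, and the English counter names are assumed.
Code: Select all
# Hedged sketch: sum the cache-related memory counters listed above and
# compare the total against the "Cached" figure shown in Task Manager.
import csv
import io
import subprocess

counters = [
    r"\Memory\Cache Bytes",
    r"\Memory\Modified Page List Bytes",
    r"\Memory\Standby Cache Core Bytes",
    r"\Memory\Standby Cache Normal Priority Bytes",
    r"\Memory\Standby Cache Reserve Bytes",
]

out = subprocess.run(
    ["typeperf", *counters, "-sc", "1"],
    capture_output=True, text=True, check=True,
).stdout

rows = list(csv.reader(io.StringIO(out.strip())))
# rows[0] is the header, rows[1] is the sample; column 0 is the timestamp.
values = [float(v) for v in rows[1][1:]]
print(f"Sum of cache-related counters: {sum(values) / 2**20:.0f} MB")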
But in your case I think it is pretty obvious what is happening: Windows is caching far too aggressively. Look at your screenshots; you can see places where it is showing 10GB of cache. That's not a system under memory pressure, that's a system with a lot of blocks that need to be flushed.
There is a hotfix available for Windows 2008R2 that might be useful for this case.
http://support.microsoft.com/default.as ... -US;979149
The note implies it is only for "large" applications that flush a lot of data at once, but, while Veeam itself is small, we do read and write a large amount of data. Might be worth a try.
Other than that, I'd suggest contacting Microsoft to get the updated Dynamic Cache Service for Windows 2008R2 since it's their own blog that indicates that this might actually be useful after claiming for a year that it wouldn't be.
There are also some 3rd party tools which use the same API to set the min/max cache size. I have no idea if they work at all, but a search for SetSystemFileCacheSize will turn up documentation on the API, as well as a few tools that claim to be able to set it from the command line. These tools may very well eat your system for all I know, so good luck if you decide to use them.
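For completeness, here is a heavily hedged sketch (my own illustration, not one of the tools Tom mentions) of how the SetSystemFileCacheSize API could be called from Python via ctypes. The process must run elevated with SeIncreaseQuotaPrivilege, and the flag constant is an assumption taken from the Windows SDK headers; verify it, and test carefully, before touching a production server.
Code: Select all
# Heavily hedged sketch: query, and optionally cap, the system file cache
# through the GetSystemFileCacheSize / SetSystemFileCacheSize APIs.
# Requires an elevated process holding SeIncreaseQuotaPrivilege.
import ctypes
from ctypes import wintypes

FILE_CACHE_MAX_HARD_ENABLE = 0x1   # assumed value from the Windows SDK headers

kernel32 = ctypes.WinDLL("kernel32", use_last_error=True)

min_size = ctypes.c_size_t()
max_size = ctypes.c_size_t()
flags = wintypes.DWORD()
if not kernel32.GetSystemFileCacheSize(ctypes.byref(min_size),
                                       ctypes.byref(max_size),
                                       ctypes.byref(flags)):
    raise ctypes.WinError(ctypes.get_last_error())
print(f"Current file cache limits: min={min_size.value}, max={max_size.value}, flags={flags.value}")

# Example (use with care): enforce a 2 GiB hard maximum on the file cache.
if not kernel32.SetSystemFileCacheSize(ctypes.c_size_t(0),
                                       ctypes.c_size_t(2 * 1024**3),
                                       FILE_CACHE_MAX_HARD_ENABLE):
    raise ctypes.WinError(ctypes.get_last_error())
print("System file cache maximum set to 2 GiB")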