We ran into an issue while compacting a large full backup file (more than 20 TB). The compact task failed with an error like "Unable to allocate memory for storage metadata bank". The only event in the Windows log at that time is a warning about virtual memory running low. Virtual memory is controlled by the OS, so it should be increased automatically. At first I thought the OS increases virtual memory and only the free space on drive C: limits that increase.
But then I found an article stating there is a hard limit of 3 x RAM or 4 GB, whichever is larger: https://support.microsoft.com/en-us/hel ... of-windows
From this perspective, I think that if the repository has 16 GB RAM, virtual memory can grow during the compact job up to 48 GB. If the compact job still isn't finished at that point, the OS is unable to increase virtual memory any further and the task fails with the error "Unable to allocate memory for storage metadata bank".
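To make the arithmetic explicit, here is a small sketch of the automatic pagefile ceiling as I read it from that article (max of 3 x RAM and 4 GB); this is just my interpretation of the article, not verified OS behavior:

```python
# Sketch of the automatic pagefile ceiling described in the Microsoft
# article: 3 x RAM or 4 GB, whichever is larger. This is my reading of
# the article, not confirmed against actual Windows behavior.

GB = 1024 ** 3

def max_auto_pagefile(ram_bytes: int) -> int:
    """Largest size the OS-managed pagefile can grow to (per the article)."""
    return max(3 * ram_bytes, 4 * GB)

# A repository with 16 GB RAM would hit the ceiling at 48 GB.
print(max_auto_pagefile(16 * GB) // GB)  # -> 48
```

On a machine with very little RAM (e.g. 1 GB), the 4 GB floor would apply instead.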
The obvious workaround for this problem is assigning more RAM or manually configuring the virtual memory limits. But the question I have is: how large should virtual memory be to compact 20 TB of data? I wouldn't want to experiment with sizes because such a job takes days. So if someone has experience, or knows how I can calculate the virtual memory needed for such a task, that would be great.