thomas.raabo wrote:That will not work! Contact MS and get them to help you.
Mike Resseler wrote:Hi James,
First: Welcome to the forums
Second: The issue discussed here is that ReFS becomes very unstable when there is a lot of activity on it and the volume size is large. Not being able to boot is not something I have heard of with this issue. It might be related, but I am not sure. Please keep working with MSFT support for now and keep us posted. Who knows, this may be a new problem with ReFS (I hope not, though).
GarethUK wrote:James is indeed correct. This is behaviour I have observed. We have 16 backup repo servers, 5 of which are 70TB ReFS-enabled Windows Server 2016 servers.
Gostev wrote:From what I know based on conversations with the ReFS devs, it may be possible to work around this particular bug on huge volumes by adding lots of RAM to the backup repository server. If you can't do this, then I'm afraid the only option is to fall back to NTFS until Microsoft ships that patch.
Gostev wrote:A confirmation from the field: we have observed many stable ReFS installations where the memory was sized from 512MB to (more likely) 1GB for each TB of backups. So, with 240TB of disk, full at say 80%, that makes 192TB of backups, so you may need to plan for 192GB of memory. It sounds a bit like overkill, but this has proved to be a good solution. Not verified or confirmed by anyone at Microsoft, but observed across many working deployments.
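The rule of thumb above can be sketched as a quick calculation. This is not an official Microsoft or Veeam sizing formula, just the field-observed ratio from the post (512MB to 1GB of RAM per TB of backups stored); the function name and defaults here are illustrative assumptions:

```python
def refs_ram_estimate_gb(disk_tb: float, fill_ratio: float = 0.8,
                         gb_per_tb: float = 1.0) -> float:
    """Estimate repository RAM (GB) for an ReFS backup volume.

    disk_tb    -- raw capacity of the ReFS volume, in TB
    fill_ratio -- expected fraction of the volume holding backups
    gb_per_tb  -- RAM per TB of backups (0.5 to 1.0 per the field reports)
    """
    backup_tb = disk_tb * fill_ratio
    return backup_tb * gb_per_tb

# The 240TB example from the post: 240 * 0.8 = 192TB of backups,
# so ~192GB of RAM at 1GB/TB, or ~96GB at the 512MB/TB low end.
print(refs_ram_estimate_gb(240))                  # 192.0
print(refs_ram_estimate_gb(240, gb_per_tb=0.5))   # 96.0
```

At 240TB the 1GB/TB end of the range lands on the 192GB figure quoted above; the 512MB/TB end gives a lower bound of about 96GB.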
jamesmay wrote:Just to confirm, you've seen complete hangs shortly after / during boot?
For some customers (us included), NTFS is not sufficient for our needs at the moment, and keeping the status quo brings with it as many issues as the ones currently reported with ReFS.
Some of the posts assume that people are migrating from a position of stability and high performance to something much worse. The opposite can be true.
NTFS is sufficient on four of our five Veeam repositories, but the nature of the VMs being backed up on the fifth means that multiple synthetic/active fulls exceed the storage capacity for our current RPO strategy, and merge jobs break the backup window.
ReFS/Fast Clone provides a solution to both of these, and if an Active Full is occasionally required to reset performance, so be it. That's no worse than our current position; in fact it's much, much better.
Gostev wrote:Iain, absolutely - unless Microsoft ships the fix in the currently planned timelines, we will of course remove this suggestion in the next update.
Although it's not really fair to say it is a complete no-go for everyone, because it works well for many smaller customers, of which, for historical reasons, B&R has a lot... the issues are quite isolated to bigger ReFS volumes and big backup files. In general, such scalability problems are pretty usual for any new technology. B&R had its own back at v3, and just like ReFS today, we too were usable only by small customers.
suprnova wrote:It's interesting that it works for some small customers. I have a 10TB repository that backs up one VM (incrementals around 30GB, full backup around 5TB). As soon as the fast clone merge starts, the repository drive drops offline, often causing a merge to take days. Even when copying a file off this repository, the speed oscillates between 0 and 1MBps every other second.