
Our main Veeam repo is a SOBR consisting of 5 similarly sized volumes (sized for the ReFS "allowed" volume size back when that mattered more), hosted on FC backend storage systems. The repo server was a Windows Server Core 20H2 system. Since this OS is long out of support, we planned to migrate it to Windows Server 2022 (and new hardware).
The storages are Fibre Channel attached (some support storage snapshots, some do not), which means simply moving the repo to another server is easy in theory. But since we are quite paranoid, we looked for a way to do this nearly risk-free.
We asked Veeam support and the official way of doing this is:
- Put the whole SOBR in Maintenance Mode
- Move backend storages to the new 2022 server
- Mount the volumes in Windows Server (causing ReFS Metadata upgrade)
- Create new Repos for the old volumes/paths
- Create a new SOBR, put in all the new Repos
- Rescan
This procedure has a big drawback: with Server 2022 there is no way back if Windows trashes all the ReFS volumes while upgrading the ReFS metadata – which would mean active fulls for 4700 VMs on that SOBR. So we searched for a way to do this for a subset of volumes at a time.
We hatched our own plan based on KB3100 (sanctioned by Veeam support); a rough PowerShell sketch of the repository-side steps follows the list:
- Put only two of the volumes/repos of the SOBR into maintenance mode (those whose backend storage supports snapshots)
- Create a storage snapshot for these volumes
- Move these volumes to the new server
- Mount the two volumes in Windows Server (causing ReFS Metadata upgrade)
- Create new Repos for the volumes on the new server and add them to the old SOBR
- Rescan the SOBR
- If all goes well, migrate the other volumes/repos the same way
- If it goes badly, restore the storage snapshots and reattach the volumes to the old server
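For reference, the Veeam-side steps of this plan can also be scripted with the Veeam PowerShell module instead of clicking through the console. Treat the following as a rough sketch only: the SOBR, extent, server and path names are made up, and you should verify the cmdlets and parameters against your Backup & Replication version before relying on them.

```powershell
# Hypothetical names and paths - adjust to your environment
$sobr    = Get-VBRScaleOutBackupRepository -Name "Main-SOBR"
$extents = Get-VBRRepositoryExtent -Repository $sobr |
           Where-Object { $_.Name -in "Extent-01", "Extent-02" }

# Put only the two affected extents into maintenance mode
Enable-VBRRepositoryExtentMaintenanceMode -Extent $extents

# ... create the storage snapshots, remap the FC volumes to the new
# Server 2022 host and let ReFS finish its metadata upgrade ...

# Register the migrated volumes as repositories on the new server
$newServer = Get-VBRServer -Name "repo2022.example.local"
$repo01 = Add-VBRBackupRepository -Name "Extent-01-new" -Type WinLocal `
    -Server $newServer -Folder "E:\Backups"
$repo02 = Add-VBRBackupRepository -Name "Extent-02-new" -Type WinLocal `
    -Server $newServer -Folder "F:\Backups"

# Add them to the existing SOBR and rescan
Add-VBRScaleOutBackupRepositoryExtent -Repository $sobr -Extent $repo01, $repo02
Sync-VBRBackupRepository -Repository $repo01
Sync-VBRBackupRepository -Repository $repo02
```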
This procedure worked great in the end. But we did not factor in that the ReFS metadata upgrade would be the scariest part of the whole procedure.
As soon as we brought the first two volumes online (the newest in the SOBR) and the drive letters showed up, Disk Management and Windows Explorer froze completely! It stayed that way for nearly an hour; after that, everything came back as if nothing had ever been wrong. We created the Repos, rescanned, tested our backups, and all was fine.
So we now knew that ~500 TB of storage takes about an hour to come online and were confident that migrating the remaining 600 TB (which have more spindles and are generally faster) would also take about an hour. It was not that simple. We still do not know why (fragmentation, perhaps?), but the metadata upgrade of the remaining volumes took more than 4 hours to complete!
While we were waiting, we searched for logs and checked performance data on the backend storage – sadly, there is just nothing telling you "ReFS is doing something right now, stay calm, don't turn off the server": no log entries, low CPU, and only minimal IO on the backend. We talked to the ReFS devs (I am extremely grateful they still answer a ReFS question every now and then – they are just great guys!) and they confirmed this is normal. With the move to Server 2022, the "container btable format" is upgraded at first mount, and that upgrade seems to be a sequential, single-threaded, blocking process!
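One small thing that gave us some reassurance afterwards: you can check the on-disk ReFS version before and after the first mount on the new server. A minimal sketch, assuming hypothetical drive letters, an elevated prompt, and a recent Windows build where fsutil exposes ReFS information (the exact output fields vary by build):

```powershell
# Hypothetical drive letters of the migrated extents
foreach ($vol in 'E:', 'F:') {
    # Prints the on-disk ReFS version and related metadata for the volume;
    # the reported version should change once Server 2022 has upgraded the format
    fsutil fsinfo refsinfo $vol
}
```

If the version already shows the newer format, the blocking part of the upgrade should be behind you.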
By the way, our XFS repo is even bigger – we have never seen anything like this there…
So: stay calm, don't reboot in the middle of the process (and perhaps consider XFS or object storage in the long term).

Markus