Looking at the v8 best practice book (and other articles), it is suggested that the source WAN accelerator of a backup copy job benefits from the fastest IO possible (in my case SSD), since it does a fair amount of decompressing and re-compressing on the fly while creating/storing the digests. The space used is about 2% of the source VM files, and this space can't be reserved in the WAN accelerator settings.
It also says the destination WAN accelerator can benefit from high IO too, but spinning disks are generally fine in a one-to-one pairing (as opposed to a many-to-one configuration). This disk space is reserved using the Global Cache size you specify in the WAN accelerator setup. I believe a larger reservation helps by storing more cache, although it's not stated where the diminishing returns are likely to kick in, and my discussions with Chris D suggested that only the operating system drives are cached in v8 anyway. Since the cache pre-population feature finds 10 different OSes in my backups, I've chosen 400GB of global cache space.
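For sizing, a rough sketch of the arithmetic behind that choice (the 10GB-per-OS figure is my reading of the general guidance, so treat these numbers as assumptions rather than official limits):

```python
# Rough global cache sizing sketch (figures are assumptions, not official limits).
# The general guidance is roughly 10 GB of global cache per distinct OS found
# in the backups; I sized well above that minimum for headroom.

PER_OS_CACHE_GB = 10   # assumed guideline figure per distinct OS
DISTINCT_OSES = 10     # what cache pre-population found in my backups

minimum_cache_gb = PER_OS_CACHE_GB * DISTINCT_OSES
chosen_cache_gb = 400  # what I actually configured

print(f"Guideline minimum: {minimum_cache_gb} GB, configured: {chosen_cache_gb} GB")
```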
So when setting this up in one direction between the servers, it's easy to select the correct paths:
One-Way Copy Job
Source Server - Digests: WAN Accelerator on C:\VeeamWAN (SSD)
Destination Server - Global Cache: WAN Accelerator on D:\VeeamWAN (RAID 6 SATA)
However if you also need to run backup copy jobs in the reverse direction, the source and destination WAN acceleration settings are using the wrong paths, since you can only have 1 WAN accelerator per server with one path/cache configuration.
Reverse copy job (a bad configuration that will fill the SSD to 0 bytes free)
Source Server - Digests: WAN Accelerator on D:\VeeamWAN (RAID 6 SATA)
Destination Server - Global Cache: WAN Accelerator on C:\VeeamWAN (SSD)
So in order to keep the digests and global cache separate to suit the two-way nature of the backup copies, I ended up doing the following:
Modified Settings for two-way Copy Jobs
Source Server: WAN Accelerator on C:\VeeamWAN (SSD), with the Global Cache folder redirected to the SATA array using an NTFS junction:
Code: Select all
mklink /J C:\VeeamWAN\GlobalCache D:\VeeamWAN\GlobalCache
Destination Server: the same setup, running the same command so its Global Cache also lands on the D: drive:
Code: Select all
mklink /J C:\VeeamWAN\GlobalCache D:\VeeamWAN\GlobalCache
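Before kicking off a pre-population, it's worth sanity-checking that the junction actually resolves where you expect. A quick sketch (the paths are from my setup, adjust to yours; `os.path.realpath` follows NTFS junctions as well as symlinks):

```python
import os

def resolve_cache_path(path):
    """Return the real filesystem location a path resolves to.

    os.path.realpath follows NTFS junctions (and symlinks), so the
    junctioned GlobalCache folder should report its D: drive target.
    """
    return os.path.realpath(path)

# Example (paths from my setup -- adjust to yours):
# resolve_cache_path(r"C:\VeeamWAN\GlobalCache")
# should resolve to a path under D:\VeeamWAN\GlobalCache
```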
I have now kicked off a pre-population of the cache on both servers, and can immediately see the 400GB cache on the D drive, while the C drive is left with a comfortable amount of working space for Digests and Send/Recv folders.
I'll update with further confirmation that this works OK during the backup copies themselves, but hopefully this might help someone else in my situation. Originally I thought the WAN acceleration feature just used a static amount of space defined in the config, so I sized the SSDs accordingly. However, this is not the case: only the Global Cache size is fixed, and the overhead from the Digest processing can be anything up to 2% of the source VMs + 10GB.
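Based on that, a back-of-the-envelope estimate for the source-side SSD might look like this (the 2% + 10GB formula is the sizing observation above; treat it as an estimate, not a hard limit):

```python
def digest_space_estimate_gb(source_vm_gb):
    """Estimate source WAN accelerator digest space: up to ~2% of the
    source VM size plus ~10 GB of overhead (an estimate, not a hard limit)."""
    return source_vm_gb * 0.02 + 10

# e.g. 20 TB (20000 GB) of source VMs:
print(digest_space_estimate_gb(20000))  # -> 410.0 GB of digest space
```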