gingerdazza wrote: Would appreciate people's thoughts on considerations for large multi-TB VMs (~5TB each). Is it worth spanning volumes across multiple VMDKs for Veeam throughput? Or does this create other challenges? (higher chance of file data corruption on the spanned NTFS volume? FLR issues? and the like?)
Personally, I usually don't like to solve these "volume manager" tasks inside the OS stack; instead I prefer to push the problem down into the disk array / SAN (that is its primary job). In other words, I prefer to present a single big LUN to the server, backed by whatever external array configuration is appropriate, which can span different controller disks as required, dynamically.
Still, I don't think there are particular problems with using multiple VMDKs and spanning/binding them with OS-based solutions (Storage Spaces, a normal volume manager, etc.). It should be fine if required; a lot depends on the particular hardware configuration and design, and it is good to have flexibility. Each case is different (e.g. if the different VMDKs end up sharing the same spindles and controller, I don't think spanning would improve much, due to the underlying contention and bottleneck).
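For reference, the Storage Spaces route mentioned above looks roughly like this in PowerShell — a rough, untested sketch, with hypothetical pool/disk names ("DataPool", "DataDisk"); a simple striped space across several blank VMDK-backed disks, which you would adapt to your own environment:

```powershell
# Pool the VMDK-backed disks that show up as poolable
# (-CanPool $true assumes the disks are blank and unpartitioned)
$disks = Get-PhysicalDisk -CanPool $true

New-StoragePool -FriendlyName "DataPool" `
    -StorageSubSystemFriendlyName "Windows Storage*" `
    -PhysicalDisks $disks

# Simple (striped, no resiliency) space across all pooled disks;
# one column per VMDK so I/O is spread over all of them
New-VirtualDisk -StoragePoolFriendlyName "DataPool" `
    -FriendlyName "DataDisk" `
    -ResiliencySettingName Simple `
    -NumberOfColumns $disks.Count `
    -UseMaximumSize

# Initialize, partition and format as a single NTFS volume
Get-VirtualDisk -FriendlyName "DataDisk" | Get-Disk |
    Initialize-Disk -PassThru |
    New-Partition -AssignDriveLetter -UseMaximumSize |
    Format-Volume -FileSystem NTFS -NewFileSystemLabel "Data"
```

Note the "Simple" resiliency here has no redundancy at the OS layer, which is usually fine when the underlying array already provides it — the same reasoning as pushing the volume-manager work down to the SAN.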