Discussions specific to the VMware vSphere hypervisor
DGrinev
Veeam Software
Posts: 1264
Liked: 135 times
Joined: Dec 01, 2016 3:49 pm
Full Name: Dmitry Grinev
Location: St.Petersburg
Contact:

Re: Large VM design considerations

Post by DGrinev » Aug 18, 2017 5:02 pm

Hi Dazza,

Please review this existing discussion; if you have additional questions, don't hesitate to ask. Thanks!

gingerdazza
Expert
Posts: 127
Liked: 12 times
Joined: Jul 23, 2013 9:14 am
Full Name: Dazza
Contact:

Re: Large virtual file servers - spanned disks vs. large vmd

Post by gingerdazza » Aug 21, 2017 8:04 am

Thanks DGrinev

So, architecturally I fully understand how parallel processing of spanned VMDKs increases backup speeds. But are there any other major considerations with this approach? For instance, does the use of spanned volumes affect Veeam restore functionality (like the old FLR limitation that I believe used to exist), or does it potentially create problems with the NTFS file system itself (corruption)?
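For reference, the back-of-the-envelope arithmetic behind that assumption looks roughly like this; it's only a sketch with made-up throughput and parallelism figures, not measurements from any real environment:

Code: Select all

# Rough, illustrative estimate of how splitting one large volume across
# several VMDKs can shorten the backup window when the disks are read
# in parallel. All figures below are invented for the example.

def backup_window_hours(total_tb, vmdk_count, per_disk_mbps, max_parallel):
    """Estimate backup time assuming each VMDK is read at per_disk_mbps
    and at most max_parallel disks are processed at once."""
    size_per_vmdk_mb = total_tb * 1024 * 1024 / vmdk_count
    hours_per_vmdk = size_per_vmdk_mb / per_disk_mbps / 3600
    waves = -(-vmdk_count // max_parallel)  # ceiling division: disks run in waves
    return waves * hours_per_vmdk

# One 5 TB VMDK vs. the same data split across four VMDKs
print(round(backup_window_hours(5, 1, 200, 4), 1))  # ~7.3 h, single stream
print(round(backup_window_hours(5, 4, 200, 4), 1))  # ~1.8 h, four parallel streams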

DGrinev
Veeam Software
Posts: 1264
Liked: 135 times
Joined: Dec 01, 2016 3:49 pm
Full Name: Dmitry Grinev
Location: St.Petersburg
Contact:

Re: Large virtual file servers - spanned disks vs. large vmd

Post by DGrinev » Aug 22, 2017 4:18 pm

There are no major considerations off the top of my head; I have seen multiple reports of spanned disks being used successfully. Thanks!

aceit
Influencer
Posts: 22
Liked: 8 times
Joined: Jun 20, 2017 3:17 pm
Contact:

Re: [MERGED] Large VM design considerations

Post by aceit » Aug 23, 2017 5:33 pm

gingerdazza wrote:Would appreciate people's thoughts on considerations for large multi-TB VMs (~5TB each). Is it worth spanning volumes across VMDKs for Veeam throughput? Or does this create other challenges? (higher chance of file data corruption on the spanned NTFS volume? FLR issues? and the like?)
Personally, I prefer not to solve these "volume manager" tasks inside the OS stack; I would rather push the problem down to the disk array / SAN (that is its primary job). In other words, I present the server a single big LUN, backed by whatever external array configuration is appropriate, which can span different controllers and disks dynamically as required.

That said, I don't see any particular problem with using multiple VMDKs and spanning/binding them with OS-based solutions (Storage Spaces, a normal volume manager, etc.); it should work fine if required. A lot depends on the particular hardware configuration and design, and it is good to have the flexibility, as each case is different. For example, if the different VMDKs end up sharing the same spindles and controller, I don't think this would improve much, due to the underlying contention and bottleneck.
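To put rough numbers on that last point, here is a minimal sketch (with assumed, invented figures) of why extra VMDKs stop helping once the shared spindles/controller saturate:

Code: Select all

# Illustrative model of the contention point above: parallel VMDK streams
# only help until the shared backend (spindles/controller) saturates.
# The figures are invented for the example.

def effective_throughput_mbps(vmdk_count, per_stream_mbps, backend_limit_mbps):
    """Aggregate read rate is capped by whichever is lower: the sum of the
    per-VMDK streams or the shared backend limit."""
    return min(vmdk_count * per_stream_mbps, backend_limit_mbps)

for n in (1, 2, 4, 8):
    print(n, effective_throughput_mbps(n, 200, 800))
# 1 -> 200, 2 -> 400, 4 -> 800, 8 -> 800: no gain past the saturation point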
