bdoe
Enthusiast
Posts: 85
Liked: 14 times
Joined: Oct 09, 2014 7:48 pm
Full Name: Bryan
Contact:

Large virtual file servers - spanned disks vs. large vmdk

Post by bdoe »

At the moment, I have three large file servers (each about 20TB). Two pre-date ESXi 5.5 and use spanned volumes. The third was set up under 5.5 and uses 4 x 5TB disks or similar. What is the recommended approach under 5.5+ with regard to Veeam? I've heard that parallel processing made spans better, but assuming jobs run one at a time, can Veeam throw all of its power behind a single 20TB .vmdk instead?
veremin
Product Manager
Posts: 20415
Liked: 2302 times
Joined: Oct 26, 2012 3:28 pm
Full Name: Vladimir Eremin
Contact:

Re: Large virtual file servers - spanned disks vs. large vmd

Post by veremin »

One virtual disk cannot be processed by multiple proxy servers at the same time, if that's what you're asking. In contrast, multiple virtual disks can be spread among available proxy servers and be processed simultaneously. Thanks.
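Just to illustrate the idea (a toy model only, not how Veeam actually assigns tasks; the proxy names and the round-robin policy below are made up), each disk goes to exactly one proxy at a time, but several disks can be in flight on different proxies at once:

Code: Select all

# Illustrative only: distribute virtual disks across proxies; a single
# disk is never shared between proxies, but disks can run in parallel.
from itertools import cycle

proxies = ["proxy-A", "proxy-B", "proxy-C"]      # hypothetical proxy names
disks = [f"vmdk-{i}" for i in range(1, 9)]       # e.g. 8 smaller VMDKs

assignment = {}
for disk, proxy in zip(disks, cycle(proxies)):   # naive round-robin
    assignment.setdefault(proxy, []).append(disk)

for proxy, work in assignment.items():
    print(proxy, "->", work)                     # each proxy works its own disks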
bdoe
Enthusiast
Posts: 85
Liked: 14 times
Joined: Oct 09, 2014 7:48 pm
Full Name: Bryan
Contact:

Re: Large virtual file servers - spanned disks vs. large vmd

Post by bdoe »

I was in a rush yesterday, sorry; I should have said more. There is only one Veeam server, no proxies. It is connected via 10GbE and has 16 cores (32 threads), 32GB RAM, and fast storage. With just that single powerful server, is there any difference between a single large .vmdk and smaller ones used as a Windows spanned volume? The span approach works great, and Veeam handles it well, but I know most people aren't fond of it.
PTide
Product Manager
Posts: 6551
Liked: 765 times
Joined: May 19, 2015 1:46 pm
Contact:

Re: Large virtual file servers - spanned disks vs. large vmd

Post by PTide »

There is only one Veeam server, no proxies
If there are no proxies configured, then the Veeam server takes the role of a proxy. The number of disks that can be processed concurrently is defined by the "Max concurrent tasks" setting in the proxy configuration (1 disk = 1 task). So you can have multiple disks processed by a single proxy, but keep in mind that your proxy has to have enough resources (CPU, RAM) to process disks in parallel successfully. Also, there is a possibility that your production storage becomes a bottleneck due to its inability to serve that many reads.
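A minimal sketch of that behaviour (assuming a made-up per-disk throughput, and not reflecting the actual data mover): the pool size plays the role of "max concurrent tasks", and each virtual disk occupies exactly one slot.

Code: Select all

# Illustrative model only -- not Veeam code. The proxy processes at most
# `max_concurrent_tasks` disks at a time; one disk never spans two slots.
from concurrent.futures import ThreadPoolExecutor
import time

def process_disk(name, size_tb, throughput_tb_per_hr=1.5):
    # Stand-in for the proxy reading one VMDK (throughput figure is invented).
    time.sleep(size_tb / throughput_tb_per_hr * 0.01)  # scaled down for the demo
    return f"{name}: {size_tb} TB processed"

disks = [("disk1", 5), ("disk2", 5), ("disk3", 5), ("disk4", 5)]
max_concurrent_tasks = 4  # the proxy setting; 1 disk = 1 task

with ThreadPoolExecutor(max_workers=max_concurrent_tasks) as pool:
    for result in pool.map(lambda d: process_disk(*d), disks):
        print(result)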
bdoe
Enthusiast
Posts: 85
Liked: 14 times
Joined: Oct 09, 2014 7:48 pm
Full Name: Bryan
Contact:

Re: Large virtual file servers - spanned disks vs. large vmd

Post by bdoe »

Yes, the Veeam server is the only proxy (sorry, I meant no other proxies); it's set to handle 14 tasks. It has been able to back up each 20TB file server in about 14 hours, and I only run them one at a time. According to PRTG, a core or two will get to around 50% during a job, but it's never been anywhere near full load.
PTide
Product Manager
Posts: 6551
Liked: 765 times
Joined: May 19, 2015 1:46 pm
Contact:

Re: Large virtual file servers - spanned disks vs. large vmd

Post by PTide »

For the backup server itself you need 2GB, plus 4GB for the embedded proxy server. Assume you have 4 x 5TB disks being processed: that would require roughly 200MB x 4 + 500MB x 4 ≈ 3GB of extra RAM. Add a few extra GB for Windows, and in total you'll need around 12GB of RAM to keep your VBR + proxy machine afloat with 4 x 5TB disks in flight. Parallel processing of four vmdks should finish faster than processing a single large vmdk; unfortunately, you cannot speed up processing of a single virtual disk by adding cores.
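The same estimate written out as a quick calculation (the per-task figures are just the rule-of-thumb values above, not guaranteed minimums):

Code: Select all

# Back-of-the-envelope RAM estimate using the figures quoted above.
GB = 1024  # MB

backup_server = 2 * GB       # VBR server itself
embedded_proxy = 4 * GB      # proxy role on the same machine
disks = 4                    # 4 x 5 TB virtual disks in flight
per_task_job = 200           # MB per concurrently processed disk (job side)
per_task_proxy = 500         # MB per concurrently processed disk (proxy side)
windows_overhead = 3 * GB    # rough allowance for the OS (assumption)

total_mb = (backup_server + embedded_proxy + windows_overhead
            + disks * (per_task_job + per_task_proxy))
print(f"~{total_mb / GB:.1f} GB RAM")   # ~11.7 GB, i.e. around 12 GB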
bdoe
Enthusiast
Posts: 85
Liked: 14 times
Joined: Oct 09, 2014 7:48 pm
Full Name: Bryan
Contact:

Re: Large virtual file servers - spanned disks vs. large vmd

Post by bdoe »

No worries there, it well exceeds those requirements. I'm mostly wondering what the current recommendation from Veeam and other users is. If Veeam will happily handle a single 20TB .vmdk, and can do it as fast as parallel disk processing, then I'll go with that. If a spanned volume is better, then I can keep doing that.
foggy
Veeam Software
Posts: 21139
Liked: 2141 times
Joined: Jul 11, 2011 10:22 am
Full Name: Alexander Fogelson
Contact:

Re: Large virtual file servers - spanned disks vs. large vmd

Post by foggy »

Spanned disks will add more parallelism into the picture, so if your proxy is ok with more tasks, you will get the jobs done faster.
readie
Expert
Posts: 158
Liked: 30 times
Joined: Dec 05, 2010 9:29 am
Full Name: Bob Eadie
Contact:

Re: Large virtual file servers - spanned disks vs. large vmd

Post by readie »

I've discovered this to our cost: I replaced a large 4TB file server, which was 8 x 500GB VMDKs spanned into a single volume in Windows, with a new server that has a single 4TB VMDK. Backup is now much slower; as you say, it is not benefiting from the multiple proxies we have.
So my question is: does anyone know of a method of splitting a large 4TB vmdk (a single disk in Windows) into multiple smaller vmdks (a spanned disk in Windows)? Do I have to use some magic partitioning software in Windows, as well as gradually adding more small vmdks via vSphere?
Bob Eadie
Computer Manager at Bedford School, UK (since 1999).
Veeam user since 2009.
PTide
Product Manager
Posts: 6551
Liked: 765 times
Joined: May 19, 2015 1:46 pm
Contact:

Re: Large virtual file servers - spanned disks vs. large vmd

Post by PTide »

Hi,
does anyone know of a method of splitting a large 4TB vmdk (a single disk in Windows) into multiple smaller vmdks (a spanned disk in Windows)?
Unfortunately, the only way to do that is to create another, spanned 4TB drive and copy all the data over from the single 4TB drive.

Thank you.
Vitaliy S.
VP, Product Management
Posts: 27377
Liked: 2800 times
Joined: Mar 30, 2009 9:13 am
Full Name: Vitaliy Safarov
Contact:

Re: Large virtual file servers - spanned disks vs. large vmd

Post by Vitaliy S. »

readie wrote: Backup is now much slower; as you say, it is not benefiting from the multiple proxies we have.
Do you see the same bottleneck stats as before?
readie
Expert
Posts: 158
Liked: 30 times
Joined: Dec 05, 2010 9:29 am
Full Name: Bob Eadie
Contact:

Re: Large virtual file servers - spanned disks vs. large vmd

Post by readie »

Not sure, but I have two jobs (one backup, and one replica from that backup) for our two large file servers. They both read from the same source and write to the same target. The one that is still multiple vmdks takes considerably less time, as I can see two or three proxies working on its 8 vmdks at the same time. I'm waiting on the replica now: the multiple-vmdk VM (which is actually slightly more TB) took 1 hr 29 min, while the single-vmdk one has been going for 4 hr 29 min and is just 53% through. Roughly, one is 8 x 512GB and the other 1 x 3.5TB (plus some smaller bits like the system drive).
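Extrapolating those figures (a rough projection only, assuming progress stays roughly linear):

Code: Select all

# Rough projection of the single-vmdk replica from the figures above.
elapsed_min = 4 * 60 + 29   # 4 hr 29 min so far
progress = 0.53             # 53% complete
spanned_min = 1 * 60 + 29   # the 8-vmdk VM finished in 1 hr 29 min
projected_min = elapsed_min / progress
print(f"single vmdk: ~{projected_min / 60:.1f} h projected, "
      f"vs ~{spanned_min / 60:.1f} h for the spanned VM")   # ~8.5 h vs ~1.5 h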
Bob Eadie
Computer Manager at Bedford School, UK (since 1999).
Veeam user since 2009.
PTide
Product Manager
Posts: 6551
Liked: 765 times
Joined: May 19, 2015 1:46 pm
Contact:

Re: Large virtual file servers - spanned disks vs. large vmd

Post by PTide »

The one that is still multiple vmdks takes considerably less time
As you correctly pointed out before, that can be explained by the inability of multiple proxies to process a single .vmdk in parallel. Please post your bottleneck statistics so we can make sure that it's the proxy that is slowing down the backup process.

Thank you.
readie
Expert
Posts: 158
Liked: 30 times
Joined: Dec 05, 2010 9:29 am
Full Name: Bob Eadie
Contact:

Re: Large virtual file servers - spanned disks vs. large vmd

Post by readie »

I think I'll wait a day or two for things to settle, as I've just moved to a Scale-Out Backup Repository... and I have one job (not a full backup!) which has now been going on for over 24 hours. I have opened a support ticket, as I cannot see what has slowed this down so much. Case 01694415... I will reply to this thread once I've sorted out the current problem.
Bob Eadie
Computer Manager at Bedford School, UK (since 1999).
Veeam user since 2009.
gingerdazza
Expert
Posts: 206
Liked: 14 times
Joined: Jul 23, 2013 9:14 am
Full Name: Dazza
Contact:

[MERGED] Large VM design considerations

Post by gingerdazza »

Would appreciate people's thoughts on considerations for large multi-TB VMs (~5TB each). Is it worth spanning volumes across VMDKs for Veeam throughput, or does this create other challenges (higher chance of file data corruption on the spanned NTFS volume, FLR issues, and the like)?

Thanks
DGrinev
Veteran
Posts: 1943
Liked: 247 times
Joined: Dec 01, 2016 3:49 pm
Full Name: Dmitry Grinev
Location: St.Petersburg
Contact:

Re: Large VM design considerations

Post by DGrinev »

Hi Dazza,

Please review this existing discussion, and if you have additional questions, don't hesitate to ask. Thanks!
gingerdazza
Expert
Posts: 206
Liked: 14 times
Joined: Jul 23, 2013 9:14 am
Full Name: Dazza
Contact:

Re: Large virtual file servers - spanned disks vs. large vmd

Post by gingerdazza »

Thanks DGrinev

So, architecturally I fully understand how the parallel processing of spanned VMDKs increases backup speed. But are there any other major considerations with this method? For instance, does the use of spanned volumes affect Veeam restore functionality (like the old FLR limitation that I think used to exist), or does it potentially create problems with the NTFS file system (corruption)?
DGrinev
Veteran
Posts: 1943
Liked: 247 times
Joined: Dec 01, 2016 3:49 pm
Full Name: Dmitry Grinev
Location: St.Petersburg
Contact:

Re: Large virtual file servers - spanned disks vs. large vmd

Post by DGrinev »

There are no major considerations off the top of my head; I have seen multiple reports of spanned disks being used successfully. Thanks!
aceit
Enthusiast
Posts: 31
Liked: 14 times
Joined: Jun 20, 2017 3:17 pm
Contact:

Re: [MERGED] Large VM design considerations

Post by aceit »

gingerdazza wrote: Would appreciate people's thoughts on considerations for large multi-TB VMs (~5TB each). Is it worth spanning volumes across VMDKs for Veeam throughput, or does this create other challenges (higher chance of file data corruption on the spanned NTFS volume, FLR issues, and the like)?
Personally, I usually don't like to solve these "volume manager" tasks inside the OS stack; I prefer to push the problem down to the disk array / SAN (that is its primary job), namely to present the server with a single big LUN, backed by whatever external array configuration is in place, which can span different controller disks dynamically as required.

Still, I don't think there are particular problems in using multiple VMDKs and spanning/binding them with various OS-based solutions (Storage Spaces, the normal volume manager, etc.); it should be fine if needed. A lot depends on the particular hardware configuration and design, and it is good to have the flexibility; each case is different (e.g. if the different vmdks end up sharing the same spindles and controller, I don't think this would improve much, due to the underlying contention and bottleneck).