Discussions specific to the VMware vSphere hypervisor
jja
Enthusiast
Posts: 45
Liked: 8 times
Joined: Nov 13, 2013 6:40 am
Full Name: Jannis Jacobsen
Contact:

Moving fileserver datadisks to vmware

Post by jja » Mar 07, 2014 7:28 am

Hi!

Today we run our fileserver as a virtual server with direct iSCSI connections to a dedicated SAN for the shared data.
We are considering configuring a new 2012 R2 server and replicating all the data to VMDK files instead.

As of today there is about 50 TB of data needing backup.
Are there any drawbacks to doing this with regard to backup by Veeam, as just one VM will be 50 TB+?
We will create a single job for this VM due to the size.

We have one Veeam server that runs all the backups and stores everything on a 2012 R2 server with Veeam components installed.
The Veeam server has direct iSCSI connections to all the VMware iSCSI LUNs.
Are there any best practices we should implement?

-J

dellock6
Veeam Software
Posts: 5734
Liked: 1625 times
Joined: Jul 26, 2009 3:39 pm
Full Name: Luca Dell'Oca
Location: Varese, Italy
Contact:

Re: Moving fileserver datadisks to vmware

Post by dellock6 » Mar 07, 2014 11:14 am

Hi Jannis,
one suggestion I can give you is, if possible, to split those 50 TB into multiple VMDK disks.
Thanks to the way our parallel processing works, you can spread the load of backing up different VMDKs of the same VM across different Veeam proxies in parallel, while a single huge VMDK can only be processed sequentially by one task.
In this way, you can greatly increase processing speed.
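Luca's point can be illustrated with a rough back-of-the-envelope model (the per-task throughput and the number of proxy slots below are invented assumptions for illustration, not Veeam figures; real numbers depend on your storage and network):

```python
import math

# Toy model: each VMDK is handled by exactly one task, and tasks run in
# parallel up to the number of available proxy slots. The 300 MB/s per-task
# throughput is an assumed figure, not a Veeam specification.
def backup_hours(total_tb, num_vmdks, parallel_slots, mb_per_sec_per_task=300):
    vmdk_tb = total_tb / num_vmdks
    hours_per_vmdk = vmdk_tb * 1024 * 1024 / mb_per_sec_per_task / 3600
    waves = math.ceil(num_vmdks / parallel_slots)  # VMDKs beyond free slots wait
    return waves * hours_per_vmdk

single = backup_hours(50, 1, 8)   # one 50 TB VMDK: a single task does all the work
split = backup_hours(50, 10, 8)   # ten 5 TB VMDKs over 8 slots: two waves
```

Under these assumptions the split layout finishes roughly five times sooner, simply because ten disks can be read concurrently while one monolithic disk cannot.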

Other than that, there are no issues with backing up such a large VM; we have plenty of feedback from customers protecting very large VMs.

Luca.
Luca Dell'Oca
Principal EMEA Cloud Architect @ Veeam Software

@dellock6
https://www.virtualtothecore.com/
vExpert 2011 -> 2019
Veeam VMCE #1

Vitaliy S.
Product Manager
Posts: 22987
Liked: 1556 times
Joined: Mar 30, 2009 9:13 am
Full Name: Vitaliy Safarov
Contact:

Re: Moving fileserver datadisks to vmware

Post by Vitaliy S. » Mar 09, 2014 3:02 pm

I would agree with Luca: splitting the data into multiple virtual disks would allow you to back up this VM faster (at least during the initial full job run).

jja
Enthusiast
Posts: 45
Liked: 8 times
Joined: Nov 13, 2013 6:40 am
Full Name: Jannis Jacobsen
Contact:

Re: Moving fileserver datadisks to vmware

Post by jja » Mar 10, 2014 6:49 am

I forgot to mention that we will, as today, split the data over several drives (VMDKs).
We will probably add ten 10 TB VMDKs (some might be 12 TB due to large datasets).

Going to look into adding some proxies; I'm guessing one VM proxy dedicated to each physical host should be a good start?

Thanks! Looking forward to getting the fileserver fully virtualized :)

-J

dellock6
Veeam Software
Posts: 5734
Liked: 1625 times
Joined: Jul 26, 2009 3:39 pm
Full Name: Luca Dell'Oca
Location: Varese, Italy
Contact:

Re: Moving fileserver datadisks to vmware

Post by dellock6 » Mar 10, 2014 11:07 am

We do not have specific best practices about a 1:1 relationship between virtual proxies and underlying ESXi servers, unless you are using NFS storage; in that case VMware has a known NFS locking issue when hot-adding VMDKs over the network (http://kb.vmware.com/selfservice/micros ... Id=2010953). We have a new registry key in our latest patch #3 that can help you prevent this:

HKLM\SOFTWARE\Veeam\Veeam Backup and Replication\EnableSameHostHotaddMode (DWORD): intelligent load balancing can then be configured to give preference to the backup proxy located on the same host.

Create the DWORD key there and set it to 1.

Luca.
Luca Dell'Oca
Principal EMEA Cloud Architect @ Veeam Software

@dellock6
https://www.virtualtothecore.com/
vExpert 2011 -> 2019
Veeam VMCE #1

Vitaliy S.
Product Manager
Posts: 22987
Liked: 1556 times
Joined: Mar 30, 2009 9:13 am
Full Name: Vitaliy Safarov
Contact:

Re: Moving fileserver datadisks to vmware

Post by Vitaliy S. » Mar 10, 2014 11:54 am

jja wrote:Going to look into adding some proxies, guessing 1 vm proxy dedicated to each physical host should be a good start?
Depends on your setup and configuration. I see that you're using SAN storage, so I would recommend going with a physical proxy server configured to work in Direct SAN access mode; that should give you good job performance rates.

jja
Enthusiast
Posts: 45
Liked: 8 times
Joined: Nov 13, 2013 6:40 am
Full Name: Jannis Jacobsen
Contact:

Re: Moving fileserver datadisks to vmware

Post by jja » Mar 11, 2014 6:25 am

Vitaliy S. wrote: Depends on your setup and configuration. I see that you're using SAN storage, so I would recommend going with physical proxy server configured to work in direct SAN mode, should give you good job performance rates.
Our Veeam backup server has direct SAN access, and for now I'm just testing whether we can increase performance without buying more hardware :)
Today I changed our backup jobs from chaining to starting at 20:00, 20:01, 20:02 and so forth.
If I understand correctly, this will let the jobs wait for available resources instead of "jamming" all resources at once.
Interesting to see how it works :)

This is also why I'd like to add some proxies; it would be nice to offload the backup server and shrink our backup window even more :)

-j

foggy
Veeam Software
Posts: 18261
Liked: 1561 times
Joined: Jul 11, 2011 10:22 am
Full Name: Alexander Fogelson
Contact:

Re: Moving fileserver datadisks to vmware

Post by foggy » Mar 11, 2014 7:05 am

jja wrote:Changed our backup jobs today from chaining to starting 20:00, 20:01, 20:02 and so forth.
If I understand it correctly, this will let the jobs wait for available resources and not "jam" all resources.
Interesting to see how it works :)
Kindly let us know what kind of improvement you get.
jja wrote:This is why I'd like to add some proxies, would be nice to offload and lower our backup window even more :)
Additional proxies will indeed improve overall backup performance. Moreover, having virtual proxies on the hosts will allow for faster restores using hot-add mode.

jja
Enthusiast
Posts: 45
Liked: 8 times
Joined: Nov 13, 2013 6:40 am
Full Name: Jannis Jacobsen
Contact:

Re: Moving fileserver datadisks to vmware

Post by jja » Mar 12, 2014 6:05 am

foggy wrote: Kindly let us know what kind of improvement you get.
Well, this was kind of fun :)
Usually our backup ends between 02:00 and 04:00, depending on data changes.
After I disabled chaining, all our backups were done by 23:15.
That's 3-5 hours faster, and this is before adding any proxies as well.
And I believe parallel processing is still turned off, too.


I've enabled parallel processing now, but we have only enabled 2 concurrent tasks on the backup server.
Is there a good way to determine what the ideal setting would be?
The server is a dual 6-core with 96 GB RAM, so the bottleneck would probably be elsewhere (SAN, vSphere environment).

If I add a couple of proxies, will this automatically increase the parallel processing capabilities?
(Sorry if I'm a bit off topic from the original subject, but this is in preparation for migrating the fileserver.)

-J

veremin
Product Manager
Posts: 16892
Liked: 1435 times
Joined: Oct 26, 2012 3:28 pm
Full Name: Vladimir Eremin
Contact:

Re: Moving fileserver datadisks to vmware

Post by veremin » Mar 12, 2014 8:09 am

Usually our backup ends between 02:00 and 04:00 depending on data changes. After I disabled chaining, all our backups were done by 23:15. That's 3-5 hours faster, and this is before adding any proxies as well.
This is exactly why we recommend using the maximum number of concurrent tasks instead of job chaining. In most cases, the latter does more harm than good.
I've enabled parallel processing now, but we have only enabled 2 concurrent tasks on the backup server.
With 6 cores in place, I believe you can increase the number of proxy concurrent tasks up to 6. However, if either the source or the target can't cope with the backup load, the increased number of concurrent tasks won't make a significant difference. In other words, if the source data mover spends all of its time reading data, or the target data mover spends most of its time performing I/O to the backup files, changes to the proxy settings won't help much.
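The bottleneck argument above can be sketched as a pipeline that runs at the speed of its slowest stage (all throughput figures below are invented for illustration, not measured Veeam numbers):

```python
# Toy pipeline model: source read -> proxy tasks -> target write.
# Effective throughput is capped by the slowest of the three stages.
def effective_mbps(source_mbps, per_task_mbps, tasks, target_mbps):
    return min(source_mbps, per_task_mbps * tasks, target_mbps)

two_tasks = effective_mbps(800, 150, 2, 500)      # proxy-bound: 300 MB/s
six_tasks = effective_mbps(800, 150, 6, 500)      # now target-bound: 500 MB/s
twelve_tasks = effective_mbps(800, 150, 12, 500)  # extra tasks change nothing
```

Going from 2 to 6 tasks helps in this sketch, but beyond that the target repository caps throughput, which is exactly why adding proxy task slots alone may not shrink the backup window.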
If I add a couple of proxies, will this automatically increase the parallel processing capabilities?
Yes, it will give you additional slots.

Thanks.

dellock6
Veeam Software
Posts: 5734
Liked: 1625 times
Joined: Jul 26, 2009 3:39 pm
Full Name: Luca Dell'Oca
Location: Varese, Italy
Contact:

Re: Moving fileserver datadisks to vmware

Post by dellock6 » Mar 12, 2014 9:21 am

Basically, when parallel processing is enabled, think of a 1:1 relationship between a proxy core and a VMDK being processed.

Luca.
Luca Dell'Oca
Principal EMEA Cloud Architect @ Veeam Software

@dellock6
https://www.virtualtothecore.com/
vExpert 2011 -> 2019
Veeam VMCE #1

jja
Enthusiast
Posts: 45
Liked: 8 times
Joined: Nov 13, 2013 6:40 am
Full Name: Jannis Jacobsen
Contact:

Re: Moving fileserver datadisks to vmware

Post by jja » Mar 14, 2014 7:12 am

Pretty interesting results after enabling parallel processing and adding 1 proxy
(going to add 6 more proxies today).

The backups, which usually ran from 20:00 until 02:15-04:30, then finished by 23:15 after disabling chaining, are now down to 1h 25min :)
Not bad for 7 jobs with a total of 116 VMs :)

-j

dellock6
Veeam Software
Posts: 5734
Liked: 1625 times
Joined: Jul 26, 2009 3:39 pm
Full Name: Luca Dell'Oca
Location: Varese, Italy
Contact:

Re: Moving fileserver datadisks to vmware

Post by dellock6 » Mar 14, 2014 10:21 am

Nice numbers! Thanks for posting them :)

Luca.
Luca Dell'Oca
Principal EMEA Cloud Architect @ Veeam Software

@dellock6
https://www.virtualtothecore.com/
vExpert 2011 -> 2019
Veeam VMCE #1
