-
- Enthusiast
- Posts: 46
- Liked: 8 times
- Joined: Nov 13, 2013 6:40 am
- Full Name: Jannis Jacobsen
- Contact:
Moving fileserver datadisks to vmware
Hi!
Today we run our fileserver as a virtual server with direct iSCSI connections to a dedicated SAN for the shared data.
We are considering configuring a new 2012 R2 server and replicating all the data to VMDK files instead.
As of today there is about 50TB of data needing backup.
Are there any drawbacks to doing this with regard to Veeam backup, since a single VM will be 50TB+?
We will create a single job for this VM due to the size.
We have one Veeam server that runs all the backups and stores everything on a 2012 R2 server with the Veeam components installed.
The Veeam server has direct iSCSI connections to all the VMware iSCSI LUNs.
Are there any best practices we should implement?
-J
-
- VeeaMVP
- Posts: 6166
- Liked: 1971 times
- Joined: Jul 26, 2009 3:39 pm
- Full Name: Luca Dell'Oca
- Location: Varese, Italy
- Contact:
Re: Moving fileserver datadisks to vmware
Hi Jannis,
one suggestion I can give you is, if possible, to split those 50TB across multiple VMDK disks.
Based on how our parallel processing works, you can spread the load of backing up different VMDKs of the same VM across different Veeam proxies in parallel, while a single huge VMDK can only be processed sequentially by one task.
In this way, you can greatly increase processing speed.
Other than that, there are no issues with backing up such large VMs; we have feedback from several customers protecting very large VMs.
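Luca's point can be illustrated with a toy sketch (not Veeam code; the disk sizes and worker count are made up): several smaller VMDKs can be streamed by multiple proxy tasks at once, while one 50TB disk would be a single sequential stream.

```python
from concurrent.futures import ThreadPoolExecutor

def backup_disk(size_tb):
    """Stand-in for one proxy task reading a VMDK; returns TB processed."""
    return size_tb

# Ten 5TB VMDKs instead of one 50TB disk (hypothetical split).
disks = [5] * 10

# Four worker threads play the role of four proxy task slots.
with ThreadPoolExecutor(max_workers=4) as pool:
    processed = list(pool.map(backup_disk, disks))

print(sum(processed))  # 50 -- same total data, streamed by 4 workers in parallel
```

With a single 50TB entry in `disks`, the extra workers would sit idle, which is the sequential case Luca describes.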
Luca.
Luca Dell'Oca
Principal EMEA Cloud Architect @ Veeam Software
@dellock6
https://www.virtualtothecore.com/
vExpert 2011 -> 2022
Veeam VMCE #1
-
- VP, Product Management
- Posts: 27377
- Liked: 2800 times
- Joined: Mar 30, 2009 9:13 am
- Full Name: Vitaliy Safarov
- Contact:
Re: Moving fileserver datadisks to vmware
I would agree with Luca, as splitting the data into multiple virtual disks would allow you to back up this VM faster (at least during the initial full job run).
-
- Enthusiast
- Posts: 46
- Liked: 8 times
- Joined: Nov 13, 2013 6:40 am
- Full Name: Jannis Jacobsen
- Contact:
Re: Moving fileserver datadisks to vmware
I forgot to mention that we will (as we do today) split the data over several drives (VMDKs).
We will probably add ten 10TB VMDKs (some might be 12TB due to large datasets).
Going to look into adding some proxies; I'm guessing one VM proxy dedicated to each physical host should be a good start?
Thanks! Looking forward to getting the fileserver fully virtualized
-J
-
- VeeaMVP
- Posts: 6166
- Liked: 1971 times
- Joined: Jul 26, 2009 3:39 pm
- Full Name: Luca Dell'Oca
- Location: Varese, Italy
- Contact:
Re: Moving fileserver datadisks to vmware
We do not have specific best practices about a 1:1 relationship between virtual proxies and the underlying ESXi servers, unless you are using NFS storage; in that case VMware has a known issue with NFS locking while hot-adding VMDKs over the network (http://kb.vmware.com/selfservice/micros ... Id=2010953). We have a new registry value in our latest patch #3 that can help you prevent this:
HKLM\SOFTWARE\VeeaM\Veeam Backup and Replication\EnableSameHostHotaddMode (DWORD): intelligent load balancing can now be configured to give preference to a backup proxy located on the same host.
Create the DWORD value there and set it to 1.
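One possible way to create that value from an elevated command prompt on the backup server (standard reg.exe syntax; back up the registry before making changes):

```shell
rem Create the EnableSameHostHotaddMode DWORD value and set it to 1,
rem as described above. Run in an elevated prompt on the Veeam server.
reg add "HKLM\SOFTWARE\VeeaM\Veeam Backup and Replication" ^
    /v EnableSameHostHotaddMode /t REG_DWORD /d 1 /f
```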
Luca.
Luca Dell'Oca
Principal EMEA Cloud Architect @ Veeam Software
@dellock6
https://www.virtualtothecore.com/
vExpert 2011 -> 2022
Veeam VMCE #1
-
- VP, Product Management
- Posts: 27377
- Liked: 2800 times
- Joined: Mar 30, 2009 9:13 am
- Full Name: Vitaliy Safarov
- Contact:
Re: Moving fileserver datadisks to vmware
jja wrote: Going to look into adding some proxies, guessing 1 vm proxy dedicated to each physical host should be a good start?
It depends on your setup and configuration. I see that you're using SAN storage, so I would recommend going with a physical proxy server configured to work in direct SAN access mode; that should give you good job performance rates.
-
- Enthusiast
- Posts: 46
- Liked: 8 times
- Joined: Nov 13, 2013 6:40 am
- Full Name: Jannis Jacobsen
- Contact:
Re: Moving fileserver datadisks to vmware
Vitaliy S. wrote: Depends on your setup and configuration. I see that you're using SAN storage, so I would recommend going with physical proxy server configured to work in direct SAN mode, should give you good job performance rates.
Our Veeam backup server has direct SAN access, and for now I'm just testing to see whether we can increase performance without buying more hardware.
Changed our backup jobs today from chaining to starting at 20:00, 20:01, 20:02 and so forth.
If I understand it correctly, this will let the jobs wait for available resources rather than "jamming" all resources at once.
Interesting to see how it works
This is why I'd like to add some proxies; it would be nice to offload the backup server and shrink our backup window even more
-j
-
- Veeam Software
- Posts: 21139
- Liked: 2141 times
- Joined: Jul 11, 2011 10:22 am
- Full Name: Alexander Fogelson
- Contact:
Re: Moving fileserver datadisks to vmware
jja wrote: Changed our backup jobs today from chaining to starting 20:00, 20:01, 20:02 and so forth.
If I understand it correctly, this will let the jobs wait for available resources and not "jam" all resources.
Interesting to see how it works
Kindly let us know what kind of improvement you get.
jja wrote: This is why I'd like to add some proxies, would be nice to offload and lower our backup window even more
Additional proxies will indeed allow you to improve overall backup performance. Moreover, having virtual proxies on the hosts will allow for faster restores using hot-add mode.
-
- Enthusiast
- Posts: 46
- Liked: 8 times
- Joined: Nov 13, 2013 6:40 am
- Full Name: Jannis Jacobsen
- Contact:
Re: Moving fileserver datadisks to vmware
foggy wrote: Kindly let us know what kind of improvement you get.
Well, this was kinda fun
Usually our backup ends between 02:00 and 04:00 depending on data changes.
After I disabled chaining, all our backups were done by 23:15.
That's 3-5 hours faster, and this is before adding any proxies as well.
And I believe parallel processing was turned off as well.
I've enabled parallel processing now, but we have only enabled 2 concurrent tasks on the backup server.
Are there any good ways to determine what the ideal setting would be?
The server is a dual 6-core with 96GB RAM, so the bottleneck would probably be elsewhere (SAN, vSphere environment).
If I add a couple of proxies, will this automatically increase the parallel processing capabilities?
(Sorry if I'm a bit off topic from the original thread, but this is in preparation for migrating the fileserver.)
-J
-
- Product Manager
- Posts: 20415
- Liked: 2302 times
- Joined: Oct 26, 2012 3:28 pm
- Full Name: Vladimir Eremin
- Contact:
Re: Moving fileserver datadisks to vmware
jja wrote: Usually our backup ends between 02:00 and 04:00 depending on data changes. After I disabled chaining, all our backups were done by 23:15. That's 3-5 hours faster, and this is before adding any proxies as well.
This is the exact reason why we recommend using the maximum number of concurrent tasks instead of job chaining. In most cases, the latter does more harm than good.
jja wrote: I've enabled parallel processing now, but we have only enabled 2 concurrent tasks on the backup server.
With 6 cores in place, I believe you can increase the number of proxy concurrent tasks up to 6. However, if either the source or the target can't cope with the backup load, the increased number of proxy concurrent tasks won't make a significant difference. In other words, if the disk reader spends all of its time reading data, or if the target disk writer component spends most of its time performing I/O to backup files, changes to the proxy component won't help much.
jja wrote: If I add a couple of proxies, will this automatically increase the parallel processing capabilities?
Yes, it will give you additional task slots.
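The difference between chaining and staggered starts can be modeled with a toy semaphore (purely illustrative; the slot count matches the "2 concurrent tasks" setting discussed above, the job count is made up): jobs all start around the same time and queue for a free task slot, instead of each job waiting for the entire previous job to finish.

```python
import threading

MAX_CONCURRENT_TASKS = 2          # the proxy concurrent-task limit
slots = threading.BoundedSemaphore(MAX_CONCURRENT_TASKS)
completed = []

def run_job(name):
    with slots:                   # blocks here while both slots are busy
        completed.append(name)    # stand-in for the actual backup work

# Seven jobs "scheduled" at nearly the same time, like 20:00, 20:01, ...
threads = [threading.Thread(target=run_job, args=(f"job-{i}",)) for i in range(7)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(len(completed))  # 7 -- every job finishes, never more than 2 at once
```

Adding a proxy corresponds to raising the total number of slots, which is why it shortens the overall window.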
Thanks.
-
- VeeaMVP
- Posts: 6166
- Liked: 1971 times
- Joined: Jul 26, 2009 3:39 pm
- Full Name: Luca Dell'Oca
- Location: Varese, Italy
- Contact:
Re: Moving fileserver datadisks to vmware
Basically, when parallel processing is enabled, think about a 1:1 relationship between a proxy core and a VMDK.
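As a rough sizing sketch of that rule of thumb (all numbers hypothetical, based on the ten-VMDK plan mentioned earlier in the thread):

```python
# Assumption: one proxy task slot per CPU core, one VMDK per slot,
# per the 1:1 rule of thumb above.
vmdk_count = 10          # planned data disks on the file server
cores_per_proxy = 6      # hypothetical proxy VM size

proxies_needed = -(-vmdk_count // cores_per_proxy)  # ceiling division
print(proxies_needed)  # 2 proxies -> 12 slots, enough for 10 VMDKs
```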
Luca.
Luca Dell'Oca
Principal EMEA Cloud Architect @ Veeam Software
@dellock6
https://www.virtualtothecore.com/
vExpert 2011 -> 2022
Veeam VMCE #1
-
- Enthusiast
- Posts: 46
- Liked: 8 times
- Joined: Nov 13, 2013 6:40 am
- Full Name: Jannis Jacobsen
- Contact:
Re: Moving fileserver datadisks to vmware
Pretty interesting results after enabling parallel processing and adding 1 proxy.
(going to add 6 more proxies today).
The backups, which usually lasted from 20:00 to 02:15-04:30, then came down to 23:15, are now down to 1h 25min
Not bad for 7 jobs with a total of 116 VMs
-j
-
- VeeaMVP
- Posts: 6166
- Liked: 1971 times
- Joined: Jul 26, 2009 3:39 pm
- Full Name: Luca Dell'Oca
- Location: Varese, Italy
- Contact:
Re: Moving fileserver datadisks to vmware
Nice numbers! Thanks for posting them
Luca.
Luca Dell'Oca
Principal EMEA Cloud Architect @ Veeam Software
@dellock6
https://www.virtualtothecore.com/
vExpert 2011 -> 2022
Veeam VMCE #1