-
- Novice
- Posts: 6
- Liked: never
- Joined: Aug 09, 2011 4:17 pm
- Full Name: Max Frimond
- Contact:
Seeking strategy advice
Hi all,
Two of our clusters use an NFS SAN, and we use Veeam to back up the guests to a separate NAS. We cannot use Virtual Appliance Mode because we get VMs freezing during snapshot removal, as described in http://kb.vmware.com/selfservice/micros ... Id=2010953, and have had to fall back to Network Mode. Unfortunately this method is so much slower than Virtual Appliance Mode (~10MB/s versus ~60MB/s) that the backups do not finish each night and can run on for days. If we use Appliance Mode the backups run very quickly, but each VM freezes on snapshot removal unless the proxy and the guest being backed up are on the same host.
I am after some guidance on how to use Appliance Mode, or some other backup method, rather than waiting for VMware to provide a fix!
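(As a rough sanity check of what those rates mean for the nightly window, here is a minimal calculation; the nightly data volume is an assumed figure, only the two throughputs come from the jobs themselves.)

```python
# Rough backup-window estimate: hours needed to move a nightly data volume at
# a given throughput. The 2 TB read per night is a hypothetical figure; only
# the ~10 MB/s (Network Mode) and ~60 MB/s (hotadd) rates come from the jobs.

def window_hours(data_gb: float, rate_mb_per_s: float) -> float:
    """Hours needed to move data_gb gigabytes at rate_mb_per_s MB/s."""
    return (data_gb * 1024) / rate_mb_per_s / 3600

data_read_per_night_gb = 2048  # assumed amount of data the jobs read per night

for mode, rate in [("Network Mode", 10), ("Virtual Appliance (hotadd)", 60)]:
    print(f"{mode:28s} ~{rate:2d} MB/s -> "
          f"{window_hours(data_read_per_night_gb, rate):5.1f} h")

# Approximate results: ~58 h at 10 MB/s (spills into following days)
# versus ~10 h at 60 MB/s (fits an overnight window).
```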
Our setup:
VMware vSphere 5.1
Veeam 6.5
Each cluster has a Veeam proxy VM, controlled by another Veeam instance on our management cluster.
Separate job for each cluster backs up to NAS with 2GB links.
Incremental Jobs
Using LAN storage optimisation (should we change to Local?)
Thank you
Max
-
- VeeaMVP
- Posts: 6165
- Liked: 1971 times
- Joined: Jul 26, 2009 3:39 pm
- Full Name: Luca Dell'Oca
- Location: Varese, Italy
- Contact:
Re: Seeking strategy advice
Hi Max, one workaround I found for this problem is to have the proxy local to the same ESXi host as the VM, so hotadd does not run cross-host. Have you tried it? I know that in a large cluster this is not a viable solution, and DRS can also break the job configuration by moving VMs to other hosts, but it is at least a first step to see whether this could work for you.
Sadly, NFS support with HotAdd is really buggy right now...
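If it helps, here is a minimal sketch (assuming the pyvmomi library is installed; the vCenter address, credentials and proxy VM name are placeholders) that reports whether the proxy currently sits on the same ESXi host as each VM, which is exactly the placement DRS tends to break over time:

```python
# Minimal pyVmomi sketch: list each VM's current ESXi host and flag whether it
# shares a host with the hotadd proxy. Credentials and names are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

VCENTER, USER, PASSWORD = "vcenter.example.local", "user", "pass"  # placeholders
PROXY_NAME = "veeam-proxy-01"                                      # placeholder

ctx = ssl._create_unverified_context()
si = SmartConnect(host=VCENTER, user=USER, pwd=PASSWORD, sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    vms = {vm.name: vm for vm in view.view}

    proxy_host = vms[PROXY_NAME].runtime.host.name
    print(f"Proxy {PROXY_NAME} is currently on host {proxy_host}")
    for name, vm in sorted(vms.items()):
        if name == PROXY_NAME or vm.runtime.host is None:
            continue
        same = vm.runtime.host.name == proxy_host
        print(f"{name:30s} host={vm.runtime.host.name:20s} same_host={same}")
finally:
    Disconnect(si)
```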
Luca.
Luca Dell'Oca
Principal EMEA Cloud Architect @ Veeam Software
@dellock6
https://www.virtualtothecore.com/
vExpert 2011 -> 2022
Veeam VMCE #1
-
- Novice
- Posts: 6
- Liked: never
- Joined: Aug 09, 2011 4:17 pm
- Full Name: Max Frimond
- Contact:
Re: Seeking strategy advice
Hi Luca,
I'd love to be able to tie the proxy to the host, but the clusters currently have 10 hosts with ~15 guests per host. I'd have to have a proxy per host to do this, I guess?
Does the job select the proxy per backup job or per VM being backed up in that job?
Thanks
Max
-
- Product Manager
- Posts: 20382
- Liked: 2294 times
- Joined: Oct 26, 2012 3:28 pm
- Full Name: Vladimir Eremin
- Contact:
Re: Seeking strategy advice
Hi.
Yes, you’re completely right. As Luca has already mentioned, a potential workaround for your issue would be having one proxy VM in each host of your cluster.
As for the proxy, it is selected on the job-level.
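As a rough illustration of how per-host jobs could be drafted, here is a small sketch; it assumes a plain two-column CSV export of VM-to-host placement (for example produced by the placement check sketched earlier in the thread, or by any inventory tool), and the file name is a placeholder:

```python
# Group VMs by their current ESXi host so each group can become one backup job
# whose proxy lives on that host. This reflects a point-in-time placement only:
# DRS will still move VMs afterwards, so the grouping would need refreshing.
import csv
from collections import defaultdict

jobs = defaultdict(list)  # host name -> list of VM names
with open("vm_placement.csv", newline="") as f:  # placeholder: "vm_name,host_name" rows
    for vm_name, host_name in csv.reader(f):
        jobs[host_name].append(vm_name)

for host, vm_list in sorted(jobs.items()):
    print(f"Job 'Backup-{host}': {len(vm_list)} VMs")
    for vm in sorted(vm_list):
        print(f"  {vm}")
```

With ten hosts and roughly fifteen guests per host, this would yield ten jobs of about fifteen VMs each.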
Hope this helps.
Thanks.
Yes, you’re completely right. As Luca has already mentioned, a potential workaround for your issue would be having one proxy VM in each host of your cluster.
As for the proxy, it is selected on the job-level.
Hope this helps.
Thanks.
-
- Novice
- Posts: 6
- Liked: never
- Joined: Aug 09, 2011 4:17 pm
- Full Name: Max Frimond
- Contact:
Re: Seeking strategy advice
Hi,
Thank you for the reply. Does that mean I will need to create a job per host? DRS migrations would add to the job workload with this, I imagine. We would like to have one backup job per cluster, as it's not really possible to split these into separate jobs given the mixture of VMs we have; there is no logical grouping!
Max
-
- VP, Product Management
- Posts: 6034
- Liked: 2859 times
- Joined: Jun 05, 2009 12:57 pm
- Full Name: Tom Sightler
- Contact:
Re: Seeking strategy advice
One thing that I'm surprised at is that you are only seeing 10MB/sec out of network mode. With vSphere 5.1 I've typically seen performance that is much better than this, even for network mode (if you're lucky enough to have 10Gb then network mode can really fly). However, even with 1Gb I've typically been able to see speeds of 40-50MB/sec (not that much worse than what you were showing for hotadd). Are the Veeam proxies that are using network mode on the same subnet as the ESXi hosts? Do you feel that your management network should be able to sustain at least 1Gb?
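If you want to take Veeam out of the equation for a moment, a plain TCP transfer between the proxy and another machine on the management network will show what the path can actually sustain. This is a generic sketch, not a Veeam tool; the receiver address and port are placeholders, and the receiver side has to be started first on the other machine:

```python
# Generic TCP throughput check: run "python tcp_check.py recv" on a machine on
# the management network, then run the sender on the Veeam proxy. A healthy
# 1Gb path should report on the order of 100 MB/s.
import socket, sys, time

HOST, PORT = "192.168.1.50", 5001        # placeholder receiver address and port
CHUNK = b"\0" * (1 << 20)                # send in 1 MiB chunks
TOTAL_MB = 1024                          # send 1 GiB in total

def receiver():
    with socket.create_server(("", PORT)) as srv:
        conn, _ = srv.accept()
        with conn:
            while conn.recv(1 << 16):    # drain until the sender closes
                pass

def sender():
    start = time.time()
    with socket.create_connection((HOST, PORT)) as s:
        for _ in range(TOTAL_MB):
            s.sendall(CHUNK)
    elapsed = time.time() - start
    print(f"{TOTAL_MB} MB in {elapsed:.1f}s -> {TOTAL_MB / elapsed:.1f} MB/s")

if __name__ == "__main__":
    receiver() if sys.argv[1:] == ["recv"] else sender()
```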
-
- Novice
- Posts: 6
- Liked: never
- Joined: Aug 09, 2011 4:17 pm
- Full Name: Max Frimond
- Contact:
Re: Seeking strategy advice
Hi,
The Veeam proxies are on the same subnet as the hosts. The proxies also have a second NIC with direct access to the NAS, which sits on a storage network (a different subnet), so that traffic is not routed. All networks are at least 1Gb.
A job is running now at ~22MB/s, but that's still not the 40-50MB/s we'd like!
Max
-
- VP, Product Management
- Posts: 6034
- Liked: 2859 times
- Joined: Jun 05, 2009 12:57 pm
- Full Name: Tom Sightler
- Contact:
Re: Seeking strategy advice
By "direct access to the NAS", are you saying this is where you are storing your backups? Just wanting to be 100% clear. Also, are the speeds you mention above for full backups?
I'm surprised that you are seeing such poor performance. Even in my home lab I see much better performance than this with network mode. I have an HP DL360-G5 running vSphere 5.1. The NFS datastore is provided by a simple Linux server running RHEL6 that presents an ext4 filesystem on top of a mirrored LVM setup with SATA drives, and the hardware is an old HP x86 workstation circa 2003. In other words, nothing to write home about. My network is a 1Gb network using Netgear ProSafe+ switches that I purchased at the local office supply store; in other words, also nothing very special. Even in this very sub-par setup, network mode backups achieve right around 50MB/sec when backing up the NFS volumes, while hotadd can do about 60MB/sec from the same volumes, so a small decrease, but not a huge one.
I'm curious whether you can tell me your storage latency during backups (based on vCenter/Veeam ONE graphs) and also share the job bottleneck statistics. Also, are you perhaps enabling SSL for network mode? Using SSL for data transfer has a HUGE negative performance impact. What about any traffic shaping rules on the management network settings of the ESXi hosts?
-
- Novice
- Posts: 6
- Liked: never
- Joined: Aug 09, 2011 4:17 pm
- Full Name: Max Frimond
- Contact:
Re: Seeking strategy advice
Thanks for your response. This spurred me into looking for a network issue.
We've identified a problem with the network topology: backup data was being routed via another network to reach the NAS, which meant it was being throttled. We've fixed this and are now seeing speeds of ~45MB/s, which is a vast improvement!
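In case it is useful to anyone else chasing something similar, a timed sequential write against the repository mount is a quick way to confirm the storage path itself is no longer the ceiling. This is just a sketch; the mount path and sizes are placeholders, and on storage that compresses or deduplicates, writing zeros may report unrealistically high numbers:

```python
# Quick sequential-write check against the backup repository share.
import os, time

TARGET = "/mnt/backup-nas/throughput.tmp"   # placeholder repository mount point
BLOCK = b"\0" * (4 << 20)                   # 4 MiB per write
TOTAL_MB = 2048                             # write 2 GiB in total

start = time.time()
with open(TARGET, "wb") as f:
    for _ in range(TOTAL_MB // 4):
        f.write(BLOCK)
    f.flush()
    os.fsync(f.fileno())                    # make sure data actually hit the NAS
elapsed = time.time() - start
os.remove(TARGET)
print(f"Wrote {TOTAL_MB} MB in {elapsed:.1f}s -> {TOTAL_MB / elapsed:.1f} MB/s")
```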
Now we'd just like to find out whether we can get back to using Virtual Appliance Mode!
Thanks
-
- VP, Product Management
- Posts: 6034
- Liked: 2859 times
- Joined: Jun 05, 2009 12:57 pm
- Full Name: Tom Sightler
- Contact:
Re: Seeking strategy advice
Unfortunately, since the issue with hotadd and NFS is a VMware bug, I think they are really the only ones who can fix it; as you can see from the KB article, it even impacts their own data protection product. I'm assuming you attempted the workaround mentioned in the KB article without success (I've had one customer report that it helped significantly in their environment, but several others saw no difference). I'd suggest opening a support case with VMware. If enough customers open support cases, it will put pressure on them to resolve the issue in a future update.
For now, the workarounds presented in this thread are all that are available: design jobs around hosts and have a proxy per host, which of course has significant resource and administrative overhead, or just use network mode, which generally is not quite as fast but is usually "good enough" and doesn't have this problem.
-
- Novice
- Posts: 6
- Liked: never
- Joined: Aug 09, 2011 4:17 pm
- Full Name: Max Frimond
- Contact:
Re: Seeking strategy advice
I'm going to see how we get on with Network Mode; considering it is much, much faster now that we've fixed the routing problem, we can live with it until VMware fixes the bug.
I've opened tickets with VMware about this. They initially told me the bug didn't exist on 5.1, which is why we upgraded. I went back to them after the upgrade to tell them it wasn't fixed; they then updated the KB article and said we'd have to wait!!