- Novice
- Posts: 6
- Liked: never
- Joined: Nov 20, 2013 5:55 am
- Full Name: Michael Cremer
Veeam Bottleneck: Target
Hello all,
First, let me start off by saying that I appreciate any and all help that the community/Veeam support can provide in this matter, and I look forward to working through this with your assistance! The issue: depending on which proxy transport mode I select, I see very different maximum speeds, yet the bottleneck is always reported as the target. Here are some more detailed examples:
Direct SAN - 280-350MB/sec processing speeds | Bottleneck Target (never ran a complete backup in this mode, only used it for testing purposes; I'm quite sure it would have dropped significantly from this number)
Virtual Appliance - 25MB/sec processing speeds | Bottleneck Target
Network Mode - 17MB/sec processing speeds | Bottleneck Target
During the Direct SAN testing I can watch the repository NIC go to 98% utilization (and stay there), while in the other modes it occasionally jumps to 40%, often sits at 0%, and most frequently hovers around 12%.
In my opinion, the target being the bottleneck in Virtual Appliance and Network modes just doesn't make sense and seems very odd. My reasoning is simply that Direct SAN pushed considerably more data to the target. Even if Direct SAN had dropped to 1Gbps wire speed (~120MB/sec) at the target, how can the target be considered the bottleneck when the other modes can only read the source at 15-25MB/sec?
All of these tests were with the following Veeam topology:
1 Physical Backup Server/Repository
1 Virtual Proxy Server
The link between the physical and virtual servers is just a 1Gbps connection. The actual repository is a simple USB 3.0 hard drive.
Dell PC6224 Switches
EMC VNXe3300 SAN/NAS
Dell PE R720
vSphere 5.5
Veeam 7 R2
Proxy = Server 2008 R2
Backup/Repository = Server 2012
My main question is: is there any logic to why the target shows up as the bottleneck in the two slower tests? I'm pretty new to Veeam in general, so in addition to my main question, if you have any advice on how to increase the speed of the Virtual Appliance and/or Network transport modes, I would love to hear it. The 25MB/sec really doesn't seem right to me. We don't really consider iSCSI an option for this environment and would like to stick with NFS, which is why we aren't just running with Direct SAN mode.
On a side note, any plans to offer a "Direct NAS" mode? It would be easy to add NFS share permissions for the Veeam servers, put them in the same VLAN/subnet, and mount the NFS shares that hold the .vmdks directly on the Veeam servers. That's the easy part, but is Veeam working on any super cool auto-magical way to access the read-only .vmdks created by snapshots without making ESXi angry?
Thanks again for your help and input, and if you need more information definitely ask away!
- Chief Product Officer
- Posts: 31804
- Liked: 7298 times
- Joined: Jan 01, 2006 1:01 am
- Location: Baar, Switzerland
Re: Veeam Bottleneck: Target
Hi, please open a support case, as this sort of problem can be troubleshot in no time over WebEx, compared to a few weeks of typically unsuccessful guessing over forum posts. Thanks.
- Veeam Software
- Posts: 21138
- Liked: 2141 times
- Joined: Jul 11, 2011 10:22 am
- Full Name: Alexander Fogelson
Re: Veeam Bottleneck: Target
What backup method do you use - forward or reversed incremental?
- VeeaMVP
- Posts: 6165
- Liked: 1971 times
- Joined: Jul 26, 2009 3:39 pm
- Full Name: Luca Dell'Oca
- Location: Varese, Italy
Re: Veeam Bottleneck: Target
I was going to ask the same question. With reversed incremental, even Virtual Appliance and Network modes can put enough load on the target to make it the bottleneck.
Also, Michael, it's better to remember that the bottleneck stats do not always mean there is a problem; they only show you the slowest link in the backup chain.
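As a simplified illustration of that point (this is not Veeam's actual algorithm, just a way to picture it): each stage of the data mover reports the percentage of time it spends busy, and the bottleneck label goes to the busiest stage, regardless of absolute throughput. So a target can be 95% busy at 25 MB/sec just as easily as at 350 MB/sec.
Code:
# Hypothetical busy percentages for a slow reversed incremental run.
# The "bottleneck" is the stage that is busy the largest share of the
# time, not the stage that moves the least data.
stages = {
    "Source": 40,   # reading changed blocks from the datastore
    "Proxy": 15,    # compression/deduplication work
    "Network": 20,  # proxy -> repository link
    "Target": 95,   # repository disk saturated by random I/O
}
print("Bottleneck:", max(stages, key=stages.get))  # -> Target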
About NFS support: it is limited by the VMware VADP libraries, and for now Direct SAN mode works only with block storage. It would indeed be cool to have Direct SAN with NFS storage.
Luca.
Luca Dell'Oca
Principal EMEA Cloud Architect @ Veeam Software
@dellock6
https://www.virtualtothecore.com/
vExpert 2011 -> 2022
Veeam VMCE #1
- Novice
- Posts: 6
- Liked: never
- Joined: Nov 20, 2013 5:55 am
- Full Name: Michael Cremer
Re: Veeam Bottleneck: Target
Thanks for the replies, everyone!
@Gostev: I'll take your advice and reach out to Veeam technical support.
As for the backup method, I'm actually using both forward and reversed incremental (two different backup jobs). I had a forward backup run overnight using "network" transport mode that ended up reaching 72MB/sec by the end of the backup, while the reverse backup wasn't able to finish in its window. The reverse backup was also using "network" transport mode and was running at 25MB/sec before it was canceled.
- Service Provider
- Posts: 182
- Liked: 48 times
- Joined: Sep 03, 2012 5:28 am
- Full Name: Yizhar Hurwitz
Re: Veeam Bottleneck: Target
Hi.
> The link between the Physical and Virtual is just a 1Gbps connection. The actual repository is a simple USB3.0 hard drive
A USB 3.0 drive is not the right tool for the job.
Sharing it via NFS doesn't make it better.
Your primary repository should be a faster device; external USB disks are better used for secondary backups to take offsite.
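If you want to see how much the USB disk suffers under random writes specifically, a rough check is to compare sequential vs random write throughput on it (a minimal sketch with a hypothetical test path; OS caching makes the numbers approximate, so treat them as a comparison, not a benchmark):
Code:
import os, random, time

PATH = r"E:\veeam-io-test.bin"  # hypothetical repository drive letter
SIZE = 512 * 1024 * 1024        # 512 MB test file
BLOCK = 512 * 1024              # 512 KB blocks
buf = os.urandom(BLOCK)

# Sequential writes (roughly what forward incremental does on the target)
start = time.time()
with open(PATH, "wb") as f:
    for _ in range(SIZE // BLOCK):
        f.write(buf)
    f.flush(); os.fsync(f.fileno())
print("sequential: %.0f MB/s" % (SIZE / 2**20 / (time.time() - start)))

# Random in-place writes (roughly what reversed incremental does)
offsets = list(range(0, SIZE, BLOCK))
random.shuffle(offsets)
start = time.time()
with open(PATH, "r+b") as f:
    for off in offsets:
        f.seek(off)
        f.write(buf)
    f.flush(); os.fsync(f.fileno())
print("random:     %.0f MB/s" % (SIZE / 2**20 / (time.time() - start)))

os.remove(PATH)
On a single USB spindle the random figure is typically several times lower than the sequential one.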
Yizhar
- Product Manager
- Posts: 20400
- Liked: 2298 times
- Joined: Oct 26, 2012 3:28 pm
- Full Name: Vladimir Eremin
Re: Veeam Bottleneck: Target
Taking your target storage into account, the discrepancy in speed between the different backup modes is expected. With this kind of device the target storage will virtually always be the bottleneck, since reversed incremental mode requires 3x the I/O of an active full or forward incremental backup on the target storage. The random nature of the I/O in reversed incremental mode (in contrast to the sequential writes of an active full/incremental) makes performance even worse.
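Per changed block, reversed incremental does roughly three operations on the target (a simplified model, not Veeam's actual implementation): read the old block out of the full backup file, write it into the rollback file, then overwrite the block in the full backup with the new data, with the reads and overwrites landing at random offsets:
Code:
import io

# Simplified model of reversed incremental target I/O (illustration only).
# vbk = full backup file, vrb = rollback file, changed_blocks = list of
# (offset, new_data) pairs coming from the proxy.
def apply_reversed_increment(vbk, vrb, changed_blocks):
    for offset, new_data in changed_blocks:
        vbk.seek(offset)
        old_data = vbk.read(len(new_data))  # I/O 1: random read from .vbk
        vrb.write(old_data)                 # I/O 2: write to rollback .vrb
        vbk.seek(offset)
        vbk.write(new_data)                 # I/O 3: random write into .vbk

# Toy usage with in-memory "files":
vbk = io.BytesIO(bytearray(1024))  # pretend full backup file
vrb = io.BytesIO()                 # pretend rollback file
apply_reversed_increment(vbk, vrb, [(0, b"new!"), (512, b"data")])

# A forward incremental instead appends each new block once, sequentially,
# to a new .vib file: one write per changed block instead of three.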
Anyway, the support team will be able to assist you with log and infrastructure analysis.
Thanks.