-
- Novice
- Posts: 5
- Liked: never
- Joined: Mar 04, 2014 7:23 am
- Contact:
Direct san access bottleneck source
We have Veeam installed on a standalone server with Direct SAN Access. Failover to network mode is disabled (SAN mode will always be used).
The destination is an iSCSI target with 4x1Gbps multipath.
With a benchmark tool I get a sequential read speed of 530 MB/s and write speed of 280 MB/s on a SAN LUN, and 360 MB/s read / 240 MB/s write (sequential) on the destination LUN.
So why does a new Veeam backup job transfer the backup only at gigabit speed (70-100 MB/s)?
It also told me that the bottleneck is the source. (04.03.2014 08:21:50 :: Busy: Source 99% > Proxy 62% > Network 3% > Target 0%)
Thanks
-
- Veeam Software
- Posts: 21139
- Liked: 2141 times
- Joined: Jul 11, 2011 10:22 am
- Full Name: Alexander Fogelson
- Contact:
Re: Direct san access bottleneck source
What kind of primary storage do you have?
Try to apply these recommendations; they should increase the overall performance of your backup jobs:
Improving direct-from-SAN backup speed with iSCSI SAN
-
- Novice
- Posts: 5
- Liked: never
- Joined: Mar 04, 2014 7:23 am
- Contact:
Re: Direct san access bottleneck source
foggy wrote: What kind of primary storage do you have?
HP 3PAR storage. Sorry, I did not mention that we have fibre to the primary storage and iSCSI to the destination.
foggy wrote: Try to apply these recommendations; they should increase the overall performance of your backup jobs: Improving direct-from-SAN backup speed with iSCSI SAN
-
- Veeam Software
- Posts: 21139
- Liked: 2141 times
- Joined: Jul 11, 2011 10:22 am
- Full Name: Alexander Fogelson
- Contact:
Re: Direct san access bottleneck source
Got it. Then it is worth checking drivers and firmware on the HBA and switches, and probably playing with MPIO settings.
-
- Novice
- Posts: 5
- Liked: never
- Joined: Mar 04, 2014 7:23 am
- Contact:
Re: Direct san access bottleneck source
foggy wrote: Got it. Then it is worth checking drivers and firmware on HBA and switches and probably playing with MPIO.
But look at my benchmarks: why do I get such high speeds there, yet only gigabit with Veeam?
-
- VeeaMVP
- Posts: 6166
- Liked: 1971 times
- Joined: Jul 26, 2009 3:39 pm
- Full Name: Luca Dell'Oca
- Location: Varese, Italy
- Contact:
Re: Direct san access bottleneck source
Where do you run the tests from? The Veeam proxy?
Also, it would be useful to know which backup method you are using. For example, with Reversed Incremental the tests you did are not representative, since reversed incremental uses 3 I/O operations per byte instead of 1, and its access pattern is far more random than sequential.
In general, I/O tests looking at maximum speed are not a correct way to measure the expected Veeam speed. What kind of test did you run? I suspect a large sequential operation...
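To make the point concrete, a rough back-of-the-envelope sketch (the sequential figure is from the benchmarks posted later in this thread; the random-access penalty is an assumed factor, not a measured one):

```python
# Rough illustration: why a sequential read benchmark overstates
# reversed-incremental throughput. Figures are assumptions for the sketch.

seq_read_mb_s = 460.0    # best-case sequential benchmark result
io_amplification = 3     # reversed incremental: ~3 I/O per byte instead of 1
random_penalty = 0.5     # assumed slowdown for random vs sequential access

effective = seq_read_mb_s / io_amplification * random_penalty
print(f"effective throughput: ~{effective:.0f} MB/s")  # far below the benchmark
```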
Luca.
Luca Dell'Oca
Principal EMEA Cloud Architect @ Veeam Software
@dellock6
https://www.virtualtothecore.com/
vExpert 2011 -> 2022
Veeam VMCE #1
-
- Novice
- Posts: 5
- Liked: never
- Joined: Mar 04, 2014 7:23 am
- Contact:
Re: Direct san access bottleneck source
Sorry for the late response, I had no time until today...
dellock6 wrote: Where do you run the tests from? The Veeam proxy?
Yes.
dellock6 wrote: Also, it would be useful to know which backup method you are using. For example, with Reversed Incremental the tests you did are not representative, since reversed incremental uses 3 I/O per byte instead of 1, and is far more random than sequential. In general, I/O tests looking at maximum speed are not a correct way to measure the expected Veeam speed. What kind of test did you run? I suspect a large sequential operation...
All tests with Veeam were always done with new jobs and a single VM to back up (default settings, incremental and full), so the first run always produced a full backup.
I don't get it. I did some tests again.
Design:
fibre LUN - SAN switch - standalone Windows server (Veeam) - (iSCSI 4x1Gbps) - backup storage server
Full backup of a VM (80 GB) took 15 min, which is actually gigabit speed. Again, I copy from the fibre LUN to the iSCSI LUN (4x1Gbps). Bottleneck is Source. (19.03.2014 09:02:28 :: Load: Source 95% > Proxy 67% > Network 17% > Target 11%)
Processing rate: 94 MB/s, speed: 122.9 MB/s.
I did the same test (same VM) on the current Veeam server (virtual machine, hot-add, destination iSCSI 1Gbps); this took 12 min, faster than the setup above?? (Bottleneck is Source too, 19.03.2014 09:22:33 :: Load: Source 99% > Proxy 18% > Network 6% > Target 0%, processing rate: 130 MB/s, speed: 160 MB/s)
I/O benchmark (on the Veeam proxy, CrystalDiskMark):
Code: Select all
[b]Source (Fibre lun):[/b]
5 times, 1000MB:
Seq read: 460MB/s / write: 345MB/s
512K read: 304MB/s / write: 211MB/s
4K read: 22MB/s / write: 12MB/s
4KQD32 read: 240MB/s / write: 8MB/s
[b]Destination (iscsi lun):[/b]
5 times, 1000MB:
Seq read: 365MB/s / write: 240MB/s
512K read: 191MB/s / write: 128MB/s
4K read: 12MB/s / write: 4MB/s
4KQD32 read: 187MB/s / write: 4MB/s
Thanks for help.
-
- VeeaMVP
- Posts: 6166
- Liked: 1971 times
- Joined: Jul 26, 2009 3:39 pm
- Full Name: Luca Dell'Oca
- Location: Varese, Italy
- Contact:
Re: Direct san access bottleneck source
The DirectSAN test shows higher utilization of the physical proxy; maybe the virtual proxy has more compute power than the physical one? DirectSAN is supposed to be faster, but only if all other elements in the "data pipe" have the same characteristics.
Luca.
-
- Novice
- Posts: 5
- Liked: never
- Joined: Mar 04, 2014 7:23 am
- Contact:
Re: Direct san access bottleneck source
dellock6 wrote: The DirectSAN test shows higher utilization of the physical proxy, maybe the virtual proxy has more compute power than the physical one? DirectSAN is supposed to be faster, but only if all other elements in the "data pipe" have the same characteristics.
That could be, but then I would guess the bottleneck should be the proxy instead of the source. And the physical proxy never consumed 100% CPU; the average is 70%.
-
- VeeaMVP
- Posts: 6166
- Liked: 1971 times
- Joined: Jul 26, 2009 3:39 pm
- Full Name: Luca Dell'Oca
- Location: Varese, Italy
- Contact:
Re: Direct san access bottleneck source
No, bottleneck values are not absolute: they show the time each resource spent in an active state. In your situation, storage is always the bottleneck since it's busy 99% or 95% of the time. In one case the proxy was running for only 18% of the time, which means that for the other 82% it was waiting for another component (presumably the source, looking at the numbers).
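In other words, the bottleneck statistic simply names the busiest stage of the pipeline. A minimal sketch (my own illustration of the idea, not Veeam's actual implementation), using the load figures from the hot-add job above:

```python
# Minimal sketch of how busy percentages identify a bottleneck:
# each value is the fraction of job time a stage spent actively working;
# the bottleneck is simply the stage with the highest busy time.

load = {"Source": 99, "Proxy": 18, "Network": 6, "Target": 0}

bottleneck = max(load, key=load.get)
proxy_waiting = 100 - load["Proxy"]  # time the proxy spent waiting on others

print(f"bottleneck: {bottleneck}")
print(f"proxy waiting: {proxy_waiting}% of job time")
```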