-
- Novice
- Posts: 5
- Liked: 1 time
- Joined: Nov 25, 2015 1:25 pm
- Contact:
performance guide transport mode
Hi,
Is there a benchmark or performance guide for the different transport modes in Veeam v9?
And where can I find the best practices guide for v9?
-
- Veteran
- Posts: 370
- Liked: 97 times
- Joined: Dec 13, 2015 11:33 pm
- Contact:
Re: performance guide transport mode
I don't think there is a best practices guide yet, since v9 is still pretty new and almost everything from v8 still applies.
Typically, transport mode performance ranks in this order:
SAN
HotAdd
Network
So if you can get a block-level-capable server connected to your storage and make that the proxy, that's the way to go. If it's FC, don't try to pass an FC adapter up into a VM; that's unsupported and won't be reliable, so you need a physical server.
HotAdd is easy and typically almost as fast, as long as you have the network to support it.
Network/NBD is terrible on 1Gb, but much more performant on 10Gb and might be a better option than HotAdd in that case.
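To put rough numbers on the 1Gb vs 10Gb difference for Network/NBD mode, here's a back-of-the-envelope sketch in Python. The 5 TB dataset size and the 70% usable-line-rate factor are illustrative assumptions, not Veeam measurements:

```python
# Rough backup-window estimate when the network link is the bottleneck.
# Assumes ~70% of line rate is usable for backup traffic (illustrative).

def transfer_hours(data_tb, link_gbps, efficiency=0.7):
    data_bits = data_tb * 1e12 * 8           # TB -> bits
    usable_bps = link_gbps * 1e9 * efficiency
    return data_bits / usable_bps / 3600     # seconds -> hours

# 5 TB full backup over the management network:
print(round(transfer_hours(5, 1), 1))    # 1 GbE  -> ~15.9 hours
print(round(transfer_hours(5, 10), 1))   # 10 GbE -> ~1.6 hours
```

Even with generous assumptions, a full backup that fits comfortably in an overnight window on 10Gb simply doesn't on 1Gb, which is why NBD's reputation differs so much between the two.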
-
- Veeam Software
- Posts: 170
- Liked: 43 times
- Joined: Mar 19, 2016 10:57 pm
- Full Name: Eugene Kashperovetskyi
- Location: Chicago, IL
- Contact:
Re: performance guide transport mode
A few things to add while considering the options:
1) Networking type - 1/10/20/40Gbps. On fast networks (above 1Gbps), all modes will show significantly better results.
2) Transport type:
- SAN - normally the fastest method. You have to watch out for proper operational procedures so you don't end up with production volumes being auto-mounted. Gives the best results in terms of performance. Requires connectivity straight to the SAN fabric.
- HotAdd - my personal preference. Easy to deploy; lets you scale the proxy VM with the workload; storage-agnostic, should you change your backend infrastructure to a different storage vendor that may not work with Veeam as efficiently. I usually see about a 10% loss of performance compared to SAN mode, mostly due to the overhead of running on top of a hypervisor. It also lets you push traffic through a network other than the management one.
- NBD - the worst option when it comes to performance. Ideally still used with a proxy appliance, to keep the overall solution distributed and, optionally, to push traffic over a preferred network path instead of going through the hypervisor's management network. Works well for lower change-rate environments, with CBT enabled and not too many changes happening within a reasonable window.
Eugene K
VMCA, VCIX-DCV, vExpert
-
- Novice
- Posts: 5
- Liked: 1 time
- Joined: Nov 25, 2015 1:25 pm
- Contact:
Re: performance guide transport mode
Thanks for your help guys.
In this particular case we have a 10Gb management network, and all the Veeam infrastructure is virtual (proxies and backup server). Another thing to consider is that the VMs to back up will have a lot of disks; in v7 it was known that "Due to known limitations, a HotAdd proxy can process one VM disk from the same VM at a time, so it is recommended to configure several smaller proxies..." Does this still apply to v9?
-
- Product Manager
- Posts: 6576
- Liked: 773 times
- Joined: May 19, 2015 1:46 pm
- Contact:
Re: performance guide transport mode
Hi,
Thank you.
No, it does not. Please see this KB for known limitations. Also, in case of a 10Gb management network, Network mode works fine.
Thank you.
-
- Veeam Software
- Posts: 170
- Liked: 43 times
- Joined: Mar 19, 2016 10:57 pm
- Full Name: Eugene Kashperovetskyi
- Location: Chicago, IL
- Contact:
Re: performance guide transport mode
I'd recommend making sure to also add all 4 disk controllers to the proxy VMs, allowing up to 60 VMDKs to be processed concurrently by each proxy, to work around the large number of disks without extending the backup window.
In fact, if the limit of VMDKs that can be attached to the proxy is reached before all of a VM's VMDKs are processed, you will see warnings like "HotAdd not supported, switching to Network mode".
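As a sanity check on the 60-VMDK figure: each virtual SCSI controller in vSphere supports up to 15 data disks, so the HotAdd ceiling scales with the number of controllers on the proxy VM. A trivial sketch:

```python
# HotAdd attachable-disk ceiling per proxy VM, given the vSphere limit
# of 15 disks per virtual SCSI controller.
DISKS_PER_SCSI_CONTROLLER = 15

def hotadd_ceiling(controllers):
    return controllers * DISKS_PER_SCSI_CONTROLLER

print(hotadd_ceiling(1))  # 15 - a default single-controller proxy
print(hotadd_ceiling(4))  # 60 - all four controllers populated
```

This is why a proxy left with its default single controller can hit the ceiling on very disk-heavy VMs and fall back to Network mode.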
Eugene K
VMCA, VCIX-DCV, vExpert
-
- Product Manager
- Posts: 6576
- Liked: 773 times
- Joined: May 19, 2015 1:46 pm
- Contact:
Re: performance guide transport mode
Although the rule of 1 task = 1 core is a suggested practice, not a mandatory one, processing 60 VMDKs in parallel would require a lot of CPU resources. Also, parallel processing may be limited by the max concurrent tasks setting at the repository level.
Thank you.
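The point about stacked limits can be sketched as a min() over the limits along the data path. The numbers below are hypothetical examples, not product defaults:

```python
# Effective parallel disk tasks are capped by the smallest limit along
# the data path: proxy task slots (typically one per core) and the
# repository's max concurrent tasks setting.

def effective_parallelism(proxy_task_slots, repo_max_tasks):
    return min(proxy_task_slots, repo_max_tasks)

print(effective_parallelism(8, 4))    # repository is the bottleneck -> 4
print(effective_parallelism(60, 16))  # a 60-way HotAdd proxy capped -> 16
```

So attaching four controllers raises the HotAdd ceiling, but the proxy's task count and the repository's concurrency limit still decide how many disks actually move at once.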
-
- Expert
- Posts: 227
- Liked: 66 times
- Joined: Feb 18, 2013 10:45 am
- Full Name: Stan G
- Contact:
Re: performance guide transport mode
For peace of mind I usually force Network mode in small setups without dedicated IT staff or monitoring.
Why? Because I have seen more than one case where the proxy VM still had the hot-added disks attached and snapshots were then taken of that proxy VM, and that's a big mess.
-
- Product Manager
- Posts: 6576
- Liked: 773 times
- Joined: May 19, 2015 1:46 pm
- Contact:
Re: performance guide transport mode
Hi,
Thank you.
Were the snapshots taken by Veeam or by some other operator?
Thank you.
-
- Veeam Software
- Posts: 170
- Liked: 43 times
- Joined: Mar 19, 2016 10:57 pm
- Full Name: Eugene Kashperovetskyi
- Location: Chicago, IL
- Contact:
Re: performance guide transport mode
The VMDKs would remain attached to the HotAdd proxy only if there was a disconnect from vCenter or another communication/task failure that prevented the job from completing correctly.
If this happens often in an environment, chances are there are also orphaned and stale snapshots, since the same would be observed even for backups taken with Network mode (vCenter connection dropped/timed out, the hostd process on ESXi never reported snapshot status correctly, etc.).
Eugene K
VMCA, VCIX-DCV, vExpert