-
- Influencer
- Posts: 21
- Liked: 2 times
- Joined: Dec 05, 2014 2:35 pm
- Full Name: Martyn Jeffrey
- Contact:
Spitballing backup speeds
Guys,
I just wanted to get your thoughts on the speed of our backups given our environment and other factors.
Environment
vSphere 6.7
HPE 3PAR 8200, 45 x 15k disks
StoreOnce connected via Catalyst
Everything connected via 8Gb or 16Gb Fibre Channel
VBR server has 24 cores and 70GB RAM - 8Gb Fibre
Proxy 1 has 12 cores and 147GB RAM - 8Gb Fibre
Proxy 2 has 12 cores and 147GB RAM - 8Gb Fibre
Latest VBR installed.
On a single-VM backup (active full) we see a max processing rate of 90 MB/s, and this matches what our StoreOnce reports as being processed.
In a multi-VM backup the processing rate jumps, and the StoreOnce reports the same.
I have seen our StoreOnce processing rate at 1 GB/s during multi-job backups, so clearly the bandwidth is there to process that fast. My question is: why does a single-VM job only run at a fraction of what is clearly possible during multi-VM jobs?
I also monitor the proxy HBA utilisation during jobs and it's never above 15%. Can anyone shed any light?
Thanks in advance
-
- Veeam Software
- Posts: 3626
- Liked: 608 times
- Joined: Aug 28, 2013 8:23 am
- Full Name: Petr Makarov
- Location: Prague, Czech Republic
- Contact:
Re: Spitballing backup speeds
Hi Martyn,
I suppose that in your case 90 MB/s is the write speed limit of a single stream, as long as the bottleneck is on the target. When multiple VMs are being processed, many streams are opened and the overall processing rate jumps. If I'm not mistaken, such a scenario is not at all rare for dedupe appliances. For example, one of the reasons to implement per-VM backup files (vbk-per-vm) was to optimize data transfer speed with storages capable of writing data in multiple streams.
Thanks!
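The stream-limit idea can be sketched with a toy model: if each Catalyst stream is capped at roughly the single-VM rate reported in this thread, aggregate throughput scales with the number of parallel streams until some overall ceiling is hit. The per-stream and fabric figures below are assumptions for illustration only, not measured specs.

```python
# Toy model of dedupe-appliance ingest: each write stream is capped at a
# fixed per-stream rate, while the fabric imposes an overall ceiling.
# Both figures are illustrative assumptions, not measured values.

PER_STREAM_MBPS = 90      # single-VM rate observed in the thread
FABRIC_LIMIT_MBPS = 1600  # rough ceiling assumed for a 16Gb FC link

def aggregate_rate(streams: int) -> int:
    """Aggregate ingest rate in MB/s for a given number of parallel streams."""
    return min(streams * PER_STREAM_MBPS, FABRIC_LIMIT_MBPS)

if __name__ == "__main__":
    for n in (1, 4, 12, 24):
        print(f"{n:2d} streams -> {aggregate_rate(n)} MB/s")
```

With these assumed numbers, one stream stays at 90 MB/s while a dozen parallel streams land in the ~1 GB/s range seen during multi-VM jobs, which matches the behavior described above.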
-
- Influencer
- Posts: 21
- Liked: 2 times
- Joined: Dec 05, 2014 2:35 pm
- Full Name: Martyn Jeffrey
- Contact:
Re: Spitballing backup speeds
Hey, thanks for the reply
The bottleneck for that single-VM test was actually the Source.
Does that mean the 3PAR read speed?
-
- Veeam Software
- Posts: 3626
- Liked: 608 times
- Joined: Aug 28, 2013 8:23 am
- Full Name: Petr Makarov
- Location: Prague, Czech Republic
- Contact:
Re: Spitballing backup speeds
Yes, that is also possible. You could also contact our support team, upload debug logs, and ask our engineers to verify this assumption. Please don't forget to share the support case ID.
Thanks!
-
- Veeam Software
- Posts: 21139
- Liked: 2141 times
- Joined: Jul 11, 2011 10:22 am
- Full Name: Alexander Fogelson
- Contact:
Re: Spitballing backup speeds
Indeed, the ingest rate of a single write stream is typically limited on dedupe appliances, so multi-VM/parallel jobs allow you to achieve more. I'm a bit confused by the fact that the major bottleneck is being reported as source, though; this speaks somewhat against the behavior you're seeing with multi-VM jobs.
-
- Influencer
- Posts: 21
- Liked: 2 times
- Joined: Dec 05, 2014 2:35 pm
- Full Name: Martyn Jeffrey
- Contact:
Re: Spitballing backup speeds
Yeah, source being the bottleneck does seem strange, but it is how it is.
Single VM
Multi VM
I'm not looking to raise a support ticket yet; I just wanted to get the community's view, especially from those with similar setups.
Thanks
-
- Enthusiast
- Posts: 43
- Liked: 8 times
- Joined: Nov 09, 2021 9:19 am
- Full Name: K Anand
- Contact:
Re: Spitballing backup speeds
I have almost the same setup as yours:
HPE StoreOnce
HPE Primera storage
I have tried Catalyst repos over both network and FC.
For a single-VM backup, I tend to get around 200 MB/s. The bottleneck was StoreOnce.
When I run multiple VM backups concurrently, it goes up significantly. For example, when I ran 3 VMs in parallel, I got around 600-700 MB/s. The bottleneck was once again StoreOnce.
I think I got a response on this forum that StoreOnce is not recommended as a primary backup repo...
So I had a spare HPE MSA storage array with 20 TB free. I created a backup repo on it, mounted as a drive in a Windows server I was using as a backup proxy. When I back up to that repo, I get a single-VM speed of around 700 MB/s.
-
- Veeam Software
- Posts: 21139
- Liked: 2141 times
- Joined: Jul 11, 2011 10:22 am
- Full Name: Alexander Fogelson
- Contact:
Re: Spitballing backup speeds
There's no transport mode tag on the screenshot for the single-VM active full job run - just wanted to make sure it is also using Direct SAN for processing...
-
- Influencer
- Posts: 21
- Liked: 2 times
- Joined: Dec 05, 2014 2:35 pm
- Full Name: Martyn Jeffrey
- Contact:
-
- Veeam Software
- Posts: 21139
- Liked: 2141 times
- Joined: Jul 11, 2011 10:22 am
- Full Name: Alexander Fogelson
- Contact:
Re: Spitballing backup speeds
Correct. So despite the bottleneck reporting, the behavior looks expected. If you want to investigate the bottleneck statistics, feel free to open a case with our technical support. Thanks!
-
- VeeaMVP
- Posts: 1007
- Liked: 314 times
- Joined: Jan 31, 2011 11:17 am
- Full Name: Max
- Contact:
Re: Spitballing backup speeds
Just to rule out any FC or SAN related problems, set up a hotadd/virtual appliance proxy, configure it in your job, and check how/if the performance changes.
Regarding your other post: a deduplication appliance is never recommended as a primary backup repository. While you may be able to achieve decent backup rates, the restore performance will be rather bad compared to a regular disk storage.
-
- Veteran
- Posts: 643
- Liked: 312 times
- Joined: Aug 04, 2019 2:57 pm
- Full Name: Harvey
- Contact:
Re: Spitballing backup speeds
Eek indeed
But don't fret! The big thing about any deduplication appliance (StoreOnce, Data Domain, ExaGrid, etc.) is that most of the IO patterns that primary backup jobs use for speed boosts (merging, synthetic fulls) are inherently incompatible with how deduplication storage works. Dedupe storages thrive on sequential read/write, but these operations are inherently random IO, i.e. heavy seeking.
Restores in particular are a pain, because that's where heavy random read hits the most, and the features that mitigate this (compact full) suffer from the same issues to an even bigger degree.
But if you have a decent dedupe appliance, it is possible to use it as a primary repository, as long as you accept that it will be some factor slower than non-dedupe storages.
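The seek-penalty argument can be put in rough numbers. The sketch below models restore throughput as a function of how many chunks happen to be laid out contiguously; the bandwidth, seek time, and chunk size are all assumed figures for illustration, not StoreOnce specs.

```python
# Back-of-envelope model of why rehydrating restores from a dedupe store
# are slow: a deduplicated chunk may live anywhere on disk, so every
# non-contiguous read pays a seek before streaming the chunk.
# All figures below are illustrative assumptions.

SEQ_BW_MBPS = 500.0  # assumed sequential read bandwidth of the appliance
SEEK_MS = 8.0        # assumed average seek + rotational latency
CHUNK_MB = 0.5       # assumed average dedupe chunk size

def effective_restore_mbps(locality: float) -> float:
    """Effective read rate when `locality` (0..1) of chunks are contiguous.

    Contiguous chunks stream at full sequential speed; the rest pay an
    average seek before each chunk transfer.
    """
    transfer_s = CHUNK_MB / SEQ_BW_MBPS           # time to stream one chunk
    seek_s = (1.0 - locality) * SEEK_MS / 1000.0  # average seek cost per chunk
    return CHUNK_MB / (transfer_s + seek_s)

if __name__ == "__main__":
    for loc in (1.0, 0.5, 0.0):
        print(f"locality {loc:.1f} -> {effective_restore_mbps(loc):.0f} MB/s")
```

Under these assumptions a fully sequential read runs at the full 500 MB/s, while a fully fragmented one collapses to a few tens of MB/s, which is why synthetic operations and instant recovery hurt so much on these boxes.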
I have clients who went all in on such appliances and for better or worse accepted that this is just the speed it can do; if you are okay with this and write restore SLAs to accommodate for this, then no worries! Just don't expect magically fast restore speeds.
I have to give HPE credit, as they are at least very honest about this in their reference architecture white papers with Veeam. (I cannot claim their engineers are as honest, regrettably, after working with clients on a number of cases regarding read performance.)
So the simple truth with dedupe is: You can use it as primary, but you need to accept the inherent restore limitations that will be introduced. You must not expect speedy file level restores or instant recovery operations; these operations are antithetical to how deduplication appliances work.
I do like these devices in many cases, but I always instruct my team to repeat multiple times what should be expected, and that this is just how the physics works out for these devices.
-
- Influencer
- Posts: 21
- Liked: 2 times
- Joined: Dec 05, 2014 2:35 pm
- Full Name: Martyn Jeffrey
- Contact:
Re: Spitballing backup speeds
Thanks for all the input, guys.