Comprehensive data protection for all workloads
SteveK821
Novice
Posts: 9
Liked: 2 times
Joined: Apr 01, 2016 11:56 am
Full Name: Stephen Kebbell
Location: Germany
Contact:

Feature Request: Throttle disk read-rate of proxy

Post by SteveK821 »

Hello,

We have a customer with SAN latency issues during the backup window. The proxies use the Hot-Add transport method. Storage latency control is already activated, and we see messages in the backup log that it engages during the backup. However, the lowest value possible is 10ms, which is too high for the customer. As a workaround they have reduced the maximum number of concurrent tasks on each proxy, but they would like to know if it is possible to throttle the read rate (MB/s) of a proxy. Network throttling rules are also being used, but this is not alleviating the problem: while the proxy is reading the data to back up, the production VM experiences higher I/O latency.
Using Storage Snapshots with a physical SAN-attached proxy would not really help here, since it just changes where the read I/Os happen. It still has to read the original data from the disk, either through the VM snapshot or the storage snapshot. The problem is not during VM snapshot deletion.
Throttling through vSphere is not supported in this environment and it would throttle the VM traffic as well.
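
For illustration, a rough back-of-the-envelope model (with made-up per-task numbers, not measured in this environment) of why capping concurrent tasks is only a coarse substitute for a real MB/s limit:

# Rough model with assumed numbers: the aggregate read load a Hot-Add proxy
# puts on the SAN is roughly the per-task read rate times the concurrent tasks.
PER_TASK_READ_MB_S = 400  # hypothetical sustained read rate of one task
for tasks in (8, 4, 2, 1):
    print(f"{tasks} concurrent task(s) -> ~{tasks * PER_TASK_READ_MB_S} MB/s against the SAN")
# Even at one task, the rate is whatever a single stream can pull from the
# datastore, which is why a per-proxy MB/s cap is being requested instead.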

Thanks and regards,
Stephen
HannesK
Product Manager
Posts: 14837
Liked: 3083 times
Joined: Sep 01, 2014 11:46 am
Full Name: Hannes Kasparick
Location: Austria
Contact:

Re: Feature Request: Throttle disk read-rate of proxy

Post by HannesK »

Hello,
Hmm, what makes you think that a lower transfer rate would help? It could also be some side effect of Hot-Add (support can help to check that). What kind of storage and protocol are we talking about?
However the lowest value possible is 10ms
it's 5ms in my environment. What version are you using?
but they would like to know if it is possible to throttle the read rate (MB/s) of a Proxy
That is only available on the repository level. Just curious, how many MByte/s are we talking about here?
Using Storage Snapshots with a physical SAN-attached proxy would not really help here, since it just changes where the read I/Os happen.
did you try it out? While I agree on the theory, I would still try different backup modes.

Best regards,
Hannes
weem
Novice
Posts: 9
Liked: 3 times
Joined: Jan 03, 2021 6:54 pm
Contact:

Re: Feature Request: Throttle disk read-rate of proxy

Post by weem »

what makes you think that a lower transfer rate would help
We have a backup window of 14 hours. The whole backup finishes in 3 or 4 hours. It would be good to have the option to reduce the load on the SAN (e.g. by throttling the read rate through Veeam) and stretch the backup time to 8-10 hours. We have strict performance SLAs on our storage, and the backup is really fast, but it puts a lot of extra load (peaks) on our SAN in a short period of time, which isn't helpful. We already reduced the total number of proxies and CPU threads, but we can't get lower. Throttling options through VMware are not an option in our environment. The same goes for quality of service on the SAN, which would impact the regular virtual machine traffic as well.
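
To illustrate the idea with made-up numbers (the actual data volume is not stated here), the target read cap follows directly from the volume read and the desired run time:

# Hypothetical figures: stretching the same read volume over a longer window
# just means capping the read rate at volume / desired duration.
data_read_tb = 50                     # assumed volume read per backup run
current_hours, target_hours = 3.5, 9  # observed vs. desired run time
to_mb = 1024 ** 2                     # TB -> MB
current_rate = data_read_tb * to_mb / (current_hours * 3600)
target_rate = data_read_tb * to_mb / (target_hours * 3600)
print(f"unthrottled: ~{current_rate:.0f} MB/s, throttled target: ~{target_rate:.0f} MB/s")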
it's 5ms in my environment. What version are you using?
V10 and V11; indeed, it is 5 ms. The problem is that 5 ms is still a very high value in a serious storage environment. In addition, the throttling reaction is too slow in our opinion. We have latency SLAs from 1 to 3 ms.
That is only available on the repository level. Just curious, how many MByte/s are we talking about here?
A freely adjustable setting would be nice, e.g. X-6 GB/s per job. Ideally, the rate could be adjusted separately for read and/or write, so that restores can still run at full speed.
did you try it out? While I agree on the theory, I would still try different backup modes.
Yes, we did. The blocks still have to be read from the disks, so there is no difference with storage snapshots regarding the bandwidth used. We see other advantages in using storage snapshots, but they do not reduce the used bandwidth in a significant way.
HannesK
Product Manager
Posts: 14837
Liked: 3083 times
Joined: Sep 01, 2014 11:46 am
Full Name: Hannes Kasparick
Location: Austria
Contact:

Re: Feature Request: Throttle disk read-rate of proxy

Post by HannesK »

Hello,
SLAs from 1 to 3 ms.
ok, I will check whether we can reduce that value.
but we can't get lower.
Just to clarify: even with only one proxy task, your SAN is overloaded?
Nice would be a free adjustable setting. e.g. X-6 GB/s per job
our goal is simplicity. That sounds like adding complexity.
restore can run with full speed
That makes sense to me. Having a separate value for restores at the repository level sounds more useful than configuring that per job.
but not to reduce the used bandwidth in a significant way.
What issues is the high bandwidth usage causing? I read about "peaks", but what is the impact? Only a graph that is too high, or a real impact on your production?

Best regards,
Hannes
weem
Novice
Posts: 9
Liked: 3 times
Joined: Jan 03, 2021 6:54 pm
Contact:

Re: Feature Request: Throttle disk read-rate of proxy

Post by weem »

Hi,
ok, I will check whether we can reduce that value.
Adjusting the sensitivity would be nice, too. The mechanism is a bit slow and throttles too late.
Just to clarify: even with only one proxy task, your SAN is overloaded?
One task would bring other difficulties, and yes, one task is able to run at nearly full speed in a best-case scenario.
our goal is simplicity. That sounds like adding complexity.
It is nearly the same mechanism as in the repository setting.
Having a separate value for restores at the repository level sounds more useful than configuring that per job.
The settings on the repository only help with the load on the repository itself, not on the source (SAN). For example: you read at 6 GB/s from the SAN, but nearly no block is new, so almost nothing has to be transferred, which makes traffic control (network throttling) and the repository limits (for the repository itself) useless for this problem.
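
A small sketch with hypothetical numbers of what that means in practice; the volumes below are invented just to show where the data flows:

# Assumed run matching the example above: every block is read from the SAN,
# but only a small fraction is new and actually travels to the repository.
read_from_san_gb = 20_000     # hypothetical volume read by the proxy
changed_fraction = 0.01       # hypothetical share of blocks that changed
transferred_gb = read_from_san_gb * changed_fraction
print(f"read from SAN: {read_from_san_gb} GB, sent to repository: {transferred_gb:.0f} GB")
# Repository and network throttles only act on the ~200 GB leg, so they do
# nothing about the 20 TB of reads causing the latency on the source SAN.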
What issues is the high bandwidth usage causing? I read about "peaks", but what is the impact? Only a graph that is too high, or a real impact on your production?
You have IOPS, IO size, bandwidth and latency on the SAN, and IOPS, IO size and bandwidth all affect the latency. If you have very high IOPS usage, your latency can rise. If you have very high bandwidth usage (throughput), your latency can rise as well, and this is our problem. We have an impact on our production, yes.
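
As a rough illustration of that relationship (the figures are made up, not taken from this environment):

# Bandwidth is roughly IOPS * IO size, so large sequential backup reads can
# hit a SAN's throughput ceiling even at a modest IOPS count, and a
# controller running near that ceiling answers all requests more slowly.
backup_iops = 12_000    # assumed large sequential reads issued by the proxy
backup_io_kb = 512      # assumed IO size of those reads
bandwidth_mb_s = backup_iops * backup_io_kb / 1024
print(f"~{bandwidth_mb_s:.0f} MB/s from only {backup_iops} IOPS at {backup_io_kb} KB per IO")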

Martin
HannesK
Product Manager
Posts: 14837
Liked: 3083 times
Joined: Sep 01, 2014 11:46 am
Full Name: Hannes Kasparick
Location: Austria
Contact:

Re: Feature Request: Throttle disk read-rate of proxy

Post by HannesK »

Hello,
Adjusting the sensitivity would be nice, too. The mechanism is a bit slow and throttles too late.
The 20-second interval exists because that is what VMware provides.
E.G.: You read with 6 GB/s from the san but nearly no block is new, so quite nothing has to be transfered
Did you maybe disable changed block tracking in the advanced settings? Because by default, we only read blocks that have changed.


and yes, one task is able to run at nearly full speed in a best case scenario
and
We have an impact on our production, yes.
Hmm, can you maybe share a support case number? Because I cannot imagine how one task (or even a handful) can have a real impact on all-flash storage.

Best regards,
Hannes
micoolpaul
Veeam Software
Posts: 219
Liked: 111 times
Joined: Jun 29, 2015 9:21 am
Full Name: Michael Paul
Contact:

Re: Feature Request: Throttle disk read-rate of proxy

Post by micoolpaul »

Couple of questions:

How is the SAN presented? Is it NFS v3? I have seen many Hot-Add issues with that.

Does the issue also occur when using another transport mode, such as Network mode?

Apologies if you have answered these questions already; I didn't see anything when reading through the posts.
-------------
Michael Paul
Veeam Data Cloud: Microsoft 365 Solution Engineer
weem
Novice
Posts: 9
Liked: 3 times
Joined: Jan 03, 2021 6:54 pm
Contact:

Re: Feature Request: Throttle disk read-rate of proxy

Post by weem »

Hi,
Did you maybe disable changed block tracking in the advanced settings? Because by default, we only read blocks that have changed.
You are referring to an example I mentioned to show that traffic control is not useful for all traffic. We do use changed block tracking, but we reset it on full backups.
Hmm, can you maybe share a support case number? Because I cannot imagine how one task (or even a handful) can have a real impact on all-flash storage.
Sorry, but that has nothing to do with all-flash storage. As I mentioned earlier: you have IOPS, IO size, bandwidth and latency on the SAN, and IOPS, IO size and bandwidth all affect the latency. If you have very high IOPS usage, your latency can rise. If you have very high bandwidth usage (throughput), your latency can rise as well, and this is our problem. We have an impact on our production, yes. Every storage system has a limit on bandwidth, all-flash or not. It doesn't make sense to argue about that.
I cannot imagine how one task (or even a handful) can have a real impact on all-flash storage
That's simple, really. It all depends on your SAN's capability regarding bandwidth/throughput.

The Veeam developers have already built functions to throttle the destination repository (MB/s), the network traffic to the repository (MB/s), and the maximum latency on the source (SAN). Why not also the maximum read rate (MB/s), just like on the destination repository and the network traffic?