-
- Novice
- Posts: 9
- Liked: 2 times
- Joined: Apr 01, 2016 11:56 am
- Full Name: Stephen Kebbell
- Location: Germany
- Contact:
Feature Request: Throttle disk read-rate of proxy
Hello,
we have a customer with SAN latency issues during the backup window. The proxies use the Hot-Add transport method. Storage latency control is already activated, and we see messages in the backup log that it engages during the backup. However, the lowest value possible is 10 ms, which is too high for the customer. As a workaround, they have reduced the maximum number of concurrent tasks on each proxy, but they would like to know whether it is possible to throttle the read rate (MB/s) of a proxy. Network throttling rules are also in use, but they do not alleviate the problem: while the proxy is reading the data to back up, the production VM experiences higher I/O latency.
Using storage snapshots with a physical SAN-attached proxy would not really help here, since it just changes where the read I/Os happen. The original data still has to be read from the disks, either through the VM snapshot or the storage snapshot. The problem is not during VM snapshot deletion.
Throttling through vSphere is not supported in this environment, and it would throttle the VM traffic as well.
Thanks and regards,
Stephen
-
- Product Manager
- Posts: 14837
- Liked: 3083 times
- Joined: Sep 01, 2014 11:46 am
- Full Name: Hannes Kasparick
- Location: Austria
- Contact:
Re: Feature Request: Throttle disk read-rate of proxy
Hello,
hmm, what makes you think that a lower transfer rate would help? It could also be some side effect of Hot-Add (support can help to check that). What kind of storage and protocol are we talking about?
> However, the lowest value possible is 10 ms
it's 5 ms in my environment. What version are you using?
> they would like to know whether it is possible to throttle the read rate (MB/s) of a proxy
That is only available on the repository level. Just curious, how many MByte/s are we talking about here?
> Using storage snapshots with a physical SAN-attached proxy would not really help here, since it just changes where the read I/Os happen.
Did you try it out? While I agree on the theory, I would still try different backup modes.
Best regards,
Hannes
-
- Novice
- Posts: 9
- Liked: 3 times
- Joined: Jan 03, 2021 6:54 pm
- Contact:
Re: Feature Request: Throttle disk read-rate of proxy
> what makes you think that a lower transfer rate would help
We have a backup time window of 14 hours, and the whole backup finishes in 3 or 4 hours. It would be good to have the option to reduce the load on the SAN (e.g. by throttling the read rate through Veeam) and stretch the backup over 8-10 hours. We have serious performance SLAs on our storage; the backup is really fast, but it puts a lot of extra load (peaks) on our SAN in a short period of time, which isn't helpful. We have already reduced the total number of proxies and CPU threads, but we can't get any lower. Throttling options through VMware are not an option in our environment, and the same goes for Quality of Service on the SAN, which would impact the regular virtual machine traffic as well.
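To put numbers on it (purely illustrative figures, not measurements from this thread): if a full backup window reads, say, 40 TB in 3.5 hours, that is an average of roughly 3.2 GB/s from the SAN; spreading the same 40 TB over 9 hours would only need about 1.2 GB/s, i.e. a read cap of roughly 40% of the unthrottled rate. That cap is exactly the kind of knob being requested here.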
> it's 5 ms in my environment. What version are you using?
V10 and V11, and indeed it is 5 ms. The problem is that 5 ms is a very high value in a serious storage environment, and in addition the throttling reaction is too slow in our opinion. We have latency SLAs from 1 to 3 ms.
> That is only available on the repository level. Just curious, how many MByte/s are we talking about here?
Nice would be a freely adjustable setting, e.g. X-6 GB/s per job. In the best case the rate could be adjusted for read and/or write separately, so that restores can still run at full speed.
> Did you try it out? While I agree on the theory, I would still try different backup modes.
Yes, we did. The blocks still have to be read from the disks, so there is no difference in the bandwidth used when working from storage snapshots. We see other advantages in using storage snapshots, but not a significant reduction in the bandwidth used.
-
- Product Manager
- Posts: 14837
- Liked: 3083 times
- Joined: Sep 01, 2014 11:46 am
- Full Name: Hannes Kasparick
- Location: Austria
- Contact:
Re: Feature Request: Throttle disk read-rate of proxy
Hello,
> SLAs from 1 to 3 ms.
ok, I will check whether we can reduce that value.
> but we can't get any lower.
Just to clarify: even with only one proxy task we are overloading the SAN?
> Nice would be a freely adjustable setting, e.g. X-6 GB/s per job
Our goal is simplicity; that sounds like adding complexity.
> so that restores can still run at full speed
That makes sense to me. Having a separate value for restore at the repository level sounds more useful than configuring that per job.
> but not a significant reduction in the bandwidth used
What issues is the high bandwidth usage causing? I read about "peaks", but what is the impact? Only a graph that is too high, or any real impact on your production?
Best regards,
hannes
-
- Novice
- Posts: 9
- Liked: 3 times
- Joined: Jan 03, 2021 6:54 pm
- Contact:
Re: Feature Request: Throttle disk read-rate of proxy
Hi,
> ok, I will check whether we can reduce that value.
Adjusting the sensitivity would be nice, too. The mechanism is kind of slow and throttles too late.
> Just to clarify: even with only one proxy task we are overloading the SAN?
One task would bring other difficulties, and yes, one task is able to run at nearly full speed in a best-case scenario.
> Our goal is simplicity; that sounds like adding complexity.
It is nearly the same mechanism as in the repository setting.
> Having a separate value for restore at the repository level sounds more useful than configuring that per job.
The settings on the repository are only useful for the load on the repository itself, not on the source (SAN). For example: you read at 6 GB/s from the SAN, but nearly no block is new, so almost nothing has to be transferred, which makes traffic control (network throttling) and the repository limits (for the repository itself) useless here.
> What issues is the high bandwidth usage causing? I read about "peaks", but what is the impact? Only a graph that is too high, or any real impact on your production?
You have IOPS, IO size, bandwidth and latency on the SAN. IOPS, IO size and bandwidth all affect the latency. If you have very high IOPS usage, your latency can rise. If you have very high bandwidth usage (throughput), your latency can rise as well, and this is our problem. We have impact on our production, yes.
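For illustration (assumed numbers, not measurements from this environment): bandwidth is roughly IOPS × IO size, so a proxy issuing large sequential reads of 512 KB at only 12,000 IOPS already moves about 6 GB/s. An array can hit its throughput ceiling at such modest IOPS figures, and once the front-end ports or back-end media saturate, service times, and therefore latency for the production VMs, go up even though the IOPS graph looks unremarkable.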
Martin
-
- Product Manager
- Posts: 14837
- Liked: 3083 times
- Joined: Sep 01, 2014 11:46 am
- Full Name: Hannes Kasparick
- Location: Austria
- Contact:
Re: Feature Request: Throttle disk read-rate of proxy
Hello,
> Adjusting the sensitivity would be nice, too. The mechanism is kind of slow and throttles too late.
The 20 seconds exist because that's what VMware provides.
> you read at 6 GB/s from the SAN, but nearly no block is new, so almost nothing has to be transferred
Did you maybe disable changed block tracking in the advanced settings? Because by default, we only read blocks that changed.
> and yes, one task is able to run at nearly full speed in a best-case scenario
and
> We have impact on our production, yes.
hmm, can you maybe share a support case number? Because I cannot imagine how one task (or even a handful) can have a real impact on all-flash storage.
Best regards,
Hannes
-
- Veeam Software
- Posts: 219
- Liked: 111 times
- Joined: Jun 29, 2015 9:21 am
- Full Name: Michael Paul
- Contact:
Re: Feature Request: Throttle disk read-rate of proxy
Couple of questions:
How is the SAN presented? Is it NFS v3? I have seen many hot-add issues with that.
Does the issue also occur when using another mode such as network?
Apologies if you answered these questions already, I didn’t see anything upon reading the posts.
-------------
Michael Paul
Veeam Data Cloud: Microsoft 365 Solution Engineer
-
- Novice
- Posts: 9
- Liked: 3 times
- Joined: Jan 03, 2021 6:54 pm
- Contact:
Re: Feature Request: Throttle disk read-rate of proxy
Hi,
> Did you maybe disable changed block tracking in the advanced settings? Because by default, we only read blocks that changed.
You are referring to an example I mentioned to show that traffic control is not useful for all traffic. We do use changed block tracking, but we reset it on fulls.
> hmm, can you maybe share a support case number? Because I cannot imagine how one task (or even a handful) can have a real impact on all-flash storage.
Sorry, but that has nothing to do with all-flash storage. As I mentioned earlier: you have IOPS, IO size, bandwidth and latency on the SAN, and IOPS, IO size and bandwidth all affect the latency. If you have very high IOPS usage, your latency can rise. If you have very high bandwidth usage (throughput), your latency can rise as well, and this is our problem. We have impact on our production, yes. Every storage system has a limit on bandwidth, all-flash or not; it doesn't make sense to argue about that.
> I cannot imagine how one task (or even a handful) can have a real impact on all-flash storage
That's simple, it's not difficult. It all depends on your SAN's capability regarding bandwidth/throughput.
The Veeam developers already built functions to throttle the destination repository (MB/s), the network traffic to the repository (MB/s), and the maximum latency on the source (SAN). Why not also the maximum read rate (MB/s) on the source, just as on the destination repository and the network traffic?
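To make the request concrete, here is a minimal sketch of what a per-proxy read-rate cap could look like conceptually: a simple token-bucket limiter in Python. This is purely illustrative and assumes nothing about how Veeam's data movers are actually implemented; the file name and block size are made-up placeholders.

```python
import time

class ReadRateLimiter:
    """Token-bucket limiter that caps the sustained read rate at max_bytes_per_sec."""

    def __init__(self, max_bytes_per_sec):
        self.rate = float(max_bytes_per_sec)
        self.tokens = self.rate          # allow up to one second of burst
        self.last = time.monotonic()

    def throttle(self, nbytes):
        """Call before reading nbytes; sleeps when the byte budget is exhausted."""
        now = time.monotonic()
        # Refill the bucket for the time elapsed since the last call.
        self.tokens = min(self.rate, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if nbytes > self.tokens:
            # Sleep until the missing bytes have been "earned" at the target rate.
            time.sleep((nbytes - self.tokens) / self.rate)
            self.last = time.monotonic()
            self.tokens = 0.0
        else:
            self.tokens -= nbytes

# Usage sketch: read a disk image at no more than 200 MB/s in 4 MB blocks.
limiter = ReadRateLimiter(200 * 1024 * 1024)
with open("disk.img", "rb") as src:          # placeholder source, not a real Veeam path
    while True:
        limiter.throttle(4 * 1024 * 1024)
        block = src.read(4 * 1024 * 1024)
        if not block:
            break
        # ... hand the block to the data mover / repository ...
```

A cap like this on the source read path would smooth the peaks described above, independently of how many of the read blocks actually end up being transferred to the repository.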