-
- Service Provider
- Posts: 30
- Liked: 1 time
- Joined: Jul 10, 2015 3:19 pm
- Full Name: Luis Rodriguez
- Contact:
Re: Merging oldest incremental Backup is painfully slow with
We are seeing long merges via backup copy jobs to a Cloud Connect target, and I am wondering if this is the same issue. Oddly, while working with support we found that the job log on the B&R side shows progress, but the progress in the GUI does not update. Is this a GUI issue? It should get fixed; I've spent countless hours watching it and looking around.
-
- Veeam Software
- Posts: 21139
- Liked: 2141 times
- Joined: Jul 11, 2011 10:22 am
- Full Name: Alexander Fogelson
- Contact:
Re: Merging oldest incremental Backup is painfully slow with
Luis, could you please share the case ID?
-
- Service Provider
- Posts: 30
- Liked: 1 time
- Joined: Jul 10, 2015 3:19 pm
- Full Name: Luis Rodriguez
- Contact:
Re: Merging oldest incremental Backup is painfully slow with
Hello Andreas, any feedback on this?

andreasaster wrote: Got feedback from Veeam today; they said they found the root cause of the problem. I will have a phone call with them tomorrow to discuss it and will update you then.
Andreas
-
- Service Provider
- Posts: 30
- Liked: 1 time
- Joined: Jul 10, 2015 3:19 pm
- Full Name: Luis Rodriguez
- Contact:
Re: Merging oldest incremental Backup is painfully slow with
Here is the case number: Case # 01030241.

foggy wrote: Luis, could you please share the case ID?
-
- Veeam Software
- Posts: 21139
- Liked: 2141 times
- Joined: Jul 11, 2011 10:22 am
- Full Name: Alexander Fogelson
- Contact:
Re: Merging oldest incremental Backup is painfully slow with
I was not able to find any information about the GUI issue you're mentioning in the case notes, could you please describe it in a bit more detail? Thanks.
-
- Lurker
- Posts: 1
- Liked: never
- Joined: Oct 12, 2015 2:48 pm
- Contact:
Re: Merging oldest incremental Backup is painfully slow with
First post, be gentle.
I found a solution to this problem in my environment - it was a simple configuration issue. The copy job target is a remote site, so I'd set the remote repository bandwidth limit to a number smaller than the bandwidth between sites. Unknown to me, the remote proxy uses that limit to throttle its connection to the target storage as well, so the locally connected target proxy/storage was capped at the repository's bandwidth limit. I removed the limit and configured network traffic rules instead, and performance went from extremely poor to quite acceptable.
CPU usage on the proxy went from barely a trickle (1-3%) to 20-30+%, and the network is now maxed out.
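For anyone puzzled by how a repository-level cap can starve locally attached storage, here is a minimal Python sketch of the mechanism (the class, function names, and the 10 MB/s figure are illustrative assumptions, not Veeam's actual implementation): if a single shared limiter is applied to every data mover writing into the repository, the local merge path inherits the WAN cap.

```python
import time

class TokenBucket:
    """Minimal token-bucket rate limiter (bytes per second)."""

    def __init__(self, rate_bps: float):
        self.rate = rate_bps        # refill rate, also used as burst size
        self.tokens = rate_bps
        self.last = time.monotonic()

    def consume(self, nbytes: int) -> None:
        # Refill from elapsed time, then block until enough tokens exist.
        while True:
            now = time.monotonic()
            self.tokens = min(self.rate, self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= nbytes:
                self.tokens -= nbytes
                return
            time.sleep((nbytes - self.tokens) / self.rate)

# One limiter, sized for the WAN link between sites (10 MB/s here).
repo_limit = TokenBucket(rate_bps=10 * 1024 * 1024)

def wan_transfer(chunk: bytes) -> None:
    repo_limit.consume(len(chunk))  # intended: throttle site-to-site traffic

def local_merge_write(chunk: bytes) -> None:
    repo_limit.consume(len(chunk))  # side effect: local storage I/O is capped
                                    # at the same 10 MB/s, even though the
                                    # disks could sustain far more
```

Scoping the limit to a network traffic rule between the two sites' subnets throttles only the WAN path and leaves local merge I/O uncapped, which matches the behavior described above.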
-
- Enthusiast
- Posts: 43
- Liked: 14 times
- Joined: Dec 15, 2009 12:41 pm
- Full Name: DaLi
- Contact:
Re: Merging oldest incremental Backup is painfully slow with
Any update on this case? I also have the problem of very slow merging of the oldest restore point.
-
- Veeam Software
- Posts: 21139
- Liked: 2141 times
- Joined: Jul 11, 2011 10:22 am
- Full Name: Alexander Fogelson
- Contact:
Re: Merging oldest incremental Backup is painfully slow with
It's better to contact support with each particular case, since the reasons for slow merges vary from one environment to another. It also makes sense to update to v9 (if you haven't yet) to verify the performance, since it has some improvements in this area.
-
- Enthusiast
- Posts: 43
- Liked: never
- Joined: Sep 03, 2015 8:00 am
- Contact:
Re: Merging oldest incremental Backup is painfully slow with
Just jumping in at the end of this thread.
v9
Backup copy to Cloud Connect.
Backup runs at pretty much the full speed of our connection.
We are currently 32 hours into the job and it's at 17% of creating a GFS restore point, during which time no backups have made it offsite.
-
- Product Manager
- Posts: 20413
- Liked: 2301 times
- Joined: Oct 26, 2012 3:28 pm
- Full Name: Vladimir Eremin
- Contact:
Re: Merging oldest incremental Backup is painfully slow with
Alternatively, you can make the backup copy job create the GFS restore point by reading the entire restore point from the source backup instead of synthesizing it from increments. That would, however, certainly result in increased traffic. Thanks.
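As a back-of-envelope illustration of that trade-off, the Python below compares the two GFS modes; every number is a hypothetical assumption, not a measurement from this thread.

```python
# Rough comparison of the two GFS creation modes (all numbers hypothetical).
full_size_gb = 2000        # size of one full restore point
wan_mbps     = 500         # site-to-site bandwidth
target_iops  = 300         # random IOPS the SP repository can sustain
block_kb     = 512         # backup block size assumed for the estimate

# Read-from-source: the whole full crosses the WAN once, written sequentially.
transfer_hours = (full_size_gb * 8 * 1024) / (wan_mbps * 3600)

# Synthesized: no extra WAN traffic, but every block is a random read + write
# on the same target disks.
blocks = full_size_gb * 1024 * 1024 / block_kb
synth_hours = (blocks * 2) / (target_iops * 3600)

print(f"read from source: ~{transfer_hours:.1f} h of WAN transfer")
print(f"synthesized:      ~{synth_hours:.1f} h of target random I/O")
```

With these assumed numbers the two come out comparable, which is the point: reading from source trades WAN bandwidth for sequential writes, while synthesizing trades zero WAN traffic for heavy random I/O on the target.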
-
- Enthusiast
- Posts: 43
- Liked: never
- Joined: Sep 03, 2015 8:00 am
- Contact:
Re: Merging oldest incremental Backup is painfully slow with
That's supposed to help how?
The job has been running for 41 hours and is at 18%.
At this rate we are looking at 10% every 24 hours, or 5 days to create a restore point.
During which time no offsite backups are happening?
Doesn't this sort of defeat the point of Cloud Connect?
I have had a support ticket open for 5 hours with no response.
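Sanity-checking that estimate: a linear extrapolation from the figures above (41 hours for 18%) is one line of arithmetic; the Python below is purely illustrative.

```python
def eta_hours(elapsed_h: float, percent_done: float) -> float:
    """Linear extrapolation of remaining time from current progress."""
    rate = percent_done / elapsed_h           # percent per hour
    return (100.0 - percent_done) / rate      # hours remaining

remaining = eta_hours(elapsed_h=41, percent_done=18)
print(f"~{remaining:.0f} h remaining (~{remaining / 24:.1f} days)")
# -> ~187 h remaining (~7.8 days), assuming progress stays linear --
#    if anything, a little longer than the 5-day figure above.
```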
-
- Product Manager
- Posts: 20413
- Liked: 2301 times
- Joined: Oct 26, 2012 3:28 pm
- Full Name: Vladimir Eremin
- Contact:
Re: Merging oldest incremental Backup is painfully slow with
It might be that the target repository provided by the SP cannot cope well with synthetic activity, hence my recommendation to make an active full backup instead.
Anyway, for now that's nothing but speculation, so let's wait and see what the support team finds after investigating the logs.
Thanks.
-
- Expert
- Posts: 101
- Liked: 16 times
- Joined: Jan 30, 2014 3:37 pm
- Full Name: Joachim
- Contact:
Re: Merging oldest incremental Backup is painfully slow with
Similar problem here with v9 Update 1.
Case 01751732
Our incremental backups started at 4 hours; now we are at 18 hours with almost the same VMs. With only 450 GB of transferred data (750 GB read, 7.7 TB processed), the job still sits at 99% on "merging oldest incremental backup".
The bottleneck is reported as "source", which is implausible: a high-performance 3PAR (4x Fibre Channel). The target is an older IBM SAN, connected with 2x Fibre Channel and capable of up to 400 MB/s.
The backup server is at 30% CPU, the backup proxy (= repository) at 0% CPU... something is strange.
Comparison: a single VM with a huge 13.5 TB disk.
A full incremental backup run takes 40 minutes(!) compared to >18 hours(!!) for the other job!
Let's wait and see what support says.
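For what it's worth, the figures quoted in this post already hint that the merge is IOPS-bound rather than bandwidth-bound; a quick back-of-envelope in Python, using only the numbers above:

```python
transferred_gb = 450   # data transferred, from the post
elapsed_h = 18         # current job duration
target_seq_mbs = 400   # quoted sequential capability of the IBM SAN

effective_mbs = transferred_gb * 1024 / (elapsed_h * 3600)
print(f"effective throughput: {effective_mbs:.1f} MB/s "
      f"({effective_mbs / target_seq_mbs:.1%} of the target's sequential rate)")
# -> ~7.1 MB/s, under 2% of what the SAN can stream sequentially -- typical
#    when the time is going into random merge I/O rather than data movement.
```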
-
- Novice
- Posts: 7
- Liked: 2 times
- Joined: Oct 21, 2015 10:16 am
- Full Name: Christoph Leitl
- Contact:
Re: Merging oldest incremental Backup is painfully slow with
Hello Morgenstern72,
Since I had a similar problem, I'd be interested in the findings after you have dealt with support. From what I read, you are experiencing a lack of random IOPS on the target storage, resulting in long merge windows. This could be avoided by configuring additional weekly fulls (in other words, switching from forward incremental forever to forward incremental). At least, that's my conclusion.
(Why is the merge heavily random? Because it reads the backup from the same disks it writes to. If you look at the backups you will see that the merge starts after all backups have been read and a restore point has been created. So even if you interrupt the backup during the merge, you will still be able to restore from that day's restore point.)
Does the "older IBM SAN" support write caching (the merge is about random I/O, not sequential)?
So I'd be interested; please keep me updated. Have you considered switching back to weekly active fulls?
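To make the parenthetical above concrete, here is a minimal Python sketch of what a forward-incremental-forever merge does to the target disks (the function, block size, and data layout are made up for illustration, not Veeam's actual on-disk format): for every block in the oldest increment, it seeks to an arbitrary offset inside the full backup file and overwrites it, so reads and writes land randomly on the same spindles.

```python
import os

BLOCK = 512 * 1024  # hypothetical backup block size

def merge_oldest_increment(full_path: str, increment: dict[int, bytes]) -> None:
    """Fold the oldest increment into the full backup file.

    `increment` maps block offsets within the full file to changed data --
    a stand-in for reading the oldest .vib. Each changed block becomes a
    seek + write at an arbitrary position in the .vbk, i.e. random I/O
    against the very disks the increment is being read from.
    """
    with open(full_path, "r+b") as vbk:
        for offset, data in increment.items():
            vbk.seek(offset * BLOCK)    # scattered offsets -> random access
            vbk.write(data)
        vbk.flush()
        os.fsync(vbk.fileno())          # the merge must be durable before
                                        # the oldest increment is deleted
```

A periodic full (active or synthetic) resets the chain, so no merge runs until the chain grows again; that is why weekly fulls trade repository capacity for far less random I/O.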
-
- Chief Product Officer
- Posts: 31814
- Liked: 7302 times
- Joined: Jan 01, 2006 1:01 am
- Location: Baar, Switzerland
- Contact:
Re: Merging oldest incremental Backup is painfully slow with
All, as per subject this thread is for v8, so please direct all v9 reports into the dedicated topic. There have been major changes between v8 and v9 in the transform engine, which was heavily optimized to remove bottlenecks around metadata processing, and now transform performance should be dependent strictly on backup repository IOPS capacity. Thanks!