Discussions specific to the VMware vSphere hypervisor
jja
Enthusiast
Posts: 45
Liked: 8 times
Joined: Nov 13, 2013 6:40 am
Full Name: Jannis Jacobsen
Contact:

[MERGED] backups are slow after upgrade to v8

Post by jja » Nov 21, 2014 6:05 am

Hi!

I have not dug into this issue properly yet, but has anyone else noticed a big increase in backup time after upgrading?
Our backups start at 20:00, and before the upgrade they were usually done between 22:00 and 23:00
(unless there had been some big changes, of course).

After upgrading the backups are not done until between 02:30 and 03:00.

I'll need to look into this more, but there is something slowing things down now.

-j

foggy
Veeam Software
Posts: 17795
Liked: 1490 times
Joined: Jul 11, 2011 10:22 am
Full Name: Alexander Fogelson
Contact:

Re: Post v8 Upgrade Observations

Post by foggy » Nov 21, 2014 9:40 am 1 person likes this post

Jannis, you may notice others reporting similar behavior after the upgrade in the thread I'm merging your post into. You're right to assume that more info is needed before drawing any conclusions, so I encourage you to contact technical support to take a closer look at what happens in your environment. The log files should show which particular operation takes longer than expected, as well as the possible reasons for it. Meanwhile, you can also compare job session logs from v7 and v8 to see what has changed in terms of processing: bottleneck stats, processing speed, transport mode, etc.

jlester
Enthusiast
Posts: 45
Liked: 5 times
Joined: Mar 23, 2010 1:59 pm
Full Name: Jason Lester
Contact:

Re: Post v8 Upgrade Observations

Post by jlester » Nov 21, 2014 6:46 pm 1 person likes this post

My backup times are actually faster now, so it is definitely not hitting everyone. Backup Copy jobs are about the same.

hyvokar
Expert
Posts: 344
Liked: 21 times
Joined: Nov 21, 2014 10:05 pm
Contact:

Re: Post v8 Upgrade Observations

Post by hyvokar » Nov 21, 2014 10:59 pm

My backup times rocketed sky high after the upgrade. Before the upgrade, backing up my file server took anywhere from 25 to 40 minutes; now it takes over 70 minutes. I'm getting write speeds under 30 MB/s. My backup box is an IBM x3650 M3 with 2x Xeon CPUs and 48 GB of memory, connected via 2x 4 Gbps FC to an IBM DS3400 hosting 16x 300 GB 10k RPM SAS disks in RAID 6.

First I thought my storage had crapped out on me, but I still get over 350 MB/s write speed when copying random images onto it, so I guess the storage is OK.

I need to keep an eye on this for a couple of days.
Bed?! Beds for sleepy people! Lets get a kebab and go to a disco!
MS MCSA, MCITP, MCTS, MCP
VMWare VCP5-DCV
Veeam VMCE

Gostev
SVP, Product Management
Posts: 24092
Liked: 3278 times
Joined: Jan 01, 2006 1:01 am
Location: Baar, Switzerland
Contact:

Re: Post v8 Upgrade Observations

Post by Gostev » Nov 22, 2014 2:07 am

hyvokar wrote:First I thought my storage crapped on me, but I still get over 350MB/s write speed when copying random images on it, so I guess storage is OK.
Please note that a sequential write workload cannot represent storage performance, which is measured in IOPS. However, the backup storage is not necessarily causing the issue anyway; see what the backup job reports as the bottleneck.
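Gostev's distinction between sequential throughput and random IOPS can be illustrated with a rough timing sketch (this is only an illustration, not a real benchmark; the file name and block counts are arbitrary, and a proper IOPS measurement would use a dedicated tool such as fio):

```python
import os
import random
import time

def time_io(path, block, count, random_offsets):
    """Write `count` blocks of `block` bytes, either in order or at
    shuffled offsets, flushing each write to disk to defeat caching."""
    data = os.urandom(block)
    with open(path, "wb") as f:
        f.truncate(block * count)  # preallocate the file
        offsets = list(range(count))
        if random_offsets:
            random.shuffle(offsets)
        start = time.perf_counter()
        for i in offsets:
            f.seek(i * block)
            f.write(data)
            os.fsync(f.fileno())  # force each write to physical storage
        return time.perf_counter() - start

# Many small 4 KB writes at random offsets behave very differently from
# one long sequential stream (like copying large image files).
seq = time_io("io_test.tmp", 4096, 128, random_offsets=False)
rnd = time_io("io_test.tmp", 4096, 128, random_offsets=True)
print(f"sequential: {seq:.3f}s, random: {rnd:.3f}s")
os.remove("io_test.tmp")
```

On spinning disks in parity RAID, the random pattern is typically far slower because every out-of-order 4 KB write costs a seek plus a RAID 6 read-modify-write, which is the kind of load a long backup chain generates.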

lp@albersdruck.de
Enthusiast
Posts: 81
Liked: 32 times
Joined: Mar 25, 2013 7:37 pm
Full Name: Lars Pisanec
Contact:

Re: Post v8 Upgrade Observations

Post by lp@albersdruck.de » Nov 22, 2014 11:54 am

Support told me that my zero-speed period comes from the fact that my reverse incremental backup chain has 120-180 restore points, and at the start Veeam checks all restore points for something (I forgot what), which takes a while.

No solution yet, but at least I know why it is occurring.

jja
Enthusiast
Posts: 45
Liked: 8 times
Joined: Nov 13, 2013 6:40 am
Full Name: Jannis Jacobsen
Contact:

Re: Post v8 Upgrade Observations

Post by jja » Nov 24, 2014 6:27 am

lp@albersdruck.de wrote:Support told me that my zero-speed period comes from the fact that my reverse incremental backup chain has 120-180 restore points, and at the start Veeam checks all restore points for something (I forgot what), which takes a while.

No solution yet, but at least I know why it is occurring.
This might explain our situation too.
We usually have more than 300 restore points on our backed-up VMs.

-j

v.eremin
Product Manager
Posts: 16193
Liked: 1322 times
Joined: Oct 26, 2012 3:28 pm
Full Name: Vladimir Eremin
Contact:

Re: Post v8 Upgrade Observations

Post by v.eremin » Nov 24, 2014 9:53 am

Just to be sure, guys: your chains are incremental only, meaning there are no periodic active full backups within them? Thanks.

lp@albersdruck.de
Enthusiast
Posts: 81
Liked: 32 times
Joined: Mar 25, 2013 7:37 pm
Full Name: Lars Pisanec
Contact:

Re: Post v8 Upgrade Observations

Post by lp@albersdruck.de » Nov 24, 2014 6:30 pm

v.Eremin wrote:Just to be sure, guys: your chains are incremental only, meaning there are no periodic active full backups within them? Thanks.
In my case: reverse incremental all the way, no fulls other than the latest run of course.

jja
Enthusiast
Posts: 45
Liked: 8 times
Joined: Nov 13, 2013 6:40 am
Full Name: Jannis Jacobsen
Contact:

Re: Post v8 Upgrade Observations

Post by jja » Nov 25, 2014 6:47 am

v.Eremin wrote:Just to be sure, guys: your chains are incremental only, meaning there are no periodic active full backups within them? Thanks.
Reversed incremental here as well.
No periodic full.

-j

JimmyO
Enthusiast
Posts: 55
Liked: 9 times
Joined: Apr 27, 2014 8:19 pm
Contact:

Re: Post v8 Upgrade Observations

Post by JimmyO » Nov 25, 2014 7:04 am

Oh no - I upgraded to v8 yesterday hoping for better performance. Now my backups are approx 200-300% slower.

Forward incremental, Direct Attached Storage.

Looking at the job progress, it seems that there isn't any real bottleneck: CPU, memory, network, and storage are not heavily loaded.
It seems that Veeam is simply waiting for something to happen (whatever that might be...)

v.eremin
Product Manager
Posts: 16193
Liked: 1322 times
Joined: Oct 26, 2012 3:28 pm
Full Name: Vladimir Eremin
Contact:

Re: Post v8 Upgrade Observations

Post by v.eremin » Nov 25, 2014 8:50 am

Log investigation might shed light on the root cause of the behavior you're experiencing, so please open a ticket with our support team. Thanks.

Gostev
SVP, Product Management
Posts: 24092
Liked: 3278 times
Joined: Jan 01, 2006 1:01 am
Location: Baar, Switzerland
Contact:

Re: Post v8 Upgrade Observations

Post by Gostev » Nov 28, 2014 3:39 pm

The issue has been researched, and is confirmed to be caused by the same bug I covered in the forum digest last week.
Gostev wrote:v8: Observations of slower incremental backups after upgrading to v8 in certain backup modes (the actual data copy is fast, but each incremental run takes a long time to initialize). This seems to be mostly reported by deduplicating storage users, namely EMC Data Domain, but from what I know right now, it may potentially impact anyone whose backup storage has poor random I/O performance. The possible cause is the job doing unnecessary reads from previous backup files (the investigation is still underway, though).
The longer your existing backup chain, the longer the backup job initialization will take. This is also why creating a new backup job helps (but only temporarily, until more restore points are created).

A private hot fix for this issue is now available through support; please refer to bug ID 38623 when talking to them.

citius
Lurker
Posts: 1
Liked: never
Joined: Nov 28, 2014 11:12 pm
Full Name: Kristian Skogh
Contact:

Re: Post v8 Upgrade Observations

Post by citius » Nov 28, 2014 11:15 pm

I have the solution from Veeam for the jobs that take a long time.
They sent me an email today explaining how to fix this.

[REMOVED]

Gostev
SVP, Product Management
Posts: 24092
Liked: 3278 times
Joined: Jan 01, 2006 1:01 am
Location: Baar, Switzerland
Contact:

Re: Post v8 Upgrade Observations

Post by Gostev » Nov 28, 2014 11:47 pm

Hi, Kristian. Thank you for your post. As I mentioned in my previous post, the hot fix is private and as such is not intended for public distribution (otherwise, I would have just posted it myself), so please don't share it.

Private hot fixes get limited testing in their specific scenario, and as such they may conflict with other hot fixes or cause problems in unrelated product areas, such as memory leaks and so on. This is why we do controlled distribution, sending hot fixes directly to the customers who are confirmed to be impacted by the specific issue.

This and other private hot fixes will be included in Patch #1, which will be distributed publicly. Patches get full regression testing before they are released, to ensure no product functionality is impacted by the changes and that multiple hot fixes do not conflict with each other.

Sometimes we can make specific hot fixes available earlier as a support KB article, but this can only happen after we do enough pilot deployments. This specific hot fix, however, left R&D just a few hours ago...

Thanks!
