Peter Just
Influencer
Posts: 13
Liked: 2 times
Joined: Apr 04, 2014 2:29 am
Full Name: Peter Just
Contact:

Post 9.5 issues

Post by Peter Just »

Has anyone experienced issues after upgrading to 9.5? We upgraded from 9.0 last week. 9.0 worked very well, no complaints. With 9.5 we are observing slow response/sluggishness in the backup server and console, and throughput from the proxies is diminished. Jobs that usually took 40-50 minutes on average are now at times taking much longer or timing out. As a result, I am seeing more jobs running concurrently, which is impacting things as well. I have a ticket open (ID #01983297), but we are not making much progress.

Any help would be appreciated.
mkretzer
Veeam Legend
Posts: 1140
Liked: 387 times
Joined: Dec 17, 2015 7:17 am
Contact:

Re: Post 9.5 issues

Post by mkretzer »

The only slowdown we experienced is in the restore context (Backups/Disk). Everything else is very fast.
foggy
Veeam Software
Posts: 21069
Liked: 2115 times
Joined: Jul 11, 2011 10:22 am
Full Name: Alexander Fogelson
Contact:

Re: Post 9.5 issues

Post by foggy »

Hi Peter, the behavior you're describing is not expected, so please continue looking into this with the help of our engineers. Meanwhile, you can check whether the same transport modes are used by the proxies, and perhaps temporarily decrease the max number of concurrent tasks assigned to the involved servers. Is your backup server used as a proxy as well? How much RAM/CPU is utilized on it during backups?
Gostev
Chief Product Officer
Posts: 31455
Liked: 6646 times
Joined: Jan 01, 2006 1:01 am
Location: Baar, Switzerland
Contact:

Re: Post 9.5 issues

Post by Gostev »

Peter Just wrote:With 9.5 we are observing slow response/sluggishness with the Backup Server and console.
Did you wait enough time after finishing setup for the configuration database update to complete, as per the upgrade procedure?
Peter Just
Influencer
Posts: 13
Liked: 2 times
Joined: Apr 04, 2014 2:29 am
Full Name: Peter Just
Contact:

Re: Post 9.5 issues

Post by Peter Just » 1 person likes this post

Gostev- I believe so. Is there a way to verify whether it ran properly?
foggy
Veeam Software
Posts: 21069
Liked: 2115 times
Joined: Jul 11, 2011 10:22 am
Full Name: Alexander Fogelson
Contact:

Re: Post 9.5 issues

Post by foggy »

I believe support will be able to verify that.
SyNtAxx
Expert
Posts: 149
Liked: 15 times
Joined: Jan 02, 2015 7:12 pm
Contact:

Re: Post 9.5 issues

Post by SyNtAxx »

Mine seems to be sluggish/slow. I rebooted again and we'll see how it plays out. I already know my upgrade apparently didn't complete properly, as about 8 jobs failed to update their DB entries and needed new jobs created and active fulls run. I was unable to use the existing chains.

-Nick
SyNtAxx
Expert
Posts: 149
Liked: 15 times
Joined: Jan 02, 2015 7:12 pm
Contact:

Re: Post 9.5 issues

Post by SyNtAxx »

Ok, the rest of my jobs disappeared from the DB and now run as fulls as if the jobs never existed. Not happy!
veremin
Product Manager
Posts: 20270
Liked: 2252 times
Joined: Oct 26, 2012 3:28 pm
Full Name: Vladimir Eremin
Contact:

Re: Post 9.5 issues

Post by veremin »

Then it's high time to contact our support team for further investigation. Thanks.
SyNtAxx
Expert
Posts: 149
Liked: 15 times
Joined: Jan 02, 2015 7:12 pm
Contact:

Re: Post 9.5 issues

Post by SyNtAxx »

Already have a case open and a dedicated thread under the VMware subforum.

-Nick
Peter Just
Influencer
Posts: 13
Liked: 2 times
Joined: Apr 04, 2014 2:29 am
Full Name: Peter Just
Contact:

Re: Post 9.5 issues

Post by Peter Just »

Just giving this a bump. I worked with support and we've implemented a few fixes. However, I am still seeing issues with throughput and overall backup performance. Has anyone experienced issues with degraded performance while using DirectSan and 9.5?
SyNtAxx
Expert
Posts: 149
Liked: 15 times
Joined: Jan 02, 2015 7:12 pm
Contact:

Re: Post 9.5 issues

Post by SyNtAxx » 1 person likes this post

I've abandoned Veeam v9.5 and rolled back to v9u2, which in itself meant recreating all my jobs, somewhere around 125.

In addition, prior to rolling back I was able to compare job logs and saw roughly a 2/3 decrease in performance. Prior to the upgrade, one job ran between 650MB/sec and 1GB/sec; post-upgrade, 100-250MB/sec. After the rollback my performance was back at the 650MB/sec to 1GB/sec level. Nothing in the environment changed except the software.
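For anyone making a similar before/after comparison, it helps to reduce each job session to an effective throughput figure (processed size divided by wall-clock duration) rather than eyeballing the stats. A minimal sketch; the session sizes and durations below are made-up illustrations, not taken from my actual logs:

```python
def throughput_mb_s(processed_gb: float, duration_min: float) -> float:
    """Effective throughput in MB/sec from processed size and wall-clock duration."""
    return processed_gb * 1024 / (duration_min * 60)

def percent_change(before: float, after: float) -> float:
    """Signed percent change from 'before' to 'after'."""
    return (after - before) / before * 100

# Hypothetical sessions for the same job, before and after the upgrade
before = throughput_mb_s(processed_gb=1500, duration_min=40)   # 640.0 MB/sec
after = throughput_mb_s(processed_gb=1500, duration_min=120)   # ~213 MB/sec

print(round(percent_change(before, after)))  # -67, i.e. roughly a 2/3 drop
```

Comparing the same job's sessions this way removes guesswork about whether a slowdown is real or just a bigger incremental.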


-Nick
guitarfish
Enthusiast
Posts: 98
Liked: 11 times
Joined: Mar 06, 2013 4:12 pm
Contact:

Re: Post 9.5 issues

Post by guitarfish »

I upgraded to v9.5 a few days ago. I run Backup Copy jobs to a Veeam cloud provider for DR replication purposes, and I had to wait for them to upgrade to 9.5 first. My issues:

#1 (MAJOR) - The Backup Copy process uploaded new full backups for 2 VMs instead of incrementals, and it did this 2x for one VM and 3x for the other. They are now finally running incrementally. It was a painful process, as these uploads spilled over into the workday and severely slowed internet access for the users. I have a case open, but don’t know if I’ll pursue it as long as things run incrementally as required.

#2 (MINOR) - Not sure how to describe this one. I have 4 sites. HQ has a number of hosts & VM backup jobs; the other 3 sites have 1 VM/backup job each. As I upgraded each host to v9.5, a Components Update screen appeared which listed all Backup Copy jobs and indicated “Legacy backup”, meaning they needed to be upgraded.

I upgraded the jobs that pertained to each site as I upgraded it, and that seemed to work.
All upgrades are done now, but each site thinks the Backup Copy jobs for the other sites still need to be upgraded. It still shows them as Legacy backup even though they are upgraded. Rescanning the repository doesn’t help, and if I try to run the upgrade again, it fails with an error: “Unable to map service provider side to tenant cache because it is already mapped...”

#3 (MINOR) – Probably a consequence of the previous issue. Backup Copy jobs show up in duplicate, with one copy bearing a yellow mark, for any jobs that Veeam thinks are still in Legacy mode.
Previous versions of Veeam had a “Remove from configuration” option, but that’s not available anymore. I’m not clicking Delete from Disk until I talk to Support.

In summary – no issues so far with local backup, but cloud BC jobs have not gone well.
Gostev
Chief Product Officer
Posts: 31455
Liked: 6646 times
Joined: Jan 01, 2006 1:01 am
Location: Baar, Switzerland
Contact:

Re: Post 9.5 issues

Post by Gostev »

SyNtAxx wrote:I've abandoned Veeam v9.5 and rolled back to v9u2 which in itself was a process of recreating all my jobs, somewhere around 125ish.

In addition, prior to rolling back I was able to compare job logs and saw a 2/3 decrease in performance. Prior to upgrade one job ran between 1GB/sec and 650MB/sec, post upgrade 100-250MB/sec. After the rollback my performance was again at the 1GB/sec and 650MB/sec levels. Nothing in the environment changed except the software.
Nick, this is strange, as the vast majority of users who have upgraded to 9.5 are reporting the exact opposite: a 50-100% performance improvement. Can we see the logs to understand the difference between how the two versions behave in your environment? If you opened a support case for this, what was the case ID?
SyNtAxx
Expert
Posts: 149
Liked: 15 times
Joined: Jan 02, 2015 7:12 pm
Contact:

Re: Post 9.5 issues

Post by SyNtAxx »

Gostev,

Case # 01984571

Unfortunately I removed that snapshot after I rolled back to the v9u2 snap I made before upgrading. I did upload some logs for that case, but I don't think they were job logs, from what I can recall. The best I might be able to do is describe my environment:

1) Virtual Veeam manager, proxy function disabled; purely an orchestration role.
2) 1 mega physical proxy: 384GB RAM, 28 physical cores + HT, 20Gbps LACP Ethernet, 2 x 8Gbps FC to the SAN fabric.
3) 3PAR V400 and 7400.
4) HPE StoreOnce 6500 @ 1PB disk; the proxy is set to use SMB/CIFS shares instead of Catalyst because of the lack of Catalyst copy support.
5) I use Direct SAN backups.

I wish I had held on to the snap, but I didn't :-/

Nick
Gostev
Chief Product Officer
Posts: 31455
Liked: 6646 times
Joined: Jan 01, 2006 1:01 am
Location: Baar, Switzerland
Contact:

Re: Post 9.5 issues

Post by Gostev »

Thanks, hopefully the required logs are all there. We will investigate.
Gostev
Chief Product Officer
Posts: 31455
Liked: 6646 times
Joined: Jan 01, 2006 1:01 am
Location: Baar, Switzerland
Contact:

Re: Post 9.5 issues

Post by Gostev »

According to the logs, your 9.5 jobs are using the hot add transport. So your "mega proxy" is not used at all, while the hot add proxy probably lacks CPU, which perhaps explains this dramatic difference in performance between 9.0 and 9.5. So the troubleshooting vector will be to investigate why the mega proxy is not being used... maybe its components were not updated to 9.5, or something like that.
SyNtAxx
Expert
Posts: 149
Liked: 15 times
Joined: Jan 02, 2015 7:12 pm
Contact:

Re: Post 9.5 issues

Post by SyNtAxx »

I have 2 vProxies that I use for 4 dedicated servers, which are the only 4 servers to use vProxies; I left those out of the original description for that reason. I can assure you my 'mega proxy' was in control of the jobs and processing using SAN snapshots. I studied the v9.5 environment for a week before deciding to roll back; I'm quite certain I would have noticed. In addition, while jobs were running I was logged onto the 'mega proxy' watching the server stats: CPU/NIC/disk and SAN exports. Further, when I decided to roll back I had to manually remove all the v9.5 components from all my proxies and redeploy the v9u2 agents, because an error was rightfully displayed indicating that the v9.5 version was incompatible with the v9u2 console. So I respectfully disagree with your assertion that my physical proxy was not in use. I will agree that the log set that was requested/uploaded may not reflect this, because it may not have included actual job logs.
Gostev
Chief Product Officer
Posts: 31455
Liked: 6646 times
Joined: Jan 01, 2006 1:01 am
Location: Baar, Switzerland
Contact:

Re: Post 9.5 issues

Post by Gostev »

OK. Well, if those are the only logs left from your 9.5 install, then there's not much we can do to troubleshoot the issue further... it must be something unique to your environment, because with 3PAR specifically we see a 2x speed improvement backing up from SAN snapshots with 9.5 in our own lab, and other customers with 3PAR are reporting the same increase. So an almost 3x decrease instead is definitely unusual.
guitarfish
Enthusiast
Posts: 98
Liked: 11 times
Joined: Mar 06, 2013 4:12 pm
Contact:

Re: Post 9.5 issues

Post by guitarfish »

In v9 and previous, you could go to "Files", access files in the cloud repository, delete a file, rescan the repository, and then go to Backups > Cloud, and the restore point for the deleted file would have an indicator that it was missing. You could Remove from inventory or Delete from disk, IIRC, to clean it up. Now in 9.5, if you delete a file, the restore point still shows in Backups > Cloud, with no indication that it has been removed. There's also no option to delete/remove. I found the cloud part of the interface a bit quirky to use sometimes, but I could do what I needed. It's been changed now, and I don't know why. IMO it's worse, and there's no way to do what I need to do. I am really, really disappointed with 9.5 so far, which is surprising, because the upgrades since v7 have always been good. I don't understand why these changes were made.
SyNtAxx
Expert
Posts: 149
Liked: 15 times
Joined: Jan 02, 2015 7:12 pm
Contact:

Re: Post 9.5 issues

Post by SyNtAxx »

Gostev wrote:OK. Well, if those are the only logs left from your 9.5 install, then there's not much we can do to troubleshoot the issue further... it must be something unique to your environment, because with 3PAR specifically, we see 2x speed improvement backing up from SAN snapshots with 9.5 in our own lab - and other customers with 3PAR are reporting the same increase. So, almost 3x decrease instead is definitely unusual.
Would there be any useful logs we could gather from the physical proxy? Nothing was changed on it other than the software agents: upgraded to 9.5, then reinstalled v9u2.

Nick
Gostev
Chief Product Officer
Posts: 31455
Liked: 6646 times
Joined: Jan 01, 2006 1:01 am
Location: Baar, Switzerland
Contact:

Re: Post 9.5 issues

Post by Gostev »

guitarfish wrote:I am really, really disappointed with 9.5 so far
Sorry to hear that. Purely by the downloads-to-support-cases ratio, after almost 40000 downloads, 9.5 has been the highest quality release we ever had, and by far. But of course, you can't see that from the forums, as people rarely come here to say how happy they are with the new version ;)
guitarfish wrote:I don't understand why these changes were made.
Cloud Connect was rearchitected pretty heavily in 9.5 to support a number of new features on the service provider side (such as scale-out backup repository, per-VM backup file chains, advanced ReFS integration and so on). These features will improve performance and reduce your Cloud Connect bill, so there's a good reason why these changes are being made. Of course, as with any significant development, there is a chance of bugs being introduced; be sure to report them through support, and we will address them promptly in the upcoming updates!
SyNtAxx wrote:Would there be any useful logs we could gather from the physical proxy? Nothing was changed on it other than the software agents, upgraded then reinstall of v9u2.
Yes, but only if the issue is obvious (such as if NBD instead of direct SAN transport was used). If you are completely sure the proxy was using the correct transport mode (backup from storage snapshots), then the next troubleshooting step would be to collect performance debug logs - and this requires live deployment.
savmil
Novice
Posts: 3
Liked: never
Joined: Dec 01, 2016 10:39 pm
Full Name: Mil Sav

Re: Post 9.5 issues

Post by savmil »

Hi all

In upgrading to 9.5 we've experienced a few issues (so far):

* Bug discovered with our secondary tape jobs (Case # 01990047): they are now writing a monthly tape every day with zero data written (currently under investigation by Veeam), on top of my daily incrementals.
* Tape job properties have been reset to defaults, notably with the eject media option checked. We run multiple tape jobs in sequence, and this caused tape lock errors.
* Backup Job --> Active Full backup custom schedules in the Advanced Storage tab have been reset back to the default (first Monday every month) for every single job. I discovered end-of-year backups running this morning instead of next month, wasting space (both disk and tape) and the time needed to reset the jobs.

Anyone else a victim of this?

Cheers
Milan
Butha
Enthusiast
Posts: 39
Liked: 20 times
Joined: Oct 03, 2012 10:59 am
Full Name: Butha van der Merwe
Contact:

Re: Post 9.5 issues

Post by Butha »

Only had 2 issues so far, and a 3rd one that I was hoping might be solved by 9.5, but it seems not.

One issue is more on the Veeam ONE side - I'll mention it anyway for anybody reading this, although it should go in that thread.

We run Veeam ONE, Enterprise Manager and B&R on the same VM host (management only, no proxy or transport role).

1. The Veeam ONE Report Server service didn't shut down on upgrade; you could continue, but afterwards it failed to start. The solution was a registry entry to increase the service start timeout values. Very strange indeed that this is needed, but it solved it.

2. The remote consoles for both products now regularly "disconnect" and you have to reconnect or close and re-open. Quite annoying, more than anything else - it doesn't affect functionality.

3. We have had a case open for a while now, being unable to replicate a VM with a virtual disk >2TB (in fact it's 3TB) using hot add mode (and storage snapshots). The writer process at the target SAN "stalls", although it doesn't log this; the only workaround is using network mode on the target proxy (we use NetApp + storage integration on both source and target), which runs slower than hot add and affects all other replication jobs using that same proxy. Many hours were spent by support running "benchmarks" on storage, but we have no issues with any other jobs all running at the same time, and the infrastructure hasn't changed (in fact the target SAN has doubled in spindle count, and other jobs increased in speed). Initial responses centered around VMware using "SESPARSE" mode for disks >2TB, and Veeam or the VDDK library it uses handling this differently. I was hoping it would work fine in 9.5, but that's not the case. The engineer is still busy on it, though, so I cannot comment on a possible fix.

Other than that no issues.
McClane
Expert
Posts: 106
Liked: 11 times
Joined: Jun 20, 2009 12:47 pm
Contact:

Re: Post 9.5 issues

Post by McClane » 1 person likes this post

savmil wrote:
* Bug discovered with our Secondary Tape jobs (Case # 01990047) --- it is now writing a Monthly tape every day with zero data written (Currently under investigation by Veeam) on top of my daily incremental.
Hi,

Link them to ticket #01981228. I had the same problem and got a hotfix.
wsuarez
Lurker
Posts: 2
Liked: never
Joined: Feb 23, 2012 1:41 pm
Full Name: Bill Suarez
Contact:

Re: Post 9.5 issues

Post by wsuarez »

We have seen the (significant) performance drops, along with multiple job corruptions and the inability to restore from existing backups. We are also seeing issues where jobs stop writing output data to Veeam Cloud and eventually fail with "closed connections".

Cases are opened and have been escalated. Support has been very good working with us but as of yet we have no root cause or resolution.
wsuarez
Lurker
Posts: 2
Liked: never
Joined: Feb 23, 2012 1:41 pm
Full Name: Bill Suarez
Contact:

Re: Post 9.5 issues

Post by wsuarez »

When a job times out, after a delay with no data being written, we see: "Error: Transmission pipeline hanged, aborting process"
plandata_at
Enthusiast
Posts: 66
Liked: 10 times
Joined: Jan 26, 2016 2:48 pm
Full Name: Plandata Datenverarbeitungs GmbH
Contact:

Re: Post 9.5 issues

Post by plandata_at »

Backup to Tape job speed before the upgrade to 9.5 was about 520MB/sec; after the upgrade it's less than 300MB/sec, so our tape jobs now need double the time. The ticket has been open for days now, but still no solution. (Backup from a direct-attached SAS repository to direct-attached SAS tape drives, with parallel processing enabled.) Case nr. #01985204

I don't know if it has anything to do with Veeam 9.5, but before the update all backups from the secondary SnapVault destination (NetApp) worked fine; after the upgrade we started to get NFS status code 2 errors on several jobs on several VMDKs (always the same VMDKs), case #01991750. Veeam support says it's a NetApp problem, but I can boot the VMDK from the SnapVault destination without any problem when I set it writable...

From my point of view: if you don't need the new features in 9.5, I couldn't see even a small improvement in performance. So better to wait.
skrause
Veteran
Posts: 487
Liked: 105 times
Joined: Dec 08, 2014 2:58 pm
Full Name: Steve Krause
Contact:

Re: Post 9.5 issues

Post by skrause »

So I have a thread about this already, but since 9.5, most of my GFS Backup Copy jobs (copy entire from source) are now running a full on both Saturday and Sunday, when they are set to select the restore point closest to 0200-0400 on Sunday. Previously they had been running on Saturday, which was a different issue, as I need these to copy the data backed up during the Saturday night backup window. Now I have lost 2 of my 5 weeks of archive restore points because they were removed automatically per the retention policy.
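Reading "select the restore point closest to 0200-0400 on Sunday" literally, here is a rough sketch of the selection logic I'd expect: prefer the latest point inside the window, else fall back to the point nearest to it. This is my own illustration in Python of the expected behavior, not Veeam's actual implementation:

```python
from datetime import datetime

def closest_restore_point(points, window_start, window_end):
    """Prefer the latest restore point inside the window; otherwise the nearest one."""
    inside = [p for p in points if window_start <= p <= window_end]
    if inside:
        return max(inside)
    # No point in the window: take the one nearest to either boundary
    def dist(p):
        return min(abs((p - window_start).total_seconds()),
                   abs((p - window_end).total_seconds()))
    return min(points, key=dist)

points = [
    datetime(2016, 12, 3, 23, 30),  # Saturday night run
    datetime(2016, 12, 4, 3, 15),   # Sunday early-morning run
]
chosen = closest_restore_point(points,
                               datetime(2016, 12, 4, 2, 0),
                               datetime(2016, 12, 4, 4, 0))
print(chosen)  # the Sunday 03:15 point
```

Under this logic a single run of the copy job should pick exactly one point per interval, so fulls on both Saturday and Sunday suggest the job evaluated the window twice.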

Case# 01994952

Otherwise, everything seems to be working just fine.
Steve Krause
Veeam Certified Architect
sullivas
Influencer
Posts: 10
Liked: 1 time
Joined: Nov 13, 2014 7:14 pm
Contact:

Re: Post 9.5 issues

Post by sullivas » 1 person likes this post

savmil wrote:
* Backup Job --> Active Full backup custom schedules in the Advanced Storage tab have been reset back to the default (first Monday every month) for every single job. I discovered end-of-year backups running this morning instead of next month, wasting space (both disk and tape) and the time needed to reset the jobs.

Milan
I just discovered this bug with my backup jobs as well. Today being the first Monday of December, EVERYTHING tried to do an active full this morning. No bueno... waiting for the jobs to terminate fully now before trying to clean things up, so I don't run out of disk space on my repository from the unexpected influx of data.