Comprehensive data protection for all workloads
cbc-tgschultz
Enthusiast
Posts: 65
Liked: 11 times
Joined: May 13, 2016 1:48 pm
Full Name: Tanner Schultz
Contact:

Veeam v9 Backup Performance Slow

Post by cbc-tgschultz » 2 people like this post

Is anyone else having issues with performance since upgrading to v9?

For a week now we've been unable to do our daily incrementals because jobs refuse to finish in a reasonable time frame. I've taken to disabling all jobs and running them one at a time as active fulls while working with support, who thus far have had very little productive input. Case number: 01794249

Here's the kind of thing I'm seeing:
Image

This is typical of the issue I'm having. Some things will run fine, but a lot of the time Veeam will just stop sending network traffic entirely, drop throughput to 0, and blame the network for being a bottleneck. Meanwhile I'm watching my storage system (Linux repository) sit idle.
PTide
Product Manager
Posts: 6408
Liked: 724 times
Joined: May 19, 2015 1:46 pm
Contact:

Re: Veeam v9 Backup Performance Slow

Post by PTide »

Hi,

Please describe your backup repository configuration, proxy configuration, and backup network.

Thank you.
cbc-tgschultz
Enthusiast
Posts: 65
Liked: 11 times
Joined: May 13, 2016 1:48 pm
Full Name: Tanner Schultz
Contact:

Re: Veeam v9 Backup Performance Slow

Post by cbc-tgschultz »

Sure, it should all be in the ticket, but I'll describe it again:

The backup repository is a Docker container running Ubuntu. It runs on a Synology RS2414RP+ appliance with twelve 8TB Seagate 5400RPM drives in RAID6, connected via 1Gb/s Ethernet to a Cisco Nexus 3548. It is configured for a maximum of 2 concurrent connections.

The proxy is the B&R server itself. It is a 4-vCPU VM with 24GB of RAM running in a 4-node vSphere 6 environment, connected via 10Gb to the same Nexus switch. It is configured for a maximum of 2 concurrent tasks.

Again, since Veeam is confirmed not to be sending any data during these 0-throughput periods, I don't see why it matters what the storage or network is. The problem is that Veeam is not putting any data on the network. The logs (I've sent many) straight up say the repository agent is timing out waiting for blocks. Is Veeam unable to retrieve blocks from vSphere? Maybe, but the Hyper-V jobs have the same issue, and I don't see any problem reported in their logs. When it does send data, it often does so at <1Mb/s.

So, to reiterate: Veeam is not sending data to the backup repository for long periods of time during jobs.
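
If anyone wants to double-check the same thing on their own Linux repository, a quick sketch like this will show received throughput per second on the repo side (just an illustration; "eth0" is a placeholder for whatever interface faces the proxy):

Code: Select all

#!/usr/bin/env python3
# Rough sketch: print received MB/s on the repository NIC once per second.
# "eth0" is an assumption -- substitute the interface facing the proxy.
import time

IFACE = "eth0"

def rx_bytes(iface):
    # /proc/net/dev lines look like "  eth0: <rx_bytes> <rx_packets> ..."
    with open("/proc/net/dev") as f:
        for line in f:
            if line.strip().startswith(iface + ":"):
                return int(line.split(":")[1].split()[0])
    raise ValueError(f"interface {iface} not found")

prev = rx_bytes(IFACE)
while True:
    time.sleep(1)
    cur = rx_bytes(IFACE)
    print(f"{(cur - prev) / 1e6:8.2f} MB/s received")
    prev = cur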
cbc-tgschultz
Enthusiast
Posts: 65
Liked: 11 times
Joined: May 13, 2016 1:48 pm
Full Name: Tanner Schultz
Contact:

Re: Veeam v9 Backup Performance Slow

Post by cbc-tgschultz »

Based on correspondence with support, it would seem that the issue was caused by having the B&R server and the proxy be one and the same. This configuration was working for us in v8, but I'm guessing v9 adds additional load that made it all too much to handle. Adding a new 16GB, 4-vCPU, 3.40GHz VM as a dedicated proxy and disabling the B&R server as a proxy seems to have had a dramatic effect on performance. Why it took the better part of a week to get to this point with support I can only guess.
cbc-tgschultz
Enthusiast
Posts: 65
Liked: 11 times
Joined: May 13, 2016 1:48 pm
Full Name: Tanner Schultz
Contact:

Re: Veeam v9 Backup Performance Slow

Post by cbc-tgschultz »

Sadly I must report that I am still seeing performance issues even with the new, separate proxy.

Initial results were promising: when running a single job, throughput was often at or over 100MB/s. Unfortunately, as soon as a second job that uses that repository kicks off, performance suffers greatly. Before all this, we could run 2 server jobs at once and, though each had only about half the throughput they normally would, they would still run smoothly. Now, whenever I attempt to run two at once, performance for both jobs drops to <1MB/s.

This on its own would almost be tolerable; however, I am now seeing that longer-running jobs show the same issue as before, even when they are the only job running:

Image

Note how the job seems to begin with promising performance, only to later become tragically slow. Veeam keeps blaming the network, which makes no sense. If Veeam reported the actual problem more clearly, I suspect I'd be much closer to remedying it. Given the amount of time I have spent on this issue, the impact it is having on our ability to back up production data, and the relative lack of progress, I expect I'll soon be forced to revert to v8 and hope that it's just something about v9 that is causing it. Support assures me there haven't been any changes that should affect performance, but then again our B&R server/proxy combo was working just fine before v9, so I don't know if I can trust them on that.

If anyone with the power to do so could escalate this ticket, please do. You may notice this post is late at night on a Saturday. I'd rather not have many future weekend hours blown dealing with this.
rasmusan
Enthusiast
Posts: 48
Liked: never
Joined: Jan 20, 2015 9:03 pm
Full Name: Rasmus Andersen
Contact:

Re: Veeam v9 Backup Performance Slow

Post by rasmusan »

Hi

I had a similar issue at a customer; it was not after an upgrade, but otherwise much the same symptoms as you describe. I also had a support case that really did not conclude anything, until I found out the antivirus was doing some kind of network traffic scanning that hit the performance of Veeam traffic very hard. After disabling that, all was well...
PTide
Product Manager
Posts: 6408
Liked: 724 times
Joined: May 19, 2015 1:46 pm
Contact:

Re: Veeam v9 Backup Performance Slow

Post by PTide »

Hi,

I want to check something before suggesting you escalate the support case. You've mentioned that both your proxy and repository are configured for a maximum of 2 concurrent tasks, while in your job statistics I can see 4 VMs in that job in total, which is twice as many as the number of concurrent tasks allowed. I also see things like "Waiting for backup infrastructure resources availability", "Resource not ready: Backup repository", "Could not allocate processing resources within allotted timeout (82800 sec) Error: Timed out waiting for backup infrastructure resources to become available (82800 sec)", and so on. Based on this, I suggest you first raise the task limit to 4 (for both the proxy and the repository) and see if that helps, because it's clear that you have at least 4 VMDKs to be processed and only 2 proxy and repository slots available for allocation. Please try changing the limit, and if that has no positive impact on performance then kindly use the "Talk to manager" button to escalate the case to a higher support tier.
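
As a rough illustration of the slot math (a toy model only, not Veeam's actual scheduler): with 4 disks and only 2 concurrent task slots, half of the work is always sitting in the "Resource not ready" state waiting for a slot to free up:

Code: Select all

# Toy model of task-slot allocation (not Veeam's scheduler, just the arithmetic):
# 4 disks and 2 concurrent slots means two "waves" of work, so wall-clock time
# roughly doubles even if each disk on its own runs at full speed.
import threading, time

SLOTS = threading.Semaphore(2)   # proxy/repository concurrent task limit
DISK_SECONDS = 3                 # pretend each disk takes 3 s to back up

def process(disk):
    with SLOTS:                  # blocking here = "Resource not ready: Backup repository"
        print(f"{disk}: processing")
        time.sleep(DISK_SECONDS)
        print(f"{disk}: done")

start = time.time()
threads = [threading.Thread(target=process, args=(f"Hard disk {i}",)) for i in range(1, 5)]
for t in threads: t.start()
for t in threads: t.join()
print(f"elapsed ~{time.time() - start:.0f} s for 4 disks with 2 slots")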

Thank you
cbc-tgschultz
Enthusiast
Posts: 65
Liked: 11 times
Joined: May 13, 2016 1:48 pm
Full Name: Tanner Schultz
Contact:

Re: Veeam v9 Backup Performance Slow

Post by cbc-tgschultz »

Your suggestion doesn't make any sense to me. Even with only a single task running, the job was super slow. How would allowing more concurrent tasks have helped? Indeed, my experience has been that even if a job is running well, as soon as I kick off a second job everything tanks hard.

Anyway, thanks for the suggestion rasmusan, but there is no antivirus running on the B&R server, proxy, or repository. I have extensively verified that no process other than Veeam is gobbling up resources.

Over the weekend we finally had some indication as to what the remaining problem might be. It appears to be an issue with the storage array causing high write latency under certain circumstances. If true, it would mean that Veeam is not at fault for the continuing issue, though I do wonder why neither I nor support could surface it via Veeam's logs and reporting. Perhaps there is some improvement to be had there?
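
For anyone else chasing phantom write latency on a Linux repository, the raw numbers are in /proc/diskstats even when nothing else reports them. A minimal sketch (the device name "sda" is a placeholder for your array's device):

Code: Select all

#!/usr/bin/env python3
# Minimal sketch: print average write latency (ms per completed write) for one
# device by sampling /proc/diskstats. "sda" is an assumption -- use your device.
import time

DEV = "sda"

def sample(dev):
    with open("/proc/diskstats") as f:
        for line in f:
            fields = line.split()
            if fields[2] == dev:
                writes_completed = int(fields[7])
                ms_spent_writing = int(fields[10])
                return writes_completed, ms_spent_writing
    raise ValueError(f"device {dev} not found")

w0, t0 = sample(DEV)
while True:
    time.sleep(5)
    w1, t1 = sample(DEV)
    dw, dt = w1 - w0, t1 - t0
    print(f"write await: {dt / dw:.1f} ms" if dw else "no writes in interval")
    w0, t0 = w1, t1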
dellock6
VeeaMVP
Posts: 6137
Liked: 1928 times
Joined: Jul 26, 2009 3:39 pm
Full Name: Luca Dell'Oca
Location: Varese, Italy
Contact:

Re: Veeam v9 Backup Performance Slow

Post by dellock6 » 1 person likes this post

Tanner, looking at the write graph, the periods with no activity could indeed be time spent waiting for the array to be ready to write again.
To be honest, I'd suggest the opposite, to verify whether the array is overloaded: what about "reducing" the concurrent threads? With fewer threads hitting the storage, it may be able to flush its cache more efficiently and it may (not guaranteed) perform better.
Luca Dell'Oca
Principal EMEA Cloud Architect @ Veeam Software

@dellock6
https://www.virtualtothecore.com/
vExpert 2011 -> 2022
Veeam VMCE #1
cbc-tgschultz
Enthusiast
Posts: 65
Liked: 11 times
Joined: May 13, 2016 1:48 pm
Full Name: Tanner Schultz
Contact:

Re: Veeam v9 Backup Performance Slow

Post by cbc-tgschultz » 1 person likes this post

While the repository is, under normal circumstances, set for 2 concurrent tasks, it has been set to 1 since this issue began.

After observing over the past few days, I am quite confident that the repository was to blame for the additional issues we experienced even after separating the proxy from the B&R server. I believe there is something wrong with one of the drive slots (though it may be the disk itself), causing high write latency. When that slot errored enough to be auto-disabled, our performance returned to normal. We are now operating smoothly again.

I am still disappointed this issue didn't surface during our collective troubleshooting. Veeam kept reporting "Network" as the issue when, given the problem, one would expect it to be "Target". Additionally, neither I nor support spotted anything in the logs to indicate a high write latency.
larry
Veteran
Posts: 387
Liked: 97 times
Joined: Mar 24, 2010 5:47 pm
Full Name: Larry Walker
Contact:

Re: Veeam v9 Backup Performance Slow

Post by larry »

Had the same issue; the graph looked the same with gaps, and speeds would drop to 1 MB/s or zero. My issue was also a failing drive in the repository, and Veeam did show target as the issue. What got us was that the server had two repositories on two disks, and both were somewhat affected, but one was worse. The desktop of that console would also get laggy.
Gostev
Chief Product Officer
Posts: 31460
Liked: 6648 times
Joined: Jan 01, 2006 1:01 am
Location: Baar, Switzerland
Contact:

Re: Veeam v9 Backup Performance Slow

Post by Gostev » 1 person likes this post

One thing we can be sure of is that this performance issue is not specific to v9, because I just saw someone post a screenshot with a 2GB/s processing rate for a full backup with v9... that's like 200 times faster than what you are getting most of the time, looking at the graphs above.

Performance numbers around 10 MB/s always make me suspect a network problem first, most commonly due to some switch port or NIC failing over from 1Gb to 100Mb.
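
One quick way to rule that out on the Linux side (just a sketch; it reads the negotiated link speed from sysfs, the same figure ethtool shows) is:

Code: Select all

#!/usr/bin/env python3
# Sketch: print the negotiated link speed (in Mb/s) for each interface, so a
# silent 1000 -> 100 Mb/s renegotiation is easy to spot.
import glob

for path in sorted(glob.glob("/sys/class/net/*/speed")):
    iface = path.split("/")[4]
    try:
        with open(path) as f:
            speed = f.read().strip()
        print(f"{iface}: {speed} Mb/s")
    except OSError:
        # Interfaces that are down (or virtual) report no speed.
        print(f"{iface}: link down / not applicable")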
MerlIT
Novice
Posts: 5
Liked: never
Joined: Jul 23, 2014 7:06 pm
Full Name: Andrew Sims
Contact:

Re: Veeam v9 Backup Performance Slow

Post by MerlIT »

May be of no relevance, but I have regular "slow backups" under v8 using a Synology repository attached via two-path MPIO iSCSI, giving very similar throughputs to yours and the same bottleneck. Multiple cases with Veeam support (most recently 00984158) never got to the bottom of this. Slow backups are accompanied by low transmission rates on the iSCSI connection. If I log on to the Synology or the host 2012 R2 server during the "slow backup", there is a good chance that the iSCSI performance will revert to full speed and the backup will continue at the normal pace. If I start getting slow backups, they continue until the host 2012 R2 server is rebooted. I think there is an issue with the MS lazy writer on 2012 R2 and iSCSI (at either the Synology or MS end). There are no specific errors, dropped packets, etc. to help troubleshoot this.

All my Veeam components, bar the repository, sit on the one 2012.R2 host.

I was hoping that the recent Synology upgrade plus v9 would solve this once and for all!
Andreas Neufert
VP, Product Management
Posts: 6707
Liked: 1401 times
Joined: May 04, 2011 8:36 am
Full Name: Andreas Neufert
Location: Germany
Contact:

Re: Veeam v9 Backup Performance Slow

Post by Andreas Neufert »

Tanner, can you please have a look at the jobs? When you see that no data is being processed, what is the job status, and what do the job statistics say?
MSc
Enthusiast
Posts: 30
Liked: 9 times
Joined: Jan 19, 2012 2:46 pm
Full Name: Martin Schenker
Location: Germany
Contact:

Re: Veeam v9 Backup Performance Slow

Post by MSc »

Hi all!

Synology had major problems with DSM 6.0 and iSCSI mapping; our system only came back to a usable state with DSM 6.0 Update 7. Please check that you're up to date with the DSM version! Just a thought... as our backups are going well again.
cbc-tgschultz
Enthusiast
Posts: 65
Liked: 11 times
Joined: May 13, 2016 1:48 pm
Full Name: Tanner Schultz
Contact:

Re: Veeam v9 Backup Performance Slow

Post by cbc-tgschultz »

Gostev,

Naturally since one user isn't having an issue with v9, that means no one else could possibly be operating under different circumstances that cause an issue to surface. Only makes sense.

If you can't tell, that was sarcasm. I may be a little frustrated at support trying to tell me nothing is wrong while I'm watching my backups fail.

And for the record, the network bottleneck here is 1Gb, and since this has never happened before, and I have no indication in any log or other diagnostic source of it ever happening on the source servers (10Gb), proxy (10Gb), or target (1Gb), I very much doubt the issue is being caused by a sudden drop to 100Mb. Moreover, when the problem happens, 100Mb would be a great alternative to the kinds of speeds I'm seeing.

MSc, I am not using iSCSI on the Synology, but it is up to date. The Synology was also most certainly responsible for a portion of my issues, but not, apparently, all of them.

Andreas Neufert, when the jobs are in this state their status is "running". They act as though they are working normally, except of course they aren't really passing any data and for all intents and purposes have failed.

I can't help but think that if Veeam were a little better about exposing what the heck it is actually doing at any given time, these things might be easier to sort out without weeks of back-and-forth with support. As mentioned before, even with multiple logs of the problem, support never caught that write latency on the target was ridiculously high (now solved), so I don't find it surprising that more subtle issues go unnoticed. Another example: I've never had anyone actually tell me what "Agent port is not recognized" means. I still run into that one pretty frequently.
marcseitz
Influencer
Posts: 18
Liked: 5 times
Joined: Apr 04, 2012 11:17 am
Full Name: Marc Seitz
Contact:

Re: Veeam v9 Backup Performance Slow

Post by marcseitz » 1 person likes this post

Hi,

We have been running B&R v9 for almost 2 months now, and since the update we have had performance issues, too!
I'm still working with support to figure out what's going wrong in our environment (Case #01754045).

Some information about our environment:
Repositories: NetApp ~200TB; 6 physical proxies on Win2012R2 (2x 6-core, 96GB RAM); B&R server on Win2008R2; VMs backed up daily: ~1600

What we've figured out:
- Since B&R v9 the backup jobs are handled more slowly than before
- Example: "Saving GuestMembers.xml" takes up to 15 min (per VM!!)

You can copy the log from one VM (where the steps are listed: Creating Snapshot, Releasing Guest, ...) into Notepad.
Then you will see the timestamps when each particular task starts, so you can check whether you have the same problem as we do (a small script that automates this check is sketched after the log below).
The log will look like this:

Code: Select all

12.04.2016 09:19:53 :: Removing VM snapshot
12.04.2016 09:20:24 :: Saving GuestMembers.xml
==> 09 minutes 16 seconds doing nothing???
12.04.2016 09:29:20 :: Finalizing
12.04.2016 09:29:30 :: Swap file blocks skipped: 125,0 MB
12.04.2016 09:29:31 :: Busy: Source 66% > Proxy 12% > Network 38% > Target 40%
12.04.2016 09:29:31 :: Primary bottleneck: Source
12.04.2016 09:29:31 :: Network traffic verification detected no corrupted blocks
12.04.2016 09:29:31 :: Processing finished at 12.04.2016 09:29:31
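
If you don't want to eyeball the timestamps, a small script along these lines (just a sketch; it expects the log pasted into a file called job.txt) will flag the gaps:

Code: Select all

#!/usr/bin/env python3
# Sketch: flag gaps of more than a minute between consecutive job log lines
# pasted into "job.txt" (timestamps in the "12.04.2016 09:19:53 :: ..." format).
from datetime import datetime

FMT = "%d.%m.%Y %H:%M:%S"
prev = None

with open("job.txt") as f:
    for line in f:
        if " :: " not in line:
            continue                      # skip annotations without a timestamp
        stamp, msg = line.split(" :: ", 1)
        ts = datetime.strptime(stamp.strip(), FMT)
        if prev and (ts - prev).total_seconds() > 60:
            gap_min = (ts - prev).total_seconds() / 60
            print(f"{gap_min:5.1f} min gap before: {msg.strip()}")
        prev = ts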
If I have any news about the performance issue, I'll post it here.

Regards,
Marc
cbc-tgschultz
Enthusiast
Posts: 65
Liked: 11 times
Joined: May 13, 2016 1:48 pm
Full Name: Tanner Schultz
Contact:

Re: Veeam v9 Backup Performance Slow

Post by cbc-tgschultz »

I do not appear to have the same issue as you. All of my 'Veeam is doing nothing at all' time is happening during the actual disk data transfer, so the console reports nothing.

Code: Select all

5/23/2016 8:09:30 AM :: Queued for processing at 5/23/2016 8:09:30 AM 
5/23/2016 8:09:30 AM :: Required backup infrastructure resources have been assigned 
5/23/2016 8:09:35 AM :: VM processing started at 5/23/2016 8:09:35 AM 
5/23/2016 8:09:35 AM :: VM size: 1.1 TB (962.0 GB used) 
5/23/2016 8:09:45 AM :: Getting VM info from vSphere 
5/23/2016 8:09:57 AM :: Using guest interaction proxy veeam.clearybuilding.us (Same subnet) 
5/23/2016 8:10:12 AM :: Inventorying guest system 
5/23/2016 8:10:13 AM :: Preparing guest for hot backup 
5/23/2016 8:10:16 AM :: Creating snapshot 
5/23/2016 8:10:30 AM :: Releasing guest 
5/23/2016 8:10:30 AM :: Getting list of guest file system local users 
5/23/2016 8:10:52 AM :: Saving [vSphere-VMs] ClearyShares/ClearyShares.vmx 
5/23/2016 8:10:55 AM :: Saving [vSphere-VMs] ClearyShares/ClearyShares.vmxf 
5/23/2016 8:10:57 AM :: Saving [vSphere-VMs] ClearyShares/ClearyShares.nvram 
5/23/2016 8:11:00 AM :: Using backup proxy VeeamVeronaProxy for disk Hard disk 1 [hotadd] 
5/23/2016 8:11:39 AM :: Hard disk 1 (100.0 GB) 18.3 GB read at 57 MB/s [CBT]
5/23/2016 8:17:16 AM :: Using backup proxy VeeamVeronaProxy for disk Hard disk 2 [hotadd] 
5/23/2016 8:17:49 AM :: Hard disk 2 (1.0 TB) 806.9 GB read at 54 MB/s [CBT]
I'm watching it happen right now, without the write latency issues we had on the target before. That last line is a lie: it started around 60-70MB/s, but in the last hour or so it has slowed to jumping between 4-5MB/s and 15-25MB/s, and the primary bottleneck has switched from "Source", as it has been for the past week and as it usually is, to "Network", which is bull. I still can't rule out the target, though.
tsightler
VP, Product Management
Posts: 6011
Liked: 2843 times
Joined: Jun 05, 2009 12:57 pm
Full Name: Tom Sightler
Contact:

Re: Veeam v9 Backup Performance Slow

Post by tsightler » 2 people like this post

Can you please check the memory utilization of your Ubuntu container during the periods of slow performance? Looking at the logs, it appears that this container is assigned only 2GB of RAM, which is well below the recommended minimum of 4GB per active job. Your graph is indicative of a repository that has run out of memory to store the deduplication hash, and this would potentially explain why "network" is showing as the bottleneck: "network" indicates difficulty with Veeam attempting to transfer data from the source data mover (the proxy) to the target data mover (the VeeamAgent running in the Docker instance on the Synology). I would love to see some data on what free memory looks like both when the job is running well and when it collapses. Also, do you happen to be using WAN or LAN storage optimization rather than "Local"?
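
If it helps, something as simple as this run inside the container during a job would capture that (a rough sketch, not an official tool; it assumes a kernel recent enough to expose MemAvailable):

Code: Select all

#!/usr/bin/env python3
# Sketch: log available memory on the repository every 10 s while a job runs,
# so low-memory periods can be lined up against the throughput graph.
import time

def meminfo_kb(key):
    with open("/proc/meminfo") as f:
        for line in f:
            if line.startswith(key + ":"):
                return int(line.split()[1])    # value is reported in kB
    raise KeyError(key)

while True:
    avail_mb = meminfo_kb("MemAvailable") / 1024
    print(f"{time.strftime('%H:%M:%S')}  MemAvailable: {avail_mb:7.0f} MB")
    time.sleep(10)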
cbc-tgschultz
Enthusiast
Posts: 65
Liked: 11 times
Joined: May 13, 2016 1:48 pm
Full Name: Tanner Schultz
Contact:

Re: Veeam v9 Backup Performance Slow

Post by cbc-tgschultz »

Storage optimization is "Local target" for all backup jobs, and Compression level is "Optimal".

The device only has 2GB of memory; it is a limitation of the platform. Your explanation makes sense; however, since we haven't had this issue in the past, it would suggest that some part of the process has changed and now requires more RAM than previously. To be fair, I cannot be certain whether Veeam or DSM 6 is to blame there.

I will test by configuring an older server with significantly more RAM as the storage repository, using NFS to connect to the array. That should tell us whether this is part of our issue or not.
CCastellanos
Influencer
Posts: 11
Liked: never
Joined: Sep 05, 2012 8:44 pm
Full Name: Carlos Castellanos
Location: Astoria, NY
Contact:

Re: Veeam v9 Backup Performance Slow

Post by CCastellanos »

Tanner, perhaps more as a post-mortem, it might be worth checking your job settings based on these observations:
- You mentioned you run daily incrementals. I may have missed what type of incremental, but in a scenario of reverse incremental to a RAID6 array with 5400RPM 8TB drives, your performance might be very limited from the get-go.
- Your bottleneck might not be the appliance, controller, or network pipe, even at 1Gbps, but your spinning disk speeds and the dual-parity overhead. In a reverse incremental the reads/writes can pound any system, more so depending on your change rate and the size of the backup file. From my view this is perhaps the closest explanation for the kind of wait times you are seeing in the job: the storage is simply busy.
- Something else: if your appliance does any sort of caching, that would help.
- Your proxy seems more than capable of processing whatever ends up being sent to the repo. But I did miss whether your B&R/proxy VM was also sitting in the same Docker container as the repo?
- When you moved to v9, did you convert the job to per-VM backup files or keep it as it was? This may change the load pattern on your repo.
- Not sure if I caught it: was this all OK on v8?
cbc-tgschultz
Enthusiast
Posts: 65
Liked: 11 times
Joined: May 13, 2016 1:48 pm
Full Name: Tanner Schultz
Contact:

Re: Veeam v9 Backup Performance Slow

Post by cbc-tgschultz »

-Daily incrementals are regular old forward incrementals from a weekly active full. I am aware that 5400RPM 8TB drives are not particularly performant; this is irrelevant, since they worked well enough before.
-See above
-Aside from the write caching on the disks, I would be very surprised if the array did any significant caching with its 2GB of RAM. Also, this was working well enough before regardless.
-B&R is a VM on a vSphere cluster; the repo WAS a Docker container on a Synology array. I've taken the advice of support and installed a new repo: a 2.4GHz, 8-core, 48GB Ubuntu server that accesses the array via NFS. See the included image for how that's working out.
-Nothing about the jobs was changed. Especially not the things I can't change since they're only available to enterprise customers.
-Yes, as I have said several times, everything worked well enough in v8.

As mentioned above, I installed a new Linux repository and switched the backups over to use it. It is backed by the same storage, only now it is accessed through NFS by a 2.4GHz, 8-core, 48GB-RAM Ubuntu server. This server does nothing but act as the Veeam repo. So, since talking to support, I have added a proxy (4-core 3.4GHz, 16GB RAM) and this repo server on top of the original B&R server, basically more than tripling the compute resources of the Veeam infrastructure. Sadly, this has not at all had the intended effect:

Image

As you can see, the job started off well enough. It had some weird spike/trough pattern to the transfer, but it averaged out to 60+MB/s, so I was OK with it. I even started a second job that seemed to be running OK too. Then I went home. Around midnight one of the jobs simply stopped transferring data; around 1:30AM, so did the other one. Even a replication job stopped working. The storage device registers no activity; these jobs are simply hung. Also notice that Veeam is blaming the source this time, which is a new twist.

I'll be adding these logs and info to the ticket.
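
For anyone in a similar spot, a crude probe like this (just a sketch; "/mnt/veeam-repo" is a placeholder for the real mount point) will time fsync'd writes to the NFS mount so a stall in the mount itself shows up immediately:

Code: Select all

#!/usr/bin/env python3
# Crude probe: time 4 MB fsync'd writes to the NFS-mounted repository path so
# a stall in the mount shows up as a sudden jump in the timings.
# "/mnt/veeam-repo" is a placeholder -- point it at the real mount.
import os, time

MOUNT = "/mnt/veeam-repo"
BLOCK = os.urandom(4 * 1024 * 1024)
path = os.path.join(MOUNT, "latency_probe.tmp")

while True:
    t0 = time.time()
    with open(path, "wb") as f:
        f.write(BLOCK)
        f.flush()
        os.fsync(f.fileno())
    print(f"{time.strftime('%H:%M:%S')}  4 MB write+fsync took {time.time() - t0:6.3f} s")
    os.remove(path)
    time.sleep(5)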
tsightler
VP, Product Management
Posts: 6011
Liked: 2843 times
Joined: Jun 05, 2009 12:57 pm
Full Name: Tom Sightler
Contact:

Re: Veeam v9 Backup Performance Slow

Post by tsightler »

Was this a new active full or an incremental? I was looking at the logs for one of your larger servers that appeared to hang at ~1:30AM; everything was performing nicely on the source and target, but then I see this in the logs on the new Linux repository:

Code: Select all

[24.05.2016 01:37:05] <139794424174336> stg| WARN|FIB update has been going on more than '5' minutes, recorder '0x000x7f2488141700', FIB 'Backup of the FIB Exchange1_1-flat.vmdk'.
[24.05.2016 01:37:05] <139794608813824> alg| WARN|Timed out to wait for block, block index '178959' (Wait loop will be continued, timeout '1440' minutes ).
[24.05.2016 01:37:05] <139794466137856> alg| WARN|Timed out to wait for block, block index '178960' (Wait loop will be continued, timeout '1440' minutes ).
[24.05.2016 01:37:05] <139794382210816> alg| WARN|Timed out to wait for block, block index '178962' (Wait loop will be continued, timeout '1440' minutes ).
[24.05.2016 01:37:05] <139794055124736> alg| WARN|Timed out to wait for block, block index '178963' (Wait loop will be continued, timeout '1440' minutes ).
[24.05.2016 01:37:05] <139794046732032> alg| WARN|Timed out to wait for block, block index '178964' (Wait loop will be continued, timeout '1440' minutes ).
[24.05.2016 01:37:05] <139794440959744> alg| WARN|Timed out to wait for block, block index '178961' (Wait loop will be continued, timeout '1440' minutes ).
FIB update is a simple update operation to the XML summary data stored within the backup file; it's hard for me to read this as anything other than a disk I/O issue. I'm trying to think of something else that would cause this, and I'm still looking at the logs (and support may have a different opinion). Do you happen to have any other storage you could try running a backup to as a test? The logs will probably tell me, but was this a clean full backup on the new repo, or did you map the existing backup chain to the new repo?
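
For reference, those wait loops are easy to summarise from an exported agent log with something like this (a sketch; "agent.log" is a placeholder for the real file name):

Code: Select all

#!/usr/bin/env python3
# Sketch: summarise "Timed out to wait for block" warnings in an exported
# repository agent log. "agent.log" is a placeholder for the real file name.
import re
from datetime import datetime

PAT = re.compile(r"^\[(\d{2}\.\d{2}\.\d{4} \d{2}:\d{2}:\d{2})\].*Timed out to wait for block")
times = []

with open("agent.log", errors="replace") as f:
    for line in f:
        m = PAT.match(line)
        if m:
            times.append(datetime.strptime(m.group(1), "%d.%m.%Y %H:%M:%S"))

if times:
    span = (times[-1] - times[0]).total_seconds() / 60
    print(f"{len(times)} block-wait warnings, first {times[0]}, last {times[-1]} ({span:.0f} min)")
else:
    print("no block-wait warnings found")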
cbc-tgschultz
Enthusiast
Posts: 65
Liked: 11 times
Joined: May 13, 2016 1:48 pm
Full Name: Tanner Schultz
Contact:

Re: Veeam v9 Backup Performance Slow

Post by cbc-tgschultz »

Both jobs were active fulls, as neither job had had a chance to run one this weekend due to the issues. I did make sure to map them appropriately when I set up the new repo.

However, I don't believe this is relevant to the issue, as it seems that it was caused by an NFS failure. More specifically, the NFS module of the array crashed. Hopefully that was due to something I can control for and I can correct it without resorting to CIFS, but it seems that it wasn't a Veeam issue this time.

Well, unless you count the misreporting of the bottleneck. Why would it be source? Also, I can't yet determine why the replication job failed. I'd changed the storage it was set to use for metadata to something other than this repo.

Anyway, testing continues on the new repo.
tsightler
VP, Product Management
Posts: 6011
Liked: 2843 times
Joined: Jun 05, 2009 12:57 pm
Full Name: Tom Sightler
Contact:

Re: Veeam v9 Backup Performance Slow

Post by tsightler » 1 person likes this post

Ah, that makes sense; hopefully you can get to the root cause of the NFS failures. I saw that the other job failed with an identical error.

It's not at all uncommon for source to be the bottleneck for a full backup; in most cases I would expect it to be. The bottleneck is just a representation of which point in the chain Veeam spends the most time waiting on, measured at 4 points: source disk read, proxy processing (mostly compression), transfer of data from the proxy to the repository (network), and target disk write. Since the data written to the target is compressed, the target has half as much data to deal with; the network isn't the bottleneck unless you're saturating it; and the proxy isn't likely to be the bottleneck unless it's using 100% of its CPU. So source, which is the device transferring the most data out of all of that, is almost certainly going to be the bottleneck.
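
As a back-of-the-envelope illustration of how those percentages relate (not Veeam's exact formula, just the idea; the numbers are borrowed from the "Busy: Source 66% > Proxy 12% > Network 38% > Target 40%" example earlier in the thread):

Code: Select all

# Toy model of the bottleneck line: each stage's "busy" percentage is time spent
# working vs. total job time, and the highest one is reported as the primary
# bottleneck. The per-stage seconds below are hypothetical.
busy_seconds = {
    "Source": 3960,         # reading from the datastore
    "Proxy": 720,           # compression / dedupe processing
    "Network": 2280,        # moving data from source to target data mover
    "Target": 2400,         # writing to the repository
}
job_seconds = 6000

for stage, busy in busy_seconds.items():
    print(f"{stage}: {100 * busy / job_seconds:.0f}%")

print("Primary bottleneck:", max(busy_seconds, key=busy_seconds.get))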

I'll look at the replication log.

BTW, I tried to look at the initial logs you uploaded, but they didn't include enough information. The logs from the Docker repository cut off before they got to the error; I'm not sure why, but I wondered if it had to do with the fact that the storage device seemed to be keeping time in a different timezone. It didn't even look like UTC, because it was off by too many hours.
cbc-tgschultz
Enthusiast
Posts: 65
Liked: 11 times
Joined: May 13, 2016 1:48 pm
Full Name: Tanner Schultz
Contact:

Re: Veeam v9 Backup Performance Slow

Post by cbc-tgschultz »

Your explanation is how I would expect it to work, but in practice it never seems to be correct.

For instance, here it was quite obviously the target that was responsible: all data was getting to the repository, it just was never getting written to disk. Instead it reported Source. In the original issue, it was reporting Network for a similar problem (write latency at the target). Currently it reports Target, which I can believe.

I had to abandon the new Linux repository and go to CIFS. There were too many issues with NFS; I'm willing to believe that is a result of the Synology implementation for now. Historically I've been very reluctant to use CIFS with this setup, because when we originally set it up CIFS drastically underperformed. So far it seems tolerable. If it can chug along without dying horribly the moment a second job starts, or trailing off into fairy land with the throughput, it might be the end of my difficulties.
tsightler
VP, Product Management
Posts: 6011
Liked: 2843 times
Joined: Jun 05, 2009 12:57 pm
Full Name: Tom Sightler
Contact:

Re: Veeam v9 Backup Performance Slow

Post by tsightler » 1 person likes this post

cbc-tgschultz wrote:Your explanation is how I would expect it to work, but in practice it never seems to be correct.
Bottleneck statistics only take into account data gathered during normal operation; they're not going to update once the repository stops working, because data is no longer being transferred. That's not a bottleneck, that's a failure.
cbc-tgschultz
Enthusiast
Posts: 65
Liked: 11 times
Joined: May 13, 2016 1:48 pm
Full Name: Tanner Schultz
Contact:

Re: Veeam v9 Backup Performance Slow

Post by cbc-tgschultz »

If that's the case, then why doesn't the job fail instead of sitting there indefinitely not transferring data?
tsightler
VP, Product Management
Posts: 6011
Liked: 2843 times
Joined: Jun 05, 2009 12:57 pm
Full Name: Tom Sightler
Contact:

Re: Veeam v9 Backup Performance Slow

Post by tsightler » 1 person likes this post

cbc-tgschultz wrote:If that's the case, then why doesn't the job fail instead of sitting there indefinitely not transferring data?
I'm sure the job would eventually fail. You can see in the logs the agents continuing to retry every 30 minutes, but I'm not sure how many times it would retry before finally just giving up. Regardless, when no data is transferring, bottleneck statistics are not being updated.

In your earlier issue, when you were seeing the slow performance going to the Docker repo, bottleneck statistics were still being updated because data transfer was still happening, just very slowly, which is why I was suspecting memory starvation.
cbc-tgschultz
Enthusiast
Posts: 65
Liked: 11 times
Joined: May 13, 2016 1:48 pm
Full Name: Tanner Schultz
Contact:

Re: Veeam v9 Backup Performance Slow

Post by cbc-tgschultz »

Which, after a night of testing, does seem to have been the case. Things ran exactly as expected with the repository configured as CIFS instead of a Linux repo (either via Docker or the external server). Considering that it ran fine with the Docker container prior to v9, I expect changes were made that cause it to behave differently than before with the limited amount of RAM.

Unfortunately for me, the array continues to have issues with disk failure, causing a long-running job to fail last night with "Shared memory connection was closed" at the same time a redundant disk failed. At least I hope that's what caused that error. I have no idea why such a thing should cause a problem with the CIFS connection, but I'm willing to blame Synology for that one.

It does make me wish backup jobs had the ability to resume where they left off, so I wouldn't lose 8 hours of transfer.

Anyway, this thing is going to be down at least a week while we get new disks and repair the now zero-redundancy RAID array, so I won't be able to confirm this is a long-term solution for some time, but I'm optimistic given the results so far.