d.lansinklesscher
Service Provider
Posts: 42
Liked: 6 times
Joined: Aug 29, 2014 12:53 pm
Full Name: Dennis Lansink
Location: Hengelo, Netherlands
Contact:

Problems using LAN (512) and WAN (256) dedup settings on NAS

Post by d.lansinklesscher »

Hi Guys,

I'm backing up a small vSphere 5.5 environment to a QNAP NAS with the Linux data mover installed. I was initially using the Local (1024) block size setting for the job, and it ran fine.

After switching the job to the WAN (256) block size setting, to optimize it as a source for a copy job over a slower connection, and then running an active full, the backup job started failing. The errors I receive are:
- Exception of type 'Veeam.Backup.AgentProvider.AgentClosedException' was thrown.
- An existing connection was forcibly closed by the remote host. Failed to upload disk. Agent failed to process method {DataTransfer.SyncDisk}.

Support recommended that I create a new backup job, but that doesn't solve the problem. I also tried the LAN (512) setting, but that gives me the same errors.
If I switch the block size setting back to Local (1024), the job runs fine.

Anyone else seeing this behavior?

I'm running version 8.0.0.817 (Nov 6, 2014).
veremin
Product Manager
Posts: 20284
Liked: 2258 times
Joined: Oct 26, 2012 3:28 pm
Full Name: Vladimir Eremin
Contact:

Re: Problems using LAN (512) and WAN (256) dedup settings on NAS

Post by veremin »

What was the support team's answer when you told them that the provided workaround (recreating the job) hadn't solved the issue? Do you happen to have another repository you could point the job to temporarily, in order to rule the device in or out as a suspect? Thanks.
d.lansinklesscher
Service Provider
Posts: 42
Liked: 6 times
Joined: Aug 29, 2014 12:53 pm
Full Name: Dennis Lansink
Location: Hengelo, Netherlands
Contact:

Re: Problems using LAN (512) and WAN (256) dedup settings on NAS

Post by d.lansinklesscher »

They said it worked fine in their lab and asked me to supply them with more logs, which I did. I'll create an SMB repository to see whether the problem occurs there as well.
veremin
Product Manager
Posts: 20284
Liked: 2258 times
Joined: Oct 26, 2012 3:28 pm
Full Name: Vladimir Eremin
Contact:

Re: Problems using LAN (512) and WAN (256) dedup settings on NAS

Post by veremin »

Keep working with the support team on addressing this issue, and let us know about the results of your findings. Thanks.
d.lansinklesscher
Service Provider
Posts: 42
Liked: 6 times
Joined: Aug 29, 2014 12:53 pm
Full Name: Dennis Lansink
Location: Hengelo, Netherlands
Contact:

Re: Problems using LAN (512) and WAN (256) dedup settings on NAS

Post by d.lansinklesscher » 3 people like this post

Support found the root cause of the issue: a lack of memory in the NAS.

The NAS has only 1 GB of RAM, and apparently that was enough to run the job with a block size of 1024 KB. But as you lower the block size, you increase the load on the transport service.

For this NAS with only 1 GB, lowering the block size to 512 KB made it run out of memory and crash the transport service.

This explanation makes perfect sense to me. Something to keep in mind for future implementations.
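
For anyone who wants a feel for the numbers, here is a rough back-of-envelope sketch in Python (my own illustration, not Veeam internals; the 500 GB source size and the 100-byte per-block overhead are assumed values) of how the block bookkeeping the data mover has to carry grows as the block size shrinks:

# Rough sketch (assumed figures, not Veeam internals): how the number of
# blocks the data mover has to track grows as the block size shrinks.
SOURCE_GB = 500                 # assumed size of the backed-up VMs
PER_BLOCK_OVERHEAD_BYTES = 100  # assumed bookkeeping cost per block (hash + metadata)

for name, block_kb in [("Local (1024)", 1024), ("LAN (512)", 512), ("WAN (256)", 256)]:
    blocks = SOURCE_GB * 1024 * 1024 // block_kb        # source size in KB / block size in KB
    overhead_mb = blocks * PER_BLOCK_OVERHEAD_BYTES / (1024 * 1024)
    print(f"{name:>13}: {blocks:>10,} blocks, ~{overhead_mb:,.0f} MB of tracking overhead")

Whatever the real per-block cost is, halving the block size doubles the number of blocks, so LAN (512) and WAN (256) roughly double and quadruple the bookkeeping compared to Local (1024). On a box with only 1 GB of RAM, that difference is easy to feel.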
chrisdearden
Veteran
Posts: 1531
Liked: 226 times
Joined: Jul 21, 2010 9:47 am
Full Name: Chris Dearden
Contact:

Re: Problems using LAN (512) and WAN (256) dedup settings on NAS

Post by chrisdearden »

There is an upper limit on how large the hash table can be for a given replica job; I think it's something like 2 GB per job. The smaller the block size, the more hashes per GB it would generate.
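
To put some (assumed) numbers on that, here's a quick sketch; the 40-byte per-entry size is my guess for a hash plus a bit of metadata, not a published figure, and the 2 GB ceiling is the rough per-job limit mentioned above:

# Quick illustration (assumed figures): hashes generated per GB of source
# data at each block size, and how much source data a hypothetical 2 GB
# hash table could cover at ~40 bytes per entry.
HASH_TABLE_LIMIT_BYTES = 2 * 1024**3   # rough per-job ceiling mentioned above
ENTRY_BYTES = 40                       # assumed size of one hash-table entry

for block_kb in (1024, 512, 256):
    hashes_per_gb = (1024 * 1024) // block_kb          # blocks (hence hashes) per GB of source
    max_source_tb = HASH_TABLE_LIMIT_BYTES / ENTRY_BYTES / hashes_per_gb / 1024
    print(f"{block_kb:>4} KB blocks: {hashes_per_gb:>4} hashes/GB, "
          f"~{max_source_tb:.0f} TB of source per 2 GB hash table")

Whatever the exact entry size, the trend is the same: going from 1024 KB to 256 KB blocks quadruples the hash count for the same amount of source data, which is why the smaller settings hit memory limits so much sooner.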