Comprehensive data protection for all workloads
toreml
Expert
Posts: 103
Liked: 3 times
Joined: Sep 13, 2017 11:12 am
Full Name: Tore Mejlænder-Larsen
Contact:

Veeam 9,5 and Cohesity

Post by toreml »

Hi.
We have now set up Cohesity as a backup target, but we are only seeing 15 MB/s backup speed.
As far as we can tell, the jobs have been set up according to Cohesity's documentation.

Does anyone here have experience with using Cohesity as a backup target, as a shared folder?

Any tips that could help us get better backup speed are appreciated.


Regards,
Tore ML
DGrinev
Veteran
Posts: 1943
Liked: 247 times
Joined: Dec 01, 2016 3:49 pm
Full Name: Dmitry Grinev
Location: St.Petersburg
Contact:

Re: Veeam 9,5 and Cohesity

Post by DGrinev »

Hi Tore,

Could you please share more details about the infrastructure, and what the bottleneck was during the backup session?
That should help us determine which adjustments you can make to get better backup speed from the Veeam perspective. Thanks!
toreml
Expert
Posts: 103
Liked: 3 times
Joined: Sep 13, 2017 11:12 am
Full Name: Tore Mejlænder-Larsen
Contact:

Re: Veeam 9,5 and Cohesity

Post by toreml »

We have virtual servers, except for some repository servers with local disk, which are physical.
When we run backups to them we usually see 120+ MB/s.

Cohesity and the Veeam server are on the same VLAN. Some proxy servers have 10 Gb networking while others have 1 Gb, but we see no difference between the two.

We have had a remote session with Cohesity, but they can't see why we have this performance issue.
They say it looks like Veeam can't deliver enough data to the Cohesity storage.

We changed the target to a 16 TB local disk, and the backup speed increased from 15 MB/s to 28 MB/s.

Veeam reports that it's the target that is the bottleneck.


Tore
DGrinev
Veteran
Posts: 1943
Liked: 247 times
Joined: Dec 01, 2016 3:49 pm
Full Name: Dmitry Grinev
Location: St.Petersburg
Contact:

Re: Veeam 9,5 and Cohesity

Post by DGrinev »

Tore,

A bottleneck of "Target" means that the target disk writer component spent most of its time performing I/O to the backup files, as described in the bottleneck analysis section of the sticky thread.
Do you have a gateway server on the 10 Gb network for transferring VM data between the shared folder repository and the proxy server?
Please keep in mind that if there is no dedicated VM acting as the gateway server in your backup infrastructure, the backup server is automatically selected to act as the data mover for a CIFS repository. Thanks!
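To separate a slow target from a slow Veeam data mover, it can help to measure raw sequential write speed to the repository path outside of Veeam entirely. Below is a minimal, generic Python sketch (not a Veeam tool; `target_dir` is whatever path the gateway server sees for the share). If this probe also tops out around 15 MB/s, the share or network path is the limit rather than the Veeam components.

```python
import os
import time

def measure_write_throughput(target_dir, total_mb=256, chunk_mb=4):
    """Sequentially write total_mb of data into target_dir and return MB/s.

    A rough probe only: it measures raw sequential write speed, which is
    what Veeam's "Target" bottleneck metric is dominated by.
    """
    path = os.path.join(target_dir, "throughput_probe.tmp")
    chunk = os.urandom(chunk_mb * 1024 * 1024)  # incompressible data
    start = time.monotonic()
    try:
        with open(path, "wb") as f:
            for _ in range(total_mb // chunk_mb):
                f.write(chunk)
            f.flush()
            os.fsync(f.fileno())  # ensure the data actually hit the target
        elapsed = time.monotonic() - start
    finally:
        if os.path.exists(path):
            os.remove(path)
    return total_mb / elapsed
```

Run it from the gateway server against the mounted share, e.g. `measure_write_throughput(r"\\cohesity\veeam-repo")` (hypothetical path). Using incompressible random data matters here, since dedupe appliances can report misleadingly high speeds for zero-filled test files.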
jah2323
Novice
Posts: 6
Liked: never
Joined: Sep 27, 2012 7:22 pm
Full Name: Jason A Hassett
Contact:

Re: Veeam 9,5 and Cohesity

Post by jah2323 »

Brand new Cohesity cluster, same terrible "Target" bottleneck performance issue on 10G with Nexus switches. My Data Domains perform about 30x better.

I'm going to look into the gateway server, I guess, though I'm unsure since I haven't had a CIFS target before.
cbrasga
Influencer
Posts: 16
Liked: 2 times
Joined: Apr 27, 2013 2:09 am
Full Name: Cazi Brasga
Contact:

Re: Veeam 9,5 and Cohesity

Post by cbrasga »

Any update on this? We're looking at replacing our backup storage, and Cohesity is one of the top repository options we're considering. I want to make sure the performance issue has been resolved.
JMP
Lurker
Posts: 2
Liked: 1 time
Joined: Oct 19, 2015 10:37 am
Contact:

Re: Veeam 9,5 and Cohesity

Post by JMP » 1 person likes this post

We were also seeing backup speeds of 20-30 MB/s with SMB/CIFS on a 10 Gbit network. We then switched to NFS, and performance is now 50-110 MB/s per stream. Total speed is around 700 MB/s with all data reduction technologies on (Inline Erasure Coding, Inline Deduplication and Inline Compression). Veeam reports that the bottleneck is the network, so I guess we could get even better speeds with some tuning. My advice is to use NFS.
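As a quick sanity check, the aggregate figure above is consistent with the per-stream numbers if several streams run concurrently (a rough calculation, assuming roughly uniform streams):

```python
# How many concurrent streams does a 700 MB/s aggregate imply
# at the observed 50-110 MB/s per stream?
aggregate_mbps = 700
per_stream_low, per_stream_high = 50, 110

max_streams = aggregate_mbps / per_stream_low    # slowest streams -> most of them
min_streams = aggregate_mbps / per_stream_high   # fastest streams -> fewest

print(f"{min_streams:.1f} to {max_streams:.1f} concurrent streams")  # 6.4 to 14.0
```

In other words, roughly 7-14 parallel tasks would be needed to saturate that total, which is worth keeping in mind when comparing single-job numbers against aggregate ones.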
nitramd
Veteran
Posts: 297
Liked: 85 times
Joined: Feb 16, 2017 8:05 pm
Contact:

Re: Veeam 9,5 and Cohesity

Post by nitramd »

@toreml, what is your QoS policy set to? You could try using SSD to see if you get a performance increase; I don't think this will make much of an impact but maybe it's worth trying.
jah2323
Novice
Posts: 6
Liked: never
Joined: Sep 27, 2012 7:22 pm
Full Name: Jason A Hassett
Contact:

Re: Veeam 9,5 and Cohesity

Post by jah2323 »

Probably the biggest obstacle was antivirus on the gateway server. I completely rebuilt our Veeam environment, and that still didn't speed things up. Removing the antivirus (Symantec in our case) entirely from the gateway server (we were using the basic light server version of SEP) sped things up considerably. Synthetic fulls were still too long (though much faster). I am currently using incrementals and then running an active full on the weekend. For our 3 TB image server, the active full takes between 8 and 12 hours. I am now using Cohesity to send the snapshots over to our DR Cohesity cluster.

Also, the Cohesity support engineer and I could never get the NFS option to work, even after making a persistent connection through a Linux server created specifically for that purpose.

I am currently running a full VM restore (changed blocks only) for our Symantec server for a different reason (I tried moving the database; it never completed, or wouldn't move on, and it basically freaked out and won't recognize the new OR the old database). The speeds had me worried, as it took over 15 minutes to get going. It looks to be working fine now.


Though, not sure why it was unable to use "hot add":

5/30/2018 9:48:45 AM Preparing for virtual disks restore
5/30/2018 9:48:46 AM Using proxy INDY-VeeamPRX01.beaconcu.org for restoring disk Hard disk 1
5/30/2018 9:48:46 AM Using proxy INDY-VeeamPRX01.beaconcu.org for restoring disk Hard disk 2
5/30/2018 10:03:24 AM Unable to hot add target disk, failing over to network mode...
5/30/2018 10:21:04 AM Rolling back Hard disk 1 (200.0 GB) : 5.1 GB restored at 5 MB/s [hotadd]
5/30/2018 10:07:00 AM Unable to hot add target disk, failing over to network mode...
5/30/2018 10:07:29 AM Rolling back Hard disk 2 (200.0 GB) : 132.0 MB restored at 7 MB/s [hotadd]
jah2323
Novice
Posts: 6
Liked: never
Joined: Sep 27, 2012 7:22 pm
Full Name: Jason A Hassett
Contact:

Re: Veeam 9,5 and Cohesity

Post by jah2323 »

Hopefully I can figure out how to tweak the restore, or what the issue is with my proxy not wanting to hot add the disks. I was told that this Cohesity cluster would handle restores MUCH faster and better than our old Data Domain 2500s... it seems to be about the same speed, or slower without the hot add option.
Gostev
Chief Product Officer
Posts: 31460
Liked: 6648 times
Joined: Jan 01, 2006 1:01 am
Location: Baar, Switzerland
Contact:

Re: Veeam 9,5 and Cohesity

Post by Gostev » 1 person likes this post

Both Data Domain and Cohesity are deduplicating storage devices, so it's hard to expect great performance from either. This is why our universal recommendation is to use regular servers with a bunch of hard drives as backup targets. You will be blown away by backup and restore performance (and also by how much cheaper they are). It is always an eye-opener for those Veeam users who eventually give up on trying to have their backups go directly to dedupe storage and decide to give a regular server a try.
jtupeck
Enthusiast
Posts: 76
Liked: 22 times
Joined: Aug 27, 2013 3:44 pm
Full Name: Jason Tupeck
Contact:

Re: Veeam 9,5 and Cohesity

Post by jtupeck »

Looks like this thread hasn't been added to in quite some time, but it is VERY similar, if not identical, to what I experienced in a customer environment yesterday. An interesting note to add:

Running a backup directly to a single Cohesity node repository, without the SOBR wrapper around it, shows throughput in the 140 MB/s range. As soon as we put the SOBR construct in place, throughput drops to 20-30 MB/s. Cohesity's documentation (link below, if it can be accessed; it may need an account) suggests using the 'Performance' placement policy in the SOBR setup, which I have honestly never used before. I am planning to have the customer try 'Data Locality' to see if this setting is the cause of the performance degradation, and will report back when I know more. I wanted to reopen this discussion and see if there is any knowledge on Veeam's end about what happens in the SOBR construction that could cause a performance issue like this.


https://www.cohesity.com/resource-asset ... ory-en.pdf
jtupeck
Enthusiast
Posts: 76
Liked: 22 times
Joined: Aug 27, 2013 3:44 pm
Full Name: Jason Tupeck
Contact:

Re: Veeam 9,5 and Cohesity

Post by jtupeck »

Also, I found this secondary documentation from Cohesity showing that 'Data Locality' should be used for the SOBR setup. Could there be something in that switch that is causing the performance degradation? I am planning on setting up a SOBR with a single Cohesity extent, then testing it with both placement policy settings to see what happens. If there is no difference, then I will have them file a support case and see what we can figure out from there.
Gostev
Chief Product Officer
Posts: 31460
Liked: 6648 times
Joined: Jan 01, 2006 1:01 am
Location: Baar, Switzerland
Contact:

Re: Veeam 9,5 and Cohesity

Post by Gostev » 1 person likes this post

This option has no impact on the actual data movement. It has one job: to control SOBR extent selection by the scheduler. Specifically, it lets you control whether incremental backup files can be put on any SOBR extent, or whether they should always be placed with the associated full backup file. But once the target extent is selected for the processed machine, this option plays no further role.
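The distinction Gostev describes can be sketched as a toy selection routine. This is purely a conceptual model with hypothetical names, not Veeam's actual scheduler (which also weighs free space, extent availability, and so on):

```python
def pick_extent(extents, backup_type, full_backup_extent, policy):
    """Toy model of SOBR extent selection for one processed machine.

    extents: list of eligible (online) extent names.
    backup_type: "full" or "incremental".
    full_backup_extent: extent holding this chain's full backup, or None.
    policy: "data_locality" or "performance".

    Under Data Locality, incrementals stay with their full backup; under
    Performance, incrementals may land on any extent. Either way, the
    policy only influences WHICH extent is chosen -- it has no effect on
    how data moves once the extent is selected.
    """
    if (backup_type == "incremental"
            and policy == "data_locality"
            and full_backup_extent in extents):
        return full_backup_extent
    # Otherwise any eligible extent may be chosen; take the first here.
    return extents[0]
```

So if SOBR slows things down regardless of which policy is picked, the cause has to be elsewhere in the pipeline, since the policy's influence ends at extent selection.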
jtupeck
Enthusiast
Posts: 76
Liked: 22 times
Joined: Aug 27, 2013 3:44 pm
Full Name: Jason Tupeck
Contact:

Re: Veeam 9,5 and Cohesity

Post by jtupeck »

Thank you, @Gostev - that's kind of what I thought. Very strange that the SOBR wrapper seems to drop the performance. I am having them test a SOBR construct with a single Cohesity node extent to see if that makes any sort of difference. Will update as I know more. Thanks again!

Also, here is the link that somehow did not make it into my previous reply, in case it helps anyone else: https://support.cohesity.com/s/article/ ... ntegration