b.vanhaastrecht
Service Provider
Posts: 833
Liked: 154 times
Joined: Aug 26, 2013 7:46 am
Full Name: Bastiaan van Haastrecht
Location: The Netherlands
Contact:

StoreOnce best practices

Post by b.vanhaastrecht »

Hello,

We are about to implement a StoreOnce 5100 system, and I'm reading through a lot of forum posts, blogs and HP white papers trying to scope the optimal setup. But none of these resources gives a complete overview of the job settings, nor are they v9 compliant. So I have summarized my findings and would like to ask the community to shoot at them. The goal of these settings is the best transfer speed and dedupe ratio.

The ones marked in bold are the ones I would like feedback on.

Repository settings:
- Add as deduplicating storage, as a Catalyst repository (with a Catalyst license you get source-side dedupe; without a Catalyst license you add it as a CIFS or NFS share and dedupe is then done at the target)
- Align backup file data blocks: not selected (as StoreOnce uses a variable block size; see the sketch below this list)
- Decompress before storing: not selected (decompression uses a lot of CPU, and as each backup file will look ~95% like the previous one, you can leave compression on)
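
To illustrate the alignment point, here is a toy Python sketch I put together (my own illustration, nothing to do with StoreOnce internals): with fixed-size blocks, a small insertion shifts everything downstream and nothing matches any more, while a content-defined (variable block) chunker resynchronizes after the change, so there is no fixed boundary to align to in the first place.

Code: Select all

# Toy sketch only, not StoreOnce internals: fixed-block vs variable
# (content-defined) chunking when a few bytes are inserted at the front.
import hashlib
import os

def fixed_chunks(data, size=4096):
    return [data[i:i + size] for i in range(0, len(data), size)]

def variable_chunks(data, window=16, divisor=256, min_size=256):
    # Cut a chunk wherever a rolling sum over the last `window` bytes
    # hits the divisor - boundaries depend on content, not on offsets.
    chunks, start = [], 0
    for i in range(window, len(data)):
        if i - start < min_size:
            continue
        if sum(data[i - window:i]) % divisor == 0:
            chunks.append(data[start:i])
            start = i
    chunks.append(data[start:])
    return chunks

def unique_hashes(chunks):
    return {hashlib.sha256(c).digest() for c in chunks}

original = os.urandom(1024 * 1024)
shifted = os.urandom(7) + original   # 7 bytes inserted at the very start

for name, chunker in (("fixed blocks", fixed_chunks),
                      ("variable chunks", variable_chunks)):
    a = unique_hashes(chunker(original))
    b = unique_hashes(chunker(shifted))
    print(f"{name:16s} still identical after the insert: {len(a & b)} of {len(a)}")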

Backup or copy job settings:
- Backup: Forward incremental (avoids the daily rewriting of the full that reverse incremental causes)
- Backup: Weekly synthetic full (you could do a monthly active full instead, but the logic on both the Veeam and StoreOnce side has improved enough to run synthetics)
- Copy: Use GFS to build retention, avoid a long restore point chain (don't raise the restore point count too much, use GFS instead; see the retention sketch below this list)
- Backup: Health checks: disabled (only run them when you have room in your backup window; they are very intensive on a dedupe appliance)
- Enable inline data deduplication: disabled? (can't find any good info on whether to disable it or not)
- Compression level: Optimal (leave it on; decompressing costs a lot of CPU power)
- Storage optimization: LAN target
- Encryption: disabled (if you enable it, deduplication will be less efficient)
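
To make the GFS point concrete, here is a back-of-the-envelope sketch with numbers I picked myself (not from any HP or Veeam document): a short daily chain plus GFS fulls on the copy job reaches much further back than stretching the simple restore point count, while every individual chain stays short.

Code: Select all

# Back-of-the-envelope sketch with example numbers of my own: how far back a
# copy job reaches with a short daily chain plus GFS fulls, versus only
# raising the simple restore point count.
def coverage_days(daily_points, weekly_fulls, monthly_fulls, yearly_fulls):
    return {
        "daily chain": daily_points,            # one restore point per day
        "weekly GFS fulls": weekly_fulls * 7,
        "monthly GFS fulls": monthly_fulls * 30,
        "yearly GFS fulls": yearly_fulls * 365,
    }

# Example: 7 daily points, 4 weekly, 12 monthly and 3 yearly GFS fulls.
for tier, days in coverage_days(7, 4, 12, 3).items():
    print(f"{tier:18s} reaches back ~{days:4d} days")
# Each GFS full is a separate, self-contained file, so no chain ever grows
# past the 7 daily points while the archive still goes back years.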

Edit: this is the only official document I've found: http://cloud-land.com/wp-content/upload ... -Veeam.pdf

Thanks in advance !
Bastiaan
======================================================
Veeam ProPartner, Service Provider and a proud Veeam Legend
foggy
Veeam Software
Posts: 21069
Liked: 2115 times
Joined: Jul 11, 2011 10:22 am
Full Name: Alexander Fogelson
Contact:

Re: StoreOnce best practices

Post by foggy »

b.vanhaastrecht wrote:- Decompress before storing: not selected (decompression uses a lot of CPU, and as each backup file will look ~95% like the previous one, you can leave compression on)
Decompress should be enabled on the StoreOnce repository (and it is enabled automatically by default). It is recommended to send raw data to the deduplicating appliance, otherwise its deduplication capabilities will be impacted (see the sketch at the end of this post).
b.vanhaastrecht wrote:- Copy: Use GFS to build retention, avoid a long restore point chain (don't raise the restore point count too much, use GFS instead)
In fact, the length of the chain on StoreOnce is limited to 7 restore points (both for regular backup and backup copy jobs).
b.vanhaastrecht wrote:- Enable inline data deduplication: disabled? (can't find any good info on whether to disable it or not)
Disabled.
b.vanhaastrecht wrote:- Storage optimization: LAN target
Local target (16TB+...)
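
On the decompress point above, a toy Python sketch (my own illustration, not the actual Catalyst data path) of why raw data dedupes and pre-compressed data does not: two backups that share a large identical region dedupe nicely as raw blocks, but once each stream is compressed as a whole, the shared region no longer shows up as identical blocks for a downstream block-level dedupe to find.

Code: Select all

# Toy illustration only: identical data hidden inside two compressed streams
# no longer looks identical block by block to a dedupe appliance.
import hashlib
import os
import zlib

def block_hashes(data, size=4096):
    return {hashlib.sha256(data[i:i + size]).digest()
            for i in range(0, len(data), size)}

common = os.urandom(512 * 1024)               # data present in both backups
backup_a = os.urandom(4 * 1024) + common + os.urandom(256 * 1024)
backup_b = os.urandom(8 * 1024) + common + os.urandom(256 * 1024)

for label, a, b in (
    ("raw", backup_a, backup_b),
    ("pre-compressed", zlib.compress(backup_a), zlib.compress(backup_b)),
):
    ha, hb = block_hashes(a), block_hashes(b)
    print(f"{label:15s} blocks shared between the two backups: {len(ha & hb)}")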
LeoKurz
Veeam ProPartner
Posts: 28
Liked: 7 times
Joined: Mar 16, 2011 8:36 am
Full Name: Leonhard Kurz
Contact:

[MERGED] Best Practices for HP Catalyst

Post by LeoKurz »

Hi,

I have questions about using an HP StoreOnce attached via FC using the Catalyst protocol with v9. The device shall be used as a secondary repository for backup copy jobs. The manual states that if you use StoreOnce / Catalyst with a standard backup job, you should use a 4k block size. With normal backup copy jobs, the block size of the original backup job and the backup copy job must be the same. Does this mean I have to configure a 4k block size for my primary backup jobs? And for the advanced job settings, do I have to enable or disable deduplication? I guess compression should be disabled.

I know that when you configure StoreOnce correctly, deduplication is used between the proxy with the StoreOnce Agent and the StoreOnce Device. I still wonder whether the dedupe setting in the job is generally ignored or not in this case. Same with compression: when you configure the repo for a Catalyst device, the repo is set to decompress. But what happens with the compression setting of the job? Ignored? Or, if enabled, is the proxy compressing and the StoreOnce Agent decompressing (which would be insane :-) )?

Anyone any answers?

Thanx
__Leo
foggy
Veeam Software
Posts: 21069
Liked: 2115 times
Joined: Jul 11, 2011 10:22 am
Full Name: Alexander Fogelson
Contact:

Re: [MERGED] Best Practices for HP Catalyst

Post by foggy »

LeoKurz wrote:Does this mean I have to configure a 4k block size for my primary backup jobs?
Yes, though a small correction: the option you're talking about uses a data block size of 4096 KB (4 MB), not 4 KB (see the quick reference after this post).
LeoKurz wrote:And for the advanced job settings, do I have to enable or disable deduplication? I guess compression should be disabled.
For a job targeted at a deduplicating appliance (backup copy): inline deduplication disabled, compression level Optimal (the repository should be set to decompress data prior to writing it to the repository anyway).
LeoKurz wrote:I know that when you configure StoreOnce correctly, deduplication is used between the proxy with the StoreOnce Agent and the StoreOnce Device. I still wonder whether the dedupe setting in the job is generally ignored or not in this case.
Not ignored, if enabled. It is another process, not related to storage deduplication and not seriously affecting it, since it uses a much larger block size.
LeoKurz wrote:Same with compression: when you configure the repo for a Catalyst device, the repo is set to decompress. But what happens with the compression setting of the job? Ignored? Or, if enabled, is the proxy compressing and the StoreOnce Agent decompressing (which would be insane :-) )?
Compression applies to the data sent between the source repository (we are still talking about the backup copy job here) and the data mover running on the target repository gateway server ("the StoreOnce Agent and the StoreOnce Device"). There the data is decompressed prior to being written to the storage.
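
For anyone else landing here, a quick reference for the storage optimization options and the source data block sizes as I understand the v9 defaults (from memory, so double-check against your own console):

Code: Select all

# "Storage optimization" choice -> source data block size, as I understand
# the v9 defaults (from memory - verify in your own B&R console).
BLOCK_SIZE_KB = {
    "WAN target": 256,
    "LAN target": 512,
    "Local target": 1024,
    "Local target (16TB+)": 4096,   # 4096 KB = 4 MB, the size meant above
}

for target, kb in BLOCK_SIZE_KB.items():
    print(f"{target:22s} {kb:5d} KB  ({kb / 1024:g} MB)")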
LeoKurz
Veeam ProPartner
Posts: 28
Liked: 7 times
Joined: Mar 16, 2011 8:36 am
Full Name: Leonhard Kurz
Contact:

Re: StoreOnce best practices

Post by LeoKurz »

Exactly what I was looking for, thank you! (Block size: who cares about a factor of 1024... :oops: ) For all others looking for this information, this is "LAN target".

Cheers
__Leo
foggy
Veeam Software
Posts: 21069
Liked: 2115 times
Joined: Jul 11, 2011 10:22 am
Full Name: Alexander Fogelson
Contact:

Re: StoreOnce best practices

Post by foggy »

"Local target (16TB+)" actually.
LeoKurz
Veeam ProPartner
Posts: 28
Liked: 7 times
Joined: Mar 16, 2011 8:36 am
Full Name: Leonhard Kurz
Contact:

Re: StoreOnce best practices

Post by LeoKurz »

Of course, sorry. Friday, late in the afternoon...
Hutch46
Lurker
Posts: 2
Liked: never
Joined: Jun 07, 2016 1:24 pm
Full Name: Kent Hutch
Contact:

Re: StoreOnce best practices

Post by Hutch46 »

Based on the fact that each chain is limited to 7 recovery points with StoreOnce + Catalyst, it is not a solution to use StoreOnce for long-term archiving, correct? I will have a server with fast local disk first, then a StoreOnce in shared folder mode. Can I ask what the best practices are for configuring copy jobs to this setup?
foggy
Veeam Software
Posts: 21069
Liked: 2115 times
Joined: Jul 11, 2011 10:22 am
Full Name: Alexander Fogelson
Contact:

Re: StoreOnce best practices

Post by foggy »

Hutch46 wrote:Based on the fact that each chain is limited to 7 recovery points with StoreOnce + Catalyst, it is not a solution to use StoreOnce for long-term archiving, correct?
Not entirely correct, since this limitation concerns the length of a single backup chain: a chain consisting of one full backup and a set of subsequent incremental backups cannot be longer than 7 restore points on HPE StoreOnce + Catalyst. That is due to the low limit on simultaneously open files on StoreOnce and the fact that it does not support SHARED READ, while some restore operations require opening the entire backup chain (see the sketch at the end of this post). This limitation doesn't come into play if you're storing multiple fulls, which is the case for long-term archiving.

Here's another thread discussing similar concerns.
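
To put a number on it, a back-of-the-envelope sketch (illustrative only, the exact open-file limit depends on the StoreOnce model): restoring the newest point of a forward incremental chain means opening the full plus every increment after it, so the file count grows with the chain length, while periodic fulls keep every chain at 7 points or fewer.

Code: Select all

# Back-of-the-envelope sketch: how many backup files a restore has to open
# for the newest point of a forward incremental chain of a given length.
# (Illustrative only - the exact open-file limit depends on the StoreOnce model.)
def files_opened(chain_length):
    # 1 full backup plus every subsequent increment in the chain.
    return 1 + (chain_length - 1)

for chain_length in (7, 14, 30):
    print(f"chain of {chain_length:2d} restore points -> "
          f"up to {files_opened(chain_length)} files open during a restore")
# With a full in every chain (e.g. weekly or GFS fulls), no chain exceeds
# 7 points, so the open-file limit is never hit no matter how far back
# the archive goes.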
Hutch46
Lurker
Posts: 2
Liked: never
Joined: Jun 07, 2016 1:24 pm
Full Name: Kent Hutch
Contact:

Re: StoreOnce best practices

Post by Hutch46 »

Cheers Thanks Mate