-
- Service Provider
- Posts: 880
- Liked: 164 times
- Joined: Aug 26, 2013 7:46 am
- Full Name: Bastiaan van Haastrecht
- Location: The Netherlands
- Contact:
StoreOnce best practices
Hello,
We are about to implement a StoreOnce 5100 system, and I'm reading through a lot of forum posts, blogs and HP white papers trying to scope the optimum setup. But none of the resources gives a complete overview of job settings, nor are they v9 compliant. So I have summarized my findings and would like to ask the community to shoot at them. The goal of these settings is the best transfer speed and dedupe ratio.
The ones in bold are the ones I would like feedback on; I've also summarized the whole list as a quick sketch right after it.
Repository settings:
- Add as deduplicating storage, as a Catalyst repository (with a Catalyst license you get source-side dedupe; if you don't have a Catalyst license you add it as a CIFS or NFS share, and dedupe is then done at the target)
- Align backup file data blocks: not selected (as StoreOnce uses a variable block size)
- Decompress before storing: not selected (decompression uses a lot of CPU, and as the backup files will look 95% like the previous ones you can leave compression on)
Backup or copy job settings:
- Backup: forward incremental (avoid the daily synthetic activity you get with reversed incremental)
- Backup: weekly synthetic full (you could do a monthly active full, but the logic in both Veeam and StoreOnce has improved enough to run synthetics)
- Copy: use GFS to build retention and avoid a long restore point chain (don't raise the restore point count too much; use GFS instead)
- Backup: health checks: disabled (only do them when you have room in your backup window; they are very intensive on a dedupe appliance)
- Enable inline data deduplication: disabled? (I can't find any good info on whether to disable it or not)
- Compression level: optimal (leave it on, as decompressing costs a lot of CPU power)
- Storage optimization: LAN target
- Encryption: disabled (if you enable this, deduplication will be less efficient)
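And the same list as a quick sketch, purely in my own shorthand (these are not Veeam's actual option or API names), so individual items are easier to quote and correct:

```python
# Sketch of the proposed settings, in my own shorthand (not Veeam option names).
proposed = {
    "repository": {
        "add_as": "deduplicating storage (Catalyst)",
        "align_backup_file_data_blocks": False,   # StoreOnce uses variable block size
        "decompress_before_storing": False,       # the point I'm least sure about
    },
    "backup_job": {
        "mode": "forward incremental",
        "synthetic_full": "weekly",
        "health_check": False,                    # heavy on a dedupe appliance
        "inline_data_deduplication": False,       # unsure about this one too
        "compression_level": "optimal",
        "storage_optimization": "LAN target",
        "encryption": False,                      # hurts dedupe efficiency
    },
    "backup_copy_job": {
        "gfs_retention": True,                    # prefer GFS over long chains
    },
}

for section, options in proposed.items():
    print(section)
    for name, value in options.items():
        print(f"  {name}: {value}")
```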
Edit: this is the only official document I've found: http://cloud-land.com/wp-content/upload ... -Veeam.pdf
Thanks in advance!
Bastiaan
======================================================
Veeam ProPartner, Service Provider and a proud Veeam Legend
-
- Veeam Software
- Posts: 21139
- Liked: 2141 times
- Joined: Jul 11, 2011 10:22 am
- Full Name: Alexander Fogelson
- Contact:
Re: StoreOnce best practices
b.vanhaastrecht wrote: - Decompress before storing: not selected (decompression uses a lot of CPU, and as the backup files will look 95% like the previous ones you can leave compression on)
Decompress should be enabled on the StoreOnce repository (and it is enabled automatically by default). It is recommended to send raw data to the deduplicating appliance, otherwise its deduplication capabilities will be impacted.
b.vanhaastrecht wrote: - Copy: use GFS to build retention and avoid a long restore point chain (don't raise the restore point count too much; use GFS instead)
In fact, the length of the chain on StoreOnce is limited to 7 restore points (both for regular backup and backup copy jobs).
b.vanhaastrecht wrote: - Enable inline data deduplication: disabled? (I can't find any good info on whether to disable it or not)
Disabled.
b.vanhaastrecht wrote: - Storage optimization: LAN target
Local target (16TB+).
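Putting the corrections next to your shorthand sketch (same informal names, not the actual option identifiers):

```python
# Corrections to the proposed settings, in the same informal shorthand as above.
corrections = {
    "decompress_before_storing": True,    # send raw data to the dedupe appliance
    "inline_data_deduplication": False,   # confirmed: keep it disabled
    "storage_optimization": "Local target (16TB+)",
    "max_restore_points_per_chain": 7,    # enforced on StoreOnce repositories
}
for option, value in corrections.items():
    print(f"{option}: {value}")
```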
-
- Veeam ProPartner
- Posts: 28
- Liked: 7 times
- Joined: Mar 16, 2011 8:36 am
- Full Name: Leonhard Kurz
- Contact:
[MERGED] Best Practices for HP Catalyst
Hi,
I have questions about using an HP StoreOnce attached via FC, using the Catalyst protocol with v9. The device will be used as a secondary repository for backup copy jobs. The manual states that if you use the StoreOnce/Catalyst with a standard backup job, you should use a 4k block size. With normal backup copy jobs, the block size of the original backup job and the backup copy job must be the same. Does this mean I have to configure a 4k block size for my primary backup jobs? And for the advanced job settings, do I have to enable/disable deduplication? I guess compression should be disabled. I know that when you configure the StoreOnce correctly, deduplication is used between the proxy with the StoreOnce agent and the StoreOnce device. I still wonder whether the dedupe setting in the job is generally ignored in this case. Same with compression. When you configure the repo for a Catalyst device, the repo is set to decompress. But what happens with the compression setting of the job? Is it ignored? Or, if enabled, is the proxy compressing and the StoreOnce agent decompressing (which would be insane)?
Does anyone have any answers?
Thanx
__Leo
-
- Veeam Software
- Posts: 21139
- Liked: 2141 times
- Joined: Jul 11, 2011 10:22 am
- Full Name: Alexander Fogelson
- Contact:
Re: [MERGED] Best Practices for HP Catalyst
LeoKurz wrote: Does this mean I have to configure a 4k block size for my primary backup jobs?
Yes, though a small correction: the option you're talking about uses a data block size of 4096 KB (4 MB).
LeoKurz wrote: And for the advanced job settings, do I have to enable/disable deduplication? I guess compression should be disabled.
For the job targeted at the deduplicating appliance (the backup copy job): inline deduplication disabled, compression level optimal (the repository should be set to decompress data prior to writing it to the repository anyway).
LeoKurz wrote: I know that when you configure the StoreOnce correctly, deduplication is used between the proxy with the StoreOnce agent and the StoreOnce device. I still wonder whether the dedupe setting in the job is generally ignored in this case.
Not ignored, if enabled. It is a separate process, not related to storage deduplication and not seriously affecting it, since it uses a much larger block size.
LeoKurz wrote: Same with compression. When you configure the repo for a Catalyst device, the repo is set to decompress. But what happens with the compression setting of the job? Is it ignored? Or, if enabled, is the proxy compressing and the StoreOnce agent decompressing (which would be insane)?
Compression applies to the data sent between the source repository (we are still talking about the backup copy job here) and the data mover running on the target repository gateway server ("the StoreOnce agent and the StoreOnce device"). There the data is decompressed prior to being written to the storage.
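If it helps, here is a rough sketch of that data path in Python, purely as an illustration of the logic described above (it is not how the data movers are actually implemented):

```python
import zlib

def source_data_mover(block: bytes, job_compression: bool) -> bytes:
    """Runs next to the source repository: compresses blocks if the copy job asks for it."""
    return zlib.compress(block) if job_compression else block

def gateway_data_mover(payload: bytes, repo_decompress: bool) -> bytes:
    """Runs on the target repository gateway server: with 'decompress before storing'
    enabled, it unpacks the data again so the Catalyst store receives raw blocks."""
    return zlib.decompress(payload) if repo_decompress else payload

raw_block = b"vm disk data " * 1000
on_the_wire = source_data_mover(raw_block, job_compression=True)   # smaller over the network
stored = gateway_data_mover(on_the_wire, repo_decompress=True)     # raw again for good dedupe
assert stored == raw_block
print(len(raw_block), len(on_the_wire), len(stored))
```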
-
- Veeam ProPartner
- Posts: 28
- Liked: 7 times
- Joined: Mar 16, 2011 8:36 am
- Full Name: Leonhard Kurz
- Contact:
Re: StoreOnce best practices
Exactly what I was looking for, thank you! (Block size: who cares about a factor of 1024...) For everyone else looking for this information, this is "LAN Target".
Cheers
__Leo
-
- Veeam Software
- Posts: 21139
- Liked: 2141 times
- Joined: Jul 11, 2011 10:22 am
- Full Name: Alexander Fogelson
- Contact:
Re: StoreOnce best practices
"Local target (16TB+)" actually.
-
- Veeam ProPartner
- Posts: 28
- Liked: 7 times
- Joined: Mar 16, 2011 8:36 am
- Full Name: Leonhard Kurz
- Contact:
Re: StoreOnce best practices
Of course, sorry. Friday, late in the afternoon...
-
- Lurker
- Posts: 2
- Liked: never
- Joined: Jun 07, 2016 1:24 pm
- Full Name: Kent Hutch
- Contact:
Re: StoreOnce best practices
Based on the fact that each chain is limited to 7 recovery points with StoreOnce + Catalyst, it is not a solution to use StoreOnce for long-term archiving, correct? I will have a server with faster local disk, then a StoreOnce in shared folder mode. Can I ask what the best practices are for configuring copy jobs to this setup?
-
- Veeam Software
- Posts: 21139
- Liked: 2141 times
- Joined: Jul 11, 2011 10:22 am
- Full Name: Alexander Fogelson
- Contact:
Re: StoreOnce best practices
Hutch46 wrote: Based on the fact that each chain is limited to 7 recovery points with StoreOnce + Catalyst, it is not a solution to use StoreOnce for long-term archiving, correct?
Not entirely correct, since this limitation concerns the length of the backup chain: a chain that contains one full backup and a set of subsequent incremental backups cannot be longer than 7 restore points on HPE StoreOnce + Catalyst. That is due to the low limit on simultaneously open files on StoreOnce and the fact that it does not support SHARED READ, while some restore operations require opening the entire backup chain. This limitation doesn't take effect if you're storing multiple fulls, which is the case for long-term archiving.
Here's another thread discussing similar concerns.
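To illustrate why it is the chain length that gets limited (a hypothetical sketch, not Veeam code; .vbk/.vib are simply the usual full/incremental file extensions):

```python
MAX_CHAIN_LENGTH = 7   # restore points per chain on StoreOnce Catalyst (1 full + 6 increments)

def files_opened_for_restore(increments_behind_full: int) -> int:
    """Restoring from the Nth increment needs the full backup (.vbk) plus
    every incremental file (.vib) up to and including that restore point."""
    return 1 + increments_behind_full

for n in range(MAX_CHAIN_LENGTH):
    print(f"restore point {n + 1}: {files_opened_for_restore(n)} file(s) open at once")

# Long-term archiving with multiple fulls (e.g. GFS) keeps every individual chain short,
# so the 7-point limit applies per chain, not to the total number of restore points kept.
```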
-
- Lurker
- Posts: 2
- Liked: never
- Joined: Jun 07, 2016 1:24 pm
- Full Name: Kent Hutch
- Contact:
Re: StoreOnce best practices
Cheers Thanks Mate