ita-drift@ucl.dk
Lurker
Posts: 2
Liked: never
Joined: Dec 18, 2018 10:32 am

Start using HP StoreOnce as Scale-out Repository.

Post by ita-drift@ucl.dk »

Today we use HP 3PAR both for the datastores holding the VMs in our vCenter and for the Veeam backup repositories receiving backups from those VMs. A few proxies run on those datastores too. In general, backups are incremental with one full backup per week. Each backup job has a secondary target configured, which is a backup copy job with the HP StoreOnce Catalyst 4700 as the target. Four Catalyst stores are defined on the StoreOnce.
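
For reference, the four Catalyst stores show up in Veeam as ordinary backup repositories and can be inventoried by script — a minimal sketch, assuming a B&R version with the native REST API (v11 or later, default port 9419); the server name and credentials below are placeholders:

Code: Select all

# Minimal sketch: list the repositories B&R currently knows about, so the
# Catalyst stores can be identified before building a scale-out repo.
# Assumes the native B&R REST API (v11+, port 9419); names are placeholders.
import requests

BASE = "https://vbr-server.example:9419"   # hypothetical B&R server
HEADERS = {"x-api-version": "1.0-rev1"}

# Obtain an OAuth2 token from the B&R REST API.
token = requests.post(
    f"{BASE}/api/oauth2/token",
    headers=HEADERS,
    data={"grant_type": "password",
          "username": "DOMAIN\\veeam-admin",   # placeholder account
          "password": "***"},
    verify=False,   # lab convenience only; use proper certificates
).json()["access_token"]

AUTH = {**HEADERS, "Authorization": f"Bearer {token}"}

# Enumerate repositories; each Catalyst store appears as its own entry.
repos = requests.get(f"{BASE}/api/v1/backupInfrastructure/repositories",
                     headers=AUTH, verify=False).json()["data"]
for repo in repos:
    print(repo.get("id"), repo.get("name"), repo.get("type"))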

We would like to see whether we could make better use of that StoreOnce by using it as a scale-out repository.

How should I proceed?

Best Regards
Peter Rasch Lageri
UCL
Denmark.
foggy
Veeam Software
Posts: 21071
Liked: 2115 times
Joined: Jul 11, 2011 10:22 am
Full Name: Alexander Fogelson

Re: Start using HP StoreOnce as Scale-out Repository.

Post by foggy »

Hi Peter, yes, you can create a scale-out repository from StoreOnce Catalyst stores. To increase the data deduplication ratio and to allow for virtual synthetics, you would need to configure it to use the Data locality placement policy.
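
For anyone scripting this step, a rough sketch of the same operation over the REST API — the scale-out repository endpoint is the documented one, but the request body below is an approximation from memory and should be checked against your server's Swagger page before use:

Code: Select all

# Rough sketch: create a scale-out repository with the Data locality
# placement policy via the B&R REST API. The endpoint path is documented;
# the body schema is an approximation -- verify it in Swagger first.
import requests

BASE = "https://vbr-server.example:9419"   # hypothetical B&R server
AUTH = {"x-api-version": "1.0-rev1",
        "Authorization": "Bearer <token>"}   # token from the oauth2 step

# IDs of the Catalyst-store repositories to add as extents
# (placeholders -- take them from the repositories listing).
extent_ids = ["<catalyst-store-id-1>", "<catalyst-store-id-2>"]

body = {
    "name": "SOBR-StoreOnce",   # hypothetical name
    "description": "Scale-out repo over StoreOnce Catalyst stores",
    "performanceTier": {
        "performanceExtents": [{"id": i} for i in extent_ids],
    },
    # Data locality keeps a VM's full and its increments on the same
    # extent, which is what enables dedupe and virtual synthetics here.
    "placementPolicy": {"type": "DataLocality"},
}

resp = requests.post(
    f"{BASE}/api/v1/backupInfrastructure/scaleOutRepositories",
    headers=AUTH, json=body, verify=False)
resp.raise_for_status()
print("Created:", resp.json())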
ita-drift@ucl.dk
Lurker
Posts: 2
Liked: never
Joined: Dec 18, 2018 10:32 am

Re: Start using HP StoreOnce as Scale-out Repository.

Post by ita-drift@ucl.dk »

The StoreOnce already has stores defined on it that are used for the backup copy jobs. Do I just define a new store and let that be used for the scale-out repository, or do I need to do something with the four existing stores before I can use the StoreOnce as a scale-out repository?
foggy
Veeam Software
Posts: 21071
Liked: 2115 times
Joined: Jul 11, 2011 10:22 am
Full Name: Alexander Fogelson

Re: Start using HP StoreOnce as Scale-out Repository.

Post by foggy »

You just create a new scale-out repository and add the existing repositories as extents. Veeam B&R will then suggest re-pointing the existing jobs to the new scale-out repo.
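
If you would rather re-point the jobs by script than accept the wizard's prompt, a hedged sketch along the same REST lines — GET/PUT on /api/v1/jobs are documented endpoints, but the exact field path for the target repository is from memory and should be verified against the job model on your version:

Code: Select all

# Hedged sketch: point existing backup jobs at the new scale-out repo via
# the REST API instead of the wizard prompt. GET /api/v1/jobs and
# PUT /api/v1/jobs/{id} are documented; the "storage.backupRepositoryId"
# field path is assumed -- verify it in Swagger for your version.
import requests

BASE = "https://vbr-server.example:9419"   # hypothetical B&R server
AUTH = {"x-api-version": "1.0-rev1",
        "Authorization": "Bearer <token>"}

SOBR_ID = "<new-sobr-id>"   # ID returned by the creation call

jobs = requests.get(f"{BASE}/api/v1/jobs",
                    headers=AUTH, verify=False).json()["data"]
for job in jobs:
    storage = job.get("storage", {})
    # Assumed field path; adjust to the actual job model if it differs.
    if storage.get("backupRepositoryId") not in (None, SOBR_ID):
        storage["backupRepositoryId"] = SOBR_ID
        r = requests.put(f"{BASE}/api/v1/jobs/{job['id']}",
                         headers=AUTH, json=job, verify=False)
        r.raise_for_status()
        print("Re-pointed:", job.get("name"))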