PowerShell script exchange
Hirosh
Enthusiast
Posts: 76
Liked: 2 times
Joined: Dec 24, 2022 5:19 am
Full Name: Hirosh Arya
Contact:

Veeam+Storeonce

Post by Hirosh »

Hi guys,

I'm planning to create multiple repositories inside my Catalyst store on my StoreOnce. The HPE reference configuration guide highlights that we must use Veeam PowerShell to do so, and in the example command it uses the StoreOnce FQDN instead of the IP. I was wondering whether using the FQDN is mandatory, or whether we can stick to the IP instead?

P.S.: Is it necessary to create an FQDN for the StoreOnce if we are using it with Catalyst over Fibre Channel (COFC) only?

regards,
LB.
Regnor
VeeaMVP
Posts: 947
Liked: 292 times
Joined: Jan 31, 2011 11:17 am
Full Name: Max
Contact:

Re: Veeam+Storeonce

Post by Regnor »

If you access the StoreOnce via FC then you don't add the device via IP/FQDN. Instead you use the FC identifier.
Why do you actually want to add multiple repositories for the same Catalyst Store?
Hirosh
Enthusiast
Posts: 76
Liked: 2 times
Joined: Dec 24, 2022 5:19 am
Full Name: Hirosh Arya
Contact:

Re: Veeam+Storeonce

Post by Hirosh »

Hi Regnor,

I'm planning to create multiple repositories inside a single Catalyst store through Veeam PowerShell, as recommended:

Add-VBRBackupRepository -Name <repository_name> -Folder storeonce://<storeonce_fqdn>:<objectstore_name>@/<subfolder_name> -Type HPStoreOnceIntegration -StoreOnceServerName <storeonce_fqdn> -UserName <user> -Password <password>

As you can see, the command asks for an FQDN, and this Veeam KB article (https://www.veeam.com/kb2987) dissects the command and also specifies using the FQDN. For the StoreOnce to have an FQDN it needs to be joined to the domain, which we have no use for and have not done, since we are using the StoreOnce over COFC only. I wanted to know whether we can use the IP address instead of the FQDN in the above command?


regards
LB.
Regnor
VeeaMVP
Posts: 947
Liked: 292 times
Joined: Jan 31, 2011 11:17 am
Full Name: Max
Contact:

Re: Veeam+Storeonce

Post by Regnor »

I do understand what you're trying to achieve, I just would like to know why. What is the advantage of having multiple repositories pointing to the same Catalyst store?

Anyway, instead of <storeonce_fqdn> you type in the FC identifier, which starts with "COFC-". In other words, for FC-connected stores (Catalyst over Fibre Channel), replace the StoreOnce name with the COFC identifier.
Hirosh
Enthusiast
Posts: 76
Liked: 2 times
Joined: Dec 24, 2022 5:19 am
Full Name: Hirosh Arya
Contact:

Re: Veeam+Storeonce

Post by Hirosh »

Hi regnor,

In the following best practice published by HPE (https://www.hpe.com/psnow/doc/a00023056enw), the benefits are:

"Better deduplication—Each Catalyst Store is an independent deduplication domain. To enable cross-deduplication among multiple
backup repositories, it is possible to create them inside the same Catalyst Store. This is useful when we backup similar data to different
backup repositories.

• Catalyst Copy granularity—As described in the Veeam-managed HPE StoreOnce Catalyst Copy job section, the Veeam Catalyst Copy job
copies the contents of an entire backup repository to other HPE StoreOnce appliances. When multiple jobs write to the same backup
repository, the Catalyst Copy job will replicate the backup data of all backup jobs. There are situations where it is necessary to tailor the
replication parameters to the systems protected by specific jobs. This configuration requires multiple backup repositories—potentially one
per job—and it can be useful to have them in the same Catalyst Store to get better deduplication.

• Migration—Veeam provides an easy methodology for migrating entire backup repositories to new storage platforms. (See the Migrating
Veeam backup repositories to/from an HPE StoreOnce Catalyst Store section for details.) If a storage platform becomes full, and you want
to migrate a subset to new storage, then a solution design based on multiple backup repositories offers more flexibility than a solution
based on a single large backup repository.


• Manual workload balancing—Starting with Veeam Backup & Replication Version 12, manual workload balancing is generally not
necessary. Veeam Backup & Replication version 12 has an effective dynamic load balancing mechanism that can use multiple gateways to
concurrently access the same backup repository. Veeam Backup & Replication will try to use the gateway service running on the same
server running the proxy service. This optimization is useful to better distribute the workload for VMs inside the same job, assigning them
to different proxies and gateways, and at the same time, to avoid an extra hop in LAN that would occur when the proxy and gateway are
on different servers. If a manual workload balancing is required, it is possible to configure a static balancing, defining which proxy will run a job and which gateway will manage the backup repository. In this situation, it is important to assign the proxy and gateway services to the same server. This configuration requires multiple backup repositories, potentially one per job, and it can be useful to have them in the same Catalyst Store to get better deduplication"

So in the command, instead of using <storeonce_fqdn>, I use the FC identifier (COFC-), just like when adding the repository in the Veeam console?

regards,
LB.
Regnor
VeeaMVP
Posts: 947
Liked: 292 times
Joined: Jan 31, 2011 11:17 am
Full Name: Max
Contact:

Re: Veeam+Storeonce

Post by Regnor » 1 person likes this post

Yes that's right; you should add the Catalyst store with COFC-XXX instead of the FQDN.
Hirosh
Enthusiast
Posts: 76
Liked: 2 times
Joined: Dec 24, 2022 5:19 am
Full Name: Hirosh Arya
Contact:

Re: Veeam+Storeonce

Post by Hirosh »

Just to confirm we are talking about the same approach: you mean we can replace the FQDN with COFC-XXX in the command below to create the repository via Veeam PowerShell, right?

Add-VBRBackupRepository -Name <repository_name> -Folder storeonce://<storeonce_fqdn>:<objectstore_name>@/<subfolder_name> -Type HPStoreOnceIntegration -StoreOnceServerName <storeonce_fqdn> -UserName <user> -Password <password>


regards,
LB.
david.domask
Veeam Software
Posts: 1315
Liked: 344 times
Joined: Jun 28, 2016 12:12 pm
Contact:

Re: Veeam+Storeonce

Post by david.domask » 2 people like this post

Just my input @Hirosh: I disagree with quite a few of HPE's points in that article, and I think they're confusing the convenience of these operations _for HPE_ with your convenience as an administrator :)

The article from HPE breaks down the proposed benefits of using the Powershell hack into these categories:

1. Better deduplication on the storeonce
2. Granularity for Catalyst Copies
3. Moving data between storages (data migration)
4. Workload balancing across gateways

Category 1 is questionable to me, as the same can be achieved with a stock Catalyst Store without the PowerShell hack; it's not inherently better or worse unless you only consider creating additional Catalyst stores as a means of management, which isn't really needed. I don't consider this point all that relevant, as a vanilla Catalyst store ingesting your backups will do exactly the same.

In fact, I think the multiple-repositories-per-Catalyst-store approach is more about Category 2: granular Catalyst copies. While I agree this is useful, and it would be quite convenient if Catalyst Copy supported specific items, we can already do this with normal Backup Copies, sometimes exceeding the performance of Catalyst Copy; while this adds some workload, it's still quite viable and is frequently used by many of our clients with StoreOnce devices.

Category 3 I think misses the fact that we have Backup Move in v12; HPE's article is a little out of date, as it mentions that the only way to move the data out would be Evacuate, but v12 handles this much more granularly now with Backup Move.

Category 4 also shows a bit of older information, and I'm not quite sure how multiple repositories affect the statement HPE is making there :) With gateway pools in v12 and setting proxies per job, I think it would work out pretty well without multiple repositories.

So I'm not trying to tell you "absolutely don't do this", but I'm not personally convinced of the benefits of this strategy.

Your last command is correct; use the COFC address exactly as it is shown in the StoreOnce UI for the Catalyst store, and it should work.
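For illustration only (hypothetical repository, store, and credential names; take the real COFC identifier from the StoreOnce UI), the command would then look something like:

Add-VBRBackupRepository -Name "SO-Repo01" -Folder "storeonce://COFC-xxxxxxxxxxxx:CatalystStore01@/Repo01" -Type HPStoreOnceIntegration -StoreOnceServerName "COFC-xxxxxxxxxxxx" -UserName "veeamuser" -Password "password"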
David Domask | Product Management: Principal Analyst
Hirosh
Enthusiast
Posts: 76
Liked: 2 times
Joined: Dec 24, 2022 5:19 am
Full Name: Hirosh Arya
Contact:

Re: Veeam+Storeonce

Post by Hirosh »

Hi Guys,

I have created a Catalyst store (200 TB) inside the StoreOnce, created 2 repositories inside that store, and added them to the Veeam backup repositories through PowerShell. One thing bothers me: in the PowerShell command used to create the repositories, there was no parameter to set the SIZE. Now, inside the Veeam backup repositories, each of the repositories I created shows 200 TB, which means 400 TB combined, even though they sit in a single 200 TB store. I am not sure how Veeam would react to or handle this; it looks like overprovisioning. Should I be worried that the used capacity might try to surpass the real Catalyst store capacity?


regards,
LB.
Regnor
VeeaMVP
Posts: 947
Liked: 292 times
Joined: Jan 31, 2011 11:17 am
Full Name: Max
Contact:

Re: Veeam+Storeonce

Post by Regnor »

I wouldn't say that this is a big issue, as your retention and the amount of backed up data will define the storage consumption, not the available disk space on the repositories.
But if you want to be on the safe side, you could set a quota on the StoreOnce inside the Catalyst store configuration. I'm not sure how Veeam will react if you, for example, set the quota to 190 TB, but it may be worth a try to prevent completely filling the system.
Hirosh
Enthusiast
Posts: 76
Liked: 2 times
Joined: Dec 24, 2022 5:19 am
Full Name: Hirosh Arya
Contact:

Re: Veeam+Storeonce

Post by Hirosh »

Hi Regnor,

Thank you for your comments.

Setting a quota is only possible on the StoreOnce (physical/logical). I have already set a physical limit, which prevents the client (including deduplication) from using the whole capacity of the StoreOnce. The logical quota, however, is a means to limit how much data is stored before deduplication, which does not help in this scenario. I'm looking for a way to set a limit on how much data is stored after deduplication. Can anyone from the software team share their comments with us?

regards,
LB.
Regnor
VeeaMVP
Posts: 947
Liked: 292 times
Joined: Jan 31, 2011 11:17 am
Full Name: Max
Contact:

Re: Veeam+Storeonce

Post by Regnor »

I hope I don't misunderstand you, but this is what Physical Storage Quota is meant for:
The quota for the amount of data written to disk after deduplication.
Hirosh
Enthusiast
Posts: 76
Liked: 2 times
Joined: Dec 24, 2022 5:19 am
Full Name: Hirosh Arya
Contact:

Re: Veeam+Storeonce

Post by Hirosh »

@regnor that is true. OK, let's change the point of view.

I have a single Catalyst store with 2 repositories inside it. The repositories are added in Veeam, where the backup jobs are defined and managed. The physical quota I set on the StoreOnce is 200 TB, and in Veeam the capacity shown for each repository is 200 TB. I want to manage the quota inside Veeam, where the backups are created, so that the Veeam operator is aware when capacity is running out. Otherwise, with each repository showing 200 TB, the operator may set up jobs to store around 200 TB of backup data on repository A and another 200 TB on repository B; that way both jobs can fail, which is the scenario I'm trying to avoid.

regards,
LB.
Regnor
VeeaMVP
Posts: 947
Liked: 292 times
Joined: Jan 31, 2011 11:17 am
Full Name: Max
Contact:

Re: Veeam+Storeonce

Post by Regnor »

Unfortunately there's no way to control that inside Veeam. Therefore you'll have to make sure everyone's aware of the real capacity.
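As a workaround for visibility only, a short report can at least show the operator what Veeam caches for each repository, so the combined usage can be compared with the real store capacity. This is a hedged sketch: it assumes the repository object still exposes GetContainer() with CachedTotalSpace/CachedFreeSpace, as used in common community scripts; verify the property names against your VBR version.

# Hedged sketch: list the capacity Veeam caches for each backup repository.
# Both StoreOnce repositories will report the same underlying store capacity,
# so the operator must still track combined usage against the physical quota.
Get-VBRBackupRepository | ForEach-Object {
    $space = $_.GetContainer()   # assumption: exposes cached space values
    [PSCustomObject]@{
        Repository = $_.Name
        TotalTB    = [math]::Round($space.CachedTotalSpace.InBytes / 1TB, 2)
        FreeTB     = [math]::Round($space.CachedFreeSpace.InBytes / 1TB, 2)
    }
} | Format-Table -AutoSize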
mitchellm3
Influencer
Posts: 10
Liked: 8 times
Joined: Apr 12, 2016 8:08 pm
Contact:

Re: Veeam+Storeonce

Post by mitchellm3 » 1 person likes this post

Hirosh wrote: Jul 04, 2023 6:09 am
[quoting the HPE best-practice excerpt and the COFC question from the earlier post above]
I would rethink this approach. It does not take into consideration the background processes required to achieve your great dedupe and compression ratios. We break our days into 1/3 chunks: you need 8 hours to complete all your backups, 8 hours to replicate to your secondary targets, and then 8 hours for background housekeeping processes. If your housekeeping does not finish in time, the store will get larger and larger, performance will degrade, and you'll end up breaking things out into multiple stores. We keep all our Catalyst stores at 50 TB max. At 25:1 that is still holding 1.25 PB of data per store. Yes, your dedupe/compression ratios will vary per store, but don't overthink it. It will be great... unless you sized the appliance expecting a certain ratio that you must hit.
Hirosh
Enthusiast
Posts: 76
Liked: 2 times
Joined: Dec 24, 2022 5:19 am
Full Name: Hirosh Arya
Contact:

Re: Veeam+Storeonce

Post by Hirosh »

@mitchellm3 It would take a while until we reach 200 TB, so the store capacity is not the concern here. The concern is that Veeam shows the size of every repository created inside the store as the whole capacity of the store (over-provisioning), so it would not be a bad idea to consider adding a feature that warns the user when the capacity of the repository is running out. Anyway, thank you for your comments, I will definitely take them into account. Could you elaborate on why you would need 8 hours for background housekeeping?

regards,
LB.
mitchellm3
Influencer
Posts: 10
Liked: 8 times
Joined: Apr 12, 2016 8:08 pm
Contact:

Re: Veeam+Storeonce

Post by mitchellm3 » 2 people like this post

The housekeeping process is required to delete data and make sure things are deduplicated. The newer versions of StoreOnce with SSDs for the metadata have greatly improved speed, but they still require the process. So if you want 90 days of backups, after 90 days the data will have to be purged: the StoreOnce will mark it for deletion and then clean it up with housekeeping. There are dedupe/compression processes happening as well. If lots of data comes in on one store and also needs to be purged, that can overwhelm the housekeeping process. And if the system is doing housekeeping while backups are running, your backups will start to slow down, which in turn slows down housekeeping. You can see how that can cause issues.

So we do one backup job per Catalyst store; that's also how we name the stores. Housekeeping is done on a per-store basis, so if one store is struggling with cleanup, the others generally won't be affected. It doesn't happen all the time, but with the way we spread things out, we avoid most of those problems.
FedericoV
Technology Partner
Posts: 35
Liked: 37 times
Joined: Aug 21, 2017 3:27 pm
Full Name: Federico Venier
Contact:

Re: Veeam+Storeonce

Post by FedericoV » 1 person likes this post

Hello Team,
Sorry to be late adding comments. Please (everyone) send me an email when you see something where I can help: Federico.Venier@HPE.com.

Housekeeping
On the new firmware (>4.3.0), housekeeping is much faster than before, and I have never seen it be an issue.
The developers decided to completely remove the old concept of the blackout window, which was used to define the period when housekeeping was not permitted to run and, as a consequence, to give full performance to backup. It was replaced by automated housekeeping throttling based on the system's production workload, which gives priority to backup processes and gives housekeeping only the remaining bandwidth.
If the housekeeping backlog starts growing, the throttling algorithm progressively gives more priority to housekeeping, reducing backup/restore performance.
I was told that the housekeeping process is about 3x faster than before, so in normal situations the throttling will never kick in.

StoreOnce global capacity and Catalyst Stores
StoreOnce doesn't partition the total capacity. The free capacity belongs to a single common pool: every Catalyst Store takes capacity from that pool, and housekeeping gives it back. For this reason, the free capacity shown is the total free capacity. With deduplication the story is a bit tricky, because the reported capacity is the physical capacity, but StoreOnce deduplicates, so the logical data that can be written is much more than the reported free capacity. I have never seen this create issues for Veeam.
At the Catalyst Store level, it is possible to specify physical and logical quotas. This changes how capacity is reported to Veeam. For instance, if you create SOBRs on StoreOnce (which is something I tested, but I guess there aren't many use cases where it is useful), you can use the physical quota to "tell" Veeam where to migrate data when you evacuate a repository: in that case you give a smaller physical quota to the Catalyst Stores where you don't want data to be migrated.

Deduplication of Multiple Veeam Backup Repositories inside the same Catalyst Store vs on independent Catalyst Stores
Well, for StoreOnce, each Catalyst Store is a deduplication domain. This isn't an issue, because in normal situations different sources have limited cross-deduplication. To be clear, in real-world situations most of the deduplication comes from the sub-block chunking in StoreOnce, which is quite effective on incremental backups as well. For instance, when CBT selects 1 MB to be saved because it contains at least 1 byte of changed data, StoreOnce is still able to identify the ~4 KB block containing the new data, deduplicating the remaining ~1020 KB (of the 1 MB = 1024 KB).
There are easy-to-identify situations where cross-deduplication is relevant. For instance, you have 3 groups of VMs: production, pre-production, and test environments. In this case you have 3 jobs with different retention and different repository-specific attributes:
Prod: immutability = 4 weeks, replication to 2 different remote sites
PreProd: immutability = 1 week, replication to 1 remote site
Test: no immutability and no replication
Clearly, to make this possible you need 3 independent backup repositories, because immutability and replication are specified at the backup repository level. At the same time, we know that the 3 environments are rather similar in content, so there would be good cross-deduplication. In this case it makes sense to create the 3 backup repositories inside the same Catalyst Store rather than on different ones. (This is what we had in mind when we wrote the white paper; sorry if that was not clear enough.)
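As a hedged sketch of that scenario, reusing only the command shape already shown earlier in this thread (names, identifier, and credentials are hypothetical), the three backup repositories could be created inside the same Catalyst Store like this, with immutability and Catalyst Copy/replication then configured per repository:

Add-VBRBackupRepository -Name "BR-Prod" -Folder "storeonce://COFC-xxxxxxxxxxxx:CatalystStore01@/Prod" -Type HPStoreOnceIntegration -StoreOnceServerName "COFC-xxxxxxxxxxxx" -UserName "veeamuser" -Password "password"
Add-VBRBackupRepository -Name "BR-PreProd" -Folder "storeonce://COFC-xxxxxxxxxxxx:CatalystStore01@/PreProd" -Type HPStoreOnceIntegration -StoreOnceServerName "COFC-xxxxxxxxxxxx" -UserName "veeamuser" -Password "password"
Add-VBRBackupRepository -Name "BR-Test" -Folder "storeonce://COFC-xxxxxxxxxxxx:CatalystStore01@/Test" -Type HPStoreOnceIntegration -StoreOnceServerName "COFC-xxxxxxxxxxxx" -UserName "veeamuser" -Password "password"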

I hope this doesn't create more questions.

P.S.
1) Did you see that StoreOnce is the only deduplication appliance with in-line deduplication in the Veeam Ready list for primary backup storage solutions? And it is a model with just 11 disks in RAID 6 + 1 hot spare, which means that larger units are even faster.
2) On StoreOnce 4.3.6, released 2 weeks ago, we added two-factor authentication (MFA). It works with Google and Microsoft authenticators, and it also works on dark sites where the StoreOnce has no Internet access. It doesn't replace the dual authorization for Immutability/Compliance, but it adds an additional layer of protection and simplifies our job. For instance, if the Security Officer accounts are protected by MFA, you don't need to delete those accounts after their first use to prevent an attacker from watching the keys you press and stealing the username and password: even if the credentials were stolen, they could not be used, because they are protected by MFA. If you have questions we can open a new thread.
3) I have received a lot of feedback from users: the new fixed-block chunking, available starting with 4.3.2 and V12, is really much faster and doesn't use more capacity than the previous standard variable-segment-length chunking.

Thank you
Federico
Hirosh
Enthusiast
Posts: 76
Liked: 2 times
Joined: Dec 24, 2022 5:19 am
Full Name: Hirosh Arya
Contact:

Re: Veeam+Storeonce

Post by Hirosh » 1 person likes this post

@federico
Thank you, your comments are much appreciated and very helpful.