Hi all
Is anyone using Hitachi HCP onsite "Veeam Ready - Object" storage as a target?
We're just completing the install of a 2 PB, 8x G11 node HCP cluster and would like some ideas and guidance on how to slice up the single HCP tenant we're creating. Logic would say eight 250 TB namespaces, each with a single full-capacity S3 bucket, then using those eight buckets as extents in a Scale-Out Backup Repository (our current licence limits us to 3 extents + 1 in maintenance mode, but we're due to renew our licence later this year).
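Just to make the layout concrete, here's a quick sketch of how the tenant would split up. This is purely illustrative: the namespace/bucket naming scheme and the even 250 TB quota split are my assumptions, not anything from Hitachi or Veeam.

```python
# Hypothetical plan: one HCP tenant split into 8 equally sized
# namespaces, each holding a single S3 bucket that becomes a SOBR
# extent. Names and quota values below are illustrative assumptions.

TOTAL_CAPACITY_TB = 2000   # ~2 PB usable across the cluster (assumed)
EXTENT_COUNT = 8           # one namespace/bucket per planned extent


def plan_namespaces(total_tb: int, count: int) -> list[dict]:
    """Split the tenant capacity evenly into namespace/bucket pairs."""
    quota_tb = total_tb // count
    return [
        {
            "namespace": f"veeam-ns{i:02d}",    # assumed naming scheme
            "bucket": f"veeam-extent-{i:02d}",  # one bucket per namespace
            "hard_quota_tb": quota_tb,
        }
        for i in range(1, count + 1)
    ]


if __name__ == "__main__":
    for ns in plan_namespaces(TOTAL_CAPACITY_TB, EXTENT_COUNT):
        print(ns["namespace"], ns["bucket"], f'{ns["hard_quota_tb"]} TB')
```

With the 3-extent licence cap, only the first three buckets (plus one in maintenance) could be active until the renewal lands.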
We have 4 fairly beefy G10/G10+ DL380 proxy servers, each with dual Gold CPUs and 512 GB RAM, to drive this.
Does that sound OK, or would it make sense to do something different?
We've obviously got meetings with Hitachi lined up, and when I asked Veeam, the Senior Solutions Architect said "The recommendations on the configuration should come from Hitachi, as any limitations will be on the storage side"... I understand why, but I was hoping for a bit more guidance, so here I am.
Thanks, Stu.
- Enthusiast
- Posts: 28
- Liked: 8 times
- Joined: Aug 09, 2018 3:22 pm
- Full Name: Stuart
- Location: UK
- Chief Product Officer
- Posts: 31535
- Liked: 7053 times
- Joined: Jan 01, 2006 1:01 am
- Location: Baar, Switzerland
Re: Hitachi HCP as an S3 target
Hi, Stu. Not a user myself, but from the Veeam perspective this design sounds perfect: multiple reasonably sized buckets in a SOBR. Thanks!