stahly
Novice
Posts: 9
Liked: 2 times
Joined: Feb 27, 2017 6:14 am
Full Name: stahly
Contact:

Re: Veeam v11 - HPE Apollo 4510 test

Post by stahly »

Hello,

Are there any experiences with this machine in the meantime? Or even further optimizations or recommendations?

Thanks
pirx
Veteran
Posts: 573
Liked: 75 times
Joined: Dec 20, 2015 6:24 pm
Contact:

Re: Veeam v11 - HPE Apollo 4510 test

Post by pirx » 1 person likes this post

They are running with XFS without any issues. Our servers are even oversized for the amount of data we store; synthetic fulls with reflink save a lot of disk space.
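
For reference, the space saving comes from Veeam's fast clone, which requires reflink to be enabled when the XFS file system is created. A minimal sketch, with the device and mount point as placeholders:

mkfs.xfs -b size=4096 -m reflink=1,crc=1 /dev/sdb1
mount /dev/sdb1 /mnt/backup

Synthetic fulls on such a repository are then built by block cloning existing data instead of copying it.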
MaartenA
Service Provider
Posts: 70
Liked: 24 times
Joined: Oct 31, 2021 7:03 am
Full Name: maarten
Contact:

Re: Veeam v11 - HPE Apollo 4510 test

Post by MaartenA »

I am wondering what your read speeds are when running one backup and running diskspd at the same time.
Without a running backup, diskspd reports a read speed of about 1920 MiB/s; as soon as a backup is running, this drops to about 8 MiB/s, which is killing the SOBR offloads. I'm seeing this on two Apollo 4510 (full capacity) with Server 2019 ReFS.
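
For comparison, a read-only diskspd run along these lines should reproduce that kind of measurement; the test file path, size and thread/queue-depth values below are just placeholder assumptions:

diskspd.exe -d60 -W15 -b512K -t4 -o8 -w0 -Sh -c100G E:\diskspd-test.dat

(-w0 makes it a pure read test, -Sh bypasses software caching and the controller's write cache, -b512K reads in 512 KB blocks; drop or adjust the -c file-creation size if you point it at an existing file.)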
pirx
Veteran
Posts: 573
Liked: 75 times
Joined: Dec 20, 2015 6:24 pm
Contact:

Re: Veeam v11 - HPE Apollo 4510 test

Post by pirx »

As I'm using Linux with XFS I don't have those problems :)
karsten123
Service Provider
Posts: 370
Liked: 82 times
Joined: Apr 03, 2019 6:53 am
Full Name: Karsten Meja
Contact:

Re: Veeam v11 - HPE Apollo 4510 test

Post by karsten123 »

Is it that simple today? XFS > ReFS?
pirx
Veteran
Posts: 573
Liked: 75 times
Joined: Dec 20, 2015 6:24 pm
Contact:

Re: Veeam v11 - HPE Apollo 4510 test

Post by pirx » 1 person likes this post

It's not that simple, as you have to replace your Windows box with ReFS with a Linux box with XFS. I've never used ReFS, but XFS has been running without issues from the beginning and this has not changed.
HannesK
Product Manager
Posts: 14322
Liked: 2890 times
Joined: Sep 01, 2014 11:46 am
Full Name: Hannes Kasparick
Location: Austria
Contact:

Re: Veeam v11 - HPE Apollo 4510 test

Post by HannesK »

"Is it that simple today? XFS > ReFS?"
Reading the forum feedback on both, it sounds like "yes".

But it could also be a coincidence; for example, XFS users might simply be using better hardware. Overall, it looks like XFS block cloning is rock-stable (I talked to support some time ago, and we have zero tickets around the kind of issues that existed with ReFS).
bct44
Veeam Software
Posts: 110
Liked: 29 times
Joined: Jul 28, 2022 12:57 pm
Contact:

Re: Veeam v11 - HPE Apollo 4510 test

Post by bct44 » 1 person likes this post

Running in production for many months now; I'm glad I replaced my old dedup appliance with this. The gain in performance, security and stability is really significant. I haven't even had an increase in stored data...

I just faced some weird behavior with the Veeam transport daemon: too many forked processes from the Veeam agent when creating metadata, which had blocked Veeam jobs.
I deployed a registry key provided by Veeam support which decreased memory usage on the repo. I didn't notice any drop in performance.
The following week I had one repo where Veeam Transport was blocked, taking all the CPU and pushing the load to the moon.

Anyway, I have a script from Veeam support to generate more traces if we run into new problems. I hope not :)

Veeam support case, in case the Veeam PM team wonders: #05546344.
halvorsond
Lurker
Posts: 1
Liked: never
Joined: Jun 03, 2014 7:40 pm
Full Name: Dustin Halvorson
Contact:

Re: Veeam v11 - HPE Apollo 4510 test

Post by halvorsond »

Bringing up an old conversation on this. When testing this setup with the Apollos, did you have the backup server and repo converged onto the same platform, or did you have a separate backup server and use the Apollo as a dedicated storage repo?
Gostev
Chief Product Officer
Posts: 31561
Liked: 6725 times
Joined: Jan 01, 2006 1:01 am
Location: Baar, Switzerland
Contact:

Re: Veeam v11 - HPE Apollo 4510 test

Post by Gostev »

Kindly read the topic before asking questions. The answer is in the first sentence of the first reply...
rubeng
Service Provider
Posts: 42
Liked: 5 times
Joined: Sep 24, 2012 11:11 am
Full Name: Ruben Gilja
Contact:

Re: Veeam v11 - HPE Apollo 4510 test

Post by rubeng »

Hi guys,

We are in the process of swapping HPE 3PAR storage for an Apollo 4510 in our Cloud Connect environment. We have usually had LUNs of 64-100 TB exported to repo servers and then extents in the SOBR of the same size. This has made it very easy for us to replace extent by extent whenever hardware goes out of support and we need to bring new hardware into the SOBR.

Right now we have an Apollo 4510 with 2 x Xeon Gold 6230R, 256 GB memory, 48 x 20 TB drives and 1 x HPE Smart Array P408i-p SR Gen10 RAID controller. We have used this thread as a reference for the design, to some extent, but I still have some unanswered questions I hope you guys can provide some input on.

At this point we have created 2 x RAID 60 with 256 KB stripe size, which gives us two logical drives with 327 TiB of available space each, formatted with ReFS (64K).

We are a bit uncertain how to partition this in Windows: keep 2 large partitions with 2 extents, or split each volume into 3 partitions so that we get 3 extents per RAID 60, i.e. 6 extents in total of roughly 100 TB each.

The reason we are considering this is future hardware replacement: it's less impact to seal just one extent at a time, triggering full backups of 100 TB at a time rather than 300 TB at a time.

Any thoughts, or pros/cons on this matter?
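
For illustration, a rough PowerShell sketch of the 3-extents-per-RAID-60 option; the disk number, partition size and labels are hypothetical and would need adjusting to the real layout:

# assumes the 327 TiB logical drive is Windows disk 1, already initialized as GPT
1..3 | ForEach-Object {
    New-Partition -DiskNumber 1 -Size 109TB -AssignDriveLetter |
        Format-Volume -FileSystem ReFS -AllocationUnitSize 65536 -NewFileSystemLabel "Extent$_"
}

Each resulting volume would then be added as its own backup repository and joined to the SOBR as a separate extent.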
FedericoV
Technology Partner
Posts: 35
Liked: 37 times
Joined: Aug 21, 2017 3:27 pm
Full Name: Federico Venier
Contact:

Re: Veeam v11 - HPE Apollo 4510 test

Post by FedericoV »

I see you have clear ideas and priorities about your goals. I might make a different configuration, but I'm not sure it would be better for your priorities.
An extent is a backup repository, not necessarily a volume. I would distribute production systems/VMs across jobs with different backup repositories within the same volume, and I would create fewer volumes, if possible only one. This way there is no storage fragmentation, i.e. one volume that is full while others still have available capacity.
A single P408i doesn't provide the maximum throughput the system is capable of. I have seen the best performance with 2 controllers, each with 29 connected disks configured as 1 RAID 60 made of 2 x RAID 6 of 14 disks, plus a hot spare. In that setup the physical write throughput is 6 GB/s in total; with 2:1 compression you can write backups at 12 GB/s. These are hero numbers, but in my lab I reproduced this throughput consistently at every test. Clearly, connectivity and the source storage need to be fast enough.
With the above configuration, the best strip size is 256 KB. Make sure to create the RAID volumes in offline mode, otherwise the initialization takes longer and causes lower performance until it completes.
Smart Array configuration: assign 90% of the controller cache to write and 10% to read (maybe less than 10% works too, but I didn't test it). Select "Predictive Spare Activation". Select "Auto Replace Drive" (when activated, selected drives will automatically become part of the array).
The above configuration has 2 volumes, one per controller. I suggest creating 2 ReFS file systems, one per volume. In my lab I created a SOBR with 2 extents, one per volume, but this way they are quite big extents. I have seen that V12 offers additional migration options; I'm not sure they can replace SOBR maintenance mode + evacuate. For ransomware protection, I would also consider the hardened Linux repository configuration.
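
For what it's worth, the cache ratio and spare activation settings can also be applied from the command line with ssacli; the slot number is an example, and the exact option names should be double-checked against HPE's CLI reference:

ssacli ctrl slot=0 modify cacheratio=10/90
ssacli ctrl slot=0 modify spareactivationmode=predictive

The "Auto Replace Drive" behavior for the spares can be set in Smart Storage Administrator when assigning the spare drives to the arrays.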