-
- Influencer
- Posts: 13
- Liked: 2 times
- Joined: Feb 27, 2017 6:14 am
- Full Name: stahly
- Contact:
Re: Veeam v11 - HPE Apollo 4510 test
Hello,
Are there any experiences with the machine in the meantime? Or any further optimizations or recommendations?
Thanks
-
- Veteran
- Posts: 602
- Liked: 89 times
- Joined: Dec 20, 2015 6:24 pm
- Contact:
Re: Veeam v11 - HPE Apollo 4510 test
They are running with XFS without any issues. Our servers are even oversized for the amount of data we store, and synthetic fulls with reflink save a lot of disk space.
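For anyone who wants to check the reflink savings on their own repository, here is a rough sketch (not from this setup; the mount point is just a placeholder) that compares the logical size of the backup files with what the XFS volume actually consumes. It assumes the volume is dedicated to the repository:

import os

REPO = "/mnt/backup/repo01"   # hypothetical mount point of a dedicated XFS repository volume

logical = 0
for root, _dirs, files in os.walk(REPO):
    for name in files:
        logical += os.stat(os.path.join(root, name)).st_size   # size the backup chains appear to use

vfs = os.statvfs(REPO)
used = (vfs.f_blocks - vfs.f_bfree) * vfs.f_frsize              # space actually consumed on the volume

print(f"backup files (logical)  : {logical / 2**40:.2f} TiB")
print(f"filesystem used         : {used / 2**40:.2f} TiB")
print(f"approx. saved by reflink: {(logical - used) / 2**40:.2f} TiB")

The gap between the two numbers is roughly what block cloning has saved across the synthetic fulls.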
-
- Service Provider
- Posts: 91
- Liked: 30 times
- Joined: Oct 31, 2021 7:03 am
- Full Name: maarten
- Contact:
Re: Veeam v11 - HPE Apollo 4510 test
I am wondering what your read speeds are when you run one backup and run diskspd at the same time.
My read speed without a running backup is around 1920 MiB/s according to diskspd, but when a backup is running it drops to around 8 MiB/s, which is killing the SOBR offloads. I'm seeing this on two Apollo 4510s (full capacity) with Server 2019 and ReFS.
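For reference, this is roughly the kind of diskspd read test being described, wrapped in a small Python helper so the same command can be run once while the repo is idle and once while a backup job is active. The install path, test file and parameters below are assumptions, not the exact command used:

import subprocess

DISKSPD = r"C:\Tools\diskspd.exe"     # hypothetical install path
TARGET  = r"E:\diskspd-test.dat"      # test file on the ReFS repository volume

cmd = [
    DISKSPD,
    "-c100G",   # create a 100 GB test file (large enough to defeat caches)
    "-b512K",   # 512 KB blocks, similar to Veeam's large sequential I/O
    "-d60",     # run for 60 seconds
    "-t4",      # 4 threads
    "-o8",      # 8 outstanding I/Os per thread
    "-w0",      # 100% reads
    "-Sh",      # disable software caching and hardware write caching
    TARGET,
]

result = subprocess.run(cmd, capture_output=True, text=True, check=True)
print(result.stdout)   # the "Read IO" summary contains the MiB/s figure

Comparing the "Read IO" summary from the idle run and the during-backup run gives the two figures quoted above.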
-
- Veteran
- Posts: 602
- Liked: 89 times
- Joined: Dec 20, 2015 6:24 pm
- Contact:
Re: Veeam v11 - HPE Apollo 4510 test
As I'm using Linux with XFS, I don't have those problems.
-
- Service Provider
- Posts: 501
- Liked: 124 times
- Joined: Apr 03, 2019 6:53 am
- Full Name: Karsten Meja
- Contact:
Re: Veeam v11 - HPE Apollo 4510 test
Is it that simple today? XFS > ReFS?
-
- Veteran
- Posts: 602
- Liked: 89 times
- Joined: Dec 20, 2015 6:24 pm
- Contact:
Re: Veeam v11 - HPE Apollo 4510 test
It's not quite that simple, as you have to replace your Windows box with ReFS by a Linux box with XFS. I've never used ReFS, but XFS has been running without issues from the beginning, and that hasn't changed.
-
- Product Manager
- Posts: 14914
- Liked: 3109 times
- Joined: Sep 01, 2014 11:46 am
- Full Name: Hannes Kasparick
- Location: Austria
- Contact:
Re: Veeam v11 - HPE Apollo 4510 test
is it that simple today? XFS > ReFS?
Reading the forum feedback on both, it sounds like "yes". But it could also be a coincidence; for example, XFS users might simply be running better hardware. Overall, it looks like XFS block cloning is rock-stable (I talked to support some time ago, and we have zero tickets around the kind of issues that existed with ReFS).
-
- Veeam Software
- Posts: 148
- Liked: 38 times
- Joined: Jul 28, 2022 12:57 pm
- Contact:
Re: Veeam v11 - HPE Apollo 4510 test
It has been running in production for many months now, and I'm glad I replaced my old dedup appliance with this. The gain in performance, security and stability is really significant. I haven't even had an increase in stored data...
I just faced some weird behavior with the Veeam transport daemon: too many forked processes from the Veeam agent when creating metadata, which ended up blocking Veeam jobs.
I deployed a registry key provided by Veeam support which decreased memory usage on the repo. I didn't notice any drop in performance.
The following week I had one repo where the Veeam transport service was blocked, took all the CPU and sent the load to the moon.
Anyway, I got a script from Veeam support to generate more traces if we run into new problems. I hope we won't.
Veeam support case, in case the Veeam PM team wonders: #05546344.
Bertrand / TAM EMEA
-
- Novice
- Posts: 3
- Liked: 7 times
- Joined: Jun 03, 2014 7:40 pm
- Full Name: Dustin
- Contact:
Re: Veeam v11 - HPE Apollo 4510 test
Bringing up an old conversation on this. When testing this setup with the Apollos, did you have the backup server and repo converged onto the same platform, or did you have a separate backup server and use the Apollo as a dedicated storage repo?
-
- Chief Product Officer
- Posts: 31905
- Liked: 7402 times
- Joined: Jan 01, 2006 1:01 am
- Location: Baar, Switzerland
- Contact:
Re: Veeam v11 - HPE Apollo 4510 test
Kindly read the topic before asking questions. The answer is in the first sentence of the first reply...
-
- Service Provider
- Posts: 42
- Liked: 5 times
- Joined: Sep 24, 2012 11:11 am
- Full Name: Ruben Gilja
- Contact:
Re: Veeam v11 - HPE Apollo 4510 test
Hi guys,
We are in the process of swapping HPE 3PAR storage for an Apollo 4510 in our Cloud Connect environment. We have usually had LUNs of 64-100 TB exported to the repo servers and then had extents in the SOBR of the same size. This has made it very easy for us to replace extent by extent whenever hardware goes out of support and we need to bring new hardware into the SOBR.
Right now we have an Apollo 4510 with 2 x Xeon Gold 6230R, 256 GB memory, 48 x 20 TB drives and 1 x HPE Smart Array P408i-p SR Gen10 RAID controller. We have used this thread as a reference for the design, to some extent, but I still have some unanswered questions I hope you can provide some input on.
At this point we have created 2 x RAID 60 with a 256 KB stripe size, which gives us two logical drives with 327 TiB of available space each, formatted with ReFS (64K).
We are a bit uncertain about how to partition this in Windows: keep 2 large partitions with 2 extents, or partition each volume into 3 so that we get 3 extents per RAID 60, i.e. 6 extents in total of roughly 100 TB each.
The reason we are considering this is future hardware replacement: it's less impact to seal just one extent at a time, triggering full backups of 100 TB at a time rather than 300 TB at a time.
Any thoughts, or pros/cons on this matter?
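As a back-of-the-envelope illustration of the extent sizing being weighed up here (my arithmetic only, not the exact partition layout), this sketch shows how big the extents come out if each 327 TiB RAID 60 logical drive is split into one, two or three partitions:

TIB = 2**40
TB = 10**12

volume_tib = 327                      # usable space per RAID 60 logical drive, as stated above
for extents_per_volume in (1, 2, 3):
    size_tib = volume_tib / extents_per_volume
    print(f"{extents_per_volume} extent(s) per volume -> "
          f"{size_tib:.0f} TiB (~{size_tib * TIB / TB:.0f} TB) per extent")

Three extents per volume lands at roughly 109 TiB (about 120 TB) each, i.e. in the same class as the 64-100 TB LUNs used so far.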
-
- Technology Partner
- Posts: 36
- Liked: 38 times
- Joined: Aug 21, 2017 3:27 pm
- Full Name: Federico Venier
- Contact:
Re: Veeam v11 - HPE Apollo 4510 test
I see you have clear ideas and priorities about your goals. I might choose a different configuration, but I'm not sure it would be better for your priorities.
An extent is a backup repository, not necessarily a volume. I would distribute production systems/VMs across jobs pointing to different backup repositories within the same volume, and I would create fewer volumes, if possible only one. This way there is no storage fragmentation, i.e. one volume that is full while others still have available capacity.
A single P408i doesn't provide the maximum throughput the system is capable of. I have seen the best performance with 2 controllers, each with 29 connected disks configured as one RAID 60 made of 2 x RAID 6 of 14 disks, plus a hot spare. In that configuration the physical write throughput is 6 GB/s in total; with 2:1 compression you can write backups at 12 GB/s. These are hero numbers, but in my lab I reproduced this throughput consistently in every test. Clearly, the connectivity and the source storage need to be fast enough.
With the above configuration, the best stripe size is 256 KB. Make sure to create the RAID volumes in offline mode, otherwise the initialization takes longer and causes lower performance until it completes.
Smart Array configuration:
- Assign 90% of the controller cache to write and 10% to read (maybe less than 10% works too, but I didn't test it).
- Select "Predictive Spare Activation".
- Select "Auto Replace Drive" (when activated, selected drives automatically become part of the array).
The above configuration has 2 volumes, one per controller. I suggest creating 2 ReFS filesystems, one per volume. In my lab I created a SOBR with 2 extents, one per volume, but this way they are quite big extents. I have seen that V12 offers additional migration options; I'm not sure whether they can replace SOBR maintenance mode + evacuate. For ransomware protection, I would also consider the hardened Linux repository configuration.
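As a quick sanity check of the throughput numbers above (a sketch under the stated assumptions of ~6 GB/s of physical writes across both controllers and 2:1 compression, not a benchmark):

physical_write_gbps = 6.0      # GB/s to disk, both RAID 60 volumes combined (assumed)
compression_ratio   = 2.0      # assumed 2:1 data reduction before the data hits disk

ingest_gbps = physical_write_gbps * compression_ratio
print(f"effective ingest: ~{ingest_gbps:.0f} GB/s")

window_hours = 8
print(f"~{ingest_gbps * 3600 * window_hours / 1000:.0f} TB of source data "
      f"in a {window_hours}-hour window (ignoring source and network limits)")

At those rates the repository itself is rarely the bottleneck; the source storage and the network usually are.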