crackocain
Service Provider
Posts: 246
Liked: 28 times
Joined: Dec 14, 2015 8:20 pm
Full Name: Mehmet Istanbullu
Location: Türkiye
Contact:

Veeam v11 - HPE Apollo 4510 test

Post by crackocain »

Hello everyone

I read Gostev's weekly digest post:
So in this lab, in absolute numbers, we were able to peak at 11.4 GB/s backup speed with a single all-in-one backup appliance! And yes, these are bytes not bits, not a typo! It's pretty incredible what our v11 engine is capable of, right? I will leave the exact backup server configuration below for a reference. Of course, this does not mean such performance can ONLY be achieved on HPE hardware, as Veeam is completely hardware-agnostic... but you ARE guaranteed to achieve these numbers on this specific hardware, if your source can keep up! Listing only directly relevant stuff:

HPE Apollo 4510 Gen10
2x Intel Xeon Gold 6252 CPU @ 2.1GHz (24 cores each)
12x 16GB DIMM (192GB RAM total)
2x 32Gb/s FC; 2x 40GbE LAN
58x 16TB SAS 12G HDD
2x HPE Smart Array P408i-p Gen10
2x RAID-60 with 128KB strip size on 2x (12+2) + 2 hot spares (575TB usable)
Windows Server 2019 with ReFS 64KB cluster size
45 VMs with a total of 5.5TB used space + backup encryption enabled + per-VM backup chains enabled
I want to ask which proxy type was used in this test, and how many proxies were deployed?
What are the compression and storage optimization settings?
How many concurrent jobs were running, or was it a single backup job?

And most importantly, what if we could use XFS (because of immutability)? What is the best RAID card strip size? (128K on Windows Server 2019 in this example.)
We are planning to migrate all ReFS repositories to XFS because of immutability, so we would be glad to see Linux XFS test results.
BTW, we sold a similar configuration to a customer; the only differences are the CPU model, a single CPU, and the Ethernet card (10/25GbE).

Thank you.
Gostev
Chief Product Officer
Posts: 31605
Liked: 7095 times
Joined: Jan 01, 2006 1:01 am
Location: Baar, Switzerland
Contact:

Re: Veeam v11 - HPE Apollo 4510 test

Post by Gostev »

Hi, Mehmet

This whole story was about an "all-in-one" backup appliance, so all Veeam components ran on this single box, including the one default backup proxy. So basically the minimal Veeam install you get by default.

There was a single backup job to a SOBR made up of extents provided by this server. All SOBR and job settings were at their defaults, except, as noted, backup file encryption was enabled (which in theory adds a bit more CPU load). Transport mode was backup from storage snapshots from two arrays, HPE 3PAR/Primera and HPE Nimble, over FC and over LAN respectively.

Here's the screenshot from the run where the 11.4 GB/s backup speed was reached. The processing rate is lower, at 10.3 GB/s, because it includes the "dead" time of job initialization, when no data transfer is happening.

I will need someone to test XFS on a similar configuration before I can give any recommendations there :) again, the scope of this particular testing was all-in-one backup appliance, meaning a single Windows Server box for everything... so no place for XFS here!

Hope this helps!
crackocain
Service Provider
Posts: 246
Liked: 28 times
Joined: Dec 14, 2015 8:20 pm
Full Name: Mehmet Istanbullu
Location: Türkiye
Contact:

Re: Veeam v11 - HPE Apollo 4510 test

Post by crackocain »

Thanks Anton!

I want to ask, you said "There was a single backup job to SOBR made out of 4 extents provided by this server."

I think you created 4 volumes on the RAID cards. 575 / 4 ≈ 144TB per volume. Is that right?

My customer bought two 575TB Apollo 4510 Gen10 servers. With this logic, we may create 8 SOBR extents, right?
Gostev
Chief Product Officer
Posts: 31605
Liked: 7095 times
Joined: Jan 01, 2006 1:01 am
Location: Baar, Switzerland
Contact:

Re: Veeam v11 - HPE Apollo 4510 test

Post by Gostev »

That would be my guess too. I don't know why they set it up this way, as opposed to having 1 volume per RAID controller. It could have been HPE-specific best practices; I will ask someone from Veeam who took part in the testing to comment on this part.
Alexey.Strygin
Veeam Software
Posts: 77
Liked: 12 times
Joined: Jun 17, 2010 7:06 am
Full Name: Alexey Strygin
Location: FL, USA
Contact:

Re: Veeam v11 - HPE Apollo 4510 test

Post by Alexey.Strygin »

The appliance has two P408i RAID controllers; each controller had 28 x 12TB NL-SAS drives in a RAID 60 configuration. Each RAID 60 group has one 288TB volume. There are two volumes in the SOBR = 576TB total capacity. Encryption was also enabled in order to test the maximum CPU load, and during peak performance it was running at around 70%. You can also use 16TB NL-SAS drives, which will increase total server capacity to 768TB and provide equal performance.
Alexey Strygin
crackocain
Service Provider
Posts: 246
Liked: 28 times
Joined: Dec 14, 2015 8:20 pm
Full Name: Mehmet Istanbullu
Location: Türkiye
Contact:

Re: Veeam v11 - HPE Apollo 4510 test

Post by crackocain »

Thanks Alexey!
Gostev
Chief Product Officer
Posts: 31605
Liked: 7095 times
Joined: Jan 01, 2006 1:01 am
Location: Baar, Switzerland
Contact:

Re: Veeam v11 - HPE Apollo 4510 test

Post by Gostev »

OK, after chatting with Alexey offline it looks like the test setup PPT I got from HPE had a mistake in HDD size. With that in mind, usable capacity numbers make much more sense now!

Corrected numbers below:

58 HDD total (12TB)
56 HDD usable (minus 2 hot spares)
28 HDD (half) per each RAID controller

RAID60 with dual parity:
28 HDD - 4 HDD for parity = 24 HDD x 12TB = 288TB usable

SOBR with 2 extents:
2 x 288TB = 576TB total capacity

With 16TB HDDs, total usable capacity would have been 768TB (with the same performance). Wow!
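
For anyone repeating this sizing, here is the same arithmetic as a minimal sketch (the drive counts, hot spares, and 2x (12+2) RAID 60 layout are the ones listed above; the helper function is just illustrative):

Code: Select all

def apollo_usable_tb(drive_tb, total_drives=58, hot_spares=2,
                     controllers=2, raid6_groups_per_controller=2,
                     parity_disks_per_group=2):
    """Usable capacity (per RAID60 extent, total) for the layout described above."""
    data_drives = total_drives - hot_spares                                      # 56
    parity = controllers * raid6_groups_per_controller * parity_disks_per_group  # 8
    usable = data_drives - parity                                                # 48
    per_extent_tb = (usable // controllers) * drive_tb                           # one RAID60 volume
    return per_extent_tb, usable * drive_tb

print(apollo_usable_tb(12))   # (288, 576) -- the corrected numbers above
print(apollo_usable_tb(16))   # (384, 768) -- with 16TB drives instead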
pirx
Veteran
Posts: 598
Liked: 87 times
Joined: Dec 20, 2015 6:24 pm
Contact:

Re: Veeam v11 - HPE Apollo 4510 test

Post by pirx »

Gostev wrote: Feb 09, 2021 1:51 pm I will need someone to test XFS on a similar configuration before I can give any recommendations there :) again, the scope of this particular testing was all-in-one backup appliance, meaning a single Windows Server box for everything... so no place for XFS here!
I hope there is someone at Veeam that knows XFS too ;) It'd really be nice to see these numbers/setups not only for Windows, but for Linux/XFS too.
NightBird
Expert
Posts: 244
Liked: 57 times
Joined: Apr 28, 2009 8:33 am
Location: Strasbourg, FRANCE
Contact:

Re: Veeam v11 - HPE Apollo 4510 test

Post by NightBird »

384GB of memory for the Apollo and not 192GB, isn't it? We can see it in the screenshot, on the bottom right side ;)

What about smaller configurations (Apollo 4200, R740xd2)? Can we expect some improvement too?
pirx
Veteran
Posts: 598
Liked: 87 times
Joined: Dec 20, 2015 6:24 pm
Contact:

Re: Veeam v11 - HPE Apollo 4510 test

Post by pirx »

Alexey.Strygin wrote: Feb 09, 2021 2:44 pm The appliance has two P408i RAID controllers; each controller had 28 x 12TB NL-SAS drives in a RAID 60 configuration. Each RAID 60 group has one 288TB volume. There are two volumes in the SOBR = 576TB total capacity. Encryption was also enabled in order to test the maximum CPU load, and during peak performance it was running at around 70%. You can also use 16TB NL-SAS drives, which will increase total server capacity to 768TB and provide equal performance.
Was the OS installed on internal M.2 disks?
Gostev
Chief Product Officer
Posts: 31605
Liked: 7095 times
Joined: Jan 01, 2006 1:01 am
Location: Baar, Switzerland
Contact:

Re: Veeam v11 - HPE Apollo 4510 test

Post by Gostev »

@NightBird 192GB for sure; maybe there were a couple of appliance configs they used in the process. In fact, Federico has just sent us his thoughts on how even 192GB is too much for the v11 engine, based on his physical RAM consumption monitoring. And yet there's no way to go lower, unfortunately, because having exactly 12 DIMMs is extremely important for performance, while the next step down is quite sharp due to the available RAM module sizes.

@pirx The OS and (importantly) the instant recovery cache folder were on an 800GB volume backed by:
2x 800GB SATA SSD on an HPE Smart Array P408i-a Gen10 using RAID-1 (mirror)
FedericoV
Technology Partner
Posts: 35
Liked: 37 times
Joined: Aug 21, 2017 3:27 pm
Full Name: Federico Venier
Contact:

Re: Veeam v11 - HPE Apollo 4510 test

Post by FedericoV » 10 people like this post

Please, forgive my delay.

How much memory?
V11 is simply awesome with RAM utilization. If you are running V10, open Windows Resource Monitor on the Memory tab to monitor the RAM utilization. When your server is working at its maximum speed, you can see that the Orange ("Modified") portion of the bar often takes half of the available RAM. On V11 that orange segment is practically invisible.
During my tests, running up to 45 concurrent backup streams in "per-VM file" mode, I saw memory utilization normally below 45GB. Only in rare situations did I see utilization go above 100GB. For this reason, as a precaution, I suggest installing a little more than 100GB. The HPE Apollo 4510 Gen10 gives the best performance when there are exactly 12 DIMMs (6 per CPU).
My recommended RAM configuration is 12 x 16GB = 192GB. Maybe 12 x 8GB = 96GB is enough, but the savings are not worth the risk of slowdowns.

How many Disk-array controllers?
This configuration has 2 "HPE Smart Array P408i-p SR Gen10" controllers. My testing showed that a single controller cannot write more than 3GB/s. For this reason, I installed two controllers, which gave me about 6GB/s of write throughput (remember, the data is compressed and deduplicated, so the backup speed is about 2x higher). With 2 controllers it is possible to assign up to 30 spindles to each one. Each controller has one RAID60 on 2 RAID6 parity groups of 14 disks each. Each controller manages 29 disks: 28 for data plus one hot spare.
In total there are 58 x HPE 16TB SAS 12G 7.2K LFF drives, for a net usable capacity of 768TB.

Why RAID60 instead of 2 RAID6 per controller, or simply a larger RAID6?
RAID60 is a HW-managed stripe of multiple RAID6 groups. It is as fast as the sum of its subcomponents and offers effective load balancing across more disks. With large 16TB spindles, when a disk fails it takes several hours to rebuild. RAID6 survives 2 disk failures, so if a second disk fails during the rebuild period there is no data loss. It is common practice not to create RAID6 groups larger than 14-16 disks, to reduce the risk of multiple concurrent disk failures and also to reduce the rebuild impact on overall performance.
This configuration also includes 2 mirrored SSDs that are installed on the CPU blade and connected to a third P408i-a controller. The 2 SSDs are intended for the OS and for the Veeam vPower NFS cache.

What is the best RAID strip size?
This controller gives the option to set the strip size, and this value has an impact on the overall performance. I would like to say that there is a math rule to find the best value, but it is influenced by so many variables that the best way to find the fastest strip size is... testing all the possible settings. Yes, it takes time, but you have someone who did it for you :-)
The fastest strip size is 256K, but 128K is close to it.

What is the best configuration for the controller's battery-protected cache?
Each controller has 4GB of cache, with a battery for writing the content to flash in case of power loss. This cache is actively used to optimize physical write operations to the disks and is a key element for performance. In my tests, I assigned 95% of the cache to writes and 5% to reads.

How many File Systems?
Here we have 2 options:
  • Option 1) 2 file systems, one per volume, grouped by a Veeam Scale-Out Backup Repository (SOBR). Each file system is formatted as ReFS with a 64KB cluster size. This option is usually preferable because it is about 15% faster; it is my preferred configuration.
  • Option 2) 1 file system. The two volumes are grouped into a single Windows striped volume, and the resulting single volume is formatted as ReFS with a 64KB cluster size. This option is a little easier to manage, but it is slower, and the striping layer is an additional potential point of failure.
What is the best backup block size for performance?
This is controlled by "Storage optimization" (Veeam backup Job --> Storage section --> Advanced setting --> Storage tab --> “Storage optimization” field).
“Local target (large blocks)” is about 8% faster than standard “Local target”.
With “Local target” blocks, incremental backups usually require a little less capacity. There isn't a clear winner on this setting, and both options are usable; the best one depends on your source data and on whether "Local target" produces significantly smaller incrementals. My personal preference is “Local target (large blocks)”.

Use per-VM backup files: yes or no?
Modern systems need multiple write streams to run fast. On the HPE Apollo 4510 Gen10, each VBR write stream runs at about 1GB/s, which results in a backup speed of about 2GB/s when the compression effect is 2:1. It is necessary to have at least 7-15 concurrent streams to run backups at 10GB/s (see the sketch after this list).
  • If you do not use "per-VM backup files", make sure your workload is distributed across multiple jobs, with about 10 jobs running concurrently.
  • If you use "per-VM backup files", everything is easier because each VM backup generates its own write stream, and we just need at least 10 VMs to run at maximum speed.
    Is there a maximum number of concurrent streams before the throughput starts to drop? I do not know the answer; I can say that I tested a job with 45 VMs and it ran for 10 minutes at an average speed of 10.3GB/s with a 2.1x reduction (and then it slowed down because there were no other VMs left to process).
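
As a back-of-the-envelope sketch of the stream math above (the ~1GB/s per-stream write speed and the 2:1 reduction ratio are the figures from this thread, not universal constants; the helper is only illustrative):

Code: Select all

import math

def streams_needed(target_backup_gbps, per_stream_write_gbps=1.0, reduction_ratio=2.0):
    """Concurrent write streams needed to reach a target backup speed, assuming each
    stream writes reduced data at per_stream_write_gbps with the given reduction ratio."""
    per_stream_backup_gbps = per_stream_write_gbps * reduction_ratio
    return math.ceil(target_backup_gbps / per_stream_backup_gbps)

print(streams_needed(10))            # 5 streams in the ideal, perfectly scaling case
print(streams_needed(10, 0.7, 2.0))  # ~8 streams if each stream only sustains 0.7GB/s

In practice the streams do not scale perfectly, which is why the recommendation above is 7-15 concurrent streams rather than the theoretical 5.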
Are there smaller storage-optimized servers and smaller/scalable configuration options?
The 4U HPE Apollo 4510 Gen10 has a 2U brother, the Apollo 4200 Gen10. This server has 2 front drawers with 12 LFF disks each, plus a rear cage for another 4 LFF disks. In total there are 28 LFF slots.
On the Veeam V11 optimized configuration, the Apollo 4200 provides up to 320TB net usable.
There are multiple configuration options based on smaller disks (the most common are 8, 10, 12, 14, and 16TB), or with the internal disk slots only 50% populated and ready for a future upgrade.
In the next few days, I'll complete my tests on the Apollo 4200 Gen 10 with V11, and I'll post an update on the performance.

A personal note: a 30% increase in performance would already be called exceptional for a solution that Gartner ranks first for ability to execute, but here we are looking at a doubling of performance between V10 and V11, and I don't know what to call that.

P.S. Thank you for your interest in my lab results.
pirx
Veteran
Posts: 598
Liked: 87 times
Joined: Dec 20, 2015 6:24 pm
Contact:

Re: Veeam v11 - HPE Apollo 4510 test

Post by pirx »

We have two backup SOBRs at two locations (200TB each); the jobs are copied to two copy SOBRs (~1PB each, 14 days/10 weeks retention). According to the VSE/RPS calculators, we would need ~800TB overall with XFS/ReFS. This would fit nicely on two of these "monster" servers.

I have no experience with ReFS/XFS SOBR repositories in Veeam. What space-related limits could I run into? If I understand correctly, I can add the two file systems of one server as extents, and due to the data locality policy this should not impact space savings, right? What if the space of this one server is not sufficient and I need to scale out? Can I use a file system of another ReFS server as an additional extent (I guess it's the same as another file system on the same server)? After reading https://helpcenter.veeam.com/docs/backu ... ml?ver=100 this is not clear to me.
CraigTas
Service Provider
Posts: 4
Liked: 2 times
Joined: Jul 08, 2019 12:18 am
Contact:

Re: Veeam v11 - HPE Apollo 4510 test

Post by CraigTas » 2 people like this post

We have had 3 of these Apollos (2 for primary copies and one in a 3rd DC for copy jobs) in our environment for the past 2 years; after much testing and tuning, we came up with almost exactly the design above. I can add a couple of things from our "field experience".

1) We used 60TB ReFS volumes to stay below the Microsoft VSS limit of 64TB, just in case you ever need to use it (we did, for a Veeam Agent job).
2) This is a big one: DO NOT EXPAND a RAID group, as doing so disables the write cache for the duration of the RAID expansion (in our case it took 3-4 days to add an extra parity group). During this whole time the Apollo was unusable, as performance dropped to < 10% of what it was. We now have a policy of buying them fully populated, or growing capacity in new arrays only: set and forget the array. This "feature" is documented in the specs for the RAID controller, but you need to hunt for it. Playing with the transformation settings made zero difference.
3) Failed drives DO NOT disable the write cache, and performance is roughly halved during a rebuild. Rebuild times are < 24 hours (we use 10TB and 12TB disks), with the Apollo sitting idle for ~16 hours a day. I have ATTO benchmark screenshots if anyone is interested: idle, "failed disk", and "during rebuild".
4) We use the TPM chip and BitLocker on all volumes for added protection, and it didn't seem to impact performance.

Hope this helps someone.
FedericoV
Technology Partner
Posts: 35
Liked: 37 times
Joined: Aug 21, 2017 3:27 pm
Full Name: Federico Venier
Contact:

Re: Veeam v11 - HPE Apollo 4510 test

Post by FedericoV »

Pirx,
you are right, SOBRs are designed to aggregate smaller repositories on the same or on different servers.
If you need more than 768TB, you can deploy a second Apollo 4510 and add its 2 extents to the same SOBR. The other option is to create 2 different SOBRs and have different jobs writing to the first or the second repository.
Another advantage of the Apollo-based solution is that you can design configurations where the proxy and the repository are on the same server, to avoid an additional hop over the LAN.
In my tests I have seen that ReFS-based synthetic full backups are almost as fast as incremental ones, and they consume about the same capacity as an incremental.
crackocain
Service Provider
Posts: 246
Liked: 28 times
Joined: Dec 14, 2015 8:20 pm
Full Name: Mehmet Istanbullu
Location: Türkiye
Contact:

Re: Veeam v11 - HPE Apollo 4510 test

Post by crackocain »

Hello Federico

Thank you for the detailed comment. Will you also do a test on Linux XFS? ReFS is great, but because of immutability, customers plan to migrate their data to Linux XFS.
pirx
Veteran
Posts: 598
Liked: 87 times
Joined: Dec 20, 2015 6:24 pm
Contact:

Re: Veeam v11 - HPE Apollo 4510 test

Post by pirx » 1 person likes this post

FedericoV wrote: Feb 15, 2021 8:32 am How many Disk-array controllers?
This configuration has 2 "HPE Smart Array P408i-p SR Gen10" controllers. My testing showed that a single controller cannot write more than 3GB/s. For this reason, I installed two controllers, which gave me about 6GB/s of write throughput (remember, the data is compressed and deduplicated, so the backup speed is about 2x higher). With 2 controllers it is possible to assign up to 30 spindles to each one. Each controller has one RAID60 on 2 RAID6 parity groups of 14 disks each. Each controller manages 29 disks: 28 for data plus one hot spare.
In total there are 58 x HPE 16TB SAS 12G 7.2K LFF drives, for a net usable capacity of 768TB.
@FedericoV I checked Intel's current list of CPUs and think a Xeon Gold 6230R with 26 cores instead of a 6252 would be cheaper and better now. Regarding the FC and LAN ports, I think each of those is a dual-port adapter, and it would not be possible to add more adapters to get FC/LAN redundancy on the server side because of the number of PCIe slots?

+1 for an Apollo + Linux + XFS/reflinks test!
FedericoV
Technology Partner
Posts: 35
Liked: 37 times
Joined: Aug 21, 2017 3:27 pm
Full Name: Federico Venier
Contact:

Re: Veeam v11 - HPE Apollo 4510 test

Post by FedericoV »

@pirx
You are right about the CPU. The relatively old Intel Xeon Gold 6252 was the CPU I had in the system when I ran my tests, and I like to be honest when I show screenshots.
The CPU I currently recommend is the Intel Xeon Gold 5220R (2.2GHz/24-core/150W); the 6230R is even better. I have seen that with 2 x 24 cores the CPU is not a bottleneck, even with encryption active on the backup job.
Another difference between the original configuration and the new recommended one is the LAN: the new NIC for the Apollo supports 10/40/50Gb/s, and 50 is better than 40.

Testing Linux + XFS is my top priority too. What Linux distribution would be your best candidate?

Thank you
Federico
pirx
Veteran
Posts: 598
Liked: 87 times
Joined: Dec 20, 2015 6:24 pm
Contact:

Re: Veeam v11 - HPE Apollo 4510 test

Post by pirx »

@FedericoV

Thanks, can you tell us something about the available I/O slots? From the quickspecs it looks like there are 1x FlexLOM slot and 3x PCIe slots. I'm not sure if they are all available in this setup.

- 2 x 1 GbE onboard
- Do the two P408i-p array controllers need two separate PCIe slots? Would it make sense to choose one P816i instead of two P408i? This model is not mentioned in the quickspecs.
- An E208i-p for the system disks does not need a PCIe slot, as far as I can see.
--> This means 2 slots are available, e.g. an additional 1x FC and 1x LAN adapter.

Our default policy is to have redundancy on the switch side as well as the adapter side, so I have to think about the available options.


https://h20195.www2.hpe.com/v2/getdocum ... 0021866enw
I/O Module
Notes: One I/O module is required
HPE Apollo 4500 Gen10 CPU0 x2/CPU1 x2 FIO I/O Module 882020-B21
Notes: HPE Apollo 4500 Gen10 CPU0 x2/CPU1 x2 FIO I/O Module (882020-B21) contains one FlexLOM and one x16 PCIe slot from Proc 1 and two x16 PCIe slots from Proc 2.
HPE Apollo 4500 Gen10 CPU0 x3/CPU1 x1 I/O Module P00416-B21
Notes: HPE Apollo 4500 Gen10 CPU0 x3/CPU1 x1 I/O Module (P00416-B21) contains one FlexLOM and two x8 PCIe slots from Proc 1 and one x16 PCIe slot from Proc 2.
pirx
Veteran
Posts: 598
Liked: 87 times
Joined: Dec 20, 2015 6:24 pm
Contact:

Re: Veeam v11 - HPE Apollo 4510 test

Post by pirx » 1 person likes this post

Regarding Linux, our default is RHEL; I'm not sure if this is already the best choice for XFS with reflinks.
crackocain
Service Provider
Posts: 246
Liked: 28 times
Joined: Dec 14, 2015 8:20 pm
Full Name: Mehmet Istanbullu
Location: Türkiye
Contact:

Re: Veeam v11 - HPE Apollo 4510 test

Post by crackocain »

Hello Federico

I think Ubuntu 20.04 is the best candidate.
sebastien_sru
Lurker
Posts: 1
Liked: never
Joined: Mar 01, 2021 9:58 am
Full Name: Sebastien RUELLE
Contact:

Re: Veeam v11 - HPE Apollo 4510 test

Post by sebastien_sru »

Hello,

This is my target: to test it on RHEL 8.3 in my bench.
So what is the best way?
RAID strip size: 64KB?
How much RAM per terabyte?
Maximum LV size? 288TB?

Is it possible to map LUNs (datastores) over SAN FC to do LAN-free backup?
Gostev
Chief Product Officer
Posts: 31605
Liked: 7095 times
Joined: Jan 01, 2006 1:01 am
Location: Baar, Switzerland
Contact:

Re: Veeam v11 - HPE Apollo 4510 test

Post by Gostev »

What would be the reasoning for using different settings (e.g. RAID strip size) than those specified above? The workload is still the same, even if the repository is now Linux-based.

The only potential deviation I see is reducing RAM to the next "step" which is 12 x 8GB = 96GB, simply because the server will no longer be running the backup server role, which removes RAM consumption from job manager processes.

I expect Linux to have all the same capabilities for mounting LUNs (datastores) over SAN FC as Windows. At least I did this with an iSCSI SAN in my lab 15 years ago via open-iscsi.
crackocain
Service Provider
Posts: 246
Liked: 28 times
Joined: Dec 14, 2015 8:20 pm
Full Name: Mehmet Istanbullu
Location: Türkiye
Contact:

Re: Veeam v11 - HPE Apollo 4510 test

Post by crackocain »

From my perspective, I want to see the performance difference between 4K XFS (because of reflink) and 64K ReFS.

Also, I want to know the best-practice RAID card strip size for XFS.
Gostev
Chief Product Officer
Posts: 31605
Liked: 7095 times
Joined: Jan 01, 2006 1:01 am
Location: Baar, Switzerland
Contact:

Re: Veeam v11 - HPE Apollo 4510 test

Post by Gostev »

crackocain wrote: Mar 01, 2021 10:23 am From my perspective, I want to see the performance difference between 4K XFS (because of reflink) and 64K ReFS.
You mean this? :D
crackocain wrote: Mar 01, 2021 10:23 am Also, I want to know the best-practice RAID card strip size for XFS.
Not sure if you missed my previous post? Basically, you want to match the RAID strip size to the typical I/O size. And since Veeam uses the same block size regardless of the target, why would the strip size recommendation be any different on Linux compared to Windows?
crackocain
Service Provider
Posts: 246
Liked: 28 times
Joined: Dec 14, 2015 8:20 pm
Full Name: Mehmet Istanbullu
Location: Türkiye
Contact:

Re: Veeam v11 - HPE Apollo 4510 test

Post by crackocain »

Thanks Gostev. XFS performance is amazing.

For strip size, Federico wrote:
What is the best RAID strip size?
This controller gives the option to set the strip size, and this value has an impact on the overall performance. I would like to say that there is a math rule to find the best value, but it is influenced by so many variables that the best way to find the fastest strip size is... testing all the possible settings. Yes, it takes time, but you have someone who did it for you :-)
The fastest strip size is 256K, but 128K is close to it.
Federico wrote 256K, but XFS is 4K, and Veeam uses 4MB large blocks. I think there is a huge difference between these numbers. My thinking may be wrong.
But ReFS is 64K, which is much closer to the 256K RAID strip size, so it makes you think it could be faster.
After seeing the numbers in the test, I don't think we will have a problem with this issue.
Gostev
Chief Product Officer
Posts: 31605
Liked: 7095 times
Joined: Jan 01, 2006 1:01 am
Location: Baar, Switzerland
Contact:

Re: Veeam v11 - HPE Apollo 4510 test

Post by Gostev »

I think you are confusing file system cluster size and RAID stripe size. There's absolutely no connection between the two.
FedericoV
Technology Partner
Posts: 35
Liked: 37 times
Joined: Aug 21, 2017 3:27 pm
Full Name: Federico Venier
Contact:

Re: Veeam v11 - HPE Apollo 4510 test

Post by FedericoV »

pirx wrote: Feb 28, 2021 8:21 am @FedericoV

Thanks, can you tell us something about the available I/O slots? From the quickspecs it looks like there are 1x FlexLOM slot and 3x PCIe slots. I'm not sure if they are all available in this setup.

- 2 x 1 GbE onboard
- Do the two P408i-p array controllers need two separate PCIe slots? Would it make sense to choose one P816i instead of two P408i? This model is not mentioned in the quickspecs.
- An E208i-p for the system disks does not need a PCIe slot, as far as I can see.
--> This means 2 slots are available, e.g. an additional 1x FC and 1x LAN adapter.
There are 2 different PCIe risers.
The one I recommend for VBR V11 is the 2x2, because it gives 2 PCIe slots to each CPU. Please note that for versions before V11 we recommended using only 1 CPU, and the best riser was the 3x1.
The riser is located on the back side of the unit, and it is possible to extract it by pulling two handles, without removing the heavy server from the rack.
The disk controller for the 2 SFF slots on the CPU blade goes into an additional slot on the CPU blade itself, and it does not use the slots in the PCIe riser.

The P816i is not supported on the 4510; I asked the Product Manager to be sure. It is available on the Apollo 4200.

In my lab, the P408i shows a bit less than 3GB/s maximum write throughput in RAID60 mode for a Veeam backup workload (these are decimal GB = 10^9 bytes, measured by the Windows performance monitor). The throughput in the product description is much higher, but, as we know, we always need to consider the specific workload and configuration. With 2 of them, the throughput doubles to almost 6GB/s, thanks to the increased efficiency of V11 on the Apollo hardware. This physical write throughput has a direct impact on backup throughput, which is roughly 2x the physical write speed because of the 2:1 reduction effect in my data set. Please note that the Veeam GUI reports the speed as GB/s, but there they are binary GB = 2^30 bytes (often described as GiB, after the marketing folks caused the ambiguity we know).
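
A small sketch of that unit conversion (using the ~3GB/s per controller and 2:1 reduction figures quoted here; treat them as lab observations, not guarantees):

Code: Select all

def decimal_gb_to_gib(gb_per_s: float) -> float:
    """Convert decimal GB/s (10^9 bytes, e.g. Windows performance counters)
    to binary GiB/s (2^30 bytes, which the Veeam GUI labels as GB/s)."""
    return gb_per_s * 1e9 / 2**30

per_controller_write = 3.0               # decimal GB/s per P408i, from the lab test
total_write = 2 * per_controller_write   # two controllers
backup_speed = total_write * 2.0         # ~2:1 data reduction in this data set
print(f"{decimal_gb_to_gib(total_write):.2f} GiB/s physical write")  # ~5.59
print(f"{decimal_gb_to_gib(backup_speed):.2f} GiB/s backup speed")   # ~11.18

This lines up with the ~11 GB/s backup speeds reported in the Veeam GUI earlier in this thread.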

I have tried to produce a useful benchmark. To show higher numbers, it would be sufficient to choose a dataset with 4:1 or higher compression, but that would be misleading for the tech community, as this is not the average data reduction.
skrause
Veteran
Posts: 487
Liked: 106 times
Joined: Dec 08, 2014 2:58 pm
Full Name: Steve Krause
Contact:

Re: Veeam v11 - HPE Apollo 4510 test

Post by skrause »

Is the throughput bottleneck with only one RAID card a limitation of that particular model of card?

Also, I am assuming that the need for "exactly 12 DIMMs" is to have everything dual channel? Or is that "all slots full"?

(We are not an HPE shop, and I am trying to spec out a server with a similar CPU/RAM/RAID configuration.)
Steve Krause
Veeam Certified Architect
SkyDiver79
Veeam ProPartner
Posts: 59
Liked: 40 times
Joined: Jan 08, 2013 4:26 pm
Full Name: Falk
Location: Germany
Contact:

Re: Veeam v11 - HPE Apollo 4510 test

Post by SkyDiver79 »

@Steve
"Also, I am assuming that the need for "exactly 12 DIMMs" is to have everything dual channel?
Not quite correct: Intel Xeon Scalable CPUs use 6 DIMMs per CPU (six memory channels), and AMD EPYC uses 8 DIMMs per CPU (eight memory channels).

My experience with HPE Smart Array controllers (this also applies to PERC and MegaRAID):
Mostly the PCIe bus limits the throughput; I get a maximum of 4GB/s with all current controllers, even with only SSDs.
The P816i only has more I/O ports, not more bandwidth.