poulpreben
Certified Trainer
Posts: 1025
Liked: 448 times
Joined: Jul 23, 2012 8:16 am
Full Name: Preben Berg
Contact:

Re: Veeam Backup Server HP Apollo 4200

Post by poulpreben » 1 person likes this post

So, from a pure disk performance point of view, here are the results for the present configuration: 4x striped RAID6 on 6 TB drives, with 2x Intel S3500 SATA SSDs in RAID1 acting as Smart Cache.

I used this "fio" profile for the test:

Code:

[global]
bs=512k            # 512 KB blocks, matching the block size Veeam writes to the repository
numjobs=1
runtime=600        # cap each phase at 10 minutes
ioengine=windowsaio
iodepth=1
direct=1           # bypass the OS cache
overwrite=1

[forward1]
size=50g
rw=write
filename=D\:\vib1

[forward2]
size=50g
rw=write
filename=D\:\vib2

[forward3]
size=50g
rw=write
filename=D\:\vib3

[forward4]
size=50g
rw=write
filename=D\:\vib4

[transform1]
stonewall # block this workload until previous has completed
new_group # start new reporting group so numbers make sense
size=100g # 2x the size of the forward
rw=randrw
rwmixread=50
runtime=600
file_service_type=roundrobin
filename=D\:\vbk1
filename=D\:\old_vib1

[transform2]
size=100g # 2x the size of the forward
rw=randrw
rwmixread=50
runtime=600
file_service_type=roundrobin
filename=D\:\vbk2
filename=D\:\old_vib2

[transform3]
size=100g # 2x the size of the forward
rw=randrw
rwmixread=50
runtime=600
file_service_type=roundrobin
filename=D\:\vbk3
filename=D\:\old_vib3

[transform4]
size=100g # 2x the size of the forward
rw=randrw
rwmixread=50
runtime=600
file_service_type=roundrobin
filename=D\:\vbk4
filename=D\:\old_vib4
It simulates four backup jobs writing forward incrementals simultaneously, followed by the transform, which is purely random read + random write.

Code:

Run status group 0 (all jobs):
  WRITE: io=204800MB, aggrb=349902KB/s, minb=87475KB/s, maxb=87477KB/s, mint=599343msec, maxt=599353msec

Run status group 1 (all jobs):
   READ: io=87362MB, aggrb=149093KB/s, minb=36529KB/s, maxb=37852KB/s, mint=600007msec, maxt=600013msec
  WRITE: io=87344MB, aggrb=149063KB/s, minb=36556KB/s, maxb=37831KB/s, mint=600007msec, maxt=600013msec
That is close to 400 MB/s write performance, and around 150 MB/s for the transforms. Please remember these are raw storage I/O numbers; there will of course be some slight overhead once Veeam is introduced, with metadata updates, etc.

In conclusion, this means it should be able to saturate the 10 GbE link reading from production storage and write to disk at full capacity when native compression, deduplication, and BitLooker are enabled. What more do you need, right? :)
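For context, here is the back-of-the-envelope math behind that claim as a small Python sketch. The data reduction ratios are assumptions (actual reduction depends entirely on the data), so treat it as an illustration rather than a guarantee:

Code:

# How much source-side read throughput can the measured target write
# rate sustain, given inline data reduction? The reduction ratios
# below are assumptions, not measured values.

measured_write_mb_s = 400      # ~group 0 result above, with Smart Cache
ten_gbe_mb_s = 10_000 / 8      # ~1250 MB/s theoretical on 10 GbE

for reduction in (2.0, 2.5, 3.0):
    source_mb_s = measured_write_mb_s * reduction
    pct = 100 * source_mb_s / ten_gbe_mb_s
    print(f"{reduction:.1f}x reduction -> sustains ~{source_mb_s:.0f} MB/s "
          f"of source reads (~{pct:.0f}% of 10 GbE)")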
jronnblom
Influencer
Posts: 17
Liked: 2 times
Joined: Oct 23, 2013 6:15 am
Full Name: Janåke Rönnblom
Contact:

Re: Veeam Backup Server HP Apollo 4200

Post by jronnblom »

poulpreben wrote:So, from a pure disk performance point of view, here are the results for the present configuration: 4x striped RAID6 on 6 TB drives, with 2x Intel S3500 SATA SSDs in RAID1 acting as Smart Cache.
Could you run fio without the SSD and Smart Cache? It would be an interesting comparison.

-J
poulpreben
Certified Trainer
Posts: 1025
Liked: 448 times
Joined: Jul 23, 2012 8:16 am
Full Name: Preben Berg
Contact:

Re: Veeam Backup Server HP Apollo 4200

Post by poulpreben »

Sure, and as expected, the throughput for sequential write actually increased significantly here.

The 350-400 MB/s write in the previous test is capped by the 6 Gb/s SAS bandwidth shared between the two SSDs in RAID1. As they act as a write-back cache, all writes pass through those drives. In a perfect world, we would probably have split those two SSDs across both controller paths instead of having them on the same backplane.

The transform performance also decreased, by almost 33% in fact. This is probably because the read blocks are no longer cached and therefore cannot be instantly committed to disk via write-back.

Code:

Run status group 0 (all jobs):
  WRITE: io=204800MB, aggrb=1281.2MB/s, minb=327966KB/s, maxb=327977KB/s, mint=159855msec, maxt=159860msec

Run status group 1 (all jobs):
   READ: io=60304MB, aggrb=114094KB/s, minb=28022KB/s, maxb=28985KB/s, mint=541178msec, maxt=541226msec
  WRITE: io=60381MB, aggrb=114239KB/s, minb=28077KB/s, maxb=29012KB/s, mint=541178msec, maxt=541226msec
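For perspective, here is a rough model of that SAS ceiling as a Python sketch. The topology details (a single shared 6 Gb/s lane, 8b/10b encoding) are assumptions on my part, so this is a sanity check rather than an exact accounting:

Code:

# Rough write ceiling for two SSDs in RAID1 behind one shared
# 6 Gb/s SAS lane. Assumes 8b/10b encoding and mirrored writes
# doubling the data pushed through the lane.

line_rate_gbit = 6
encoding_efficiency = 0.8                                      # 8b/10b
usable_mb_s = line_rate_gbit * encoding_efficiency * 1000 / 8  # ~600 MB/s

mirror_factor = 2                                # RAID1 writes each block twice
host_write_ceiling = usable_mb_s / mirror_factor               # ~300 MB/s

print(f"Usable lane bandwidth: ~{usable_mb_s:.0f} MB/s")
print(f"Host-visible RAID1 write ceiling: ~{host_write_ceiling:.0f} MB/s")
# Controller cache absorbing bursts would explain the measured
# 350-400 MB/s landing somewhat above this simple ceiling.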
pirx
Veteran
Posts: 599
Liked: 87 times
Joined: Dec 20, 2015 6:24 pm
Contact:

Re: Veeam Backup Server HP Apollo 4200

Post by pirx »

poulpreben wrote:So, from a pure disk performance point of view, here are the results for the present configuration: 4x striped RAID6 on 6 TB drives, with 2x Intel S3500 SATA SSDs in RAID1 acting as Smart Cache.
This is RAID6 with 4x 6 TB, right? Hm hm hm... With the ~100-150 TB we need per node, we would need bigger RAID groups. Do you think RAID60 is necessary at all?
Delo123
Veteran
Posts: 361
Liked: 109 times
Joined: Dec 28, 2012 5:20 pm
Full Name: Guido Meijers
Contact:

Re: Veeam Backup Server HP Apollo 4200

Post by Delo123 »

We're on Supermicro rather than HP, but we use 10x RAID5 of 6 TB disks and stripe those into a Windows storage pool per node. That gives us nearly a usable petabyte after dedupe, and backups run at over 1 GB/s.
poulpreben
Certified Trainer
Posts: 1025
Liked: 448 times
Joined: Jul 23, 2012 8:16 am
Full Name: Preben Berg
Contact:

Re: Veeam Backup Server HP Apollo 4200

Post by poulpreben »

pirx wrote: This is RAID6 with 4x 6 TB, right? Hm hm hm... With the ~100-150 TB we need per node, we would need bigger RAID groups. Do you think RAID60 is necessary at all?
Well, even with 28x 6 TB and 2x RAID6 with 14 disks each, you would only get 134 TB usable. Then you have no hot spares, no dedicated OS drives, and no Smart Cache. Your rebuild times would also be through the roof, and I would really not recommend going down that rabbit hole.

I think only 24 of the 28 drives are connected to the P84x controller, so I have entered those numbers into my calculator; maybe that helps you make a decision. I would personally go with a maximum of 8 drives in each RAID6 group.

[Image: RAID calculator results]
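For reference, here is the same capacity math as a small Python script (raw decimal TB, before filesystem overhead and TB/TiB conversion, which is why the calculator above lands a bit lower):

Code:

# Usable capacity when splitting drives into equal RAID6 groups.
# Illustrative numbers only; 6 TB drives assumed throughout.

def raid6_usable_tb(total_drives, group_size, drive_tb=6):
    groups = total_drives // group_size
    return groups * (group_size - 2) * drive_tb  # RAID6: 2 parity drives/group

print(raid6_usable_tb(28, 14))   # 2x 14-drive RAID6 -> 144 TB raw
print(raid6_usable_tb(24, 8))    # 3x  8-drive RAID6 -> 108 TB raw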
jronnblom
Influencer
Posts: 17
Liked: 2 times
Joined: Oct 23, 2013 6:15 am
Full Name: Janåke Rönnblom
Contact:

Re: Veeam Backup Server HP Apollo 4200

Post by jronnblom »

poulpreben wrote:Sure, and as expected, the throughput for sequential write actually increased significantly here.

The transform performance also decreased, by almost 33% in fact. This is probably because the read blocks are no longer cached and therefore cannot be instantly committed to disk via write-back.
How are the disks configured on this machine?

How big are the SSDs?

What would happen to the performance when the SSDs are full?

Is the total backup and transform time shorter with/without the SSDs? It would of course depend on the amount of transferred/changed data.

Very interesting.

-J
poulpreben
Certified Trainer
Posts: 1025
Liked: 448 times
Joined: Jul 23, 2012 8:16 am
Full Name: Preben Berg
Contact:

Re: Veeam Backup Server HP Apollo 4200

Post by poulpreben » 1 person likes this post

jronnblom wrote: How are the disks configured on this machine?
http://i.imgur.com/z4qD3aN.png
jronnblom wrote:How big are the SSDs?
Intel S3500, 480 GB
jronnblom wrote:What would happen to the performance when the SSDs are full?
It's a simple FIFO cache. Cold blocks get purged from it.
jronnblom wrote:Is the total backup and transform time shorter with/without the SSDs? It would of course depend on the amount of transferred/changed data.
You answered that one yourself ;)
pirx
Veteran
Posts: 599
Liked: 87 times
Joined: Dec 20, 2015 6:24 pm
Contact:

Re: Veeam Backup Server HP Apollo 4200

Post by pirx »

The 4510 supports up to 48 LFF drives of up to 8 TB each. I think a 4200 with either 24x LFF (2-8 TB) or 48x SFF at a maximum of 2 TB will not be enough here.

BTW: what is the benefit of a 4-disk RAID6 compared to a RAID10?
poulpreben
Certified Trainer
Posts: 1025
Liked: 448 times
Joined: Jul 23, 2012 8:16 am
Full Name: Preben Berg
Contact:

Re: Veeam Backup Server HP Apollo 4200

Post by poulpreben »

pirx wrote:BTW: what is the benefit of a 4-disk RAID6 compared to a RAID10?
Absolutely nothing. I only have it there so that I can switch the calculator to RAID5 mode.
dellock6
VeeaMVP
Posts: 6166
Liked: 1971 times
Joined: Jul 26, 2009 3:39 pm
Full Name: Luca Dell'Oca
Location: Varese, Italy
Contact:

Re: Veeam Backup Server HP Apollo 4200

Post by dellock6 »

One quick suggestion to improve performance: the S3500 is designed with read performance in mind; for writes, you can also look at the S3700. The endurance is also quite different, with the S3700 offering much higher values (and thus a better lifetime under heavy writes).
On the other hand, the price of the S3700 is almost double, as expected given the difference in performance.
Luca Dell'Oca
Principal EMEA Cloud Architect @ Veeam Software

@dellock6
https://www.virtualtothecore.com/
vExpert 2011 -> 2022
Veeam VMCE #1
poulpreben
Certified Trainer
Posts: 1025
Liked: 448 times
Joined: Jul 23, 2012 8:16 am
Full Name: Preben Berg
Contact:

Re: Veeam Backup Server HP Apollo 4200

Post by poulpreben »

S3500 is just the (cheap) SATA version of S3700 ;-)
dellock6
VeeaMVP
Posts: 6166
Liked: 1971 times
Joined: Jul 26, 2009 3:39 pm
Full Name: Luca Dell'Oca
Location: Varese, Italy
Contact:

Re: Veeam Backup Server HP Apollo 4200

Post by dellock6 »

Not sure about that one; otherwise the performance difference would only come from the different bus in use, while the endurance, for example, would be the same... ;)
Luca Dell'Oca
Principal EMEA Cloud Architect @ Veeam Software

@dellock6
https://www.virtualtothecore.com/
vExpert 2011 -> 2022
Veeam VMCE #1
bg.ranken
Expert
Posts: 127
Liked: 22 times
Joined: Feb 18, 2015 8:13 pm
Full Name: Randall Kender
Contact:

Re: Veeam Backup Server HP Apollo 4200

Post by bg.ranken »

poulpreben, can I ask how you would approach this when trying to use a scale-out repository for all the volumes on this server? Since RAID6 offers more space than RAID10 at the cost of performance, would it be wise to mix things up a bit? For the 24-drive unit I was thinking of 6 drives in RAID10 with its scale-out extent set to receive incrementals, two groups of 8 drives in RAID6 set for fulls only, and the remaining 2 drives as spares. That gives 18 TB for incrementals and two 36 TB volumes for fulls, 90 TB of space in total. This would work for anyone with Enterprise licensing (since you can only have 3 extents), and it should give enough performance for the incrementals.

As long as all the backups are duplicated to another device with backup copy jobs, does this seem like a viable configuration? Or would it be better to do RAID6 for incrementals and RAID10 for fulls? It seems like having the incrementals on RAID10 would be faster for backups, so the servers would not keep snapshots open for too long.
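For what it's worth, the capacity arithmetic in that layout checks out; here is a quick Python sketch with 6 TB drives in decimal TB:

Code:

# Proposed 24-drive split: one RAID10 extent for incrementals,
# two RAID6 extents for fulls, two hot spares.

drive_tb = 6
incr_raid10 = 6 // 2 * drive_tb      # 6 mirrored drives -> 18 TB
full_raid6 = (8 - 2) * drive_tb      # 8-drive RAID6 -> 36 TB per extent
spares = 24 - (6 + 8 + 8)            # 2 drives left as spares

print(incr_raid10, full_raid6, spares)            # 18 36 2
print(incr_raid10 + 2 * full_raid6, "TB total")   # 90 TB total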
Andreas Neufert
VP, Product Management
Posts: 7077
Liked: 1510 times
Joined: May 04, 2011 8:36 am
Full Name: Andreas Neufert
Location: Germany
Contact:

Re: Veeam Backup Server HP Apollo 4200

Post by Andreas Neufert »

My 2 cents: Preben's performance findings show that in most cases the Apollo will not be the bottleneck here. So I would keep the configuration as simple as possible.
poulpreben
Certified Trainer
Posts: 1025
Liked: 448 times
Joined: Jul 23, 2012 8:16 am
Full Name: Preben Berg
Contact:

Re: Veeam Backup Server HP Apollo 4200

Post by poulpreben »

Exactly - what Andy said! Thanks :)
pirx
Veteran
Posts: 599
Liked: 87 times
Joined: Dec 20, 2015 6:24 pm
Contact:

Re: Veeam Backup Server HP Apollo 4200

Post by pirx »

It's very hard for me to get a feel for what we really need from Veeam as a primary backup target, and for how we might be limited later (reverse incremental, etc.). I learned that a StoreOnce 6500 might not be the best idea. On the other hand, servers with DAS / cheap disks in RAID6 might be sufficient. As usual, getting a demo system in the desired configuration and testing it with real-world workloads is not always possible. For our current backup environment (not Veeam, and with classic agent backups) we used a couple of HP EVAs as disk pools, but they were simply not fast enough. Now we are using two all-flash 3PARs just for our B2D, which is then written to tape and deleted afterwards (so not a real B2D).

It would be very nice to have proxy/repository servers with DAS in a SOBR, as compared to an additional SAN device.
Andreas Neufert
VP, Product Management
Posts: 7077
Liked: 1510 times
Joined: May 04, 2011 8:36 am
Full Name: Andreas Neufert
Location: Germany
Contact:

Re: Veeam Backup Server HP Apollo 4200

Post by Andreas Neufert »

As I said before, we had no issues writing a 1 Gbps backup read stream, after deduplication and compression, to the Apollo. Preben's tests show that the Apollo was far from its maximum throughput.

Multiple Apollos in a SOBR environment will help boost performance in a scale-out fashion.

My tests with another storage system showed that a 48-disk RAID60 (4+2) configuration was fast enough to handle a 4 GBps Veeam backup read stream. Getting such a random read stream (CBT) out of a source storage system is in most cases only possible with all-SSD VMware storage systems (that is the source, not the backup target).

So potentially you will not run into trouble with the Apollo.

It really depends on the environment you want to back up. Maybe you can describe your environment in a bit more detail: how much data, how many VMs, the daily change rate, ...

StoreOnce/Data Domain and other dedup appliances are not preferred as primary backup targets, as their deduplication engines have, by design, a random read penalty when you extract the data. So Instant VM Recovery, file-level recovery, and Explorer-based recovery aren't that fast, and in some cases not usable, with them. These systems' job is to deduplicate as much as possible, so they are good candidates for storing your long-term GFS chains as secondary backup, ideally with Catalyst or DDBoost integration to streamline synthetic file operations.

I have a customer with 2,000 VMs backing up to EVAs with good performance; it depends on the configuration. I have never run into a customer that really needed an all-flash system as a backup target, since the source storage is usually the bottleneck.
dellock6
VeeaMVP
Posts: 6166
Liked: 1971 times
Joined: Jul 26, 2009 3:39 pm
Full Name: Luca Dell'Oca
Location: Varese, Italy
Contact:

Re: Veeam Backup Server HP Apollo 4200

Post by dellock6 »

And as an additional note: you DO NOT want your production storage to be the bottleneck, because at that point the source will be 100% loaded reading data and the production VMs will suffer from a lack of I/O. That is one of the reasons we introduced Backup I/O Control in v8, for example.

More than pure performance, I'd look at parameters like the backup window or RPO. How frequently do you want to run your backups? Would you be OK with backups running 4 hours a day, or not? From there, work out what you need. It may be that the first upgrade needed to reach your business goals is the production storage, or the network, even before the backup target. With the new SOBR technology it is damn easy to add an additional target to improve performance if the first one is not enough.
Luca Dell'Oca
Principal EMEA Cloud Architect @ Veeam Software

@dellock6
https://www.virtualtothecore.com/
vExpert 2011 -> 2022
Veeam VMCE #1
pirx
Veteran
Posts: 599
Liked: 87 times
Joined: Dec 20, 2015 6:24 pm
Contact:

Re: Veeam Backup Server HP Apollo 4200

Post by pirx »

Late reply... We have now received several proposals from distributors for a Veeam backup environment (both hardware and software; 156 sockets, ~200 TB). We explicitly asked them whether they think multiple Apollo proxy/repository servers with local disk would be an alternative as the primary backup target instead of a SAN (with SAN storage as the secondary target). None of them included Apollos in their offers. Instead, they included MSA 2040 arrays + DL380 G9 servers. Supposedly the throughput of the Apollos would not be sufficient for what we need (or asked for; I think we told them we need 1 GB/s), and they cannot guarantee that the backups will finish within the backup window. I am not sure whether this is just because they don't want to sell Apollos, or whether 1 GB/s is indeed not possible (I doubt that).
poulpreben
Certified Trainer
Posts: 1025
Liked: 448 times
Joined: Jul 23, 2012 8:16 am
Full Name: Preben Berg
Contact:

Re: Veeam Backup Server HP Apollo 4200

Post by poulpreben »

1 GB/s write is quite a lot. What is your reasoning for this requirement?

200 TB with a 10% daily change rate, 2x data reduction, and an 8-hour backup window requires only ~3 Gbit/s => 375 MB/s.
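Spelled out as a quick Python calculation (the change rate and reduction ratio are the assumptions stated above; the ~3 Gbit/s figure is this result rounded up):

Code:

# Required target write rate for a nightly incremental run.
full_tb = 200
change_rate = 0.10       # 10% daily change
reduction = 2.0          # 2x data reduction
window_h = 8

to_write_tb = full_tb * change_rate / reduction       # 10 TB per night
mb_s = to_write_tb * 1_000_000 / (window_h * 3600)    # ~347 MB/s
print(f"~{mb_s:.0f} MB/s (~{mb_s * 8 / 1000:.1f} Gbit/s)")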
Andreas Neufert
VP, Product Management
Posts: 7077
Liked: 1510 times
Joined: May 04, 2011 8:36 am
Full Name: Andreas Neufert
Location: Germany
Contact:

Re: Veeam Backup Server HP Apollo 4200

Post by Andreas Neufert »

It depends on what you defined. 1 GB/s of active full source stream leads to a semi-sequential stream below 512 MB/s at the Apollo. An incremental run at 1 GB/s from the source is comparable to a 5 GB/s active full and leads to random read/write of 300-400 MB/s at the target. If you instead defined it as target speed, the required resources increase by at least 2-3x, depending on the definition.

If I remember right, Apollo servers are sold through the HPE server team and MSAs through the HPE storage team. Both use pretty much the same components; basically, an Apollo is a DL380 in a bigger chassis with more internal disks.
Maybe you get a bigger discount from the HPE storage team for a given number of disks than from the server team. The partners also have their standard designs, where they know which server and storage achieve what throughput, while Apollos are not so common in daily use (backup targets are storage-focused, so you talk to the HPE storage team).

Based on our tests, the Apollo can certainly handle a 1 GB/s source stream at active full (which is what we used for testing); see the messages above.
pirx
Veteran
Posts: 599
Liked: 87 times
Joined: Dec 20, 2015 6:24 pm
Contact:

Re: Veeam Backup Server HP Apollo 4200

Post by pirx »

poulpreben wrote:1 GB/s write is quite a lot. What is your reasoning for this requirement?

200 TB with a 10% daily change rate, 2x data reduction, and an 8-hour backup window requires only ~3 Gbit/s => 375 MB/s.
The complete backup volume is 550-600 TB; Veeam would only back up the VMs, while a different application handles databases and physical servers. I think we only included one number for the whole request, not separate numbers for the different tools.
Andreas Neufert
VP, Product Management
Posts: 7077
Liked: 1510 times
Joined: May 04, 2011 8:36 am
Full Name: Andreas Neufert
Location: Germany
Contact:

Re: Veeam Backup Server HP Apollo 4200

Post by Andreas Neufert »

Oh, and I forgot: if you look at the 1 GB/s on the VMware side (Veeam active full) and use 4 Apollos, then because of deduplication and compression you need (/2, /4) only 125 MB/s per Apollo system.
If you instead defined it as 1 GB/s at a single Apollo, that is pretty tough, as we operate on a 512 KB block level.
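The same division as a Python one-liner (assumptions as stated above: 2x data reduction, four Apollos):

Code:

source_mb_s = 1000                   # 1 GB/s active full on the VMware side
per_apollo = source_mb_s / 2 / 4     # /2 dedupe+compression, /4 Apollos
print(f"~{per_apollo:.0f} MB/s per Apollo")   # ~125 MB/s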
pirx
Veteran
Posts: 599
Liked: 87 times
Joined: Dec 20, 2015 6:24 pm
Contact:

Re: Veeam Backup Server HP Apollo 4200

Post by pirx »

That was my argument too, and I'm still waiting for a phone call to discuss this with them.
Andreas Neufert
VP, Product Management
Posts: 7077
Liked: 1510 times
Joined: May 04, 2011 8:36 am
Full Name: Andreas Neufert
Location: Germany
Contact:

Re: Veeam Backup Server HP Apollo 4200

Post by Andreas Neufert »

Maybe you can send me the requirement documentation by mail so that I can look at it.
ivordillen
Enthusiast
Posts: 62
Liked: never
Joined: Nov 03, 2011 2:55 pm
Full Name: Ivor Dillen
Contact:

Re: Veeam Backup Server HP Apollo 4200

Post by ivordillen »

pirx,
Was the MSA in the proposal equipped with SSDs for caching? I have an MSA 2040 and it writes fast (500-800 MB/s), but when I do reverse incrementals the speed drops to 20-30 MB/s, so the random IOPS are not good.

Would adding 4x 1.6 TB SSDs to the read/write cache be a good solution?

Has anybody had this configuration/issue?

kind regards
Ivor
Andreas Neufert
VP, Product Management
Posts: 7077
Liked: 1510 times
Joined: May 04, 2011 8:36 am
Full Name: Andreas Neufert
Location: Germany
Contact:

Re: Veeam Backup Server HP Apollo 4200

Post by Andreas Neufert »

I have only tested Apollos with Smart Cache enabled (2 mirrored SSDs as a write-back cache). That configuration performed well (see the feedback above).
20-30 MB/s, even without Smart Cache, is far too little performance. Please check the configuration details above and our Apollo configuration whitepaper:
https://www.veeam.com/wp-hpe-apollo-ser ... rings.html
The Apollo whitepaper may have been written with a different focus in mind, but the settings are the same for your environment.
Rettep91
Novice
Posts: 8
Liked: never
Joined: Nov 16, 2017 1:44 pm
Full Name: Petter Roness Madshus
Contact:

Re: Veeam Backup Server HP Apollo 4200

Post by Rettep91 »

Ivordillen,

Did you get any answer to this?

We are seeing similar problems when using reverse incremental.

Regards
Petter
Andreas Neufert
VP, Product Management
Posts: 7077
Liked: 1510 times
Joined: May 04, 2011 8:36 am
Full Name: Andreas Neufert
Location: Germany
Contact:

Re: Veeam Backup Server HP Apollo 4200

Post by Andreas Neufert »

Hi Petter,

I would say: please check the configuration with HPE.

You can also copy some files and see how fast that performs, to check whether the backend or Veeam itself is the bottleneck. Then you can work with HPE or Veeam on it.