HPE Apollo 4150 Server

Availability for the Always-On Enterprise

HPE Apollo 4150 Server

by nunciate » Mon Aug 21, 2017 8:29 pm

We purchased a new HPE Apollo 4150 setup. We have 40x 6 TB drives. I also have 6x 480 GB high-speed SSDs for SmartCache.
I read over this Apollo deployment guide: https://www.veeam.com/wp-hpe-apollo-ser ... guide.html

I was wondering if anyone else has any experience with these. I plan to create 2 RAID-60 logical drives of 20 drives each (18 active, really, as 2 drives in each group will be hot spares). Then I will enable SmartCache using RAID-5 and split the total available cache space between the 2 logical drives.
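For reference, here is the rough capacity math for that layout as a quick sketch. The split of each 18-drive RAID-60 logical drive into two 9-drive RAID-6 subgroups is my assumption; the controller's actual subgroup layout may differ:

```python
# Back-of-the-envelope capacity math for the proposed layout.
# Assumption: each 18-drive RAID-60 logical drive is built from two
# 9-drive RAID-6 sub-arrays (actual layout depends on controller config).
DRIVE_TB = 6
subgroups_per_ld = 2
drives_per_subgroup = 9            # 18 active drives per logical drive

# RAID-6 loses two drives' worth of capacity per subgroup
usable_per_subgroup = (drives_per_subgroup - 2) * DRIVE_TB
usable_per_ld = subgroups_per_ld * usable_per_subgroup
total_usable = 2 * usable_per_ld

print(usable_per_ld)   # 84 TB per logical drive
print(total_usable)    # 168 TB usable across both (before formatting)

# SmartCache: 6x 480 GB SSDs in RAID-5 leave 5 drives' worth of cache,
# split between the two logical drives
cache_per_ld_gb = (6 - 1) * 480 / 2
print(cache_per_ld_gb)  # 1200.0 GB, i.e. ~1.2 TB of cache per logical drive
```

This matches the "about a terabyte of cache for each logical drive" figure mentioned later in the thread.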

Does that sound reasonable? My main question is how you all recommend I partition these out in Windows 2012 R2. Should I just create 2 very large partitions to match the 2 logical drives, or should I create multiple partitions and then multiple backup repositories to add to my scale-out repository in Veeam?

I just want to make sure I have enough performance to run many jobs at once without overloading the disks.
nunciate
Expert
 
Posts: 139
Liked: 23 times
Joined: Tue May 21, 2013 9:08 pm
Full Name: Alan Wells

Re: HPE Apollo 4150 Server

by dellock6 » Mon Aug 21, 2017 10:13 pm

Hi Alan,
if you want the best performance, I'd go with one repository per volume and avoid any additional split. We are writing a new paper about ReFS configurations, so let me reuse part of the text I wrote on this topic:

We suggest using all the available space in a single volume. If a server is configured with multiple repositories, one Veeam data mover per repository is started, each consuming its own CPU and memory. Any calculation about server sizing then needs to be adjusted, and performance decreases because each data mover has to share the CPU and memory of the same physical server with its siblings. With a single data mover per physical server, performance and hardware usage are optimized.

In your case that would be two volumes, but the general suggestion here is to go for large volumes and have as few of them as possible.
Luca Dell'Oca
EMEA Cloud Architect @ Veeam Software

@dellock6
http://www.virtualtothecore.com
vExpert 2011-2012-2013-2014-2015-2016
Veeam VMCE #1
dellock6
Veeam Software
 
Posts: 5061
Liked: 1342 times
Joined: Sun Jul 26, 2009 3:39 pm
Location: Varese, Italy
Full Name: Luca Dell'Oca

Re: HPE Apollo 4150 Server

by nunciate » Fri Sep 08, 2017 1:14 pm (1 person likes this post)

For anyone considering replacing their backup server here are the specs we settled on and how things are working for us.
This is a 4U server, and with this configuration I get about 150 TB of storage space. My previous configuration was an HPE DL380 G8 with 10 external disk arrays attached, plus 2 older NetApp FAS-2240 units attached over FC; it took up an entire rack. The new unit doesn't have quite as much storage, but it is pretty close.

HPE Apollo 4510 w/ 1 ProLiant XL450 Gen9 Blade Server
Windows 2012 R2 w/ Deduplication turned on for repositories
Dual 8-core CPUs
128 GB Memory
Smart Array P840 w/ HPE SmartCache
2x Dual-port FC cards
40x 6 TB SAS drives (repositories)
6x 480 GB SSD drives (disk cache)
2x 480 GB SSD drives (O/S)
3x 300 GB SAS drives (replication repository)

I set up my RAID as 2 logical drives for my repositories. Each logical drive has 18x 6 TB drives in RAID-60, plus another 2 as hot spares.
The 6 SSDs are assigned as SmartCache for both of these logical drives; using RAID-5 SmartCache I have about a terabyte of cache for each logical drive.
I used 2 SSDs for the O/S (RAID-1) and also had some extra 300 GB drives (RAID-1) where I store replication data.

My overall setup uses this server as the repository server for my HQ datacenter. This server acts only as a tape controller, storage, and proxy.
The server is connected via a 16 Gb Brocade switch and 16 Gb FC cards to my tape library, which has 4x FC-attached drives (those drives max out at 8 Gb over FC, however).
The server has a second dual-port 8 Gb FC card connected via Brocade back to my production SAN. I use a VM in our DR datacenter to control everything. That VM runs all of the jobs using the physical servers as proxies, and is replicated back to HQ nightly.

I have a DL380 backup server set up in DR as well with the same FC connectivity. I know it is overkill, but this is required by our owner. We run backup jobs in HQ nightly, which then kick off a replication job to DR once complete. The replication job, in turn, kicks off a backup job in DR to back up the replica there. Then I run tape jobs in both locations to get data offsite each day. Yes, you read that right: we back up the same servers in 2 locations and tape out in both locations. We also replicate some VMs that are live in DR back to HQ each night and back those up. We back up and replicate over 200 VMs this way. Connectivity between the 2 datacenters is via a Gigabit WAN link.
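For a sense of scale on that WAN link, here is the theoretical nightly capacity of a Gigabit line over an illustrative 8-hour window (my assumption; the post doesn't state the window). Real throughput is lower due to protocol overhead, and Veeam's compression and change tracking mean far less data actually crosses the wire:

```python
# How much data a 1 Gbit/s WAN link can move in an 8-hour window at
# line rate, before any compression or deduplication (illustrative only).
link_gbps = 1.0
window_hours = 8

gb_per_hour = link_gbps / 8 * 3600   # 1 Gbit/s = 0.125 GB/s -> 450 GB/h
total_gb = gb_per_hour * window_hours
print(total_gb)  # 3600.0 GB, i.e. ~3.6 TB per night at line rate
```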

Overall I am very satisfied with the Apollo. I had a bad 6 TB drive the first day I built the machine. I then performed a cutover to start using the new server and had a second bad 6 TB drive, which brought the entire thing to its knees. I had to fail back to my old server after that, which was a major pain. After working with HP, it was determined that the RAID controller was malfunctioning. I replaced it along with the bad drive and set the entire box up again.

We performed our cutover again, and I have to say this thing is pretty darn fast. The majority of our jobs run as daily incrementals with a weekly active full. We do have some servers that are quite large (one is around 10 TB), so on those we run weekly synthetic full backups (no reverse incremental).

I did a backup of one of our very large file servers. On the old repository that backup took almost 24 hours; it completed in 7 hours on the new setup.
nunciate
Expert
 
Posts: 139
Liked: 23 times
Joined: Tue May 21, 2013 9:08 pm
Full Name: Alan Wells

Re: HPE Apollo 4150 Server

by nunciate » Tue Sep 12, 2017 8:54 pm

A quick follow-up on this topic, as I found an issue with my configuration.

Since I am using Windows 2012 R2, there are limitations with VSS and Deduplication: the maximum supported volume size for both is 64 TB.
I do store some file data that gets copied to this server each night, and I run file-to-tape jobs for it. Those jobs won't run because VSS won't work on the oversized volumes.

I decided to migrate the data from one volume to the other and split each RAID group's single large drive into 2 smaller ones. So I will end up with 4 drives in Windows, each about 40 TB.
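A quick sanity check of that split against the 64 TB limit. The 84 TB per-logical-drive figure is my estimate from the RAID-60 layout described earlier (two RAID-6 subgroups assumed); exact formatted sizes will be somewhat smaller:

```python
# Windows Server 2012 R2: VSS and Data Deduplication support volumes
# up to 64 TB. Compare the old and new volume sizes against that limit.
VSS_LIMIT_TB = 64
old_volume_tb = 84                  # one volume per RAID-60 logical drive
new_volume_tb = old_volume_tb / 2   # each logical drive split into two

print(old_volume_tb > VSS_LIMIT_TB)    # True  -> VSS (and dedup) won't work
print(new_volume_tb <= VSS_LIMIT_TB)   # True  -> ~42 TB volumes are fine
```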
nunciate
Expert
 
Posts: 139
Liked: 23 times
Joined: Tue May 21, 2013 9:08 pm
Full Name: Alan Wells
