-
- Veteran
- Posts: 257
- Liked: 40 times
- Joined: May 21, 2013 9:08 pm
- Full Name: Alan Wells
- Contact:
HPE Apollo 4150 Server
We purchased a new HPE Apollo 4150 setup. We have 40x 6TB drives. I also have 6x 480GB SSD high-speed drives for SmartCache.
I read over this Apollo deployment guide. https://www.veeam.com/wp-hpe-apollo-ser ... guide.html
I was wondering if anyone else has any experience with these. I plan to create 2 RAID-60 logical drives of 20 drives each (really 18, as 2 drives in each logical group will be hot spares). Then I will enable SmartCache using RAID-5 and split the total available cache drive space between the 2 logical drives.
Does that sound reasonable? My main question is how you all recommend I partition those out in Windows 2012 R2. Should I just create 2 very large partitions to match the 2 logical drives, or should I create multiple partitions and then multiple backup repositories to add to my scale-out repository in Veeam?
I just want to make sure I have enough performance to run many jobs at once without overloading the disks.
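As a sanity check on the capacity math of a layout like this, here is a minimal Python sketch. It assumes RAID-60 here means a stripe over two RAID-6 parity groups (each group losing 2 drives' worth of space to parity) and that hot spares hold no data; the drive counts and sizes come from the post above, not from the HPE guide.
```python
# Rough usable-capacity estimate for the proposed Apollo layout.
# ASSUMPTION: RAID-60 = a stripe over two RAID-6 groups, each group
# losing 2 drives' worth of capacity to parity; spares hold no data.

DRIVE_TB = 6          # decimal TB per data drive
DRIVES_PER_LD = 20    # drives assigned to each logical drive
HOT_SPARES = 2        # per logical drive

def raid60_usable_tb(drives, spares, drive_tb, parity_groups=2):
    data_drives = drives - spares
    parity_drives = 2 * parity_groups       # RAID-6 = 2 parity per group
    return (data_drives - parity_drives) * drive_tb

per_ld = raid60_usable_tb(DRIVES_PER_LD, HOT_SPARES, DRIVE_TB)
print(f"Usable per logical drive: {per_ld} TB")           # (18 - 4) * 6 = 84 TB
print(f"Total across 2 logical drives: {2 * per_ld} TB")  # 168 TB raw
# After filesystem overhead this lines up with the ~150 TB of formatted
# space reported later in the thread.
```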
-
- VeeaMVP
- Posts: 6166
- Liked: 1971 times
- Joined: Jul 26, 2009 3:39 pm
- Full Name: Luca Dell'Oca
- Location: Varese, Italy
- Contact:
Re: HPE Apollo 4150 Server
Hi Alan,
if you want the best performance, I'd go for one repository for each volume and avoid any additional split. As we are writing a new paper about ReFS configurations, let me reuse a part of the text I wrote on this topic:
We suggest using the available space as a single volume; if a server is configured with multiple repositories, one Veeam datamover per repository will be started, each with its own CPU and memory consumption. So any calculation about server sizing needs to be adjusted, and performance decreases as each datamover has to share the CPU and memory of the same physical server with its siblings. By having one single datamover per physical server, performance and hardware usage are optimized.
In your case it would be two volumes, but the general suggestion here is to go for large volumes and have as few as possible.
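To put rough numbers on that sizing point, here is a minimal Python sketch. The per-task figures (about 1 CPU core and 4 GB of RAM per concurrently processed repository task) are a commonly cited Veeam sizing rule of thumb, not numbers from this thread; treat them as placeholders for your own sizing guidance.
```python
# Sketch: why fewer repositories (hence fewer datamovers) simplifies sizing.
# ASSUMED rule of thumb: ~1 CPU core and ~4 GB RAM per concurrent
# repository task; substitute your own measured figures.

CORES_PER_TASK = 1
GB_RAM_PER_TASK = 4

def repo_server_needs(concurrent_tasks):
    """Return (cores, GB RAM) needed for a given number of concurrent tasks."""
    return concurrent_tasks * CORES_PER_TASK, concurrent_tasks * GB_RAM_PER_TASK

# Two volumes, one repository each, 8 concurrent tasks per repository:
cores, ram = repo_server_needs(16)
print(f"16 concurrent tasks -> {cores} cores, {ram} GB RAM")

# Splitting the same volumes into many repositories does not change the
# task count, but each extra repository starts its own datamover process,
# so the same physical CPU and RAM get shared among more siblings.
```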
Luca Dell'Oca
Principal EMEA Cloud Architect @ Veeam Software
@dellock6
https://www.virtualtothecore.com/
vExpert 2011 -> 2022
Veeam VMCE #1
-
- Veteran
- Posts: 257
- Liked: 40 times
- Joined: May 21, 2013 9:08 pm
- Full Name: Alan Wells
- Contact:
Re: HPE Apollo 4150 Server
For anyone considering replacing their backup server, here are the specs we settled on and how things are working for us.
This is a 4U server, and with this configuration I get about 150TB of storage space. My previous configuration was an HPE DL380 G8 with 10 external disk arrays attached, as well as 2 older FC-attached NetApp FAS-2240 units. It took up an entire rack. The new unit doesn't have quite as much storage, but it is pretty close.
HPE Apollo 4510 w/ 1 ProLiant XL450 Gen9 Blade Server
Windows 2012 R2 w/ Deduplication turned on for repositories
Dual 8 Core CPUs
128GB Memory
Smart Array P840 w/ HP SmartCache
2x Dual Port FC Cards
40x 6TB SAS Drives (Repositories)
6x 480GB SSD Drives (Disk Cache)
2x 480GB SSD Drives (O/S)
3x 300GB SAS Drives (Replication Repository)
I set up my RAID as 2 logical drives for my repositories. Each logical drive has 18x 6TB drives in RAID-60, plus another 2 as hot spares.
I have 6 SSD drives assigned as SmartCache for both of these logical drives. Using RAID-5 SmartCache, I have about a terabyte of cache for each logical drive (the arithmetic is sketched below).
I used 2 SSD drives for the O/S (RAID-1) and also had some extra 300GB drives (RAID-1) where I store replication data.
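A quick Python sketch of that cache figure. The post doesn't say whether the 6 SSDs form one RAID-5 set split between the two logical drives or two RAID-5 sets of 3 SSDs each, so both readings are computed; either way it comes out to roughly the "about a terabyte" mentioned above.
```python
# Cache-capacity check for 6x 480 GB SSDs used as RAID-5 SmartCache.
SSD_GB = 480
SSDS = 6

# Reading 1: a single RAID-5 set across all 6 SSDs (one drive lost to
# parity), then split evenly between the two logical drives.
single_set = (SSDS - 1) * SSD_GB / 2 / 1000      # TB per logical drive
# Reading 2: two RAID-5 sets of 3 SSDs each, one per logical drive.
two_sets = (3 - 1) * SSD_GB / 1000               # TB per logical drive

print(f"One RAID-5 set, split: {single_set:.2f} TB per logical drive")  # 1.20
print(f"Two RAID-5 sets:       {two_sets:.2f} TB per logical drive")    # 0.96
```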
In my overall setup, this server is the repository server for my HQ datacenter. It acts only as a tape controller, storage, and proxy.
The server is connected via a 16Gb Brocade switch and 16Gb FC cards to my tape library, which has 4x FC-attached drives. Those drives have a max connection speed of 8Gb over FC, however.
The server has a second dual-port 8Gb FC card connected via Brocade back to my production SAN. I use a VM in our DR datacenter to control everything. That VM has all of the jobs using the physical servers as the proxies and is replicated back to HQ nightly.
I have a DL380 backup server set up in DR as well, with the same FC connectivity. I know it is overkill, but this is required by our owner. We run backup jobs in HQ nightly, which then kick off a replication job to DR once complete. The replication job, in turn, kicks off a backup job in DR to back up the replica there. Then I run tape jobs in both locations to get data offsite each day. Yes, you read that right: we back up the same servers in 2 locations and tape out in both locations. We also replicate some VMs that are live in DR back to HQ each night and back those up. We back up and replicate over 200 VMs this way. Connectivity between the 2 datacenters is via a Gigabit WAN link.
Overall I am very satisfied with the Apollo. I had a bad 6TB drive the first day I built the machine. I then performed a cutover to start using the new server and had a second bad 6TB drive, which brought the entire thing to its knees. I had to fail back to my old server after that, which was a major pain. After working with HP, it was determined that the RAID controller was malfunctioning. I replaced that and the bad drive and set the entire box up again.
We performed our cutover again, and I have to say this thing is pretty darn fast. The majority of our jobs run as daily incrementals with weekly active fulls. We do have some servers that are quite large (one is around 10TB), so on those we run weekly synthetic full backups (no reverse incremental).
I did a backup of one of our very large file servers. On the old repository that backup took almost 24 hours. It completed in 7 hours with the new setup.
-
- Veteran
- Posts: 257
- Liked: 40 times
- Joined: May 21, 2013 9:08 pm
- Full Name: Alan Wells
- Contact:
Re: HPE Apollo 4150 Server
A quick follow up on this topic as I found an issue with my configuration.
Since I am using Windows 2012 R2, there are limitations with VSS and Deduplication: the max volume size for those is 64TB.
I do store some file data that gets copied to this server each night, and I do file-to-tape jobs for those. They won't run because VSS won't work.
I decided to migrate the data from one volume to the other and break the single drive down into 2 smaller drives for each RAID group. So I will end up with 4 drives in Windows, each about 40TB.
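A quick Python sketch of the reasoning behind the split, using the ~150TB of formatted space across 2 RAID groups from earlier in the thread as an assumption: one volume per RAID group lands over the 64TB ceiling, while halving each group lands comfortably under it.
```python
# Why each RAID group had to be split: Windows 2012 R2 VSS and Dedup
# top out at 64 TB per volume.
VSS_DEDUP_LIMIT_TB = 64
usable_per_raid_group_tb = 75   # ASSUMED: ~150 TB formatted / 2 RAID groups

for parts in (1, 2):
    vol = usable_per_raid_group_tb / parts
    verdict = "OK" if vol <= VSS_DEDUP_LIMIT_TB else "exceeds 64 TB limit"
    print(f"{parts} volume(s) per RAID group -> {vol:.1f} TB each: {verdict}")
# 1 volume  -> 75.0 TB: exceeds 64 TB limit
# 2 volumes -> 37.5 TB: OK (matches the ~40 TB volumes described above)
```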
-
- VeeaMVP
- Posts: 1007
- Liked: 314 times
- Joined: Jan 31, 2011 11:17 am
- Full Name: Max
- Contact:
Re: HPE Apollo 4150 Server
Regarding SmartCache: does anyone know if Veeam can benefit from write-back caching?
-
- VeeaMVP
- Posts: 6166
- Liked: 1971 times
- Joined: Jul 26, 2009 3:39 pm
- Full Name: Luca Dell'Oca
- Location: Varese, Italy
- Contact:
Re: HPE Apollo 4150 Server
Since Veeam works at the OS layer, any underlying improvement to file system performance is immediately used by our software. I don't have specific information about SmartCache yet; I'm working with some HPE guys on Apollo tests these days, but we won't have results for a few weeks. Now that you mention it, it could indeed be a good idea to test with and without the cache. I've seen Linux repositories using software solutions like EnhanceIO, for example, and you do see the increase in performance.
I'd just verify that the cache has power-loss protection. I've seen some Linux solutions in the past that don't have it, and in that case I'd vote against the solution; if it does have it, I'd give it a try for sure.
Luca Dell'Oca
Principal EMEA Cloud Architect @ Veeam Software
@dellock6
https://www.virtualtothecore.com/
vExpert 2011 -> 2022
Veeam VMCE #1
-
- Veteran
- Posts: 257
- Liked: 40 times
- Joined: May 21, 2013 9:08 pm
- Full Name: Alan Wells
- Contact:
Re: HPE Apollo 4150 Server
If there is something I can execute on my Apollo to help gather statistics, let me know. I'd be happy to help. I have Veeam ONE if there are any reports I could supply.
Just a follow-up since implementing our setup: we have about 84TB in full backups and another 43TB in incremental backups.
I run my jobs as Create Synthetic Full once per week with reverse incremental. I know this has higher I/O, but I need to conserve space, so I don't run active fulls.
I have 52 jobs running for 193 virtual machines to this Apollo. I stagger my weekly fulls out over several days. They start on Thursday night and run through Sunday night. Some carry over into Monday.
Tape-outs usually take a bit longer, running through Tuesday night, to get everything offsite.
-
- VeeaMVP
- Posts: 1007
- Liked: 314 times
- Joined: Jan 31, 2011 11:17 am
- Full Name: Max
- Contact:
Re: HPE Apollo 4150 Server
It would be interesting to see if SmartCache can speed up Synthetic Full or Transform operations. I'm not sure if these benefit from a read cache in any way.
-
- Veeam Software
- Posts: 98
- Liked: 23 times
- Joined: Oct 03, 2017 12:41 pm
- Full Name: Mark Polin
- Contact:
Re: HPE Apollo 4150 Server
I know this is an older post, but I get asked about this frequently. I recommend you check out the following HPE Apollo+Veeam Reference Architecture document. http://h20195.www2.hpe.com/V2/GetDocume ... 0000150enw
-
- Veteran
- Posts: 257
- Liked: 40 times
- Joined: May 21, 2013 9:08 pm
- Full Name: Alan Wells
- Contact:
Re: HPE Apollo 4150 Server
Good info to have. I was looking back at this post and realized I didn't respond about Smart Cache.
We realized about a year into this thing that the SSD drives we purchased were not, in fact, high-write-endurance drives. They all reached their maximum wear in a bit over a year, and I replaced them all.
While waiting for the new drives, I disabled SmartCache to prevent it from failing unexpectedly. I went several weeks without it, and I absolutely noticed a difference.
Backups of all types were slower. So the answer is yes, SmartCache does help. Considerably.
-
- Enthusiast
- Posts: 44
- Liked: 5 times
- Joined: Apr 09, 2015 8:33 pm
- Full Name: Simon Chan
- Contact:
Re: HPE Apollo 4150 Server
Interesting post.
We are starting to look at using the HPE Apollo 4200 for Veeam. I've read the architecture document and it looks very impressive.
My question is, how does one go about creating a 576TB volume within Windows as seen in the document? Someone above mentioned Windows has a 64TB limitation. I personally haven't created anything larger than 50TB, as we would just pool them together into a SOBR.
But the Apollo guide recommends using one large volume instead of multiple.
Also, are you folks installing some sort of hypervisor on top of the Apollo server or running Windows/Veeam directly on it?
-
- Veteran
- Posts: 257
- Liked: 40 times
- Joined: May 21, 2013 9:08 pm
- Full Name: Alan Wells
- Contact:
Re: HPE Apollo 4150 Server
My partition size limitations were due to Windows 2012 and NTFS. Look into Windows 2016 or 2019 and ReFS to get the best performance.
-
- Enthusiast
- Posts: 44
- Liked: 5 times
- Joined: Apr 09, 2015 8:33 pm
- Full Name: Simon Chan
- Contact:
Re: HPE Apollo 4150 Server
Ahh, gotcha... looks like ReFS under 2019 supports a much, much larger partition size.
-
- VeeaMVP
- Posts: 1007
- Liked: 314 times
- Joined: Jan 31, 2011 11:17 am
- Full Name: Max
- Contact:
Re: HPE Apollo 4150 Server
VSS is limited to 64TB, so keep that in mind if you plan to back up files from those volumes via Agent or File2Tape.
Also, SmartCache is limited to a maximum logical volume size, but I'm not sure how big that limit was.
-
- Service Provider
- Posts: 28
- Liked: 5 times
- Joined: Apr 26, 2011 7:36 am
- Full Name: Stefan Brun | Streamline AG
- Location: Switzerland
- Contact:
Re: HPE Apollo 4150 Server
We also started to look at using the HPE Apollo 4200 Gen10 for Veeam, as a replacement for our ML350 G9.
Regarding the earlier comment that SmartCache is limited to a maximum logical volume size: on page 3 of this document https://h20195.www2.hpe.com/v2/GetDocum ... c=ch&lc=de I see 2 numbers which are not identical. Under Key Features it says:
1 TB maximum SSD SmartCache capacity per controller
and under SmartCache Line Size it says:
There is a choice of SmartCache cache line size. Please use this table to determine the optimal SmartCache line size. The line size increases the total amount of SmartCache and also increases the amount of space a single I/O consumes (i.e. the number of cache lines does not grow).
Line size / SmartCache capacity
64 KiB (default) / 1.7 TB
128 KiB / 3.4 TB
256 KiB / 6.8 TB
Has someone proven that using the SmartCache is beneficial for the backup jobs?
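The quoted table scales linearly: doubling the cache line size doubles the maximum SmartCache capacity, at the cost of each I/O consuming a larger line. A small Python sketch reproducing the figures:
```python
# Max SmartCache capacity scales linearly with cache line size,
# per the table quoted from the HPE document above.
BASE_LINE_KIB = 64
BASE_CAPACITY_TB = 1.7

for line_kib in (64, 128, 256):
    capacity = BASE_CAPACITY_TB * line_kib / BASE_LINE_KIB
    print(f"{line_kib:>3} KiB line size -> {capacity:.1f} TB max SmartCache")
# 64 KiB -> 1.7 TB, 128 KiB -> 3.4 TB, 256 KiB -> 6.8 TB
```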
-
- Enthusiast
- Posts: 51
- Liked: 3 times
- Joined: May 07, 2019 12:22 am
- Full Name: Glenn
- Contact:
Re: HPE Apollo 4150 Server
I have two maxed-out (from the disk perspective) HPE Apollo 4200 servers coming soon for use as Veeam backup targets.
Refer to the official HPE Reference Architecture for using HPE Apollos as Veeam backup targets.
https://www.veeam.com/wp-reference-arch ... arget.html
-
- Service Provider
- Posts: 28
- Liked: 5 times
- Joined: Apr 26, 2011 7:36 am
- Full Name: Stefan Brun | Streamline AG
- Location: Switzerland
- Contact:
Re: HPE Apollo 4150 Server
Hey AuGL, did you receive your maxed-out Apollo servers?