gingerdazza
Expert
Posts: 191
Liked: 14 times
Joined: Jul 23, 2013 9:14 am
Full Name: Dazza
Contact:

Real world restore speeds using HPE StoreOnce 3540

Post by gingerdazza »

I know of many people's historical issues with StoreOnce restore speeds (and other dedupe appliances). We're looking at the HPE StoreOnce 3540, and wondered....

1. Has anyone got any real-world MB/s restore throughputs that they are achieving out there on StoreOnce? It doesn't have to be specifically the 3540.
2. Does the HPE Catalyst integration only assist backups, or does it also assist restores (and if so, how)?

I have tried a StoreOnce VSA and was getting 98 MB/s on a restore from that.

Thanks
ChrisSnell
Technology Partner
Posts: 126
Liked: 18 times
Joined: Feb 28, 2011 5:20 pm
Full Name: Chris Snell
Contact:

Re: Real world restore speeds using HPE StoreOnce 3540

Post by ChrisSnell » 1 person likes this post

There is one dedupe appliance that is designed to give fast restore speeds. The ExaGrid architecture uses a Landing Zone (non-deduplicated disk) to land the backups on and store them for a week, providing rapid recovery (and supporting vPower features).

Take a look: http://www.exagrid.com/why-exagrid/solu ... ironments/
foggy
Veeam Software
Posts: 21069
Liked: 2115 times
Joined: Jul 11, 2011 10:22 am
Full Name: Alexander Fogelson
Contact:

Re: Real world restore speeds using HPE StoreOnce 3540

Post by foggy » 1 person likes this post

StoreOnce restore performance was significantly improved in StoreOnce firmware 3.15.1 and newer, so make sure you're running a recent version.
HPEStorageGuy
Technology Partner
Posts: 3
Liked: 6 times
Joined: Nov 14, 2014 10:59 pm
Full Name: Calvin Zito
Location: Boise, ID
Contact:

Re: Real world restore speeds using HPE StoreOnce 3540

Post by HPEStorageGuy » 1 person likes this post

Recovery depends not just on the performance of the StoreOnce itself, but on other factors such as the fabric, restore target performance, media servers, and the hardware configuration of the appliance. For the VSA, recovery performance is going to depend on other factors as well: the underlying hardware of the VM host, the vdisk storage, and the hypervisor.

Something to consider is that these appliances are highly parallelized: in general, more streams = more performance.

Obviously we develop the product, so our data is based predominantly on lab environments (and customer feedback), but single-stream restore performance of ~140-160 MB/s on the smaller appliances is certainly achievable, even in the real world.
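
To make that stream arithmetic concrete, here is a rough back-of-the-envelope sketch in Python (purely illustrative, not an HPE sizing tool; the per-stream and link-cap figures are just the ballpark assumptions above, so substitute your own measurements):

Code:
# Back-of-the-envelope restore-time estimate. Assumes aggregate throughput
# scales roughly linearly with stream count until the network link saturates,
# which is a simplification of real appliance behaviour.
def estimate_restore_minutes(total_gb, streams,
                             per_stream_mbs=150,    # assumed single-stream MB/s
                             link_cap_mbs=1100):    # assumed practical 10GbE ceiling in MB/s
    aggregate_mbs = min(streams * per_stream_mbs, link_cap_mbs)
    return total_gb * 1024 / aggregate_mbs / 60

# Example: one 500GB VM vs. four 500GB VMs restored as parallel streams
print(estimate_restore_minutes(500, 1))     # ~57 minutes at ~150 MB/s
print(estimate_restore_minutes(2000, 4))    # still ~57 minutes, because 4 streams give ~600 MB/s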

It's really important to get the stream count up (in and out) by using per-VM backup files, which the StoreOnce Catalyst integration defaults to automatically. This produces one stream per VM for backup and recovery. There's a pretty good explanation here: https://helpcenter.veeam.com/docs/backu ... tml?ver=95

Catalyst accelerates backups by hashing the data before it reaches the appliance. There is no (major) benefit to restore performance, other than that Catalyst supports FC, whereas NAS is obviously Ethernet only. As significant restores usually involve streaming a lot of data, being able to keep that traffic off your LAN and use generally higher-performance 16Gb FC infrastructure is better all round.

If you’d like to speak to some StoreOnce & Veeam customers, or need any help, please let us know.
gingerdazza
Expert
Posts: 191
Liked: 14 times
Joined: Jul 23, 2013 9:14 am
Full Name: Dazza
Contact:

Re: Real world restore speeds using HPE StoreOnce 3540

Post by gingerdazza »

My natural thought is that a two-stage strategy is a healthy way forward... disk backup for recent backups/fast recovery, and dedupe for longer retention requirements (GFS). This might be (on paper) where ExaGrid appeals with its Landing Zone: a single-box appliance that can theoretically achieve both fast near-term recovery and storage-efficient archive.

I have had no dealings with ExaGrid before, but I have to say that HPE are a very frustrating company to work with. As a customer I've often tried to engage them to get simple facts, and they generally fail every time. Very poor customer engagement experience. That's partially why we went from HP Blade + SAN to Nutanix. Nutanix just got us everything we needed quickly, with support that is second to none; nothing is too much trouble. HPE are/were a mishmash of bought-in technologies, with no cohesive strategy, painful support, and an utter lack of clear and helpful documentation online. Just lots of pretty ChalkTalks that (whilst useful) are not valuable at a technical level; they're heavily marketing-based. And don't get me started on HP Data Protector :)

Sorry if the HPE comments sound too negative. I genuinely wish they'd restructure their entire customer engagement, support, and documentation as they could be an awesome force.
soylent
Enthusiast
Posts: 61
Liked: 7 times
Joined: Aug 01, 2012 8:33 pm
Full Name: Max
Location: Fort Lauderdale, Florida
Contact:

Re: Real world restore speeds using HPE StoreOnce 3540

Post by soylent » 1 person likes this post

We're also looking to replace our ancient Data Domain 2500 with either an ExaGrid or an HPE StoreOnce. The Data Domain's restore speeds are awful, like an hour for a single VM, so the ExaGrid's staging area looks really appealing.
FedericoV
Technology Partner
Posts: 35
Liked: 37 times
Joined: Aug 21, 2017 3:27 pm
Full Name: Federico Venier
Contact:

Re: Real world restore speeds using HPE StoreOnce 3540

Post by FedericoV » 6 people like this post

I have done a lot of tests in my lab. Although my goal was not benchmarking, but rather preparing best practice guides for integrating StoreOnce with Veeam, I have measured realistic restore speeds. When we talk about speed, we need to be as specific as possible to prevent misunderstandings.

Lab details:
• DL360 Gen 9
• 10GbE
• StoreOnce 5100 (12HD), v15.1
• Catalyst
• Veeam 9.5
• Restored VM size: ~200MB

Measured throughput (single stream):
• 269 MB/s
I measured the throughput once the transfer had reached steady state, i.e. after the initial ramp-up.

Is it the maximum restore throughput for this HW configuration?
Honestly, I do not know. I cannot say where the bottleneck is, and I cannot even rule out that my primary storage ingest rate influenced the result.

Is it the max restore throughput for a StoreOnce 5100?
269 MB/s is far from the max throughput. To scale up, it is necessary to start more streams (i.e. restore multiple VMs) and add more 10GbE connections.

Is 269 MB/s a reasonably good number for a single-VM restore?
There is no easy answer; it depends on your requirements.
IMHO, this is excellent throughput; it makes it possible to restore a 500GB VM in about 30 minutes.

Looking for a lower RTO?
Why not! Use Veeam Instant VM Recovery: you boot your VM directly from the StoreOnce before starting the restore process, and then do a live Storage vMotion to your primary storage.
Veeam supports StoreOnce for Instant VM Recovery, and the throughput is quite good despite the fact that the data comes from fully deduplicated storage (without the 60% capacity overhead of a non-deduped landing zone and without using expensive SSDs).
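
To show why this lowers the RTO, here is a simple back-of-the-envelope comparison (my own rough model; the boot time is an assumed example figure, not a measurement): with a normal full restore the VM stays down until the whole image has been copied, while with Instant VM Recovery the VM is usable as soon as it boots from the backup, and the Storage vMotion runs in the background.

Code:
# Rough RTO comparison: full restore vs. Instant VM Recovery (illustrative only).
vm_size_gb = 500
restore_mbs = 269          # single-stream restore throughput measured above
ivmr_boot_minutes = 5      # assumed time to boot the VM straight from the backup

full_restore_rto_min = vm_size_gb * 1024 / restore_mbs / 60   # VM down for the whole copy
ivmr_rto_min = ivmr_boot_minutes                              # VM usable once booted;
                                                              # migration happens in background
print(round(full_restore_rto_min), "min vs", ivmr_rto_min, "min")   # ~32 min vs 5 min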

Looking for an even lower RTO?
Storage snapshots are your best friends! HW snapshots integrated with true backups are the secret weapon to win the backup war ;-)
Do not be scared: Veeam makes snapshots very easy to use, and they are also very cost effective because, on good storage arrays, they use very little additional capacity and close to 0% processing overhead.
I have done tests with Nimble and 3PAR, but I know Veeam also supports, to some extent, NetApp and VNX.
An example is better than theory (there is a quick retention calculation sketched right after this list):
• Schedule a Veeam consistent snapshot every 4 hours
• Configure Veeam to make a daily "true" backup to StoreOnce and, important point, to preserve the snapshot too (yes, Veeam with Nimble can do this!)
• Configure Veeam to keep 2 days of snapshots, i.e. 12 snapshots in our example
At this point, what are the benefits?
1) You have 2 days of snapshots at 4-hour intervals for super-fast restores at full primary storage speed (from either the primary or the DR site if you have primary storage replication).
Usually this costs only about 6% more capacity on good storage arrays.
2) You have your daily backup on StoreOnce for long retention and for the "good" protection that only a true backup can provide.
3) You DO NOT have additional processes moving data from an uncompressed and capacity-hungry landing zone.
4) You DO NOT have daily Veeam backup copy jobs moving data from local disks to the deduped StoreOnce volumes.
5) You get the full benefit of source-side deduplication (bandwidth reduction over the network).
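
For completeness, here is the quick retention calculation mentioned above; simple arithmetic, with the ~6% capacity overhead taken from point 1 and a purely hypothetical array size:

Code:
# Snapshot count and rough capacity overhead for the example schedule above.
snapshot_interval_hours = 4
retention_days = 2
snapshots_kept = retention_days * 24 // snapshot_interval_hours
print(snapshots_kept)                            # 12 snapshots retained on the array

primary_capacity_tb = 20                         # hypothetical array size, for illustration only
extra_capacity_tb = primary_capacity_tb * 0.06   # the ~6% overhead mentioned above
print(extra_capacity_tb)                         # ~1.2 TB of additional space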


Tuning suggestions:
1. On your VMs, never use the standard E1000 NIC; use VMXNET3 instead, as it is really much faster with 10GbE.
2. Avoid any extra hop in the restore process. Make sure your Veeam proxy and gateway services run on the same VM.
3. On HPE ProLiant servers, disable power management in the BIOS, i.e. set the "Static High Performance" mode. It speeds up the backup process a little.
4. Disable Veeam's own compression, deduplication and encryption.
5. Use Virtual Synthetic Full. With this feature, after the very first full backup, Veeam makes only incremental backups forever. There is no need for the resource-intensive daily data transformation that occurs with Veeam reverse incremental and incremental forever modes.
Simply put, once a week Veeam instructs StoreOnce to build an independent full backup. The key point is that StoreOnce builds the new full in a smart way, offloading the task from Veeam. StoreOnce is super fast at this because it reorganizes pointers to deduped data instead of moving the data itself, and... this is smart (a small conceptual sketch of this follows below).
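
To illustrate what "reorganizing pointers instead of moving data" means, here is a purely conceptual sketch (my own simplification, not StoreOnce internals): a synthetic full is assembled by re-referencing block hashes that already exist in the dedupe store, so no user data is rewritten.

Code:
# Conceptual model of a pointer-based synthetic full (NOT real StoreOnce code).
# Each backup is an ordered list of block hashes; the block bodies live once in
# the dedupe store and are never copied when a new full is synthesized.
from typing import Dict, List

def synthesize_full(last_full: List[str], increments: List[Dict[int, str]]) -> List[str]:
    """Build a new full by overlaying incremental block references
    (offset -> hash) on top of the previous full's reference list."""
    new_full = list(last_full)                 # copy references, not data blocks
    for inc in increments:
        for offset, block_hash in inc.items():
            new_full[offset] = block_hash      # just repoint the changed offsets
    return new_full                            # no block bodies were moved

# Example: a 5-block full plus two small incrementals
full = ["h0", "h1", "h2", "h3", "h4"]
incs = [{1: "h5"}, {3: "h6", 4: "h7"}]
print(synthesize_full(full, incs))             # ['h0', 'h5', 'h2', 'h6', 'h7']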
gingerdazza
Expert
Posts: 191
Liked: 14 times
Joined: Jul 23, 2013 9:14 am
Full Name: Dazza
Contact:

Re: Real world restore speeds using HPE StoreOnce 3540

Post by gingerdazza » 1 person likes this post

Hi FedericoV,

It would be great if you could run a multi-VM stream test and post the results.
johna8
Influencer
Posts: 10
Liked: never
Joined: Oct 11, 2016 8:23 am
Contact:

Re: Real world restore speeds using HPE StoreOnce 3540

Post by johna8 »

I'm keen to hear about the restore speeds with multiple VM streams as well.

We also have 3540 infrastructure, but over 10Gb SFP only, not Catalyst over FC.

Is the 5100 lab over Ethernet, not FC, I take it?
FedericoV
Technology Partner
Posts: 35
Liked: 37 times
Joined: Aug 21, 2017 3:27 pm
Full Name: Federico Venier
Contact:

Re: Real world restore speeds using HPE StoreOnce 3540

Post by FedericoV »

Next week I'll be traveling, but when I come back I'll run a few restore speed tests for you in my lab.
If you have specific requests, feel free to ask... I'll do my best to answer.
I hope there is a way to attach screenshots here :?:

Yes, my 5100 is on 10GbE.
foggy
Veeam Software
Posts: 21069
Liked: 2115 times
Joined: Jul 11, 2011 10:22 am
Full Name: Alexander Fogelson
Contact:

Re: Real world restore speeds using HPE StoreOnce 3540

Post by foggy »

Hi Federico, you can upload screenshots to any third-party hosting resource and embed links into your post. Thanks.
FedericoV
Technology Partner
Posts: 35
Liked: 37 times
Joined: Aug 21, 2017 3:27 pm
Full Name: Federico Venier
Contact:

Re: Real world restore speeds using HPE StoreOnce 3540

Post by FedericoV »

I have done many restore tests using my StoreOnce 5100 over the last few days, and I have a few numbers to share with you.

Test environment
  • Veeam 9.5 U3
  • vSphere 6.5 U2
  • Backup target: StoreOnce 5100 via Catalyst
  • Connectivity: one 10GbE link
  • 4 VMs, 50GB each (4 file systems with 25GB of random data per VM)
  • Workload generator: 25GB per VM distributed across 11,000 files of different sizes, with random content
  • 4 ESXi servers, each one with its own local Veeam proxy
Backup process
  • I did an initial full, 6 incrementals, and a final full as a Virtual Synthetic Full.
    Before every backup, I ran a workload generator on each VM, changing 3% of each VM's dataset, corresponding to 320 files.
Restore test 1:
How fast is the restore for 4 concurrent operations?
  • I wanted to see whether the speed scales with multiple concurrent streams.
    I restored all 4 VMs concurrently from the most recent backup to the same datastore, achieving a total throughput of 650 MB/s.
    This is better than a single-stream restore (see above in this thread), but I expected something more. Maybe the single datastore was a bottleneck. In the next test, I will remove this potential bottleneck.
Restore test 2:
How fast is the restore for 4 concurrent operations when I write to 4 different datastores?
  • In this test, I again restored all 4 VMs concurrently from the most recent backup, but this time I wrote each VM to a different local datastore on 4 different ESXi hosts. Furthermore, each ESXi host had its own local Veeam proxy.
    This time the throughput was about 960 MB/s, which shows that in the previous test the slowdown was caused by the shared datastore and ESXi server.
    I guess I have now reached another bottleneck: the single 10GbE connection to my StoreOnce. A 10GbE link cannot transfer much more than 960 MB/s (a quick sanity check on that number is sketched below).
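
Here is the quick sanity check on that 960 MB/s figure (simple arithmetic; the ~10% protocol overhead is my own rough assumption):

Code:
# How close is 960 MB/s to what a single 10GbE link can deliver?
link_gbit_per_s = 10
raw_mb_per_s = link_gbit_per_s * 1000 / 8     # 1250 MB/s theoretical line rate
practical_mb_per_s = raw_mb_per_s * 0.9       # assume ~10% Ethernet/TCP/Catalyst overhead
print(raw_mb_per_s, practical_mb_per_s)       # 1250.0 1125.0
print(round(960 / practical_mb_per_s, 2))     # 0.85 -> the restore is close to link saturation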
A welcome discovery
  • What I appreciated in this last test was Veeam's ability to automatically distribute the workload, choosing the local proxy/gateway for each VM. This kept the network traffic flowing directly from each Veeam proxy to the StoreOnce, without any additional LAN hop for proxy-to-proxy communication.
Is it possible to go faster?
That is what I want to discover too. In a new test, I want to add a second 10GbE connection to the StoreOnce and another 4 VMs, to see if the throughput keeps growing.

Please do not consider this a product benchmark; it is simply what I could personally test in my messy and limited lab.
Regnor
VeeaMVP
Posts: 934
Liked: 287 times
Joined: Jan 31, 2011 11:17 am
Full Name: Max
Contact:

Re: Real world restore speeds using HPE StoreOnce 3540

Post by Regnor »

Hey Federico, although the topic is on restore speeds, what backup rates do you achieve with the 5100?

https://forums.veeam.com/veeam-backup-r ... 51370.html
FedericoV
Technology Partner
Posts: 35
Liked: 37 times
Joined: Aug 21, 2017 3:27 pm
Full Name: Federico Venier
Contact:

Re: Real world restore speeds using HPE StoreOnce 3540

Post by FedericoV »

Hi Regnor, thanks for the heads-up. I have written a comment in the other thread.
antony.marijanovic
Enthusiast
Posts: 57
Liked: 2 times
Joined: Feb 06, 2017 4:07 am
Full Name: Antony Marijanovic
Contact:

Re: Real world restore speeds using HPE StoreOnce 3540

Post by antony.marijanovic »

FedericoV wrote: Jun 13, 2018 8:03 pm I have done many restore tests using my StoreOnce 5100 over the last few days, and I have a few numbers to share with you.


Please do not consider this a product benchmark; it is simply what I could personally test in my messy and limited lab.

Can you attempt a restore of a VM when it is part of a chain? When you restore from that synthetic full, the StoreOnce only has to access 1 file per VM (4 VMs x 1 = 4 files). If it has to restore from a restore point that is part of a chain, say a full + 6 incrementals, then it will have to access 7 files per VM (4 VMs x 7 = 28 files), as in the quick count below. I think this is a more likely scenario than having to restore from the latest point every time.
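
Just to spell the file counts out (trivial arithmetic, but it makes the request concrete):

Code:
# Files the repository must open per restore in a forward-incremental chain.
def files_to_open(increments_after_full: int) -> int:
    return 1 + increments_after_full       # the full plus each incremental in the chain

vms = 4
print(vms * files_to_open(0))              # restore from the synthetic full: 4 files
print(vms * files_to_open(6))              # restore from full + 6 incrementals: 28 files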

I'd appreciate it if you could test this.

Regards,
Antony