-
- Expert
- Posts: 176
- Liked: 30 times
- Joined: Jul 26, 2018 8:04 pm
- Full Name: Eugene V
- Contact:
slow restores from HPE StoreOnce 5500
I would like to ask what kind of performance others have seen when doing Full VM restores from StoreOnce.
We have a brand new StoreOnce 5500 unit (with two disk expansion shelves) and our ingest speeds are very good, reaching 3-8 Gbit/s of network throughput; Veeam reports a much higher "Processing rate" and usually reports Bottleneck: Source. We have ~600 VMs creating restore points every 24 hours, with a total of 6 hotadd proxies running 8 concurrent tasks each. Each VM has between 3 and 32 disks.
However, full VM restores (to EagerZeroedThick disks) run at only 1-10 MB/s per disk, for a total of less than 30 MB/s per VM.
Individual backup copy jobs from StoreOnce to a ReFS-formatted disk repository (which is itself a VMDK on a flash-tier-backed datastore) on the same gateway server reach up to 130 MB/s per BCJ (a maximum of 2 simultaneous jobs tested so far).
Restores from this non-StoreOnce repository can reach 300 MB/s when restoring to an EagerZeroedThick disk (both the source and the destination are on a flash tier).
We have cases open with both Veeam and HPE, and the response so far is mostly, "Restores from deduplicating appliances are known to be slow." I don't disagree with that statement, but I disagree that these restores should be 1/100th of the ingest speed -- if two BCJs can read from StoreOnce at 270 MB/s combined, a VM restore should hypothetically land somewhere in that range rather than at 7 MB/s.
Thanks!
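For a rough sense of scale, here is a back-of-the-envelope comparison of these rates (a minimal sketch; the 1 TB example VM size is an assumption, and the MB/s figures are simply the numbers quoted above):

```python
# Rough restore-time comparison at the observed throughputs (illustrative only).
# The 1 TB VM size is an assumed example; the MB/s figures are the ones quoted above.
vm_size_gb = 1024  # assumed example VM size

rates_mb_s = {
    "StoreOnce full VM restore (observed)": 30,   # < 30 MB/s aggregate per VM
    "Backup copy job read from StoreOnce": 130,   # per BCJ
    "Restore from flash-backed ReFS repo": 300,
}

for label, rate in rates_mb_s.items():
    hours = vm_size_gb * 1024 / rate / 3600
    print(f"{label:40s} ~{rate:4d} MB/s -> ~{hours:5.1f} h per TB")
```

At ~30 MB/s that works out to roughly 10 hours per TB, versus about an hour per TB from the flash-backed repository, which is the gap described above.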
-
- Expert
- Posts: 206
- Liked: 41 times
- Joined: Nov 01, 2017 8:52 pm
- Full Name: blake dufour
- Contact:
Re: slow restores from HPE StoreOnce 5500
if you have long incremental chains it will be painful. also look at large blocks (16TB+), which has been shown to improve restores - i use this in my environment and it's proven worthwhile.
veeam-backup-replication-f2/local-targe ... 31880.html
-
- Expert
- Posts: 176
- Liked: 30 times
- Joined: Jul 26, 2018 8:04 pm
- Full Name: Eugene V
- Contact:
Re: slow restores from HPE StoreOnce 5500
bdufour:
Thank you for the quick reply. We are using large blocks (16TB+) as specified in the HPE+Veeam integration guide; it is also recommended during setup (at least with 9.5 U3a) when a Storeonce:// repository is detected.
Our chains use weekly synthetic fulls (which has skewed the StoreOnce deduplication numbers to a rather silly 18:1 ratio after only a few weeks of operation).
The behavior I described is the same whether I select an incremental restore point or a full. Perhaps the full is marginally faster (my testing is anecdotal rather than scientific), but it is nowhere near satisfactory.
-
- Expert
- Posts: 206
- Liked: 41 times
- Joined: Nov 01, 2017 8:52 pm
- Full Name: blake dufour
- Contact:
Re: slow restores from HPE StoreOnce 5500
if you're running a 10GbE backbone, as the HPE StoreOnce 5500 looks to be 10GbE, you can try switching to network transport; that is relatively easy and worth a shot to see if you get an improvement on your restores from the dedup. since you're using thick disks, you may try SAN transport as well, as long as you aren't using vSAN.
-
- VP, Product Management
- Posts: 7081
- Liked: 1511 times
- Joined: May 04, 2011 8:36 am
- Full Name: Andreas Neufert
- Location: Germany
- Contact:
Re: slow restores from HPE StoreOnce 5500
Can you please share the Veeam support ticket number here and reference the HPE ticket number in the Veeam ticket? We will have a look together. Thanks.
-
- Expert
- Posts: 176
- Liked: 30 times
- Joined: Jul 26, 2018 8:04 pm
- Full Name: Eugene V
- Contact:
Re: slow restores from HPE StoreOnce 5500
Andreas: 03182582
bdufour:
As mentioned in my original post: using a non-StoreOnce source repository, the restore speed is satisfactory, so I'd like to learn more about how the transport method might affect things.
bdufour wrote: "you can try and switch to network transport, that is relatively easy and worth a shot to see if you get an improvement on your restores"
Could you share why you think this may be faster? While our networking interfaces for VMs and for storage are 10GbE, the interface that carries the FQDN of an ESXi host is 1GbE, and that appears to be the network interface used for NBD transfer. It would be possible, but not trivial, to change that across one or all hosts.
-
- Expert
- Posts: 206
- Liked: 41 times
- Joined: Nov 01, 2017 8:52 pm
- Full Name: blake dufour
- Contact:
Re: slow restores from HPE StoreOnce 5500
typically i run a test between all transport modes to understand what's best for my environment. it's rather simple, and you don't have to change all of the proxies either - which you understand. just take one proxy, change it to network transport, and in the restore job make it use that proxy. do a restore and compare it to your hotadd restore. i do this across backup/replication jobs as well. i've found i get about the same processing rate across both hotadd and network, but i like network because i don't have to worry about disk consolidation errors, and i also use production vms as proxies because we aren't open 24 hours a day and it's pretty quiet at night. with network mode, backing up/replicating the production proxies is way easier. i can't use san transport because we use vsan, but i've heard good things about it and i think in your environment you can use it.
i don't think you will see a huge improvement in your restore times, as everything seems to be configured right from what i've read here. probably not what you want to hear, but it's the reality... still, every little thing helps when you need to restore a vm in a DR situation. it takes me about 6 hours to restore a 1tb vm from my dedup. but we replicate offsite to protect at the DC level, and i also replicate onsite (production DC) to protect at the VM level for critical vms. i would always go to my replicas for critical vms in a DR situation.
-
- Novice
- Posts: 4
- Liked: 1 time
- Joined: Apr 18, 2016 9:54 am
- Full Name: Kai
- Contact:
Re: slow restores from HPE StoreOnce 5500
Hello!
We have a StoreOnce 4500 (FC-connected; 2 stores with 450 TB of backup data, deduplicated to around 40 TB; 3 Veeam proxies) and had similar problems around 2 years ago. It was difficult to analyze the bottleneck. I spent weeks trying and playing with a lot of different configurations. The advice that made the difference for us (a recommendation from an HPE level 2 or 3 engineer) was to strictly separate the time windows for backup jobs, C2T jobs, and especially the HOUSEKEEPING timeframe of the StoreOnce itself. During HOUSEKEEPING, StoreOnce-based restore and copy jobs are really slow. We now run in the following manner:
backups from 5 pm to 1 am, then C2T, and from around 10 am to 4 pm exclusive time for the StoreOnce housekeeping process (only a few concurrent backup jobs such as SQL transaction logs).
In that way we achieve around 100 MB/s for restores, not fast but acceptable. You can check the influence of housekeeping by pausing the process in the StoreOnce GUI during a restore.
Hope it helps.
Regards!
PS: another hint: use the "Quick rollback" option in the restore mode settings for a full restore to minimize the data volume.
-
- Expert
- Posts: 176
- Liked: 30 times
- Joined: Jul 26, 2018 8:04 pm
- Full Name: Eugene V
- Contact:
Re: slow restores from HPE StoreOnce 5500
Hello oberhofer,
We've done comparison testing with regard to housekeeping blackout windows, and they were not a contributing factor to the slow restores.
-
- Expert
- Posts: 176
- Liked: 30 times
- Joined: Jul 26, 2018 8:04 pm
- Full Name: Eugene V
- Contact:
Re: slow restores from HPE StoreOnce 5500
Thank you to everyone who contributed ideas to this thread. My support cases with HPE and Veeam are concluded, and there is currently no solution for the drastic drop-off in performance when restoring a VM composed of multiple VMDK disks.
-
- Novice
- Posts: 4
- Liked: 1 time
- Joined: Jun 11, 2018 5:59 am
- Full Name: Andreas Buetler
- Contact:
Re: slow restores from HPE StoreOnce 5500
Hi
A few months ago we also had some problems with an HPE StoreOnce 4900: slow backups and restores and other connectivity problems. We tried both Fibre Channel and 10 Gbit Ethernet.
HPE said we should buy the next generation of StoreOnce to solve this issue, but we have had multiple problems across the last three generations of HPE disk-based backup devices. After that we evaluated ExaGrid as a backup-to-disk device and we are very happy. Everything is working fine. All backup jobs now run successfully and with great performance, and restore tests are very fast as well. We will buy a second device this year for the offsite backup.
Regards
-
- Expert
- Posts: 176
- Liked: 30 times
- Joined: Jul 26, 2018 8:04 pm
- Full Name: Eugene V
- Contact:
Re: slow restores from HPE StoreOnce 5500
Hi sys-adm,
With the 5500 we are very happy with ingest speed; we have saturated a single 10GbE link using 6 proxies with 8 tasks each, all going to a single gateway server. I believe the key to good ingest speeds is having ample bandwidth and compute on the gateway server: it wasn't until we approached 24 virtual CPUs and 40 GB of RAM on the gateway server that our performance monitor stopped complaining about CPU demand exceeding capacity.
So no complaints about ingest speeds. We think an active-active 2x10GbE bonded connection from the gateway server to the StoreOnce 5500 would be even faster (though we're not quite sure how to do this with a virtual machine at the moment).
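For what it's worth, the rough per-task arithmetic behind that sizing (simple math on the figures above; nothing here is Veeam- or StoreOnce-specific):

```python
# Back-of-the-envelope share of a single 10GbE gateway link per concurrent task.
proxies = 6
tasks_per_proxy = 8
concurrent_tasks = proxies * tasks_per_proxy   # 48 concurrent streams

link_gbit_s = 10
link_mb_s = link_gbit_s * 1000 / 8             # ~1250 MB/s ceiling, ignoring protocol overhead

print(f"{concurrent_tasks} tasks -> ~{link_mb_s / concurrent_tasks:.0f} MB/s per task "
      "when the 10GbE link is the limit")
# An active-active 2x10GbE bond would roughly double that ceiling,
# assuming the gateway CPU and the StoreOnce can keep up.
```

That is roughly 26 MB/s per task at full concurrency, which is why the bonded link (and the extra gateway CPU) looks attractive.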
ExaGrid is mentioned in some documents on Veeam's site as a special case because of its landing tier, which works well for the most recent backups. You may want to check the restore speed of data that has aged out of the landing tier, to make sure that also meets your expectations.
Example:
ExaGrid's Landing Space and Veeam: "ExaGrid's unique landing space architecture lends itself to Veeam's VMware and Hyper-V data protection features. The ExaGrid landing space is a high-speed cache that retains the most recent backups in complete form."
-
- VP, Product Management
- Posts: 7081
- Liked: 1511 times
- Joined: May 04, 2011 8:36 am
- Full Name: Andreas Neufert
- Location: Germany
- Contact:
Re: slow restores from HPE StoreOnce 5500
In case you experience slow restores for VMs with multiple VMDKs, there are some enhancements in Backup & Replication v9.5 Update 4 that leverage the StoreOnce read-ahead cache better.
You can get Update 4 RTM at support (fully supported) by creating a new support ticket.