TaylorB
Enthusiast
Posts: 92
Liked: 14 times
Joined: Jan 28, 2011 4:40 pm
Full Name: Taylor B.
Contact:

Performance woes to new HP D2D StoreOnce

Post by TaylorB »

I'm on day 2 of using my new D2D box as a backup target. It's an HP StorageWorks D2D4324 G2 and it does deduplication and compression. I elected to start over with a new backup job rather than move my older jobs to the box and continue on.

I have my job set up with forever incrementals and a synthetic full. Compression is disabled (as recommended) and storage optimization is set to LAN target.

The first full backup took 26 hours for about 4TB of VMs, which resulted in a 2.2TB file. That seems OK to me. However, last night's incremental is still running after 15 hours and looks to take several more. So why would my incrementals take as long as a full? The backup file, with 3/4 of the VMs finished, is only 100GB, which seems about right based on previous runs of my old jobs. However, I am getting the same throughput as with the full - 44MB/s. With my previous setup (writing to a local SAS disk array) I would get about 10x that speed, since only a few percent of the data actually changes. I was averaging 2 minutes per VM and now I am at 20 minutes.

Any ideas on why the incremental is so slow? Full backups are about 2-3x slower than my old jobs on the local array, but that is OK because I have all weekend. However, the incrementals used to take 1 hour and now I can't finish them overnight. The only thing I can point to is that the job seems to do about 10 minutes of reading before backing up each VM. Network throughput peaks at only about 25% of the link.

I might have to back up to local SAS and copy to the array when done, which I don't really want to do, but I need to fit into roughly an 8-hour window here.
Gostev
Chief Product Officer
Posts: 31780
Liked: 7280 times
Joined: Jan 01, 2006 1:01 am
Location: Baar, Switzerland
Contact:

Re: Performance woes to new D2D

Post by Gostev »

TaylorB wrote:I might have to back up to local SAS and copy to the array when done, which I don't really want to do, but I need to fit into roughly an 8-hour window here.
That's really strange. I have not seen any performance issues with D2D and forward incremental backups. It's a good idea to try backing up the same VM to locally attached raw storage. This way you will see whether the issue is the storage (or the connection to it), or something else.
TaylorB
Enthusiast
Posts: 92
Liked: 14 times
Joined: Jan 28, 2011 4:40 pm
Full Name: Taylor B.
Contact:

Re: Performance woes to new D2D

Post by TaylorB »

Veeam is running on the same server and backing up the same VMs as before.

Writing to the D2D box:

Full = 26 hours, resulting in a 2.2TB file
Incremental = 25 hours, resulting in a 500GB file

That doesn't seem possible. Network utilization is under 25%. I do have integrity checks enabled; does that take a long time? I am supposed to run these incrementals daily, so 25 hours is about 3x longer than acceptable. I'm not terribly impressed with what the HP box has been giving me, especially given the six-figure price tag. Luckily I had nothing to do with the project, though! :)

Writing to local SAS takes about 3 hours for the same incremental job.



So since it seems like I might need to go with plan B, what is the best way to write to local disk first and then copy out to the D2D? I can only keep about 10-14 days on my local array, but need 6-8 weeks on the D2D box. Can I just set a 10-day retention period and then copy the files out to the other storage before they fall off the local SAS? The database would have no idea about all those files out on the D2D, though, so would I just need to import the older files if I need to restore older data?

Thanks.
Vitaliy S.
VP, Product Management
Posts: 27364
Liked: 2794 times
Joined: Mar 30, 2009 9:13 am
Full Name: Vitaliy Safarov
Contact:

Re: Performance woes to new D2D

Post by Vitaliy S. »

TaylorB wrote:what is the best way to write to local disk first and then copy out to the D2D? I can only keep about 10-14 days on my local array, but need 6-8 weeks on the D2D box. Can I just set a 10-day retention period and then copy the files out to the other storage before they fall off the local SAS?
You can either use a post-backup job script to offload the backup chain to the dedupe device, or use Windows Task Scheduler to trigger a script at a given time and date to transfer all the files to that device (see the sketch below).
TaylorB wrote:The database would have no idea about all those files out on the D2D, though, so would I just need to import the older files if I need to restore older data?
Yes, that's correct, you would need to import the entire backup chain (VBK + VIB) into the backup console.
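
For illustration, a minimal post-job offload script along the lines Vitaliy describes might look like the sketch below (Python is used just for illustration; the repository path, share path, and file patterns are placeholders, not details from this thread). It could be registered as the job's post-backup activity or run from Windows Task Scheduler.

# Sketch of a post-job offload: copy finished backup files (.vbk fulls,
# .vib incrementals) from the local SAS repository to the D2D CIFS share,
# so the local array only has to hold the short retention chain.
# Both paths below are assumptions for illustration.
import shutil
from pathlib import Path

LOCAL_REPO = Path(r"D:\Backups\DC-Job")        # assumed local SAS repository
D2D_SHARE = Path(r"\\d2d4324\veeam\DC-Job")    # assumed D2D CIFS share folder

def offload_chain():
    D2D_SHARE.mkdir(parents=True, exist_ok=True)
    for pattern in ("*.vbk", "*.vib"):
        for backup in sorted(LOCAL_REPO.glob(pattern)):
            target = D2D_SHARE / backup.name
            # Skip files already offloaded (same name and size on the share).
            if target.exists() and target.stat().st_size == backup.stat().st_size:
                continue
            print(f"Copying {backup.name} ...")
            shutil.copy2(backup, target)

if __name__ == "__main__":
    offload_chain()

Copying only files that are missing (or differ in size) on the share keeps reruns cheap if the script fires again before a previous copy finished, and the offloaded files can later be imported into the console as described above.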
TaylorB
Enthusiast
Posts: 92
Liked: 14 times
Joined: Jan 28, 2011 4:40 pm
Full Name: Taylor B.
Contact:

Re: Performance woes to new D2D

Post by TaylorB »

Thank you for the ideas. I would still like to skip that step if possible.

I have noticed that each backup job does about 10-15 minutes of reads before it starts writing. This appears to be the bottleneck, as the D2D box has painfully slow reads compared to my local SAS array. Is this read process comparing the older backup data with the VM to determine which blocks need to be backed up for the incremental?
Gostev
Chief Product Officer
Posts: 31780
Liked: 7280 times
Joined: Jan 01, 2006 1:01 am
Location: Baar, Switzerland
Contact:

Re: Performance woes to new D2D

Post by Gostev »

No, it reads metadata from the previous backup files (essentially hashes of already backed-up data blocks) for our source-side dedupe. This way, already backed-up blocks do not have to be transferred to the backup target again.

Changed blocks are determined with a VMware API call (assuming that CBT is enabled in the job); see the sketch below.

Interesting that you are having this issue, because we have not seen it on the G2 device we have been testing (HP D2D4106fc G2).
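
To make the read phase described above more concrete, here is a conceptual sketch of source-side dedupe with block hashes. It is an illustration only, not Veeam's actual implementation; the block size, the hash function, and the data structures are assumptions.

# Conceptual sketch: hashes of blocks already stored in the backup chain are
# loaded from the previous backup file's metadata (the lengthy read phase),
# and only changed blocks whose hash is unknown are sent to the target.
import hashlib

BLOCK_SIZE = 512 * 1024   # assumption; the real size depends on the job's storage optimization

def block_hash(block):
    return hashlib.sha1(block).hexdigest()

def incremental_pass(changed_blocks, known_hashes):
    """Yield only the changed blocks not already present in the backup chain.

    changed_blocks: iterable of (offset, data) pairs, e.g. as reported by CBT.
    known_hashes:   set of hashes read from the previous backup's metadata.
    """
    for offset, block in changed_blocks:
        digest = block_hash(block)
        if digest not in known_hashes:       # source-side dedupe check
            known_hashes.add(digest)
            yield offset, block              # only these cross the network

if __name__ == "__main__":
    # Tiny demo with made-up data: one block is already known, one is new.
    known = {block_hash(b"\x00" * BLOCK_SIZE)}
    changes = [(0, b"\x00" * BLOCK_SIZE),
               (BLOCK_SIZE, b"new data".ljust(BLOCK_SIZE, b"\x01"))]
    sent = list(incremental_pass(changes, known))
    print(f"{len(sent)} of {len(changes)} changed blocks actually transferred")

With slow reads from a dedupe appliance, fetching that metadata before each VM can easily dominate a small incremental, which would match the 10-15 minute read phase reported above.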
TaylorB
Enthusiast
Posts: 92
Liked: 14 times
Joined: Jan 28, 2011 4:40 pm
Full Name: Taylor B.
Contact:

Re: Performance woes to new D2D

Post by TaylorB »

Gostev wrote:Interesting that you are having this issue, because we have not seen it on the G2 device we have been testing (HP D2D4106fc G2).
Can you tell me what kind of throughput you are getting? I average about 40-50 MB/s for both full and incremental. It doesn't seem to matter what dedupe and compression settings I use.
Gostev
Chief Product Officer
Posts: 31780
Liked: 7280 times
Joined: Jan 01, 2006 1:01 am
Location: Baar, Switzerland
Contact:

Re: Performance woes to new HP D2D StoreOnce

Post by Gostev »

As far as I remember, it was 50 MB/s for a full backup to the D2D CIFS share, and 55 MB/s to a CIFS share backed by raw disk storage.

Incremental throughput values depend on the amount of changed data and the VM disk size, so they cannot be compared directly, but under normal circumstances they should generally be a few times faster than a full backup.
TaylorB
Enthusiast
Posts: 92
Liked: 14 times
Joined: Jan 28, 2011 4:40 pm
Full Name: Taylor B.
Contact:

Re: Performance woes to new HP D2D StoreOnce

Post by TaylorB »

I switched back to local disk and I am still getting pretty dismal performance compared to before. It just sits and does reads for about 10-15 minutes per VM before it starts writing, whether I am doing incremental or full backups. This adds 15+ hours in total to the backup process for 80 VMs. I've gone over the settings multiple times, and the only difference from before is that I have one big job for the whole data center rather than two separate jobs, one for each of my two clusters.
Gostev
Chief Product Officer
Posts: 31780
Liked: 7280 times
Joined: Jan 01, 2006 1:01 am
Location: Baar, Switzerland
Contact:

Re: Performance woes to new HP D2D StoreOnce

Post by Gostev »

It looks like there was some other change in the environment that you did not notice; otherwise you would obviously be getting the exact same numbers you reported for local disks above...
TaylorB
Enthusiast
Posts: 92
Liked: 14 times
Joined: Jan 28, 2011 4:40 pm
Full Name: Taylor B.
Contact:

Re: Performance woes to new HP D2D StoreOnce

Post by TaylorB »

A reboot of the Veeam server fixed the performance to local SAS, but I still have the same problem on the D2D. So at least I can go back to disk for now, and maybe see if HP has any ideas for me.
mooreka777
Influencer
Posts: 18
Liked: 1 time
Joined: Jan 20, 2011 1:05 pm
Full Name: K Moore
Contact:

Re: Performance woes to new HP D2D StoreOnce

Post by mooreka777 »

Hello there,

I hope this helps out people working with the HP D2D. We are evaluating a couple and think they work well for backups, but please, please test restore speeds. The 4324 is a great appliance, but make sure you test!

D2D Setup
-------------
Can you give me a little background on the D2D setup? Do you have replication turned on? If so, check the in/out traffic on the NIC and see if you are saturating the 1Gb interface. For some reason, I don't think HP's LAG works; I did my best in testing to push over 1Gb and it hovered around 900Mb. I have a 4324 on demo that should be here next week, and I need to see what 10Gb will give me.

Ideas
-------

1 - Sometimes Veeam fails over to Network mode for VM backup. This takes a freakishly long time to back up VMs and isn't the fault of the appliance, so look there first. If you are in fact already using Virtual Appliance mode, move on to item 2.

2 - Check how the backup job is configured. You have to turn off Veeam compression; HP compression and Veeam compression working simultaneously will kill performance.

Make sure you have it configured like this:

Dedup - Check
Compression - None
Storage - Optimized for LAN target

---------------

Please let me know your findings with these changes.


As a side note, I found some odd behavior with the appliance where the CIFS shares just go offline. If I add/remove a CIFS share, they all come back. I sent in the debug info and opened a call with HP, but haven't heard back.

-Kelly
mooreka777
Influencer
Posts: 18
Liked: 1 time
Joined: Jan 20, 2011 1:05 pm
Full Name: K Moore
Contact:

Re: Performance woes to new HP D2D StoreOnce

Post by mooreka777 »

> The first full backup took 26 hours for about 4TB of VMs, which resulted in a 2.2TB file. That seems OK to me. However, last night's incremental is still running after 15 hours and looks to take several more. So why would my incrementals take as long as a full? The backup file, with 3/4 of the VMs finished, is only 100GB, which seems about right based on previous runs of my old jobs. However, I am getting the same throughput as with the full - 44MB/s. With my previous setup (writing to a local SAS disk array) I would get about 10x that speed, since only a few percent of the data actually changes. I was averaging 2 minutes per VM and now I am at 20 minutes.

Do you have all 4TB in one job? If so, break it out into 1TB to 2TB jobs; that is for Veeam optimization. Keep in mind that you can't restore a VM while a backup is running, so if you have one job, you have to wait for it to finish before you can restore anything. Odd behavior, but it is a feature ;)

Also, if you ever try to import an 88-VM backup job, it takes forever and times out. Again, a feature of Veeam. So, 20-30 VMs and under 1TB is my basic criteria. I really wish there were a "queuing" feature so I could have a single job for every VM, since compression is on the dedupe side; then start all of the backups at 6PM and only allow 5 jobs to run at a time via the queue (see the sketch after this post). Remember, tapes were easier: if you had eight tape drives, only eight backups could run at a time. Now, as long as the target is available, backups run, even at the expense of performance.

Best of luck and let me know how it goes.

-Kelly
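
As an aside on the queuing idea above, a minimal concurrency-limited runner might look like the sketch below. The job list and the start_backup stub are hypothetical; a real version would invoke the backup product's own scheduler or command line instead.

# Sketch of the "queue" idea: one job per VM, but only a handful allowed to
# run at once so the target is never hit by all of them simultaneously.
import time
from concurrent.futures import ThreadPoolExecutor

MAX_CONCURRENT = 5                                      # cap on simultaneous jobs
vm_jobs = [f"Backup VM-{n:02d}" for n in range(1, 81)]  # e.g. 80 per-VM jobs

def start_backup(job_name):
    # Placeholder: pretend the job runs for a moment and succeeds.
    time.sleep(0.1)
    return job_name

with ThreadPoolExecutor(max_workers=MAX_CONCURRENT) as pool:
    finished = list(pool.map(start_backup, vm_jobs))    # never more than 5 in flight

print(f"Completed {len(finished)} jobs, at most {MAX_CONCURRENT} at a time")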
mooreka777
Influencer
Posts: 18
Liked: 1 time
Joined: Jan 20, 2011 1:05 pm
Full Name: K Moore
Contact:

Re: Performance woes to new HP D2D StoreOnce

Post by mooreka777 »

I just thought about this a bit more, and it kind of makes sense why the fulls are fine - little compression. When the incrementals kick in, or as more data lands in the appliance, it tries to compress more. Well, if the data is already compressed by Veeam, it has to burn CPU cycles interpreting the data and then try to compress it again. Not so good.
Limey
Technology Partner
Posts: 1
Liked: never
Joined: Aug 23, 2011 3:39 pm
Full Name: Gary Marriner
Contact:

Re: Performance woes to new HP D2D StoreOnce

Post by Limey »

Hi, I work for HP and I would really appreciate details of your system configuration. Maybe you could send me a private message with your email details and we can kick off a dialogue. Thanks.
innerhobbit
Technology Partner
Posts: 1
Liked: never
Joined: Sep 30, 2011 8:18 pm
Full Name: Matt Jacoby
Contact:

Re: Performance woes to new HP D2D StoreOnce

Post by innerhobbit »

Greetings TaylorB. I am the HP DP Solutions Engineer who has been working with Veeam to certify the StoreOnce D2D products with Veeam Backup and Replication V5. As part of our test plan, tests were run on both Forward and Reverse Incremental backup methods. Testing showed that when using the Forward Incremental backup strategy, the first incremental run after the first full backup (with changed data) completed 80% faster than the full backup. This was expected, since this is the more "traditional" approach to backup. However, when the same test was run with Reverse Incremental as the backup method, the first incremental after the first full took just as long to complete as the first full backup! This is expected behavior as well, since the incremental backup is being rebuilt into a full. This is why HP and Veeam recommend using the traditional backup approach (Forward Incremental). However, it is not clear from your posts whether you used Forward or Reverse Incremental as your backup strategy; I can only assume you chose the latter.

As this was recently brought to my attention (a month late), HP is willing to work with you to understand the issue with your performance. So if you are still monitoring this thread, you can contact me and I will work with you directly to get this resolved. Just send me a private message.

Thanks.
TaylorB
Enthusiast
Posts: 92
Liked: 14 times
Joined: Jan 28, 2011 4:40 pm
Full Name: Taylor B.
Contact:

Re: Performance woes to new HP D2D StoreOnce

Post by TaylorB »

Sorry, I haven't checked in for a few months. We have worked with HP support a few times and it hasn't done much good. I've given up on using this HP D2D box for Veeam. Even if I use workarounds to get the data onto the box (which is still slow and painful), I still don't get more than 2.5:1 dedupe. I have to turn off Veeam compression to use the box, so that just gets me back to where I would have been on regular disk. HP engineers just say it is part of the "D2D learning curve", but all I have learned is that certain data sets (like huge Veeam backup files) are not well suited to these kinds of devices. Veeam is already capable of deduping and compressing, so the HP StoreOnce has nothing left to offer other than a slow and expensive place to dump files.

For regular file-level backups of small files on non-VM machines, it is working well and can replace a tape library. We will use it for that, but for Veeam I'd recommend just getting a cheap, regular iSCSI NAS box and calling it a day.
Gostev
Chief Product Officer
Posts: 31780
Liked: 7280 times
Joined: Jan 01, 2006 1:01 am
Location: Baar, Switzerland
Contact:

Re: Performance woes to new HP D2D StoreOnce

Post by Gostev »

Hi Taylor, I also have news since then. If you want to get a good dedupe ratio backing up to HP StoreOnce, you need to do this:
1. Have all jobs back up to the same share. Apparently, StoreOnce is not yet capable of deduping data across different shares.
2. Disable compression in the Veeam job settings (this is not really HP-specific; it is recommended for any deduplicating target, since compression defeats most dedupe algorithms). While disabling compression will usually drop Veeam backup job performance significantly (there is more data to move around), the dedupe ratio should get a significant boost as well.
Thanks!
TaylorB
Enthusiast
Posts: 92
Liked: 14 times
Joined: Jan 28, 2011 4:40 pm
Full Name: Taylor B.
Contact:

Re: Performance woes to new HP D2D StoreOnce

Post by TaylorB »

Gostev wrote:Hi Taylor, I also have news since then. If you want to get a good dedupe ratio backing up to HP StoreOnce, you need to do this:
1. Have all jobs back up to the same share. Apparently, StoreOnce is not yet capable of deduping data across different shares.
2. Disable compression in the Veeam job settings (this is not really HP-specific; it is recommended for any deduplicating target, since compression defeats most dedupe algorithms). While disabling compression will usually drop Veeam backup job performance significantly (there is more data to move around), the dedupe ratio should get a significant boost as well.
Thanks!
I've tried all of that, based on the whitepaper from HP.

Removing compression doubles the dedupe ratio, but it also more than doubles the initial file size while significantly decreasing performance. It's a huge net loss, honestly.

It's already not a speedy solution, and if you add in any traditional backups via the Virtual Tape Library component, that cuts the speed left for Veeam in half.


To be honest, I just don't see the point in wasting any more time on it. If I had any say in it, we'd return it, and I'd get another set of direct-attached storage and figure out how to replicate it to the DR site another way.
matthewrawles
Lurker
Posts: 1
Liked: never
Joined: Jan 09, 2012 7:17 pm
Full Name: Matthew Rawles
Contact:

Re: Performance woes to new HP D2D StoreOnce

Post by matthewrawles »

innerhobbit wrote:Greetings TaylorB. I am the HP DP Solutions Engineer who has been working with Veeam to certify the StoreOnce D2D products with Veeam Backup and Replication V5. As part of our test plan, tests were run on both Forward and Reverse Incremental backup methods. Testing showed that when using the Forward Incremental backup strategy, the first incremental run after the first full backup (with changed data) completed 80% faster than the full backup. This was expected, since this is the more "traditional" approach to backup. However, when the same test was run with Reverse Incremental as the backup method, the first incremental after the first full took just as long to complete as the first full backup! This is expected behavior as well, since the incremental backup is being rebuilt into a full. This is why HP and Veeam recommend using the traditional backup approach (Forward Incremental). However, it is not clear from your posts whether you used Forward or Reverse Incremental as your backup strategy; I can only assume you chose the latter.

As this was recently brought to my attention (a month late), HP is willing to work with you to understand the issue with your performance. So if you are still monitoring this thread, you can contact me and I will work with you directly to get this resolved. Just send me a private message.

Thanks.
Hi, we have just installed the HP 4324 D2D for our Veeam backups, with 10GbE between the Veeam backup server (an HP DL380 G7), our new ProCurve 5412xl, and the D2D.

We are seeing very low write and even lower read rates from the D2D.

An example: extracting a 1TB backup file for a Windows VM back to the Veeam server's local disk takes over 6 hours (at about 40MB/s in the Windows file copy dialog).

Opening a Veeam backup directly off the D2D CIFS share is impossibly slow: 20+ minutes to open the file and view the index, and then pulling a folder from it runs at a horrible 700KB/s. The same extraction from local SAS storage runs at 14MB/s.

So far I'm disappointed by the performance. For a device with a 96k list price, I may have been better off with a large USB drive (USB 3 can give me 600MB/s!).

I don't suppose innerhobbit is listening and offering any help? If I could work out how to message him, I would!

Regards

Matthew
foggy
Veeam Software
Posts: 21137
Liked: 2141 times
Joined: Jul 11, 2011 10:22 am
Full Name: Alexander Fogelson
Contact:

Re: Performance woes to new HP D2D StoreOnce

Post by foggy »

Matthew, you can try to PM him; I guess he will be notified by email.
deevie
Influencer
Posts: 11
Liked: 1 time
Joined: Jan 22, 2012 8:58 pm
Contact:

Re: Performance woes to new HP D2D StoreOnce

Post by deevie »

Do you use flow control on the network links between the backup server and the D2D?
tmagnussen
Lurker
Posts: 1
Liked: never
Joined: Mar 15, 2012 1:33 pm
Full Name: Thomas Magnussen
Contact:

Re: Performance woes to new HP D2D StoreOnce

Post by tmagnussen »

We have an HP StorageWorks D2D4106i G2 and it is also VERY slow.
It's almost impossible to run a single-item recovery off this crappy product.

When we run backups to our FreeBSD 9 NAS with ZFS, it's slow as well, but we can at least restore from it.

That box also has dedupe and is 2x faster than this VERY expensive D2D product...

I wonder when HP will start thinking and doing something about this product.

And then you have the blackout windows that have to occur while a backup is running, which with a slow D2D can never happen.

Where I used to work we had Data Domain products. They had MUCH better performance, plus a pre-staging area, so you did not suffer from speed and blackout-window problems.

Bye bye D2D!
MagnusKlaren
Lurker
Posts: 2
Liked: never
Joined: Aug 07, 2012 5:17 am
Full Name: Magnus Klarén

Re: Performance woes to new HP D2D StoreOnce

Post by MagnusKlaren »

Hi all!

I'm a bit curious how other people are running their Veeam/D2D setups.


Veeam: 6.0.0.181 on a physical HP box
Job configuration: approx. 20 jobs at the datastore level
Backup target: HP D2D 4106i with a single CIFS share


When we started adding jobs to Veeam, we also started to get "access denied" errors on the backup target.
After some digging and great help from HP support, we pinned it down to the maximum number of open files per share on a 4106, which is 64 (128 in total per box).

The limits on the Veeam proxy and repository were adjusted to handle this configuration, but...
Even with those limits in effect, Veeam will still authenticate against the backup target and use up sessions.

The workaround was to spread the jobs across the backup window to avoid concurrent queued jobs (see the sketch after this post).
In a perfect world (not IT! :-)) this would work, but sometimes a job runs a bit longer, or you have a big environment, and then the error appears.

If you use a "general purpose" target, I'm sure the pre-authentication to the target is a great way to speed up the process a bit.
But with a "specialized" target like a D2D box, Veeam should handle the queues internally to avoid draining sessions on the target.


Is anyone else running a similar setup with the same issues, or with no issues?

//Magnus Klarén
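
To illustrate the workaround of spreading jobs across the backup window, here is a small sketch that staggers start times evenly so fewer jobs overlap on the target. The window boundaries and job names are assumptions for illustration only.

# Sketch: spread ~20 job start times evenly across an overnight backup window
# so that far fewer sessions hit the target at the same time.
from datetime import datetime

WINDOW_START = datetime(2012, 8, 7, 18, 0)            # assumed 6 PM window start
WINDOW_END = datetime(2012, 8, 8, 6, 0)               # assumed 6 AM window end
jobs = [f"Datastore-{n:02d}" for n in range(1, 21)]   # approx. 20 jobs, as in the post

step = (WINDOW_END - WINDOW_START) / len(jobs)        # gap between consecutive starts
for i, job in enumerate(jobs):
    start_at = WINDOW_START + i * step
    print(f"{job}: schedule start at {start_at:%H:%M}")

This only reduces the chance of overlap; as noted above, a long-running job can still collide with the next one, so it is a mitigation rather than a fix for the per-share session limit.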
Gostev
Chief Product Officer
Posts: 31780
Liked: 7280 times
Joined: Jan 01, 2006 1:01 am
Location: Baar, Switzerland
Contact:

Re: Performance woes to new HP D2D StoreOnce

Post by Gostev »

Hi, did you consider setting the max concurrent tasks value in the backup repository settings?
MagnusKlaren
Lurker
Posts: 2
Liked: never
Joined: Aug 07, 2012 5:17 am
Full Name: Magnus Klarén

Re: Performance woes to new HP D2D StoreOnce

Post by MagnusKlaren »

Yes, it still authenticates against the target.

//Magnus
Gostev
Chief Product Officer
Posts: 31780
Liked: 7280 times
Joined: Jan 01, 2006 1:01 am
Location: Baar, Switzerland
Contact:

Re: Performance woes to new HP D2D StoreOnce

Post by Gostev »

Made me smile :) I did not mean this would change that part, of course... but it would definitely prevent the backup repository from being overwhelmed with tasks in case some of them take longer than expected.