chg_it
Influencer
Posts: 21
Liked: never
Joined: Jun 08, 2010 2:07 pm
Full Name: IT Department
Contact:

Issues when backing up to an HP D2D4312

Post by chg_it »

Hi all,

Our backups recently started failing and the Veeam logs indicated our HP D2D was full. We hadn't received any alerts from HP, and the D2D web interface was showing conflicting information, so we logged a support call.

HP told us the interface issue is a bug they are working on and that our D2D is actually full. We took emergency measures and removed non-essential backups to get our essential backups running; this, however, didn't clear any space. The ticket was escalated and we're waiting for a remote session to provide further assistance. However, HP have said there is a known issue with Veeam and certain types of backup, so I thought I'd share their response here:

-----------------------------------------------------------------------------------------

This is regarding the sub-case # 4706019140-472, where we have an issue with the disk space on the D2D unit.

I have checked the support ticket and could see that there is a NAS share configured on the unit and the backup application in use is Veeam.

The disk usage is being reported as full, though there is enough space on the other mount points. It appears to be an issue with the data not getting uniformly distributed across all the mount points. This has been noticed with backups to NAS shares using Veeam backup software.

From the logs, the mount points on the unit are as follows:

/dev/cciss/c0d1p1 4372082392 1932377976 2439704416 45% /tmp/dsm/4x8/a
/dev/cciss/c2d1p1 4372082392 2248683712 2123398680 52% /tmp/dsm/4x8/b
/dev/cciss/c2d1p2 2185975660 962506756 1223468904 45% /tmp/dsm/4x8/c
/dev/cciss/c2d1p4 728571172 329538936 399032236 46% /tmp/dsm/4x8/d
/dev/cciss/c2d1p3 1457273416 641077064 816196352 44% /tmp/dsm/4x8/e
/dev/cciss/c0d1p2 2185975660 1123881524 1062094136 52% /tmp/dsm/4x8/f
/dev/cciss/c0d1p4 728571172 724156932 4414240 100% /tmp/dsm/4x8/g
/dev/cciss/c0d1p3 1457273416 639920368 817353048 44% /tmp/dsm/4x8/h

Normally, we would expect the data to be uniformly written across all the mount points. The D2D system displays the space utilization based on the highest used % across all the mount points, which results in the erroneous reporting of the disk space (in our case it shows 100% used on the unit because the mount point ‘/tmp/dsm/4x8/g’ is at 100%).
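
To make that reporting rule concrete, here is a small Python sketch (purely illustrative, not D2D code) applied to the df figures listed above:

# (used_kb, size_kb) per mount point, taken from the df output above
mounts = {
    "/tmp/dsm/4x8/a": (1932377976, 4372082392),
    "/tmp/dsm/4x8/b": (2248683712, 4372082392),
    "/tmp/dsm/4x8/c": (962506756, 2185975660),
    "/tmp/dsm/4x8/d": (329538936, 728571172),
    "/tmp/dsm/4x8/e": (641077064, 1457273416),
    "/tmp/dsm/4x8/f": (1123881524, 2185975660),
    "/tmp/dsm/4x8/g": (724156932, 728571172),
    "/tmp/dsm/4x8/h": (639920368, 1457273416),
}

# Space actually consumed across the whole unit: roughly 50%
total_used = sum(u for u, s in mounts.values())
total_size = sum(s for u, s in mounts.values())
print(f"aggregate usage: {100 * total_used / total_size:.0f}%")

# What the unit reports: the single worst mount point, ~100% (/tmp/dsm/4x8/g)
worst = max(mounts, key=lambda m: mounts[m][0] / mounts[m][1])
print(f"reported usage:  {100 * mounts[worst][0] / mounts[worst][1]:.0f}%  ({worst})")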

This behavior is seen if there are a lot of Write In Place operations on a NAS Share, caused by the handling of files by the backup application. From the dfs0.metrics file, we could see that there were several WIP writes on this share:

Share Metrics
=============
Current.NoOfFiles:108
Current.NoOfDirectories:14

Current.WriteWIPRequests:20729404:6567179415

The above metric shows that, of the total number of writes (6567179415), 20729404 of them resulted in Write In Place requests.
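
As a quick sanity check on that metric (simple arithmetic, assuming the <WIP requests>:<total writes> ordering described above):

wip_writes, total_writes = 20729404, 6567179415
print(f"{wip_writes / total_writes:.2%} of writes were write-in-place")   # about 0.32%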

The data in the WIP region does not get deduplicated and hence, would consume more space on the disk and would have an impact on the deduplication ratio on the store. It also causes the mount points to be imbalanced, as we could see above.

A WIP operation normally happens when the backup application modifies data within the deduplicated data region of a file. It is expected that the backup application performs stream backups and either creates a new file or appends to the end of an existing file, rather than writing into the middle of an existing file. Some backup applications provide the ability to perform a Synthetic Full backup, which does not work well with the dedupe system because it produces a lot of Write In Place operations.
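
To make the distinction concrete, here is a minimal, purely illustrative Python sketch of the two access patterns (stream/append versus write in place); the file name and block sizes are made up:

import os, tempfile

# A throwaway file standing in for a backup file on the NAS share
path = os.path.join(tempfile.mkdtemp(), "backup_chain.vbk")
with open(path, "wb") as f:
    f.write(b"A" * 4096 * 4)      # pretend this is the initial full backup

# Stream/append: new data only ever lands at the end of the file,
# so regions that are already deduplicated are never touched again
with open(path, "ab") as f:
    f.write(b"B" * 4096)          # e.g. an increment appended to the chain

# Write in place: seek back into the middle of the file and overwrite
# blocks that were already written; this is what ends up in the WIP region
with open(path, "r+b") as f:
    f.seek(4096)
    f.write(b"C" * 4096)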

We have seen issues with the Veeam backup application creating a lot of WIP data, which causes a low dedupe ratio, decreased performance and extra space consumption (the WIP data is not deduplicated!). The ‘Synthetic Full’ or ‘Reversed Incremental’ backup methods in Veeam would definitely cause a lot of WIP data, as the files are re-opened and modified by the backup application.

So it is important to check the backup application settings to reduce the number of WIP requests and avoid a similar issue in the future.

A few points that might be helpful from a Veeam perspective are:
1. Disable ‘Synthetic Full’ or ‘Reversed Incremental’ backup methods, if in use.
2. Disable Veeam compression.
3. Enable Veeam Inline deduplication.

Investigations are in progress to identify the issue on the G2 units, and we are waiting for a fix to become available that would help reduce the WIP files created when using Veeam. However, we would still have to ensure that the above points are checked and the backup settings modified.

-----------------------------------------------------------------------------------------

I'll feed back more information, but I've expressed my dissatisfaction, as we specifically chose this appliance as our core Veeam backup destination and some of the changes they have suggested seem to go against Veeam's recommendations for backup configuration. I spent a lot of time on the phone with Veeam support when these D2D appliances went in to ensure we were using the best setup possible, so you can imagine we're quite frustrated!


tsightler
VP, Product Management
Posts: 6009
Liked: 2843 times
Joined: Jun 05, 2009 12:57 pm
Full Name: Tom Sightler
Contact:

Re: Issues when backing up to an HP D2D4312

Post by tsightler »

I'm curious what recommendations HP has made that you feel go against Veeam's recommendations for backup configuration. Certainly the suggestions HP lists in the email above (disable synthetic full/reverse incremental, disable Veeam compression, and enable Veeam dedupe) are 100% correct for writing to a dedupe appliance.
chg_it
Influencer
Posts: 21
Liked: never
Joined: Jun 08, 2010 2:07 pm
Full Name: IT Department
Contact:

Re: Issues when backing up to an HP D2D4312

Post by chg_it »

When we moved to a D2D solution we found the different backup methods particularly confusing, especially around when to schedule full backups etc. We had a remote support session to look at our jobs and, as per the recommendations, unticked inline deduplication for all jobs and set the compression level to dedupe-friendly. We have a few reverse incremental jobs (small jobs, so not a major issue).
tsightler
VP, Product Management
Posts: 6009
Liked: 2843 times
Joined: Jun 05, 2009 12:57 pm
Full Name: Tom Sightler
Contact:

Re: Issues when backing up to an HP D2D4312

Post by tsightler »

Whether Veeam inline deduplication is enabled or disabled isn't going to make much difference, but enabling it will save a few writes to the storage, since it eliminates Veeam writing duplicate blocks to disk if an identical block already exists in the repository. Dedupe-friendly compression will save some time in the Veeam backups since Veeam will have to transfer slightly less data, but the cost is a reduced dedupe ratio on the appliance. Reverse incremental backups will create significantly more WIP than forward incremental jobs, which, as noted above, is a major issue for HP D2D devices.
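
To illustrate the inline deduplication point, here is a minimal, generic sketch of hash-based inline dedup (not Veeam's actual implementation): blocks whose content has already been stored are skipped instead of being written again.

import hashlib

def write_with_inline_dedup(blocks, repository):
    # repository is a plain dict standing in for the backup file on the target;
    # identical blocks are detected by content hash and written only once
    written = skipped = 0
    for block in blocks:
        digest = hashlib.sha256(block).hexdigest()
        if digest in repository:
            skipped += 1                  # duplicate block: no write issued
        else:
            repository[digest] = block    # new block: one write to the target
            written += 1
    return written, skipped

repo = {}
data = [b"aaaa", b"bbbb", b"aaaa", b"cccc", b"bbbb"]
print(write_with_inline_dedup(data, repo))   # (3, 2): only 3 of 5 blocks are written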

In other words, all of HP's recommendations align with "best practice" settings for backing up to a dedupe appliance, where the goal is to get the maximum amount of data stored on the appliance with the most efficient use of the storage space possible.
chg_it
Influencer
Posts: 21
Liked: never
Joined: Jun 08, 2010 2:07 pm
Full Name: IT Department
Contact:

Re: Issues when backing up to an HP D2D4312

Post by chg_it »

Thanks for your comments. I'll look to make the changes as suggested and monitor disk usage from there.

With the volume of data we're backing up, we found reverse incremental on our smaller jobs made a huge difference in terms of backup window, but we will look to revisit.

Thanks all.
chg_it
Influencer
Posts: 21
Liked: never
Joined: Jun 08, 2010 2:07 pm
Full Name: IT Department
Contact:

Re: Issues when backing up to an HP D2D4312

Post by chg_it »

Could anyone also suggest a setting for storage optimisations? Thanks
foggy
Veeam Software
Posts: 21069
Liked: 2115 times
Joined: Jul 11, 2011 10:22 am
Full Name: Alexander Fogelson
Contact:

Re: Issues when backing up to an HP D2D4312

Post by foggy »

Here is another existing topic on recommended settings for dedupe appliances.

Remember that changes in compression and deduplication settings are not applied to the previous backup chains; they take effect only when a new full backup is created.
Andreas Neufert
VP, Product Management
Posts: 6707
Liked: 1401 times
Joined: May 04, 2011 8:36 am
Full Name: Andreas Neufert
Location: Germany
Contact:

Re: Issues when backing up to an HP D2D4312

Post by Andreas Neufert »

Please check whether you have the current firmware on the D2D as well.
chg_it
Influencer
Posts: 21
Liked: never
Joined: Jun 08, 2010 2:07 pm
Full Name: IT Department
Contact:

Re: Issues when backing up to an HP D2D4312

Post by chg_it »

Hi all, thanks for the responses.

We are now running the latest release of the D2D firmware (3.6.2), which HP have told me reduces the risk of WIP files being created in certain scenarios when Veeam backups are written to it. Interestingly, HP said the white paper that currently exists for HP D2D/Veeam usage is significantly out of date and needs to be updated to reflect correct usage for v6.5; for example, it currently recommends the use of Synthetic Fulls.

Thanks
Andreas Neufert
VP, Product Management
Posts: 6707
Liked: 1401 times
Joined: May 04, 2011 8:36 am
Full Name: Andreas Neufert
Location: Germany
Contact:

Re: Issues when backing up to an HP D2D4312

Post by Andreas Neufert »

Yes, 3.6.2 will repair what they broke in the previous firmware releases.

Active Full vs. Synthetic Full is a hard question.

Active Full => it is a pain to read all blocks from production storage again, just to reduce time and load on the backup target storage.
If you look at the snapshot commit during an active full ... whereas a synthetic full is a downstream process => synthetic fulls seem to be my favourite here.
Veeam also performs random writes during an active full (metadata updates), so it isn't the purely sequential write you might expect.
My idea for v6.5 would be: Synthetic Full, compression: dedupe-friendly, dedupe: off ... What do you think?
chg_it
Influencer
Posts: 21
Liked: never
Joined: Jun 08, 2010 2:07 pm
Full Name: IT Department
Contact:

Re: Issues when backing up to an HP D2D4312

Post by chg_it »

Hi Andreas,

I'm pretty much in agreement, but the HP engineer said that synthetic fulls should still not be used, even with the most recent firmware.

He said the original issue was that once you've written your full backup onto the D2D and subsequently perform a synthetic full, which effectively injects changes into the existing files on the storage device, those changes won't be de-duped and will exist as WIP files.

He said it wasn't such an issue with the latest firmware but they had still seen it occasionally.


Thanks
tsightler
VP, Product Management
Posts: 6009
Liked: 2843 times
Joined: Jun 05, 2009 12:57 pm
Full Name: Tom Sightler
Contact:

Re: Issues when backing up to an HP D2D4312

Post by tsightler » 1 person likes this post

The primary reason for the recommendation to use Active fulls on any dedupe appliance is performance. Synthetic fulls require random R/W I/O on the target, which is far more stressful on the target storage than an Active full. While an Active full may very well perform some random I/O, it's pretty much exclusively write I/O on the target. Most dedupe appliances are already I/O starved, especially on reads, because rehydration requires each read block to be broken down into many random read I/Os to grab all of the small deduplicated segments. They try to compensate for this during restores by performing extensive read-ahead. However, in the Veeam case, when running a Synthetic full we are reading blocks from all the various files and then immediately writing those files back to the same disk, which can quickly saturate the backend storage.
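
As a back-of-the-envelope illustration (a rough sketch under the simplifying assumption that a synthetic full reads roughly one full backup's worth of blocks back from the existing chain and writes a new full of the same size, while an active full only writes):

def target_io_estimate(full_size_tb):
    # Very rough target-side I/O for producing one new full backup file
    return {
        "active full":    {"read_tb": 0.0,          "write_tb": full_size_tb},
        "synthetic full": {"read_tb": full_size_tb, "write_tb": full_size_tb},
    }

for method, io in target_io_estimate(10.0).items():
    print(f"{method}: ~{io['read_tb'] + io['write_tb']:.0f} TB of target I/O "
          f"({io['read_tb']:.0f} TB read / {io['write_tb']:.0f} TB written)")

On a 10 TB full that is roughly 10 TB of mostly sequential writes for the active full versus about 20 TB of mixed random read/write for the synthetic full, with every read additionally amplified by rehydration.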

However, if you find that the performance of synthetic fulls is acceptable in your case, then I'm not sure there's any real reason not to use them. The best practice recommendation not to use them comes from the fact that, in many, many cases, a synthetic full on a dedupe appliance will take more than 24 hours, in some cases much more, which is just not acceptable in most environments. That being said, I do sometimes run across customers that are happily running synthetic fulls on their dedupe appliance. Usually these are smaller customers with at most a dozen TBs of data or so.
chg_it
Influencer
Posts: 21
Liked: never
Joined: Jun 08, 2010 2:07 pm
Full Name: IT Department
Contact:

Re: Issues when backing up to an HP D2D4312

Post by chg_it »

Hi Tom, thanks for explaining in further detail, really helpful. For us the time window wasn't an issue; as HP explained it, the issue was injecting changed data into the original de-duped backup. This data was not being de-duped and had to be stored as WIP files separate from the de-duped data, which led to huge wastage on the storage volume.

I've asked for some additional info now that we're on the G3 platform.
emachabert
Veeam Vanguard
Posts: 388
Liked: 168 times
Joined: Nov 17, 2010 11:42 am
Full Name: Eric Machabert
Location: France
Contact:

Re: Issues when backing up to an HP D2D4312

Post by emachabert »

Did you check the white paper I wrote based on a real-world case? Perhaps it could help.

http://go.veeam.com/wp-veeam-and-hp-eri ... t-2013-en/
Veeamizing your IT since 2009/ Veeam Vanguard 2015 - 2023
chg_it
Influencer
Posts: 21
Liked: never
Joined: Jun 08, 2010 2:07 pm
Full Name: IT Department
Contact:

Re: Issues when backing up to an HP D2D4312

Post by chg_it »

Thanks Eric, I've downloaded and will have a read. We're currently up and running and monitoring closely!
