Comprehensive data protection for all workloads
Tobias_Elfstrom
Enthusiast
Posts: 84
Liked: 8 times
Joined: Jul 04, 2012 6:32 am
Full Name: Tobias Elfstrom
Contact:

Writing to CIFS share, how are the backup files written?

Post by Tobias_Elfstrom »

Hi.

We are trying to use an HP StoreOnce B6200 as a CIFS NAS repository for Veeam B&R 6.5, with mixed results.

How are the backup files being written to the repository?
When writing to a CIFS share will there be a lot of write in place (WIP) done?

A bit of context:
The setup is quite simple. We have some 140 jobs pulling data from 30 or so ESX hosts via 10 proxy nodes, with about 80 TB of data to back up.

When trying to use the HP B6200 we did the following:
Job setup: no inline dedupe, no compression, full once a week, inc in between.
Repository: 10 simultaneous writes, decompress and align data checked.

Connection to the HP B6200 is 4x10 Gbps Ethernet.

(Running this to "local" FC disk with max compression, inline dedupe, etc. works very well.)

BR Tobias
tsightler
VP, Product Management
Posts: 6009
Liked: 2843 times
Joined: Jun 05, 2009 12:57 pm
Full Name: Tom Sightler
Contact:

Re: Writing to CIFS share, how are the backup files written?

Post by tsightler »

Since you already have compression disabled on the jobs, you really don't need the decompress options on the repository, or the align data option (HP StoreOnce is variable block), but I wouldn't expect either of those to make much difference overall.

Using standard backup/incremental, it really isn't much different than just copying a file via CIFS; there are some forced metadata flushes to the file, but those are still just writes. Perhaps you can describe in more detail what "mixed results" actually means.
Tobias_Elfstrom
Enthusiast
Posts: 84
Liked: 8 times
Joined: Jul 04, 2012 6:32 am
Full Name: Tobias Elfstrom
Contact:

Re: Writing to CIFS share, how are the backup files written?

Post by Tobias_Elfstrom »

tsightler wrote:Since you already have compression disabled on the jobs, you really don't need the decompress options on the repository, or the align data option (HP StoreOnce is variable block), but I wouldn't expect either of those to make much difference overall.
True. The options are redundant in this particular case.
tsightler wrote: Using standard backup/incremental it really isn't much different than just copying a file via CIFS, there are some forced metadata flushes to the file, but they are still just writes. Perhaps you can describe in more detail what "mixed results" actually means.
I will try to expand a bit; file systems are not my area of expertise, so I'm doing a bit of guesswork here.

What I am asking is how the backup files are actually written: how are the files allocated, and how do the writes of data occur within the backup file?

I was under the assumption that it was supposed to be, as you say, "just as copying a file to the CIFS share". However, HP claims that in our case Veeam does a lot of write-in-place (WIP) within the files, and that this will not play nicely with almost any real-time dedupe appliance.

In our tests this manifests as a massive amount of overhead data within the StoreOnce file system. For example, say we back up about 10TB of uncompressed, non-deduped Veeam data to the share: the StoreOnce would claim that the "Size on Disk" is some 1TB, but looking at the file system it was clear it had eaten away some 5TB. One explanation for this, HP claims, could be excessive WIP writes. Not good by any means.

The vast majority of file systems handle WIP just fine: an application wants to update a couple of blocks in a file? Great, just replace them. There are also, for example, "write in free space" (WIFS) file systems, where the application's write goes to a free block and the old block is typically marked as free by a cleaning job later.
Veeam per se would not know what type of file system it is writing to; however, if you write "1001001000110" to a file, then change it to "1001000000111", then to "1101000000101", and so on, write-in-place happens from a file point of view. If this is the way our current Veeam setup writes the backup files to the CIFS share, we would get a lot of WIP writes. I would, for example, assume that synthetic fulls produce this behaviour, but not normal fulls with incrementals.

If, on the other hand, you just write a bunch of data into a file sequentially (append, if you like), then no WIP occurs, and that is just like copying a file from one place to another.
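The distinction can be sketched at the file-API level. This is a toy illustration of the two write patterns being discussed, not Veeam's actual I/O; block sizes and contents are made up:

```python
import io

def append_only(blocks):
    """Sequential pattern: every write lands at the end of the file,
    just like copying a file to the share."""
    f = io.BytesIO()
    for block in blocks:
        f.write(block)          # the write offset only ever moves forward
    return f.getvalue()

def write_in_place(data, offset, new_block):
    """WIP pattern: seek back into already-written data and overwrite it,
    so the same file offsets are written more than once."""
    f = io.BytesIO(data)
    f.seek(offset)              # revisit an existing region of the file
    f.write(new_block)          # replace old blocks in place
    return f.getvalue()

# Write blocks A, B, C sequentially, then replace B in place:
full = append_only([b"A" * 4, b"B" * 4, b"C" * 4])
updated = write_in_place(full, 4, b"X" * 4)   # b"AAAAXXXXCCCC"
```

An inline dedupe appliance sees the first pattern as one clean stream; the second forces it to reconcile two versions of the same file region.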

Does that make any sense?
Anyway, now I find myself pondering how the backup files from Veeam are actually written to a CIFS share.
Yuki
Veeam ProPartner
Posts: 252
Liked: 26 times
Joined: Apr 05, 2011 11:44 pm
Contact:

Re: Writing to CIFS share, how are the backup files written?

Post by Yuki »

Not 100% clear on how the HP system you have handles this, but from what I understand about Veeam: in the case of forward incremental and full backups, the system just creates new files and sends them to the target. It's up to the storage to decide how and where to place them on disk. So, depending on how your StoreOnce works, it may elect to put them in actual free space or overwrite "dirty" free space, but it should not be updating any files with new blocks (unless it does deduplication and no whole files exist anyway).

With reversed incremental, on the other hand, the full backup file is updated with new blocks (which is why it always grows and never shrinks).
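A rough sketch of the difference between the two modes, using hypothetical file names and a dict standing in for blocks on disk (this is a mental model, not Veeam's actual on-disk format):

```python
def forward_incremental(runs):
    """Each run writes a brand-new file (a .vbk full, then .vib increments);
    previously written files are never reopened."""
    files = {}
    for i, changed_blocks in enumerate(runs):
        name = "backup.vbk" if i == 0 else f"backup{i}.vib"
        files[name] = dict(changed_blocks)   # written once, sequentially
    return files

def reversed_incremental(full, changed_blocks):
    """Changed blocks are injected into the existing full file,
    so the .vbk is updated in place and only ever grows."""
    full.update(changed_blocks)              # overwrites inside the full file
    return full

# Two forward-incremental runs produce two independent files:
files = forward_incremental([{0: "a", 1: "b"}, {1: "b2"}])

# A reversed-incremental run rewrites blocks inside the existing full:
full = reversed_incremental({0: "a", 1: "b"}, {1: "b2", 2: "c"})
```

From the storage target's point of view, the first mode is all fresh sequential files, while the second is exactly the kind of in-place update that troubles inline dedupe.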
Tobias_Elfstrom
Enthusiast
Posts: 84
Liked: 8 times
Joined: Jul 04, 2012 6:32 am
Full Name: Tobias Elfstrom
Contact:

Re: Writing to CIFS share, how are the backup files written?

Post by Tobias_Elfstrom »

Yuki: I would not argue with you on that; that is the way I see it too.

I have a case open on this now, #00198223, and so far I have gotten the input that Veeam uses the Windows engine to talk to, read from, and write to a CIFS share, but the question is now with the R&D team.

BR Tobias
Tobias_Elfstrom
Enthusiast
Posts: 84
Liked: 8 times
Joined: Jul 04, 2012 6:32 am
Full Name: Tobias Elfstrom
Contact:

Re: Writing to CIFS share, how are the backup files written?

Post by Tobias_Elfstrom »

Got some input from Rustam at Veeam Support today, albeit at a high level.
So, here is a brief explanation of our vbk/vib storage file:
It has a header, metadata blocks, and data blocks. The first metadata block is usually allocated at the beginning of the file (position in file). Then, as the backup proceeds, we update metadata blocks (WIP) and write data blocks continuously to the file. When a metadata block is full, we allocate another metadata block, then continue writing data and updating that metadata block (WIP), and we go on and on, writing data to the file and allocating metadata.
So this is a short description of our backup structure. Our storage format has proven to be very reliable and we can't change it.
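The quoted description can be modelled as a toy simulation, counting how many writes are sequential appends versus in-place metadata updates. This is my own sketch based only on the description above; `ENTRIES_PER_META` is an invented number, not Veeam's real geometry:

```python
ENTRIES_PER_META = 4   # hypothetical: data blocks tracked per metadata block

def simulate_backup(n_data_blocks):
    """Model the described pattern: append data blocks continuously,
    re-writing the current metadata block in place after each one, and
    allocate a fresh metadata block whenever the current one fills.
    Returns (sequential_writes, in_place_writes)."""
    seq = 1                         # header, written once at the start
    seq += 1                        # first metadata block, allocated up front
    wip = 0
    filled = 0
    for _ in range(n_data_blocks):
        seq += 1                    # data block appended at end of file
        wip += 1                    # current metadata block updated in place
        filled += 1
        if filled == ENTRIES_PER_META:
            seq += 1                # new metadata block appended
            filled = 0
    return seq, wip
```

Even in this model the in-place writes are a minority of the total operations, and they always target the small metadata regions rather than the bulk data, which is consistent with metadata being a small fraction of what goes to disk.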
Now this is quite interesting, because it suggests that, from a file-level point of view, a lot of WIP is done when creating a backup file. This would make almost any file system that does not handle WIP well a poor choice for a Veeam repository, due to the way a Veeam backup file is created.

Also suggested by Veeam Support: creating the backup files on a file system that handles this well and then moving them off to dedupe storage. This would most certainly work very well, albeit requiring quite a lot more disk space.

BR Tobias
tsightler
VP, Product Management
Posts: 6009
Liked: 2843 times
Joined: Jun 05, 2009 12:57 pm
Full Name: Tom Sightler
Contact:

Re: Writing to CIFS share, how are the backup files written?

Post by tsightler »

This is what I was describing regarding flushing metadata, although perhaps I was not clear that the metadata writes are WIP. However, the metadata is a very small percentage of the data being written to disk. I work with customers all the time who write huge amounts of Veeam backups directly to dedupe appliances, and none have ever reported any behavior like this. You seem to have assumed that what HP is telling you is definitely 100% correct, and I understand they should know how their product works, but I can't for the life of me come up with anything that would explain why, even with WIP, this could possibly cause 5TB of filesystem usage instead of 1TB. Are they saying their chunk store is growing this large because of all the WIP data? Certainly they have some garbage collection routine that clears the old chunks, so this should be at most temporary.
Tobias_Elfstrom
Enthusiast
Posts: 84
Liked: 8 times
Joined: Jul 04, 2012 6:32 am
Full Name: Tobias Elfstrom
Contact:

Re: Writing to CIFS share, how are the backup files written?

Post by Tobias_Elfstrom »

tsightler wrote:I work with customers all the time who write huge amounts of Veeam backups directly to dedupe appliances, and none have ever reported any behavior like this.
This is why I am puzzled.

As I said:
TobiasE wrote:This would make almost any file system that does not handle WIP well a poor choice for a Veeam repository, due to the way a Veeam backup file is created.
What this suggests, if one did have a huge amount of WIP, isn't that all dedupe appliances would be bad, only the ones not doing a proper job of cleaning up WIP data. After all, we are only interested in keeping the latest version of the data written to a particular file. But then again, there is no 100% proof of anything yet.

tsightler wrote:You seem to have assumed that what HP is telling you is definitely 100% correct, and I understand they should know how their product works, but I can't for the life of me come up with anything that would explain why, even with WIP, this could possibly cause 5TB of filesystem usage instead of 1TB. Are they saying their chunk store is growing this large because of all the WIP data? Certainly they have some garbage collection routine that clears the old chunks, so this should be at most temporary.
Absolutely not. From the beginning HP claimed they had StoreOnce working very well with Veeam; however, it seems no tests with HP StoreOnce actually checked the file system. I have no proof of this in any way, but it is the picture I am seeing at the moment. At first I believed there was something wrong with the setup, or with this particular StoreOnce, but HP's latest claim is that the culprit in this case is Veeam writing too much WIP data, and that this is indeed to blame for the huge disk usage. I can't actually check whether this is the case myself (you need shell access to the StoreOnce for that), but I can see the raw hard drive usage.

In a nutshell: if this were 100% true, I would assume we would see the issue on a wide scale, but since that is not the case...
I am being told HP is working with Veeam on the issue; it will be interesting to see what comes out of it.
yizhar
Service Provider
Posts: 181
Liked: 48 times
Joined: Sep 03, 2012 5:28 am
Full Name: Yizhar Hurwitz
Contact:

Re: Writing to CIFS share, how are the backup files written?

Post by yizhar » 1 person likes this post

Hi.

You mentioned:
> Job setup: no inline dedupe, no compression, full once a week, inc in between

Have you considered enabling "inline dedup" in the Veeam jobs?
I think this can improve your performance and disk utilization, while the HP appliance provides second-level, global dedupe.

Yizhar
Gostev
Chief Product Officer
Posts: 31459
Liked: 6648 times
Joined: Jan 01, 2006 1:01 am
Location: Baar, Switzerland
Contact:

Re: Writing to CIFS share, how are the backup files written?

Post by Gostev »

Agree with Yizhar. There is no sense in disabling built-in dedupe in Veeam when backing up to deduplicating storage appliances.
Tobias_Elfstrom
Enthusiast
Posts: 84
Liked: 8 times
Joined: Jul 04, 2012 6:32 am
Full Name: Tobias Elfstrom
Contact:

Re: Writing to CIFS share, how are the backup files written?

Post by Tobias_Elfstrom »

yizhar wrote: Have you considered enabling "inline dedup" in Veeam jobs?
I have; that is the way we started out, since it is the recommended setup.
In our case, however, this setting caused more raw disk space to be consumed on the StoreOnce unit, so it was disabled. Having it on does generally improve backup performance.

I have not seen this type of behavior with anything but the HP StoreOnce B6200, so I would say there is something fishy here. HP have now sent us an HP MSA with a ProLiant server, so we will back up to that while they try to figure out what the problem is with our StoreOnce unit in combination with Veeam.

BR Tobias.
Gostev
Chief Product Officer
Posts: 31459
Liked: 6648 times
Joined: Jan 01, 2006 1:01 am
Location: Baar, Switzerland
Contact:

Re: Writing to CIFS share, how are the backup files written?

Post by Gostev »

Hi Tobias, I am hearing somewhat contradictory information about your case, so I thought I would just ask you directly.

What specifically is the problem you are facing:
1. StoreOnce disk space consumption issue due to excessive WIP writes
2. Poor performance backing up to StoreOnce
3. Both
Tobias_Elfstrom wrote: HP have now sent us an HP MSA with a Proliant Server so we will backup to that while they try to figure out whats the problem with our StoreOnce unit in combination with Veeam.
Very nice of them! Great target for Veeam!
Tobias_Elfstrom
Enthusiast
Posts: 84
Liked: 8 times
Joined: Jul 04, 2012 6:32 am
Full Name: Tobias Elfstrom
Contact:

Re: Writing to CIFS share, how are the backup files written?

Post by Tobias_Elfstrom »

Gostev wrote:Hi Tobias, I am hearing somewhat contradictory information about your case, so I thought I would just ask you directly.

What specifically is the problem you are facing:
1. StoreOnce disk space consumption issue due to excessive WIP writes
2. Poor performance backing up to StoreOnce
3. Both
Number 1 is at the moment the Big Issue.

When we first set up the B6200 unit we had some speed issues, but that came down to poor (active) cabling: having replaced the HP cabling with Cisco-branded Twinax, speed went up by a factor of 100 when connecting to our Cisco Nexus switches. I've been told the HP best-practice setup guide for StoreOnce has now been updated with this information.
However, one could argue that doing a lot of WIP most probably has an impact on performance as well.

BR Tobias
jwhite.vcf
Novice
Posts: 5
Liked: never
Joined: Apr 12, 2012 7:25 pm
Full Name: John White
Contact:

Re: Writing to CIFS share, how are the backup files written?

Post by jwhite.vcf »

Any updates/resolution to this issue in the intervening year? With StoreOnce being released as a VSA product in July 2013, I'm wondering about the appropriateness vs the native dedupe baked into Veeam (especially with v7).
chrisdearden
Veteran
Posts: 1531
Liked: 226 times
Joined: Jul 21, 2010 9:47 am
Full Name: Chris Dearden
Contact:

Re: Writing to CIFS share, how are the backup files written?

Post by chrisdearden »

I believe there has been a code fix sent out by HP which seems to have improved the situation considerably. Our native dedupe is within the backup jobs; there is still benefit from using volume-level deduplication, such as that provided by a dedicated dedupe device.
yizhar
Service Provider
Posts: 181
Liked: 48 times
Joined: Sep 03, 2012 5:28 am
Full Name: Yizhar Hurwitz
Contact:

Re: Writing to CIFS share, how are the backup files written?

Post by yizhar »

Hi.

Just to make sure:

Have you tried, in the job, enabling deduplication but disabling compression?

Have you checked the repository option titled
"decompress data blocks before storing"?

Yizhar
Tobias_Elfstrom
Enthusiast
Posts: 84
Liked: 8 times
Joined: Jul 04, 2012 6:32 am
Full Name: Tobias Elfstrom
Contact:

Re: Writing to CIFS share, how are the backup files written?

Post by Tobias_Elfstrom »

chrisdearden wrote:I believe there has been a code fix sent out by HP which seems to have improved the situation considerably. Our native dedupe is within the backup jobs; there is still benefit from using volume-level deduplication, such as that provided by a dedicated dedupe device.
That is correct; there are some fixes that I'm told should help with the problem. But as long as there is WIP at the file level, any dedupe engine that does inline dedupe and does not do proper cleaning afterwards will have problems with this. For us, HP have replaced the StoreOnce setup with another type of storage, so I can't verify the fixes. (Very good of them, I might add.)

However, in order to avoid the WIP problem when writing backup files, a post-process dedupe engine will probably gain you more. Such a system would be, for example, an ExaGrid system or a Microsoft Windows Server 2012 machine with dedupe-enabled disks.

BR Tobias
Tobias_Elfstrom
Enthusiast
Posts: 84
Liked: 8 times
Joined: Jul 04, 2012 6:32 am
Full Name: Tobias Elfstrom
Contact:

Re: Writing to CIFS share, how are the backup files written?

Post by Tobias_Elfstrom »

And just to be clear: since this thread started, it has been verified that the problem was not just with our setup; it was a problem for other customers as well. Customers running mixed loads on their StoreOnce units did not notice it as clearly as we did, however, especially since the StoreOnce GUI presented misleading information about disk usage to the user.

You would have the same issue with any inline dedupe system, I might add; it is all about how you handle the WIP that happens when saving backup files directly to the system.

BR Tobias
IanB
Technology Partner
Posts: 1
Liked: 2 times
Joined: Aug 15, 2013 1:18 pm
Full Name: Ian Blatchford
Contact:

Re: Writing to CIFS share, how are the backup files written?

Post by IanB » 2 people like this post

StoreOnce 3.6 Software improves space efficiency of Veeam backups

Re: this thread on writing Veeam backups to a CIFS share on a StoreOnce disk deduplicating backup appliance. HP is aware of the issues seen by some Veeam users when using StoreOnce CIFS shares as the backup target. The 3.6.2 StoreOnce software significantly improves performance by reducing the CIFS file overhead (a.k.a. WIP files, as referred to in earlier posts). As a result, the recommendation is to upgrade to StoreOnce software 3.6.2, which is available via the 'Support & Drivers' pages on http://www.hp.com. This software update reduces the CIFS overhead of future backups; however, the overhead created for existing backups will remain until those backups are expired from the StoreOnce CIFS share. The fastest way to benefit from the increased backup efficiency is to create a new share for future backups and delete the old share containing the existing backups. If your protection processes mean this is not possible, the existing share can be used, but the existing NAS overhead will only disappear as backups are expired.

On a general note: as backup appliances, the shares presented as backup targets are designed and optimized for sequential writes to new, large backup files. Overwrites to existing backup files create challenges for any in-line backup deduplication system. The StoreOnce method of handling overwrites within existing files is to store the overwrites in a separate, non-deduplicated file in a separate area of disk. Using knowledge of the Veeam data format, StoreOnce software 3.6.2 minimizes the number (overhead) of these non-deduplicated files. However, even with StoreOnce software 3.6.2, there will be some overhead as Veeam updates metadata within previously written backup files.

The performance of the StoreOnce system can be optimized by choosing the most 'friendly' Veeam backup mode. Reverse incremental backups and synthetic full backups cause more changes to existing backup files; a consequence of this is more overwrites and an increase in the overhead. Forward incremental backups are more 'friendly'. It is recommended to look at the StoreOnce best practices published on http://www.hp.com/go/StoreOnce.