Lewpy
Enthusiast
Posts: 66
Liked: 15 times
Joined: Nov 27, 2012 1:00 pm
Full Name: Lewis Berrie
Location: Southern England
Contact:

Backup Copy Job Issues/Limitations

Post by Lewpy »

I am currently working on a Veeam backup solution (v7 R2) for VMware, which uses a physical Veeam server (Direct SAN) connected to a pair of replicated EMC DataDomain units via a Linux NFS proxy (a CIFS proxy is also available). The Linux NFS proxy is virtual and connected to the DataDomain via 10Gbps networking, as is the CIFS proxy. All compression and deduplication is disabled for the DataDomain repositories, and block alignment is enabled.
I have a whole bunch of main backup jobs, and I then have about 10 backup copy jobs for GFS rotation/retention. These copy from the DataDomain back to the DataDomain, which is not an ideal way to manage the data, but it is the only built-in way in Veeam to achieve long-term data retention.

Everything is generally going well :D
Although there are a few limitations/issues I've hit :(

1. "Backup File Verification" within the Backup Copy Jobs does not appear to "obey" the repository limits for concurrent processing. I currently have my repository set to a limit of 4 concurrent jobs, yet I have seen 3 jobs running "backup file verification" activities while 4 other jobs were running copies and transformations at the same time. Surely the "backup file verification" activity should be taken into account, as it applies load to the repository (both the data storage and the repository proxy)?

2. "Backup File Verification" does not show any activity/progress in the GUI: the job just says "Idle", when in fact it is processing gigabytes of data. All other Backup Copy Job activities show some kind of progress, whether copying data or transforming restore points.

3. When dealing with just a single backup repository device (in my case the DataDomain, but it could be any other device), it doesn't make sense to have to copy the backup data back and forth to the same unit. It would be nice if Veeam offered the retention options of a Backup Copy Job within the normal Backup Job, so that periodic full backups were "left behind" in the normal backup area. The current philosophy appears to be built around different tiers of storage, with the Backup Copy Job moving backups between tiers, but this isn't always going to be the case, and having to copy terabytes of data back and forth seems unnecessary and slow. It would be even better if the Backup Retention could work with full backups (if made available to standard Backup Jobs), rather than only synthetic transformations: I am taking monthly full backups, and if these were retained (with just the increments deleted), then a GFS rotation would be possible with zero data transfer or transformation.

4. I have an issue with a few virtual machines where some of the virtual hard drives were moved to new virtual SCSI controllers for virtualisation optimisations (from 0:3 to 1:0, for example). The subsequent Veeam backup created a very large increment, which was expected; however, the backups are actually of no use and cannot be opened, as Veeam doesn't appear to be able to link the moved hard drives in the increment back to the original hard drives in the last full backup. This is fairly easy to work around, and I've taken a Full Backup of each affected VM and will just accept that a week's worth of backups are unusable. However, it would have been nice if an error had been generated, or a full backup cycle automatically triggered when required.

5. Following on from the issue above with moved virtual hard drives, I have one VM which has a Backup Copy Job. The Backup Copy Job had been working fine since the alterations to the virtual hard drives about 10 days ago, and it even ran a Backup File Verification two days ago with no errors, but as soon as it tried to make a transformed restore point dated after the alterations, it failed with the error "OverbuildIncrement failed". It appears that it too cannot match the increment file (VIB) made after the alterations with the full backup (VBK) made before them, so my Backup Copy "chain" is broken. I have tried to work around this by taking a Full Backup of the VM in the normal backup job (as I needed to anyway [see above]), but the Backup Copy Job converts the full backup (VBK) into another incremental backup (VIB), so the backup chain is still broken.

How can I fix the Backup Copy Job so that it resumes? Really, I want it to copy across a fresh full backup from the main backup repository, but I don't think I can, because it always creates synthetic backups and so works purely with incremental backups. Even if I could delete the current incremental files, I believe it would just create another incremental backup file, which again would not match the last full backup file.
The only "simple" solution I can see is to purge the current Backup Copy Job data and start again. However, that deletes my current retention points. This is not too much of an issue at the moment, as the job has only been running for a month and has about 4 retention points (1 monthly, 3 weekly), but if this were to occur in 12 months' time, I wouldn't want to have to delete all the previous year's retention points, as that defeats the whole purpose of the mechanism.
So I currently believe I will need to disable the Backup Copy Job, rename the Backup Copy Job folder to something else, and then re-enable the Backup Copy Job. Hopefully, it should then copy the latest full backup into a new Backup Copy Job folder, starting a new synthetic chain. I should then be able to re-import the old Backup Copy Job backups from the renamed folder, and manually delete the recovery points once they have "expired". This is a somewhat annoying option though :(
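Incidentally, the retention logic I'm wishing for in point 3 is simple to express. Here is a rough Python sketch of a GFS "keep set" over a list of full-backup dates (hypothetical logic and parameters, for illustration only, nothing like Veeam's actual implementation):

```python
from datetime import date, timedelta

def gfs_keep(fulls, weekly=4, monthly=12, yearly=1):
    """Return the full-backup dates a simple GFS scheme would retain.

    Keeps the newest full per ISO week / month / year, up to the given
    counts. Hypothetical logic for illustration only.
    """
    keep, weeks, months, years = set(), set(), set(), set()
    for d in sorted(fulls, reverse=True):                # newest first
        w, m, y = d.isocalendar()[:2], (d.year, d.month), d.year
        if w not in weeks and len(weeks) < weekly:       # weekly slot
            weeks.add(w); keep.add(d)
        if m not in months and len(months) < monthly:    # monthly slot
            months.add(m); keep.add(d)
        if y not in years and len(years) < yearly:       # yearly slot
            years.add(y); keep.add(d)
    return sorted(keep)
```

With something like this built into the normal Backup Job, monthly fulls could simply be retained in place and the increments between them deleted, with no copying or transformation at all.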

Any assistance/insights with this would be greatly appreciated :)

Lewis.
Gostev
Chief Product Officer
Posts: 31460
Liked: 6648 times
Joined: Jan 01, 2006 1:01 am
Location: Baar, Switzerland
Contact:

Re: Backup Copy Job Issues/Limitations

Post by Gostev » 1 person likes this post

1. I will check on that with the QC.
2. This is being addressed.
3. It is not so much about the storage tiers, but rather the requirement of having at least two independent copies of backups (copies within the same or replicated storage do not count as separate). And as long as you have two separate storage devices, the tiered approach makes sense (fast, small primary storage - and slower, big secondary storage).
4. I will check on that with the QC as well.
5. This needs to be investigated through support.

We are investigating the possibility of enabling seeding from backup chains that contain incremental backups. Right now it requires a full backup, you are right.
Lewpy
Enthusiast
Posts: 66
Liked: 15 times
Joined: Nov 27, 2012 1:00 pm
Full Name: Lewis Berrie
Location: Southern England
Contact:

Re: Backup Copy Job Issues/Limitations

Post by Lewpy »

Hi Anton,

Thanks for making it through my post and replying :)
I understand what you mean with multiple copies of backups (3-2-1 Backup Rule), and it would be ideal to be able to follow this in all situations, but most of the time there is a compromise somewhere, and it would be nice if Veeam B&R was flexible enough in its approach that it could fit more scenarios. My main point is that all of the engineering time to develop the GFS rotation process is done, it would just be nice to be able to use it in other ways.

I ended up stopping/disabling the Backup Copy Job, renaming the Backup Copy Job folder in the target repository, and then starting/enabling the Backup Copy Job again.
It then copied a complete new full backup across, and is currently doing the subsequent incremental as I type. Hopefully all transformations from now work okay.
I have yet to rescan the repository to try and "recover" the old restore points in the renamed folder, as I am waiting for all repository activity to stop: I have one final "Backup File Verification" task running, currently at 61 hours for a 2.5TB individual VM backup :? Although there were several other "Backup File Verifications" running for about 48 hours, so the process has no doubt been drawn out.

Another thing I have just noticed is that Backup Copy Job retention/transformation seems to leave behind the "empty" 16.5MB VIB files that are created when no actual restore point is copied within a copy interval. They get removed from the Backup Properties window in the GUI, however they are left on the disk in the repository. I assume this shouldn't be the case? Is it safe to manually delete them, or should I leave them alone?

Lewis.
Gostev
Chief Product Officer
Posts: 31460
Liked: 6648 times
Joined: Jan 01, 2006 1:01 am
Location: Baar, Switzerland
Contact:

Re: Backup Copy Job Issues/Limitations

Post by Gostev » 1 person likes this post

Hi Lewpy, Happy New Year!
Lewpy wrote:My main point is that all of the engineering time to develop the GFS rotation process is done, it would just be nice to be able to use it in other ways.
We certainly could, but this will enable users to deploy the product in the incorrect manner, with a single backup copy maintained. And our worst support cases come when users lose their only copy of backups for whatever reason. Because virtually no one reads documentation and recommendations, we have to "push" the users to the correct approach with the actual UI.
Lewpy wrote:I have one final "Backup File Verification" task running, currently at 61 hours for a 2.5TB individual VM backup :? Although there were several other "Backup File Verifications" running for about 48 hours, so the process has no doubt been drawn out.
All the Backup File Verification process does is read the file's content, so if it is this slow... that is perhaps because of the DataDomain.
Lewpy wrote:Another thing I have just noticed is that Backup Copy Job retention/transformation seems to leave behind the "empty" 16.5MB VIB files that are created when no actual restore point is copied within a copy interval. They get removed from the Backup Properties window in the GUI, however they are left on the disk in the repository. I assume this shouldn't be the case? Is it safe to manually delete them, or should I leave them alone?
Empty VIB is for the intervals where no data was copied. They will be removed by retention automatically, as the full backup transformation process gets to the corresponding restore points.
Lewpy
Enthusiast
Posts: 66
Liked: 15 times
Joined: Nov 27, 2012 1:00 pm
Full Name: Lewis Berrie
Location: Southern England
Contact:

Re: Backup Copy Job Issues/Limitations

Post by Lewpy »

Gostev wrote:Hi Lewpy, Happy New Year!
And to you too :) +1 Dedication Point for replying on New Year's Day!
Gostev wrote:We certainly could, but this will enable users to easily deploy the product in the incorrect manner, with a single backup copy maintained. And our worst support cases come when users lose their only copy of backups for whatever reason. Because virtually no one reads documentation and recommendations, we have to "push" the users to the correct approach with the actual UI.
While I can understand your desire to "push" everyone to a standard model of backups, I think you will frustrate a proportion of users with such a rigid stance. Not everyone will have the budget to implement the backup topology that is required, and they may be willing to trade cost for risk to achieve something they can afford: this is a common equation in IT. By all means "guide" people to the best solution, but enforcing it narrows the options and could be detrimental when deciding what system to implement.
This also extends to Tape Copy Jobs: enforcing that all backups must be copied to tape narrows the usability of the tape copy function. I want to be able to archive particular restore points to tape (not every single one), which I do not believe I will be able to do unless I "bend" the way you have structured things: doing a Backup Copy Job to disk of the restore point I want on tape, and then a tape copy from that job. Having to throw terabytes of data back and forth in a repository just to convince the software to copy what I want to tape is frustrating. But if I can't do this directly with the software, being "not allowed" wouldn't stop me wanting to do it; it would just make me use another piece of software to do what I wanted: "you can lead a horse to water, but you cannot make him drink" :)
Personally, I would love to do everything I want to do within Veeam B&R natively (and efficiently!) without using other software, as had to be the case prior to v7.
Gostev wrote:Empty VIB is for the intervals where no data was copied. They will be removed by retention automatically, as the full backup transformation process gets to the corresponding restore points.
The issue is that they are not being removed by retention automatically, whereas actual VIB files for intervals that did copy data are being deleted correctly. I currently have 4 or 5 of these "small" VIB files left across several Backup Copy Jobs, which should have been "cleaned up".
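In the meantime, here is a read-only Python sketch for spotting the leftovers across a repository (the 20MB threshold is my guess based on the ~16.5MB size mentioned above; it only reports, it never deletes anything):

```python
import glob, os

def find_small_vibs(repo_dir, max_mb=20):
    """Report VIB files under a size threshold -- likely the "empty
    interval" leftovers. The 20MB cutoff is a guess based on the
    ~16.5MB files observed. Reports only; never deletes anything."""
    hits = []
    pattern = os.path.join(repo_dir, "**", "*.vib")
    for f in glob.glob(pattern, recursive=True):
        size_mb = os.path.getsize(f) / (1024 * 1024)
        if size_mb <= max_mb:
            hits.append((f, round(size_mb, 1)))
    return hits
```

Useful for gathering a file list before raising a support case, rather than deleting anything by hand.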
Gostev
Chief Product Officer
Posts: 31460
Liked: 6648 times
Joined: Jan 01, 2006 1:01 am
Location: Baar, Switzerland
Contact:

Re: Backup Copy Job Issues/Limitations

Post by Gostev » 1 person likes this post

OK, please open a support case for the VIB issue then, as this is not expected and needs further troubleshooting.

Tape support functionality will certainly be enhanced in future versions based on the feedback; right now we have the "v1" of tape support anyway.
Gostev
Chief Product Officer
Posts: 31460
Liked: 6648 times
Joined: Jan 01, 2006 1:01 am
Location: Baar, Switzerland
Contact:

Re: Backup Copy Job Issues/Limitations

Post by Gostev »

Lewpy wrote:While I can understand your desire to "push" everyone to a standard model of backups, I think you will frustrate a proportion of users with such a rigid stance. Not everyone will have the budget to implement the backup topology that is required, and they may be willing to trade cost for risk to achieve something they can afford: this is a common equation in IT. By all means "guide" people to the best solution, but enforcing it narrows the options and could be detrimental when deciding what system to implement.
But here is the thing: it costs almost nothing to do it the right way... at least with Veeam. Just take any unused or under-used server, put a few hard drives in it, and you will get an excellent primary backup repository for holding a handful of recent restore points. This will both reduce your backup window dramatically, as you are no longer writing directly to the slower DataDomain box but rather to fast raw storage, and enable you to meet the 2 backup copies requirement!

To me, there is just no excuse not to do it right... even in the case of new infrastructure, I just cannot believe anyone buying Veeam licenses cannot afford the most basic Windows or Linux server with a few hard drives in it? Especially the proud DataDomain owners ;)
Lewpy
Enthusiast
Posts: 66
Liked: 15 times
Joined: Nov 27, 2012 1:00 pm
Full Name: Lewis Berrie
Location: Southern England
Contact:

Re: Backup Copy Job Issues/Limitations

Post by Lewpy »

Gostev wrote:But here is the thing: it costs almost nothing to do it the right way... at least with Veeam. Just take any unused or under-used server, put a few hard drives in it, and you will get an excellent primary backup repository for holding a handful of recent restore points. This will both reduce your backup window dramatically, as you are no longer writing directly to the slower DataDomain box but rather to fast raw storage, and enable you to meet the 2 backup copies requirement!

To me, there is just no excuse not to do it right... even in the case of new infrastructure, I just cannot believe anyone buying Veeam licenses cannot afford the most basic Windows or Linux server with a few hard drives in it? Especially the proud DataDomain owners ;)
Let me use the example I am currently working on.
The project was already set before Veeam v7 was released, so costs/budgets/finance were already in place.
An old server was reused as the physical Veeam backup server, but it is a 1U server (actually an HP DL360 G7) with limited options for internal hard drives, and it only had a pair of 300GB SAS drives (it was a reused ESX server). Older servers were out of warranty (and extended warranty, in some cases), so were not an option.
Given we are backing up nearly 20TB of VMs, we would need a fairly substantial amount of local storage. I suspect 12TB+, to be able to hold 2 full dedup/compressed VBK files (as you need to write a new one before deleting the old one) plus incremental files.
That means we would need to [basically] buy an additional external disk array and NL-SAS drives. Maybe 11 x 2TB for 8+2 RAID6 (plus H/S, optional). And buy the RAID6 advanced licence for the RAID controller.
So it is not "almost nothing", it is most definitely "some" cost.
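The back-of-envelope maths looks something like this (the reduction ratio and daily change rate below are assumptions I've plugged in for illustration, not measured figures):

```python
# Back-of-envelope local repository sizing for ~20TB of source VMs.
# The reduction ratio and change rate are assumed, not measured.
source_tb = 20.0      # total size of VMs being backed up
reduction = 3.5       # assumed job-level dedupe/compression ratio
daily_change = 0.05   # assumed 5% daily change rate
incr_days = 6         # incrementals kept between fulls

full_tb = source_tb / reduction
incr_tb = source_tb * daily_change / reduction * incr_days
needed_tb = 2 * full_tb + incr_tb   # new full written before old one deleted
print(round(full_tb, 1), round(needed_tb, 1))  # prints: 5.7 13.1
```

So even with optimistic reduction, the "hold two fulls plus increments" requirement lands north of 12TB of raw-equivalent capacity.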

In the future? Yes, I would be recommending this. But that doesn't mean that some customer won't say "thanks, but I'll save the £10k, not buy the local storage, and just store it all in one repository". I can advise them against it, but I can't force them to do it.

Actually, I get good [read: adequate for purpose] write speed into the DataDomain, somewhere between 200-300MB/s via a 10Gb NFS proxy server, as most data is inline-deduplicated to nothing, so only a small amount actually gets written to disk.
The bottleneck is actually the network from physical Veeam server (not 10Gb, and no IO slot free for 10Gb) to NFS Proxy, although I am enabling dedup/compression on the job to maximise inter-proxy bandwidth.
I experimented with hotadd vs. Direct SAN, and for incrementals Direct SAN was faster due to the lack of the hotadd method's overhead (something like 1 minute to mount and another minute to dismount each virtual hard drive). For full backups it was marginal, and as this was happening over a weekend, timing wasn't so critical.
The main issue is the thrashing of data back and forth to the DataDomain while doing the Backup Copy Jobs.
yizhar
Service Provider
Posts: 181
Liked: 48 times
Joined: Sep 03, 2012 5:28 am
Full Name: Yizhar Hurwitz
Contact:

Re: Backup Copy Job Issues/Limitations

Post by yizhar »

Hi.

If the backup copy job GFS doesn't fit well in your environment for whatever reason, you can consider keeping backups the old way, for example:

Daily backup jobs with forward incremental + weekly active full + keeping the 30 (or more?) latest restore points.
This seems to me like a good way to use the DataDomain repository.
No synthetic fulls - no load on the system.

Additional (and optional) weekly job with reverse incremental keeping latest 12 weeks.

And for archiving monthly backups - a robocopy or other file copy process that will copy the latest VBK from daily backups to a different location.
This can also be a manual process that you do yourself every first Monday of the month - use a search utility to locate VBK files from last week, and copy them elsewhere for archiving.
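That manual step can be scripted too. A minimal Python sketch along the same lines as the robocopy approach (the paths and the one-week window are placeholders, not fixed values):

```python
import glob, os, shutil, time

def archive_latest_vbk(repo_dir, archive_dir, max_age_days=7):
    """Copy the newest recent VBK from a repository to an archive folder.

    A sketch of the robocopy idea: find VBK files modified within the
    window, copy the newest one. Returns the copied path, or None if
    nothing recent enough was found.
    """
    cutoff = time.time() - max_age_days * 86400
    vbks = [f for f in glob.glob(os.path.join(repo_dir, "*.vbk"))
            if os.path.getmtime(f) >= cutoff]
    if not vbks:
        return None
    latest = max(vbks, key=os.path.getmtime)
    os.makedirs(archive_dir, exist_ok=True)
    return shutil.copy2(latest, archive_dir)
```

Scheduled for the first Monday of the month, this does the "copy last week's VBK elsewhere" job without any searching by hand. As with any out-of-band copy, Veeam won't be aware of the archived file.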

Yizhar
Lewpy
Enthusiast
Posts: 66
Liked: 15 times
Joined: Nov 27, 2012 1:00 pm
Full Name: Lewis Berrie
Location: Southern England
Contact:

Re: Backup Copy Job Issues/Limitations

Post by Lewpy »

Hi Yizhar,

Yeah, I have the idea of using two separate backup schedules per VM as a fall-back plan.
However, the Backup Copy Jobs are working; it's just a big use of resources. Also, having the Veeam server running such long jobs (3 days, in some cases) does limit the options for maintenance windows, although I am sure it won't end up working 24/7 and will just be busy at certain times of the week/month.
Reverse incrementals are not ideal either, due to the read/write workload. I would probably schedule the monthly backups in such a way that only full backups are ever taken, and then retained for a year, so there would be no backup chains as such.

Lewis.
KevinK
Enthusiast
Posts: 28
Liked: 10 times
Joined: Apr 24, 2013 9:18 am
Full Name: Kevin Kissack
Contact:

Re: Backup Copy Job Issues/Limitations

Post by KevinK » 2 people like this post

Hi Lewpy,

I believe I can help with your Data Domain query.

We have the exact same problem here - no GFS retention system built into Veeam, and read performance from the Data Domains is poor.

The setup:

Each of the regional offices has its own Veeam backup installation and Data Domain - depending on the size of the infrastructure/office we have a mix of physical and virtual backup servers
There are four separate jobs for each datastore (datastorename_daily, datastorename_weekly, datastorename_monthly, datastorename_yearly)
Daily full runs Monday, with incrementals Tuesday to Friday - 10 restore points
Weekly full run on Saturday - 8 restore points
Monthly full run on last Sunday of the month - 24 restore points
Yearly full run on last Sunday of the year - 2 restore points

I have set up 3 MTrees on each regional Data Domain - one for each job type that needs replicating: weekly, monthly, yearly
Each MTree has a CIFS share and replication context configured to replicate back to a central Data Domain (for archive/DR)

The end result is backup data in the regional office and a copy located in another location should the office/infrastructure go pop.

Ok, this doesn't protect against damage to the source data, which would be instantly replicated to the archive.

Below is a command I've been using on the destination DD to move data out of a replication MTree and into another directory for safekeeping:

filesys fastcopy source /data/col1/xxxdd0x_yearly destination /data/col1/yearly_archive/2013/countrycode

The only drawbacks to working in this manner are that Veeam is no longer aware of the data, and that the FastCopy command overwrites the target (there is no append - confirmed with EMC).

I hope this comes in useful.

Cheers

Kevin
Lewpy
Enthusiast
Posts: 66
Liked: 15 times
Joined: Nov 27, 2012 1:00 pm
Full Name: Lewis Berrie
Location: Southern England
Contact:

Re: Backup Copy Job Issues/Limitations

Post by Lewpy »

Hi Kevin,

Thank you for your detailed reply :)
I've been monitoring my Backup Copy Jobs for the past week, and I'm now at the point of abandoning them, as they are just impractical for large backup jobs when used with a Data Domain :(
The biggest issue is the time it takes to create full synthetic restore points (where a new, complete image of the backup job is built from an existing complete backup plus increments), as this can take days on some of the larger backups (1.8-2.5TB in size). It works quite smoothly for smaller backup jobs (up to 200GB). But for the Veeam server to be busy practically 24/7 processing Backup Jobs and Backup Copy Jobs makes it difficult to perform even simple maintenance tasks: I waited over a week to find a time when all backups were stopped so I could patch Veeam B&R R2 to R2a :(
This would still be an issue if the initial backups went to fast local storage and the Data Domain was used only as the Backup Copy Job destination: the synthetic operations are performed solely against the archive repository.
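To put rough numbers on it: a synthetic full has to read back a whole rehydrated image and write a new one, and on a dedupe appliance the read side dominates. Both throughput figures below are assumptions chosen to show the shape of the problem, not measurements from my DD2500:

```python
# Rough time estimate for one synthetic full transformation.
# Assumed figures, for illustration only:
full_tb = 2.5        # size of one full backup image
read_mb_s = 30.0     # assumed rehydrated (random-read) throughput
write_mb_s = 250.0   # assumed inline-dedupe write throughput

full_mb = full_tb * 1024 * 1024
io_hours = full_mb * (1.0 / read_mb_s + 1.0 / write_mb_s) / 3600
print(round(io_hours, 1))  # prints: 27.2  (the read side alone is ~24h)
```

With slow rehydrated reads, a single transformation on a 2.5TB image plausibly runs into days, which matches what I'm seeing.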
I believe that if Veeam supported "DD Boost" then [theoretically] the synthetic operations would be massively faster, as the Repository Proxy would not have to thrash the Data Domain box: it would know that the data already exists as "segments" within the Data Domain. I have also read Veeam's comments about implementing vendor technologies, so I know that it needs "co-operation" both ways :wink:

I am going to look at a similar setup to yours, I believe, although I may investigate scripting to "clone" a backup job and create the weekly/monthly/yearly backups from a given daily backup job.

Lewis.
KevinK
Enthusiast
Posts: 28
Liked: 10 times
Joined: Apr 24, 2013 9:18 am
Full Name: Kevin Kissack
Contact:

Re: Backup Copy Job Issues/Limitations

Post by KevinK »

Let me know how you get on.

From the latest Digest
The first release of this year is beta code for enhanced support of rotated hard drives: something I promised in the previous digest a few weeks ago. One of our QC guys actually worked over the holidays testing this package, so kudos for that. This new code adds support for rotated media with all job types, and removes the need for ANY scripting on your side completely (or so we hope). We are looking to include this in the next product patch, but please try it out right now and let us know if we missed anything, so that the final implementation is solid. Here are the complete installation and usage instructions, and here is the direct download link. Please test to make sure this covers all your use cases around removable media.
veremin
Product Manager
Posts: 20270
Liked: 2252 times
Joined: Oct 26, 2012 3:28 pm
Full Name: Vladimir Eremin
Contact:

Re: Backup Copy Job Issues/Limitations

Post by veremin »

Hi Kevin, for more information regarding this functionality, kindly see the adjacent thread. Thanks.
NightBird
Expert
Posts: 242
Liked: 57 times
Joined: Apr 28, 2009 8:33 am
Location: Strasbourg, FRANCE
Contact:

Re: Backup Copy Job Issues/Limitations

Post by NightBird »

Hi Lewis, why did you enable "block alignment" on your DataDomain repo? I'm testing a DD620 and I don't enable block alignment (the DD is a variable-length dedup appliance, no?)

What kind of DD do you have ?

Thx for your answer
Boris
Lewpy
Enthusiast
Posts: 66
Liked: 15 times
Joined: Nov 27, 2012 1:00 pm
Full Name: Lewis Berrie
Location: Southern England
Contact:

Re: Backup Copy Job Issues/Limitations

Post by Lewpy »

Hi Boris,
NightBird wrote: Hi Lewis, why did you enable "block alignment" on your DataDomain repo? I'm testing a DD620 and I don't enable block alignment (the DD is a variable-length dedup appliance, no?)
What kind of DD do you have?
EMC/Data Domain have an integration guide for Veeam (v6) on the Data Domain website, and it states on page 10:
EMC/Data Domain wrote:For both CIFS and NFS, it is important to properly configure the backup repository within Veeam Backup & Replication so that it is optimized for a deduplication target such as the Data Domain controller. Refer to the appropriate Veeam User Guide for specific steps. In all cases, the two advanced options for a repository must be selected as shown in Figure 6:
Specifically, the Align backup file data blocks and Decompress backup data blocks before storing options should be selected.
It's all about helping the DD optimise performance, I assume. Having a consistent start to each block aids the appliance, even if the dedup segment length is variable.
I was working on a DD2500, but I believe it is relevant to all models.

Lewis.
Lewpy
Enthusiast
Posts: 66
Liked: 15 times
Joined: Nov 27, 2012 1:00 pm
Full Name: Lewis Berrie
Location: Southern England
Contact:

Re: Backup Copy Job Issues/Limitations

Post by Lewpy »

Hi Kevin,
KevinK wrote:Let me know how you get on.
Immediately after I wrote my previous post, I took the decision to stop using Backup Copy Jobs and do the fall-back plan of secondary monthly full backups.
The two backup jobs are scheduled to not overlap/compete, so hopefully this should cause no issues.

Lewis.
Gostev
Chief Product Officer
Posts: 31460
Liked: 6648 times
Joined: Jan 01, 2006 1:01 am
Location: Baar, Switzerland
Contact:

Re: Backup Copy Job Issues/Limitations

Post by Gostev »

Lewpy wrote:EMC/Data Domain have an integration guide for Veeam (v6) on the Data Domain website, and it states on page 10
Lewpy wrote:All compression and deduplication is disabled for the DataDomain repositories, and block alignment enabled.
With compression disabled, the block alignment setting has no impact anyway. It only makes a difference when compression is enabled. I believe EMC also recommends disabling compression in the same guide, so their suggestion of enabling block alignment does not make technical sense.
NightBird
Expert
Posts: 242
Liked: 57 times
Joined: Apr 28, 2009 8:33 am
Location: Strasbourg, FRANCE
Contact:

Re: Backup Copy Job Issues/Limitations

Post by NightBird »

If we use Dedup-friendly compression, should we enable block alignment?
Lewpy
Enthusiast
Posts: 66
Liked: 15 times
Joined: Nov 27, 2012 1:00 pm
Full Name: Lewis Berrie
Location: Southern England
Contact:

Re: Backup Copy Job Issues/Limitations

Post by Lewpy »

Gostev wrote:With compression disabled, block alignment setting has no impact anyway. It only makes difference when compression is enabled. I believe EMC also recommends disabling compression in the same guide, so their suggestion of enabling block alignment does not make technical sense.
But to be clear, it doesn't do any harm either: it just makes no difference (from what you are saying).
If disabling compression (which is surely what the "Decompress backup data blocks before storing" option does in the Advanced Repository Settings) automatically aligns the blocks (I guess because they are no longer variable length from compression?), then ticking that option should grey out "Align backup file data blocks" if it is no longer relevant. To me, as the end user, two independently selectable options implies they are just that: independent of each other. GUI "etiquette" (as I've known it) is to mask an option that is made redundant by selecting another, as this helps indicate the dependencies between options to the user.
And don't forget, the document was written for v6 of Veeam, which did not have "Dedupe-friendly" compression as an option :wink:
Or does "Dedupe-friendly" compression perform block alignment by default, as well as some level of compression?
It would be interesting to see if the Veeam "Dedupe-friendly" compression performs better than the inbuilt [default] lz compression used by the DD. I didn't try the higher levels of compression on the DD (gzfast or gz), although I had a fair amount of CPU left untouched (it only peaked in the 10-20% range, with the average below about 5%).
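As a toy illustration of the fixed-vs-variable point (byte strings standing in for backup data): with fixed-size chunking, a few bytes of misalignment changes every chunk, which is why alignment can matter for constant-block dedupe but not for variable-length segmenting, which resynchronises on content:

```python
# Toy fixed-block dedupe: the same payload shifted by a few bytes
# produces completely different fixed-size chunks, so constant-block
# dedupe finds no matches without alignment.
def chunks(data, size=8):
    """Split data into fixed-size chunks and return the unique set."""
    return {data[i:i + size] for i in range(0, len(data), size)}

payload = b"ABCDEFGH" * 4
aligned = chunks(payload)            # one unique chunk
shifted = chunks(b"xxx" + payload)   # 3-byte misalignment
print(len(aligned & shifted))        # prints: 0  (no shared chunks)
```

A variable-length segmenter (as the DD uses) would find the repeated content again after the first boundary, which fits the observation that alignment helps constant-block storage but can hurt variable-block storage.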
Gostev
Chief Product Officer
Posts: 31460
Liked: 6648 times
Joined: Jan 01, 2006 1:01 am
Location: Baar, Switzerland
Contact:

Re: Backup Copy Job Issues/Limitations

Post by Gostev »

NightBird wrote:If we use Dedup Friendly compression ? should we enable block alignment ?
In our testing, enabling block alignment reduced dedupe ratio with variable block size dedupe storage, and increased dedupe ratio with constant block size dedupe storage. The type of compression does not matter.
Lewpy wrote:But to be clear, it doesn't do any harm either: it just makes no difference (from what you are saying).
Correct.
Lewpy wrote:And don't forget, the document was written for v6 of Veeam, which did not have "Dedupe-friendly" compression as an option :wink:
v6 did have this compression option, but it was named differently (Low) :D