-
- Enthusiast
- Posts: 91
- Liked: 10 times
- Joined: Aug 30, 2013 8:25 pm
- Contact:
Is Backup Copy terribly designed or am I using it wrong.
Having used the new backup copy jobs for more than half a year now, it seems to me that either it's terribly designed or I'm using it wrong.
Here is what I am expecting backup copy jobs to do:
1) Long term retention
2) copied to targets that are able to deduplicate
Now, reading through the Veeam documentation, it seems like this is exactly what is recommended as well. The problem is that the design doesn't align with the vision. Deduplication targets generally have:
1) slow disks
2) large capacity
3) low I/O
With this in mind, why does Veeam store the "daily retention" backup copy points as a reverse incremental chain? A reverse incremental chain is incredibly I/O-intensive, which goes against the entire design vision. Furthermore, you are forced to keep "daily retention" points in your backup copy jobs even if you only want monthlies, yearlies, etc. Is it not possible to just "robocopy" the restore points from the original backup job to the dedupe appliance instead of trying to build a reverse incremental daily chain?
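To put rough numbers on the I/O argument, here is a back-of-the-envelope sketch of the per-changed-block I/O each chain type generates on the backup target. This is my own simplification of how the two methods are commonly described, not Veeam's actual implementation, and the function names are made up:

```python
# Back-of-the-envelope model (my own simplification, not Veeam internals)
# of per-changed-block I/O on the backup target for each chain type.

def forward_incremental_io(changed_blocks):
    # Forward incremental: changed blocks are appended to a new
    # increment file -- one sequential write each, no reads of prior data.
    return {"reads": 0, "writes": changed_blocks}

def reverse_incremental_io(changed_blocks):
    # Reverse incremental: for every changed block the target must
    #   1) read the current block out of the full backup file,
    #   2) write it into the rollback file,
    #   3) overwrite the block in the full backup in place.
    return {"reads": changed_blocks, "writes": 2 * changed_blocks}

print(forward_incremental_io(1_000_000))   # {'reads': 0, 'writes': 1000000}
print(reverse_incremental_io(1_000_000))   # {'reads': 1000000, 'writes': 2000000}
```

Even in this crude model, the reverse incremental path turns every changed block into random read-modify-write traffic against the full backup file, which is exactly the pattern slow, low-I/O dedupe disks handle worst.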
I'm sure it may work for smaller backups, but for huge companies like us, with terabytes of incremental data being backed up each day, this makes it impossible to use backup copy jobs. Perhaps I'm doing it wrong, but after going through the documentation, that seems very unlikely.
This is what we're trying to do:
7 days backup job stored on local disk (non-deduped, backup job)
52 weeklies stored on dedupe appliance (HP StoreOnce 4500, backup copy job)
7 yearlies stored on dedupe appliance (HP StoreOnce 4500, backup copy job)
I actually don't want any dailies stored by a backup copy job, but I understand that other people may have use cases for them. Why is it not possible to have the backup job as a reverse incremental chain, and then a full backup copied onto the dedupe appliance daily, instead of a highly I/O-intensive chain being built on the backup copy job target?
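The retention scheme above boils down to a simple selection rule. Here is a hypothetical sketch of it (my own illustration, not a Veeam feature; `select_gfs` and its defaults are made up): keep only the newest point per week and per year, up to the configured counts.

```python
from datetime import date, timedelta

# Hypothetical sketch (my own, not a Veeam feature) of the retention
# selection described above: weekly and yearly points only, no dailies.

def select_gfs(points, weeklies=52, yearlies=7):
    """From a list of daily restore-point dates, pick the ones to keep."""
    keep = set()
    by_week, by_year = {}, {}
    for p in sorted(points):
        by_week[p.isocalendar()[:2]] = p   # newest point per ISO (year, week)
        by_year[p.year] = p                # newest point per calendar year
    keep.update(sorted(by_week.values())[-weeklies:])   # last 52 weeklies
    keep.update(sorted(by_year.values())[-yearlies:])   # last 7 yearlies
    return keep

# two years of daily restore points -> at most 52 + 7 dates survive
daily = [date(2023, 1, 1) + timedelta(days=i) for i in range(730)]
kept = select_gfs(daily)
print(date(2023, 12, 31) in kept)   # True: retained as the 2023 yearly
```

The point of the sketch: nothing in this rule requires a daily chain on the target at all; each kept point could just be a self-contained full.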
-
- Chief Product Officer
- Posts: 31804
- Liked: 7298 times
- Joined: Jan 01, 2006 1:01 am
- Location: Baar, Switzerland
- Contact:
Re: Is Backup Copy terribly designed or am I using it wrong.
luckyinfil wrote: why does Veeam store the "daily retention" backup copy points as a reverse incremental chain. Reverse incremental chain is incredibly IO intensive which goes against the entire design vision.
It does not. We store "daily retention" as a forward incremental chain, specifically for the reason you mention. You are right - reverse incremental has 50% I/O overhead compared to forward incremental. That would indeed be a terrible choice for our cause.
luckyinfil wrote: Why is it not possible to have... a full backup copied onto the dedupe appliance as a daily backup instead of a high IO intensive chain being made on the backup copy job target?
We are working on adding this option in the next release, specifically to better support deduplicating storage appliances we don't have integration with. We do realize that this option is essential - even though we already integrate with Data Domain and ExaGrid (and are working on StoreOnce integration), we will never be able to integrate with every single dedupe appliance out there.
It is important to realize the drawback of the full backup copy approach, though: it is suitable for local copies only (you cannot do daily fulls over a WAN). This should in turn answer your main question of why Backup Copy jobs are designed the way they are: they were designed with off-site copies in mind first and foremost.
-
- Enthusiast
- Posts: 91
- Liked: 10 times
- Joined: Aug 30, 2013 8:25 pm
- Contact:
Re: Is Backup Copy terribly designed or am I using it wrong.
Gostev wrote: It does not. We store "daily retention" as a forward incremental chain specifically for the reason you mention. You are right - reverse incremental has 50% I/O overhead compared to forward incremental. That would indeed be a terrible choice for our cause.
I got the type of chain mixed up, but regardless, it's still a chain that requires daily merging, which requires a large amount of I/O. This makes it impossible for any dedupe appliance without some sort of direct integration with Veeam (e.g. EMC DD Boost) to function properly.
Gostev wrote: We are working on adding this option into the next release specifically to better support deduplicating storage appliances we don't have integration with. We do realize that this option is essential - even though we already integrate with Data Domain and ExaGrid (and working on StoreOnce integration), we will never be able to integrate with every single dedupe appliance out there.
Can you give more info about the new option in the next release? Would it be possible to eliminate the creation of ANY chain on a dedupe appliance for backup copy jobs?
Gostev wrote: It is important to realize the drawback of the full backup copy approach though: it is suitable for local copies only (you cannot do daily fulls over WAN). Which should in turn answer your main question of why Backup Copy jobs are designed the way they are. They were designed with off-site copies in mind first and foremost.
Regarding BCJs being designed with off-site in mind, that makes no sense to me. Good practice dictates that you need a copy of the backups both onsite and offsite. As BCJs are designed for long-term retention, you will still need an on-site copy either way. Most backup deduplication appliances have a replication feature that efficiently replicates backups from the main site to the offsite one, so why is Veeam trying to reinvent the wheel?
-
- VeeaMVP
- Posts: 6165
- Liked: 1971 times
- Joined: Jul 26, 2009 3:39 pm
- Full Name: Luca Dell'Oca
- Location: Varese, Italy
- Contact:
Re: Is Backup Copy terribly designed or am I using it wrong.
Why would you like to have two copies of the same backup in the same place? If the location has issues, you lose both copies.
The 3-2-1 rule we often refer to talks about 3 copies, where the first one is the production data; you then have 2 additional copies in the backups: one onsite and the other offsite.
Luca Dell'Oca
Principal EMEA Cloud Architect @ Veeam Software
@dellock6
https://www.virtualtothecore.com/
vExpert 2011 -> 2022
Veeam VMCE #1
-
- Chief Product Officer
- Posts: 31804
- Liked: 7298 times
- Joined: Jan 01, 2006 1:01 am
- Location: Baar, Switzerland
- Contact:
Re: Is Backup Copy terribly designed or am I using it wrong.
luckyinfil wrote: This makes it impossible for any dedupe appliance without some sort of direct integration (ie: EMC DD Boost) with Veeam to function properly.
That is the correct statement. Although, as I noted, HP Catalyst integration is also in the works.
luckyinfil wrote: Can you give more info regarding the new option regarding in the next release? Would it be possible to eliminate the creation of ANY chain on a dedupe appliance for backup copy jobs?
That is correct. Enabling this option completely eliminates ANY chain transformations leveraging local data on the appliance, and Backup Copy jobs become a 100% sequential write workload.
luckyinfil wrote: Regarding the way the BCJ are designed with off-site in mind, that makes no sense to me. Good practices dictate that you need both a copy of the backups onsite and offsite.
3-2-1 requires 2 copies of backups, with 1 copy being offsite. Since the vast majority of customers go with the minimum possible number of copies due to storage costs, the second copy will almost always be in the offsite repository. Accordingly, BCJ was designed with this primary use case in mind.
That is not to say that you are doing it wrong or have wasted money on an additional on-site copy. There is nothing wrong with increasing your recovery chances; it is just that few companies can afford it.
-
- Enthusiast
- Posts: 91
- Liked: 10 times
- Joined: Aug 30, 2013 8:25 pm
- Contact:
Re: Is Backup Copy terribly designed or am I using it wrong.
Thanks for expanding. Any ETA on the "no backup chain" feature or HP Catalyst integration?
-
- Chief Product Officer
- Posts: 31804
- Liked: 7298 times
- Joined: Jan 01, 2006 1:01 am
- Location: Baar, Switzerland
- Contact:
Re: Is Backup Copy terribly designed or am I using it wrong.
The new Backup Copy option is already implemented in the v9 code branch. HP Catalyst is in the works, so it is harder to predict at the moment (especially since this integration will require updated firmware from HP, which is on its own timeline).
-
- Enthusiast
- Posts: 91
- Liked: 10 times
- Joined: Aug 30, 2013 8:25 pm
- Contact:
Re: Is Backup Copy terribly designed or am I using it wrong.
Gostev wrote: New Backup Copy option is already implemented in the v9 code branch. HP Catalyst is in the works, so harder to predict at the moment (especially since this integration will require an updated firmware from HP, which is something on its own timeline).
Any ETA on v9?
-
- Veeam Software
- Posts: 21138
- Liked: 2141 times
- Joined: Jul 11, 2011 10:22 am
- Full Name: Alexander Fogelson
- Contact:
Re: Is Backup Copy terribly designed or am I using it wrong.
No ETA currently.
-
- Chief Product Officer
- Posts: 31804
- Liked: 7298 times
- Joined: Jan 01, 2006 1:01 am
- Location: Baar, Switzerland
- Contact:
Re: Is Backup Copy terribly designed or am I using it wrong.
This year though.
-
- Veteran
- Posts: 465
- Liked: 136 times
- Joined: Jul 16, 2015 1:31 pm
- Full Name: Marc K
- Contact:
Re: Is Backup Copy terribly designed or am I using it wrong.
dellock6 wrote: Why would you like to have two copies of the same backup in the same place?
I know this is an old thread and that an improvement may soon be available in version 9, but I wanted to address this question as I have run into this issue too.
I currently back up to a deduplicating appliance that automatically replicates to an off-site location. So, one Veeam backup job is all that is needed to satisfy the 3-2-1 rule. The reason I turned to backup copy jobs is that regular backup jobs do not allow for GFS retention.
It's not that I want or need to have two copies of the same backup in the same place. It's that I have to do that in order to gain access to the retention options I'm interested in.
-
- Product Manager
- Posts: 20400
- Liked: 2298 times
- Joined: Oct 26, 2012 3:28 pm
- Full Name: Vladimir Eremin
- Contact:
Re: Is Backup Copy terribly designed or am I using it wrong.
I believe there might be some flaws in the described scenario.
First, a dedupe device that is suited mostly for long-term archival, and that has some performance penalties, is being used as the primary backup target. Second, storage replication is not a real replacement for a backup copy job: if a corrupted block appears on the primary node, it will be automatically replicated to the secondary node.
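The second point can be shown with a toy sketch (my own illustration, not Veeam code): storage replication copies whatever bytes are on the primary, corruption included, whereas an independent copy job reads the source data again and can verify it against known checksums before trusting it.

```python
# Toy illustration: storage-level replication propagates silent
# corruption, while checksum verification against the source catches it.
import hashlib

source_backup = [b"block-0", b"block-1", b"block-2"]
checksums = [hashlib.sha256(b).hexdigest() for b in source_backup]

primary = list(source_backup)
primary[1] = b"bit-rot"          # silent corruption on the primary node

replica = list(primary)          # storage replication: byte-for-byte copy
assert replica[1] == b"bit-rot"  # corruption now exists on BOTH nodes

# an independent copy job can detect the mismatch before writing it out
bad = [i for i, b in enumerate(primary)
       if hashlib.sha256(b).hexdigest() != checksums[i]]
print(bad)  # [1]
```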
So, in my opinion, it might be better to
- point a backup job to some locally attached disks (short retention)
- create surebackup job to validate the resulting data
- create a backup copy job pointed to dedupe device (long retention)
- storage replication between dedupe devices
Thanks.
-
- Veteran
- Posts: 465
- Liked: 136 times
- Joined: Jul 16, 2015 1:31 pm
- Full Name: Marc K
- Contact:
Re: Is Backup Copy terribly designed or am I using it wrong.
We are actually using ExaGrid appliances. With ExaGrid, the data in the landing zone is left in non-deduplicated form, so the performance problem for short-term restores isn't really there. In this configuration, it's actually similar to what you describe: separate storage for short- and long-term retention.
I generally agree with the statement that storage replication is not backup, but it's not really an apples-to-apples comparison in this case. The typical misuse of storage replication is to replicate the primary copy of data to a secondary site and consider that a backup. In that case, corruption can cause substantial data loss.
With backup it's a little different. When I back up to an ExaGrid with Veeam, I am doing regular active fulls, synthetic fulls, and/or running SureBackup jobs. I'd expect SureBackup jobs to detect corruption, and having regular full backups means any corruption that occurs can only affect a small portion of the backup chain.
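To illustrate why regular fulls bound the damage, here is a toy dependency model (my own sketch, not how Veeam tracks chains): a restore point is usable only if every file from its governing full up to the point itself is intact, so each new full starts a fresh blast radius.

```python
# Toy model: which restore points survive when a file in a
# forward-incremental chain is corrupted.

def restorable_points(chain, corrupted):
    """chain: list like ['full', 'inc', 'inc', ...]; corrupted: set of
    indices of damaged files. Point i is restorable only if i and every
    file back to its most recent full are intact."""
    ok = []
    last_full = None
    for i, kind in enumerate(chain):
        if kind == "full":
            last_full = i
        if last_full is not None and all(
            j not in corrupted for j in range(last_full, i + 1)
        ):
            ok.append(i)
    return ok

chain = ["full", "inc", "inc", "full", "inc", "inc"]
# Corrupting increment 1 loses points 1 and 2 only; the next full
# (index 3) starts a fresh chain, so points 0, 3, 4, and 5 survive.
print(restorable_points(chain, {1}))  # [0, 3, 4, 5]
```

With fulls every few increments, a single corrupted file can never take out more than one short segment of the chain; without them, everything after the corrupted file is gone.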
You do bring up a good point about BCJs performing verification. That is a benefit that is easy to overlook.