ekisner
Expert
Posts: 203
Liked: 34 times
Joined: Jul 26, 2012 8:04 pm
Full Name: Erik Kisner
Contact:

Possible new feature? Moving Backup Chains

Post by ekisner »

So I'm in the process of migrating my backup chains to different storage. Unfortunately, given the massive size of these chains, I must disable the job for at least a day while the copy runs. Murphy's Law says this is exactly when I'll need the backups to be running!

It would be very interesting to have a built-in move option which tracks the locations of the individual files in the backup chains, allowing the copy to simply pause when a scheduled job run happens, then resume copying the chain when the job finishes.

I'm guessing the structure of each of the files in the chain references the next item in the chain (or it's just tracked in the DB, which does sound simpler), so changing the absolute path shouldn't be all that hard.

It would probably want to start with the VBK and then work its way up/down through the VIBs.
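For what it's worth, the pause-and-resume behavior described above isn't hard to sketch. This is just an illustration of the idea, not anything Veeam provides: the `job_is_running` callable is a hypothetical check supplied by the caller, and the skip-if-exists test is what would let the move resume after an interruption.

```python
import shutil
import time
from pathlib import Path

def move_chain(files, dest_dir, job_is_running, poll_seconds=30):
    """Copy a backup chain file by file, pausing while the job runs.

    `files` is ordered oldest-first (VBK, then the VIBs/VRBs);
    `job_is_running` is a callable that reports whether the backup
    job is currently active. Returns the destination paths.
    """
    dest_dir = Path(dest_dir)
    dest_dir.mkdir(parents=True, exist_ok=True)
    copied = []
    for src in map(Path, files):
        # Pause between files whenever a scheduled job run is in progress.
        while job_is_running():
            time.sleep(poll_seconds)
        dst = dest_dir / src.name
        if not dst.exists():  # skip files already copied (resume support)
            shutil.copy2(src, dst)
        copied.append(dst)
    return copied
```

The real feature would also need to update the path tracking in the configuration database, which is exactly the part only the product itself could do.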
PTide
Product Manager
Posts: 6551
Liked: 765 times
Joined: May 19, 2015 1:46 pm
Contact:

Re: Possible new feature? Moving Backup Chains

Post by PTide »

Hi,

What are the size and type of your backup chain?

Thanks
ekisner
Expert
Posts: 203
Liked: 34 times
Joined: Jul 26, 2012 8:04 pm
Full Name: Erik Kisner
Contact:

Re: Possible new feature? Moving Backup Chains

Post by ekisner »

The job is reverse incremental with separate chains per VM; total size is 6.8TB at the moment, covering 5 days' worth of restore points. It backs up our file servers. The rest of our jobs were already on the new storage; the purpose of this move was to put the file server job on faster storage after adding more disks to that array.
PTide
Product Manager
Posts: 6551
Liked: 765 times
Joined: May 19, 2015 1:46 pm
Contact:

Re: Possible new feature? Moving Backup Chains

Post by PTide »

You can temporarily switch to forever forward incremental mode so the full backup won't be touched during the next incremental run. That should let you copy the whole reverse chain to the new repo while backup sessions continue; just don't forget to increase retention temporarily so the oldest .vrb doesn't get deleted. After you've copied the chain, switch the job back to reverse mode and map it to the moved chain. However, this approach has not been tested and is quite complicated, so I'd recommend staying with the forever forward scheme in this case. Forever forward is also much less I/O-intensive than reverse incremental, so you will benefit in terms of performance.
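Before remapping the job to the moved chain, it would be prudent to verify the copy is bit-identical. A minimal sketch of such a check, assuming the chain files sit directly in the repository folders (the extensions and directory layout are assumptions, not something Veeam-specific):

```python
import hashlib
from pathlib import Path

def sha256sum(path, chunk=1 << 20):
    """Hash a file in 1 MiB chunks to avoid loading it into memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(chunk), b""):
            h.update(block)
    return h.hexdigest()

def chain_matches(src_dir, dst_dir, patterns=("*.vbk", "*.vib", "*.vrb")):
    """True if every chain file in src_dir has an identical copy in dst_dir."""
    for pattern in patterns:
        for src in Path(src_dir).glob(pattern):
            dst = Path(dst_dir) / src.name
            if not dst.exists() or sha256sum(src) != sha256sum(dst):
                return False
    return True
```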

Thank you
mkretzer
Veeam Legend
Posts: 1203
Liked: 417 times
Joined: Dec 17, 2015 7:17 am
Contact:

Re: Possible new feature? Moving Backup Chains

Post by mkretzer »

I would also find a move feature very useful.
In theory, since the files could be copied first (like with a vMotion), backups would not even have to stop while the move happens, if it's done right. Especially with SOBR this would be useful!
ekisner
Expert
Posts: 203
Liked: 34 times
Joined: Jul 26, 2012 8:04 pm
Full Name: Erik Kisner
Contact:

Re: Possible new feature? Moving Backup Chains

Post by ekisner »

Well, we don't move our backups often - in this case, I moved the backup to faster storage after adding additional disks to the faster array. Now that it's there, I don't anticipate any further moves. The complexity isn't so much an issue; it does sound fairly straightforward. If I do need to move things again, I will certainly give that one a try.

With regard to the IO load, we throttle our backups already so the production storage does not get hit too hard; the DR storage rarely reaches saturation.
skrause
Veteran
Posts: 487
Liked: 106 times
Joined: Dec 08, 2014 2:58 pm
Full Name: Steve Krause
Contact:

Re: Possible new feature? Moving Backup Chains

Post by skrause »

ekisner wrote:Well, we don't move our backups often - in this case, I moved the backup to faster storage after adding additional disks to the faster array. Now that it's there, I don't anticipate any further moves. The complexity isn't so much an issue; it does sound fairly straightforward. If I do need to move things again, I will certainly give that one a try.

With regard to the IO load, we throttle our backups already so the production storage does not get hit too hard; the DR storage rarely reaches saturation.
Forever forward will greatly reduce the amount of time the snapshot is open, so you may want to look at it for that reason as well.
Steve Krause
Veeam Certified Architect
ekisner
Expert
Posts: 203
Liked: 34 times
Joined: Jul 26, 2012 8:04 pm
Full Name: Erik Kisner
Contact:

Re: Possible new feature? Moving Backup Chains

Post by ekisner »

That is a fair point. We presently use reverse incremental for the convenience of restoring data, as I'm pretty much constantly mounting restore points. As a peace-of-mind process, I have an automated script which mounts a restore point (the least recently checked one), launches an AV scan on the FLR volume, then cleans up. That gives me an offline scan that no active AV countermeasures in the guest can interfere with to evade detection. It certainly increases the load on the DR array, but like I said, it's rarely saturated. The AV of course cannot remediate an infection, but it can alert me so that I can remediate the production server.
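The "least recently checked" rotation is the interesting scheduling bit. A sketch of how that bookkeeping could work, assuming a hypothetical `scan_state.json` file for last-scan timestamps (the actual mount-and-scan steps would be Veeam- and AV-specific and are omitted):

```python
import json
import time
from pathlib import Path

STATE_FILE = Path("scan_state.json")  # hypothetical last-scan timestamp store

def load_state():
    """Map of VM name -> epoch seconds of last scan (empty on first run)."""
    return json.loads(STATE_FILE.read_text()) if STATE_FILE.exists() else {}

def pick_least_recently_checked(vm_names, state):
    """Pick the VM scanned longest ago; never-scanned VMs sort first (0)."""
    return min(vm_names, key=lambda vm: state.get(vm, 0))

def mark_scanned(vm, state):
    """Record the scan time and persist, so the rotation survives restarts."""
    state[vm] = time.time()
    STATE_FILE.write_text(json.dumps(state))
```

Each run would then mount the chosen VM's restore point, kick off the scan, and call `mark_scanned` during cleanup.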

I'm not 100% sure how it works on the backend with forward incremental - whether it needs to generate a synthetic full point for the restore or whether it just "handles it". My current understanding is that restore-related tasks start faster with a reverse incremental job.
PTide
Product Manager
Posts: 6551
Liked: 765 times
Joined: May 19, 2015 1:46 pm
Contact:

Re: Possible new feature? Moving Backup Chains

Post by PTide »

I have an automated script which mounts a restore point (the least recently checked), and launches an AV scan on the FLR volume, then cleans up. Gives me an offline scan that I know can prevent any form of active AV countermeasures<...>
Wow, that's impressive! Would you share some numbers on how fast the scan runs (minutes per GB / number of files) on average?
With regard to the IO load, we throttle our backups already so the production storage does not get hit too hard; the DR storage rarely reaches saturation
Oh, I see. I thought that you weren't satisfied with the backup job performance, because you said this:
The rest were already there, the purpose of the move was to put it on faster storage having added more disks to that array.
I'm not 100% sure how it works on the backend with forwards, whether or not it needs to generate a synthetic full point for the restore or whether it just "handles it" or not.
It just "handles it" - Veeam pulls all required blocks that constitute the restore point.
I am currently of the understanding that restore-related tasks start faster with a reverse incremental job.
If you restore the most recent point then yes, it will go without extra processing.

Thanks
ekisner
Expert
Posts: 203
Liked: 34 times
Joined: Jul 26, 2012 8:04 pm
Full Name: Erik Kisner
Contact:

Re: Possible new feature? Moving Backup Chains

Post by ekisner »

Would you provide some numbers on how fast does the scan happen minutes/GB/amount of files on average?
It's not super-fast, but it's all a function of how much processing power you can throw at it. It's running on old hardware, so it generally only manages about 15MB/s on the storage side of things. Passable, but for the file servers it can certainly get in the way. I get around this by simply using lots of file servers (8 of them, never with more than 1TB of space allocated, if anyone's counting), so that each one scans reasonably fast. It generally works out to about 150 files per second, though load makes this vary.
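To put those numbers in perspective, a back-of-the-envelope estimate (illustrative only, using the ~15MB/s figure quoted above) shows why capping each file server at 1TB matters:

```python
def scan_hours(size_gb, mb_per_s=15):
    """Estimated wall-clock hours to scan `size_gb` of data at a given MB/s rate."""
    return size_gb * 1024 / mb_per_s / 3600

# A full 1 TB file server at ~15 MB/s:
print(round(scan_hours(1024), 1))  # → 19.4
```

At roughly 19 hours per terabyte, one big monolithic file server could never be scanned inside a nightly window, while eight sub-1TB servers each finish in a workable timeframe.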

I am debating setting up "AV proxies" out of old hardware and building the script to farm work out to them, as it's definitely a processing-power limitation at this point. Although, in concept, it might be compression slowing things down too. Haven't looked into it much, to be honest.
I thought that you're not satisfied with the backup job performance
The storage the job was running on was archival storage. The long version of the story is that we bought a bunch of disks off Amazon to accommodate growth. I plugged them in, grew the LUNs, and got everything running nicely. Monday morning I came in to find that 7 of the drives had failed, and the RAID had critically failed on account of too many disk failures. Monday morning, having yet to even drink some coffee, with a melted array and no disk-based backups, I was NOT a happy camper.

So I yanked all the bad drives and rebuilt the LUNs. However, given that we already had a capacity issue, I kept our file servers on archival storage - it was definitely slow and definitely a bottleneck... but it wasn't about to nuke my backups if I looked at it wrong. Basically, I met my storage needs by moving the archival tier up a notch.

Then, after replacing the failed drives and running them for a while holding only as much data as could survive even a total failure (a bit of confidence-building time), I was confident that this time we would be good to go, and I initiated the move back to faster storage so that my archival tier could go back to being archival.
Sistemi
Lurker
Posts: 1
Liked: never
Joined: Oct 25, 2017 12:30 pm
Full Name: U Sistemi
Contact:

Re: Possible new feature? Moving Backup Chains

Post by Sistemi »

Hi, I vote for this new feature too.

I need to move backup files from NAS1 to NAS2 without any loss.