Agent-based backup of Windows, Linux, Mac, AIX and Solaris machines.

Large file server backup - Best practice

Post by perjonsson1960 »

Folks,

What is the best practice for backing up a large physical file server cluster, using B&R and the Windows Agent? The amount of data is currently around 15 TB.

Which backup method is commonly used for large file servers? Should we use "forward incremental" with synthetic fulls, or "forever forward incremental"? All our backup repositories have dedup enabled, and we have one scale-out repo for the backups and another scale-out repo for the backup copies at another physical location. Currently all our backup jobs use a 14-day retention period and the backup copy jobs use a 7-day retention period, but some of them use a GFS policy with weekly, monthly and yearly full backups on disk. And beyond that, we are backing up everything to tape once a month.

This file server cluster is by far the largest "device" to be backed up by B&R in our organization.

PJ

Re: Large file server backup - Best practice

Post by Dima P. »

Hello Per,

Moved this post to the agent subforum.
> What is the best practice for backing up a large physical file server cluster, using B&R and the Windows Agent? The amount of data is currently around 15 TB.
Stay with the entire computer backup mode (or volume-level backup mode) and make sure you have the CBT driver installed.
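If you want to verify the driver is actually present on the cluster nodes, a quick PowerShell check along these lines can help (an untested sketch; the "Veeam" name match is an assumption, since the exact CBT driver and service names vary by agent version):

```powershell
# Hedged check: list Veeam-related services on the protected machine.
# The exact CBT driver/service name varies by agent version, so match broadly.
Get-Service -DisplayName "*Veeam*" |
    Select-Object Name, DisplayName, Status |
    Format-Table -AutoSize

# Kernel drivers do not always show up as services; driverquery catches those too.
driverquery /v | findstr /i veeam
```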
> Which backup method is commonly used for large file servers? Should we use "forward incremental" with synthetic fulls, or "forever forward incremental"?
If you have enough space to keep multiple full backups, I'd say go with forward incremental with periodic fulls. If you only have space for a single full backup, forever forward incremental is the better choice.
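For a rough feel of the space difference, here is a back-of-the-envelope sketch; every input is an illustrative assumption (15 TB source stored 1:1, 5% daily change, 14 restore points, dedup and compression not counted):

```powershell
# Back-of-the-envelope repository sizing; all inputs are illustrative assumptions.
$fullTB        = 15      # size of one full backup (15 TB source, stored 1:1)
$changeRate    = 0.05    # assumed 5% daily change rate
$restorePoints = 14      # 14-day retention

# Forever forward incremental: one rolling full plus the incremental files.
$foreverForward = $fullTB + ($restorePoints - 1) * $fullTB * $changeRate

# Forward incremental with weekly synthetic fulls: before the oldest chain
# ages out, two fulls can coexist on disk, plus the increments in between.
$forwardIncremental = 2 * $fullTB + $restorePoints * $fullTB * $changeRate

"Forever forward:     {0:N1} TB" -f $foreverForward
"Forward incremental: {0:N1} TB" -f $forwardIncremental
```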
> All our backup repositories have dedup enabled,
Make sure backup job encryption is disabled; otherwise it will hurt the deduplication ratio.
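If you want to audit this across jobs from the B&R server, a sketch along these lines can flag encrypted jobs. Note the option property path is an assumption based on common community scripts, and managed agent jobs may need a different cmdlet (e.g. Get-VBRComputerBackupJob), so verify against your module version:

```powershell
# Hedged sketch, run on the B&R server with the Veeam PowerShell module loaded.
# The property path below is an assumption; check it against your module version.
Get-VBRJob | ForEach-Object {
    $opts = $_.GetOptions()
    [PSCustomObject]@{
        Job       = $_.Name
        Encrypted = $opts.BackupStorageOptions.StorageEncryptionEnabled
    }
} | Where-Object { $_.Encrypted }
```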

Re: Large file server backup - Best practice

Post by perjonsson1960 »

Thanks! :-)

PJ

Re: Large file server backup - Best practice

Post by perjonsson1960 »

Come to think of it, what is the main reason that forward incremental with synthetic fulls is preferred over forever forward incremental? Is it because it allows the job to regularly create new full backup files that are not fragmented? Is that better than running "defragment and compact" and "health check" year in and year out on the same full backup file?

PJ

Re: Large file server backup - Best practice

Post by Dima P. »

Well, with synthetic fulls you do, indeed, keep all previously occupied blocks within the backup file: say you have removed a volume completely; it will remain in the backup until you run a compact. With an active full that won't happen, but you load the production storage instead (i.e. you have to read the entire machine for the full backup, rather than reading only the changes as during a synthetic full). A synthetic full, on the other hand, puts more pressure on the repository, since the existing data has to be read to build the new .vbk.
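To put rough numbers on it for the 15 TB server in this thread (illustrative arithmetic only, assuming ~5% changed data since the last full):

```powershell
# Illustrative only: where the read load lands for each type of full backup,
# assuming a 15 TB source and ~5% changed data since the last full.
$sourceTB  = 15
$changedTB = $sourceTB * 0.05

"Active full:    read {0} TB from production, write {0} TB to the repository" -f $sourceTB
"Synthetic full: read {0} TB of changes from production, then read ~{1} TB from and write {1} TB to the repository" -f $changedTB, $sourceTB
```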

Re: Large file server backup - Best practice

Post by perjonsson1960 »

To me it seems as though you are explaining the difference between synthetic fulls and active fulls. Is that relevant to my question about forward incremental vs. forever forward incremental backups? Isn't it that with forever forward incremental, the same .vbk file is updated forever from the incremental backups, and thus no new .vbk file is ever created?

Re: Large file server backup - Best practice

Post by Dima P. »

> To me it seems as though you are explaining the difference between synthetic fulls and active fulls.
True, sorry for that.
> What is the main reason that forward incremental with synthetic fulls is preferred over forever forward incremental?
I'd say most of the time it depends entirely on the backup strategy: if a full backup is required, you can either stay with active fulls or use synthetics. Additionally, periodic fulls help you split the backup chain 'history'. As an example: if you are backing up to tape with forever forward incremental, you must create a full backup at some point in time anyway. In my home setup I do not use periodic full backups at all, but occasionally restart the backup chain with a new full (once a year or so).
> Isn't it that with forever forward incremental, the same .vbk file is updated forever from the incremental backups, and thus no new .vbk file is ever created?
That's right.
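You can see the difference directly on disk: forward incremental accumulates several .vbk files (one per chain) with .vib increments between them, while forever forward keeps a single rolling .vbk. A minimal sketch for counting both (the path is a placeholder; point it at the job folder on your repository):

```powershell
# Hedged sketch: count full (.vbk) and incremental (.vib) files in a job folder.
# The path is a placeholder; replace it with your actual repository job folder.
Get-ChildItem -Path 'D:\Backups\FileServerJob' -Recurse -Include *.vbk, *.vib |
    Group-Object Extension |
    Select-Object Name, Count
```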