-
- Veteran
- Posts: 534
- Liked: 58 times
- Joined: Jun 06, 2018 5:41 am
- Full Name: Per Jonsson
- Location: Sweden
- Contact:
Large file server backup - Best practice
Folks,
What is the best practice for backing up a large physical file server cluster, using B&R and the Windows Agent? The amount of data is currently around 15 TB.
Which backup method is commonly used for large file servers? Should we use "forward incremental" with synthetic fulls, or "forever forward incremental"? All our backup repositories have dedup enabled, and we have one scale-out repo for the backups, and another scale-out repo for the backup copies, at another physical location. Currently all our backup jobs use a 14 day retention period, and the backup copy jobs use a 7 day retention period, but some of them use a GFS policy with weekly, monthly and yearly full backups on disk. And beyond that, we are backing up everything to tape once a month.
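To make the retention numbers concrete, here is a rough tally of the restore points that policy keeps, assuming one backup run per day; the GFS quotas in the sketch are placeholders, not our real settings.
Code:
# Restore points implied by the policy described above,
# assuming one backup run per day (an assumption, not a given).
PRIMARY_RETENTION_DAYS = 14      # backup jobs
COPY_RETENTION_DAYS = 7          # backup copy jobs

# Hypothetical GFS quotas for the copy jobs that use GFS
GFS_WEEKLY, GFS_MONTHLY, GFS_YEARLY = 4, 12, 3

primary_points = PRIMARY_RETENTION_DAYS
copy_points = COPY_RETENTION_DAYS + GFS_WEEKLY + GFS_MONTHLY + GFS_YEARLY

print(f"primary repo restore points        : {primary_points}")
print(f"copy repo restore points (incl GFS): {copy_points}")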
This file server cluster is by far the largest "device" to be backed up by B&R in our organization.
PJ
-
- Product Manager
- Posts: 14818
- Liked: 1772 times
- Joined: Feb 04, 2013 2:07 pm
- Full Name: Dmitry Popov
- Location: Prague
- Contact:
Re: Large file server backup - Best practice
Hello Per,
Moved this post to the agent subforum.
Per Jonsson wrote: What is the best practice for backing up a large physical file server cluster, using B&R and the Windows Agent? The amount of data is currently around 15 TB.
Stay with the entire computer backup mode (or volume-level backup mode) and make sure that you have the CBT driver installed.
Per Jonsson wrote: Which backup method is commonly used for large file servers? Should we use "forward incremental" with synthetic fulls, or "forever forward incremental"?
If you have enough space to keep multiple full backups, I'd say go with forward incremental with periodic fulls. If you only have enough space to keep one full backup, forever forward incremental is the best choice.
Per Jonsson wrote: All our backup repositories have dedup enabled.
Make sure you have backup job encryption disabled, otherwise it will impact the deduplication ratio.
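To put rough numbers on the "enough space" question for a ~15 TB server with 14 restore points, here is a back-of-the-envelope sketch; the 2% daily change rate and the peak full count are assumptions, and dedup/compression are ignored.
Code:
# Rough on-disk footprint of the two methods for a 15 TB source,
# 14 restore points, before dedup/compression.
FULL_TB = 15.0          # size of one full backup
CHANGE_RATE = 0.02      # assumed daily change rate
RETENTION_POINTS = 14

incr_tb = FULL_TB * CHANGE_RATE

# Forever forward incremental: one .vbk plus (points - 1) increments.
forever_forward = FULL_TB + (RETENTION_POINTS - 1) * incr_tb

# Forward incremental with weekly synthetic fulls: to always keep 14 points,
# roughly three fulls can coexist at peak (two chains within retention plus
# the old chain that cannot be deleted yet), plus the increments between them.
forward_weekly_synth = 3 * FULL_TB + RETENTION_POINTS * incr_tb

print(f"forever forward incremental   : ~{forever_forward:.1f} TB")
print(f"forward incr. + weekly synth. : ~{forward_weekly_synth:.1f} TB")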
-
- Veteran
- Posts: 534
- Liked: 58 times
- Joined: Jun 06, 2018 5:41 am
- Full Name: Per Jonsson
- Location: Sweden
- Contact:
Re: Large file server backup - Best practice
Thanks! 
PJ
-
- Veteran
- Posts: 534
- Liked: 58 times
- Joined: Jun 06, 2018 5:41 am
- Full Name: Per Jonsson
- Location: Sweden
- Contact:
Re: Large file server backup - Best practice
Come to think of it, what is the main reason for forward incremental with synthetic fulls being preferred over forever forward incremental? Is it because it allows the job to create new full backup files regularly, that are not fragmented? Is that better than using "defragment and compact" and "health check" year in and year out on the same full backup file?
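For my own understanding, the sketch below is how I picture forever forward incremental behaving once the retention count is reached (simplified, file names made up), which is why I am wondering about fragmentation of the .vbk.
Code:
# Toy model of a forever forward incremental chain at 14-point retention.
# Once the chain is full, every new increment triggers a merge of the oldest
# .vib into the .vbk, so the same full file is rewritten in place day after day.
from collections import deque

RETENTION_POINTS = 14
chain = deque()          # oldest increment at the left
vbk_rewrites = 0

def run_backup(day):
    global vbk_rewrites
    chain.append(f"day{day}.vib")
    # the .vbk plus its increments must not exceed the retention point count
    while 1 + len(chain) > RETENTION_POINTS:
        merged = chain.popleft()
        vbk_rewrites += 1    # blocks from the merged increment go into the .vbk
        print(f"day {day}: merged {merged} into the .vbk")

for day in range(1, 31):
    run_backup(day)

print(f"after 30 days: the single .vbk was rewritten {vbk_rewrites} times, "
      f"{len(chain)} increments remain on disk")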
PJ
-
- Product Manager
- Posts: 14818
- Liked: 1772 times
- Joined: Feb 04, 2013 2:07 pm
- Full Name: Dmitry Popov
- Location: Prague
- Contact:
Re: Large file server backup - Best practice
Well, in the case of synthetic fulls you do, indeed, keep all occupied blocks within the backup file: say you have removed a volume completely, but it remains in the backup until you do a compact. With an active full that won't happen, but you put the load on production instead (i.e. you have to read the entire machine for the full backup instead of reading only the changes, as during a synthetic full). A synthetic full, though, puts more pressure on the repository, since the existing data has to be read to build a new .vbk.
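As a rough way to picture that trade-off for a 15 TB server (the daily change rate below is just an assumption):
Code:
# Approximate I/O per full backup for a 15 TB source.
FULL_TB = 15.0
CHANGE_RATE = 0.02                   # assumed daily change rate
incr_tb = FULL_TB * CHANGE_RATE

# Active full: read the whole machine from production, write a new .vbk.
active_full = {"production_read_TB": FULL_TB,
               "repository_write_TB": FULL_TB}

# Synthetic full: read only the changes from production, then read the existing
# chain from the repository and write a brand-new .vbk there.
synthetic_full = {"production_read_TB": incr_tb,
                  "repository_read_TB": FULL_TB,
                  "repository_write_TB": FULL_TB}

print("active full   :", active_full)
print("synthetic full:", synthetic_full)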
-
- Veteran
- Posts: 534
- Liked: 58 times
- Joined: Jun 06, 2018 5:41 am
- Full Name: Per Jonsson
- Location: Sweden
- Contact:
Re: Large file server backup - Best practice
To me it seems as though you are explaining the difference between synthetic fulls and active fulls. Is that relevant to my question about forward incremental vs. forever forward incremental backups? Isn't it that with forever forward incremental, the same .vbk file is being updated forever from the incremental backups being done, and thus no new .vbk file is ever created?
-
- Product Manager
- Posts: 14818
- Liked: 1772 times
- Joined: Feb 04, 2013 2:07 pm
- Full Name: Dmitry Popov
- Location: Prague
- Contact:
Re: Large file server backup - Best practice
Per Jonsson wrote: To me it seems as though you are explaining the difference between synthetic fulls and active fulls.
True, sorry for that.
Per Jonsson wrote: What is the main reason for forward incremental with synthetic fulls being preferred over forever forward incremental?
I'd say most of the time it depends entirely on the backup strategy: if a periodic full backup is required, you can either stay with active fulls or use synthetics. Additionally, periodic fulls help you split the backup chain "history". As an example, if you are backing up to tape with forever forward incremental, you must create a full backup at some point in time. In my home setup I do not use periodic full backups at all, but I occasionally restart the backup chain with a new full (once a year or so).
Per Jonsson wrote: Isn't it that with forever forward incremental, the same .vbk file is being updated forever from the incremental backups being done, and thus no new .vbk file is ever created?
That's right.