-
- Veteran
- Posts: 532
- Liked: 58 times
- Joined: Jun 06, 2018 5:41 am
- Full Name: Per Jonsson
- Location: Sweden
- Contact:
File server failover cluster data stored twice?
Folks,
We have a file server failover cluster with two physical machines (agent backup). The total data is approx. 17 TB. Now I want to back up that data to tape. When I add the backup copy job as the source of the tape backup job, it says that the full backup is approx. 32 TB, i.e. double the actual data size. Does this mean the data will be backed up twice to tape? We are using LTO-8 tapes. With 17 TB, two tapes are enough, but if the same data is stored twice, four tapes will be needed... And LTO-8 tapes are not cheap, and sometimes few and far between...
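As a side note, the cartridge math can be sketched in a few lines. This is only an illustration, assuming LTO-8's nominal 12 TB native capacity and ignoring hardware compression; the function name is made up for the example.

```python
import math

# Nominal LTO-8 native capacity per cartridge, in TB (compression ignored).
LTO8_NATIVE_TB = 12

def tapes_needed(data_tb, capacity_tb=LTO8_NATIVE_TB):
    """Smallest number of cartridges that can hold data_tb of backup data."""
    return math.ceil(data_tb / capacity_tb)

print(tapes_needed(17))      # the actual 17 TB of data: 2 cartridges
print(tapes_needed(2 * 17))  # the same data stored twice: 3 cartridges
```

With compression the effective capacity can be higher, so the real cartridge count depends on how compressible the data is.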
And a question regarding the tape job: we already have a tape backup job that backs up everything BUT the file server cluster, approx. 280 servers, to tape once a month, so we are using monthly media sets only, and that works like a charm. However, when I create the new tape backup job for the file server cluster and configure it exactly the same as the other tape job, it says there is nothing to back up and sets a new backup date one month into the future. There is a seven-day retention policy in the source job, so there are one full backup and six incrementals. I want it to back up the full backup file to tape, and that is what the other tape job is doing, but this new tape job doesn't find anything to back up. What am I suddenly doing wrong?
"Process the most recent restore point instead of waiting" is activated.
PJ
-
- Product Manager
- Posts: 14816
- Liked: 1771 times
- Joined: Feb 04, 2013 2:07 pm
- Full Name: Dmitry Popov
- Location: Prague
- Contact:
Re: File server failover cluster data stored twice?
Hi PJ,
Is that a forever incremental backup job at source, or one with periodic active full backups?
The monthly media set does not back up restore points as soon as they are created; instead, the most suitable date is scheduled for the next run. You can right-click the newly created job and perform an ad-hoc backup for the current day.
-
- Veteran
- Posts: 532
- Liked: 58 times
- Joined: Jun 06, 2018 5:41 am
- Full Name: Per Jonsson
- Location: Sweden
- Contact:
Re: File server failover cluster data stored twice?
Hello,
It is a forward incremental with synthetic fulls. The .vbk files are approx. 17 TB.
PJ
-
- Product Manager
- Posts: 14816
- Liked: 1771 times
- Joined: Feb 04, 2013 2:07 pm
- Full Name: Dmitry Popov
- Location: Prague
- Contact:
Re: File server failover cluster data stored twice?
PJ,
Thanks! Is the backup copy that is used as the source for the tape job running in periodic or immediate copy mode? Any chance GFS retention is enabled?
-
- Veteran
- Posts: 532
- Liked: 58 times
- Joined: Jun 06, 2018 5:41 am
- Full Name: Per Jonsson
- Location: Sweden
- Contact:
Re: File server failover cluster data stored twice?
The backup copy job is in periodic copy mode, and there is no GFS retention, only a default 7 day retention policy.
-
- Veteran
- Posts: 532
- Liked: 58 times
- Joined: Jun 06, 2018 5:41 am
- Full Name: Per Jonsson
- Location: Sweden
- Contact:
Re: File server failover cluster data stored twice?
Hmm... When I look at the .vbk files in the backup copy job, the file is 17 TB for one server and 17 TB for the other server. That seems to indicate that the same data is stored twice on disk, for some reason... Does that mean that the data will be stored twice on the tape, as well?
-
- Veteran
- Posts: 532
- Liked: 58 times
- Joined: Jun 06, 2018 5:41 am
- Full Name: Per Jonsson
- Location: Sweden
- Contact:
Re: File server failover cluster data stored twice?
Could I get an answer about why the same data is stored twice on disk, and whether the data will be stored twice on tape as well?
-
- Product Manager
- Posts: 15127
- Liked: 3232 times
- Joined: Sep 01, 2014 11:46 am
- Full Name: Hannes Kasparick
- Location: Austria
- Contact:
Re: File server failover cluster data stored twice?
Hello,
The reason why the data is stored twice for the backup copy job is that you are using the per-machine backup files setting on the repository. It's a recommended setting in general. If you want to avoid that, you could create a separate repository for clusters with shared volumes.
I just tried it out with a regular tape job (standard media pool, no GFS media pool):
If you take the backup copy job as source, the tape job will store 2x17 TB on tape.
If you take the backup job as source (forward incremental forever mode, as far as I understood), the tape job will store 2x17 TB to tape (virtual synthetic full backup).
If you change your backup job mode to synthetic full every week (which doesn't cost any disk space with ReFS / XFS), the tape job will copy 1x17 TB to tape. The reason is that with synthetic fulls in the backup job, the tape job just copies the VBK file instead of synthesizing it.
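The three scenarios above can be condensed into a small sketch. This only encodes the observations reported in this thread (standard media pool, two-node cluster sharing a 17 TB volume); it is not a Veeam API, and the function name is illustrative.

```python
# Approximate data written to tape for one full backup, per the three
# scenarios described in this thread. Not a Veeam API; just a summary.
def tape_full_size_tb(source, weekly_synthetic_fulls):
    if source == "backup copy job":
        return 2 * 17  # per-machine backup files duplicate the shared volume
    if source == "backup job" and not weekly_synthetic_fulls:
        return 2 * 17  # forever incremental: virtual synthetic full duplicates it
    return 17          # weekly synthetic fulls: the existing VBK is copied as-is

print(tape_full_size_tb("backup copy job", False))  # 34
print(tape_full_size_tb("backup job", True))        # 17
```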
Best regards,
Hannes
-
- Product Manager
- Posts: 14816
- Liked: 1771 times
- Joined: Feb 04, 2013 2:07 pm
- Full Name: Dmitry Popov
- Location: Prague
- Contact:
Re: File server failover cluster data stored twice?
Hello, and sorry for the delay, Per. Hannes is right: the cluster disk is copied into each cluster node's independent backup, and that is why the content is duplicated.
-
- Veteran
- Posts: 532
- Liked: 58 times
- Joined: Jun 06, 2018 5:41 am
- Full Name: Per Jonsson
- Location: Sweden
- Contact:
Re: File server failover cluster data stored twice?
Thanks for your replies!
The backup is forward incremental with a synthetic full made every week. Does that mean that if I use the backup as the source of the tape job, instead of the backup copy, it will only copy the latest .vbk file (17 TB) to tape?
-
- Product Manager
- Posts: 15127
- Liked: 3232 times
- Joined: Sep 01, 2014 11:46 am
- Full Name: Hannes Kasparick
- Location: Austria
- Contact:
Re: File server failover cluster data stored twice?
that's what I tried to describe, yes (not sure how I could have written it better).
Ah, the "so there are one full backup and six incrementals" sounded like forever forward incremental without synthetic fulls to me.
-
- Veteran
- Posts: 532
- Liked: 58 times
- Joined: Jun 06, 2018 5:41 am
- Full Name: Per Jonsson
- Location: Sweden
- Contact:
Re: File server failover cluster data stored twice?
I was talking about the backup copy back then. The backup copy has a default 7 day retention policy, without GFS.
The backup, however, is a forward incremental with synthetic fulls, and a GFS policy with four monthly and four weekly fulls kept on disk.
So, if I use the backup as the source, only one copy of the data will be written to tape?
-
- Product Manager
- Posts: 15127
- Liked: 3232 times
- Joined: Sep 01, 2014 11:46 am
- Full Name: Hannes Kasparick
- Location: Austria
- Contact:
Re: File server failover cluster data stored twice?
Yes, as long as you use a standard pool (not a GFS pool).
-
- Veteran
- Posts: 532
- Liked: 58 times
- Joined: Jun 06, 2018 5:41 am
- Full Name: Per Jonsson
- Location: Sweden
- Contact:
Re: File server failover cluster data stored twice?
Thank you!
PJ
-
- Service Provider
- Posts: 22
- Liked: 1 time
- Joined: Oct 14, 2009 4:23 am
- Contact:
Re: File server failover cluster data stored twice?
Hi,
In our case, the per-machine backup file option is not enabled, and the backup copy job still copies the data of the cluster to the secondary repository twice (once per node).
We've tried sourcing from the job and from the backup, but either way it still copies the data twice.
The only workaround would be to backup copy only one node, but then if a failover occurs the data will not be copied over.
A little bit out of ideas here.

-
- Product Manager
- Posts: 14816
- Liked: 1771 times
- Joined: Feb 04, 2013 2:07 pm
- Full Name: Dmitry Popov
- Location: Prague
- Contact:
Re: File server failover cluster data stored twice?
Check the target repository settings for this particular backup copy job. The source repo might not have per-VM backup files enabled, but the target might. Thanks!
-
- Service Provider
- Posts: 22
- Liked: 1 time
- Joined: Oct 14, 2009 4:23 am
- Contact:
Re: File server failover cluster data stored twice?
I double-checked both repositories (source & target) and I can assure you that the per-machine option is not enabled.
I have opened a case and it is being researched at the moment.
-
- Service Provider
- Posts: 22
- Liked: 1 time
- Joined: Oct 14, 2009 4:23 am
- Contact:
Re: File server failover cluster data stored twice?
If anyone is interested in this, support has told us that it is expected behaviour; there is currently a change request for it, but no ETA.
The workaround is to "backup copy" only one node of the cluster.
-
- Influencer
- Posts: 13
- Liked: never
- Joined: Jan 24, 2024 7:21 pm
- Full Name: Joel Dodd
- Contact:
Re: File server failover cluster data stored twice?
Question: how did you only target one node? I can only select from backup jobs and repositories; I can't select a Windows server.
Or did you just create a separate protection group for each node, create a backup job for each node, and then create a backup copy job targeting the one-node backup job?
-
- Service Provider
- Posts: 570
- Liked: 140 times
- Joined: Apr 03, 2019 6:53 am
- Full Name: Karsten Meja
- Contact:
Re: File server failover cluster data stored twice?
tapes are cheap now. just do it
-
- Product Manager
- Posts: 15127
- Liked: 3232 times
- Joined: Sep 01, 2014 11:46 am
- Full Name: Hannes Kasparick
- Location: Austria
- Contact:
Re: File server failover cluster data stored twice?
And upgrade to V12: the "virtual synthetic full" issue was solved in V12 (as were the other "duplicate data" issues). If you see something different, please provide a case number.