Discussions related to exporting backups to tape and backing up directly to tape.
perjonsson1960
Veteran
Posts: 463
Liked: 47 times
Joined: Jun 06, 2018 5:41 am
Full Name: Per Jonsson
Location: Sweden
Contact:

File server failover cluster data stored twice?

Post by perjonsson1960 »

Folks,

We have a file server failover cluster with two physical machines (agent backup). The total data is approx. 17 TB. Now I want to back up that data to tape, but when I add the backup copy job as the source of the tape backup job, it says that the full backup is approx. 32 TB, i.e. double the actual data size. Does this mean that the data will be backed up twice to tape? We are using LTO-8 tapes. With 17 TB, two tapes will be enough, but if the same data is stored twice, then four tapes will be needed... And LTO-8 tapes are not cheap, and sometimes few and far between...
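(Editorial sketch: the tape count above follows from LTO-8's 12 TB native capacity, assuming each node's full backup set starts on a fresh tape. `tapes_needed` is an illustrative helper, not a Veeam function.)

```python
import math

LTO8_NATIVE_TB = 12  # LTO-8 native (uncompressed) capacity in TB

def tapes_needed(set_sizes_tb, tape_tb=LTO8_NATIVE_TB):
    """Tapes required when each backup set starts on a fresh tape."""
    return sum(math.ceil(size / tape_tb) for size in set_sizes_tb)

print(tapes_needed([17]))      # one 17 TB full: 2 tapes
print(tapes_needed([17, 17]))  # the same full duplicated per node: 4 tapes
```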

And a question regarding the tape job: we already have a tape backup job that backs up everything BUT the file server cluster, approx. 280 servers, to tape once a month, so we are using monthly media sets only, and that works like a charm. However, when I create the new tape backup job for the file server cluster and configure it exactly the same as the other tape job, it says that there is nothing to back up, and sets a new backup date one month into the future. There is a seven-day retention policy in the source job, so there is one full backup and six incrementals. I want it to back up the full backup file to tape, and that is what the other tape job is doing, but this new tape job doesn't find anything to back up. What am I suddenly doing wrong?

"Process the most recent restore point instead of waiting" is activated.

PJ
Dima P.
Product Manager
Posts: 14417
Liked: 1576 times
Joined: Feb 04, 2013 2:07 pm
Full Name: Dmitry Popov
Location: Prague
Contact:

Re: File server failover cluster data stored twice?

Post by Dima P. »

Hi PJ,
We have a file server failover cluster with two physical machines (agent backup). The total data is approx. 17 TB. Now I want to back up that data to tape, but when I add the backup copy job as the source of the tape backup job, it says that the full backup is approx. 32 TB, i.e. double the actual data size.
Is that a forever incremental backup job at source or with periodic active full backups?
We already have a tape backup job that backs up everything BUT the file server cluster, approx. 280 servers, to tape once a month, so we are using monthly media sets only, and that works like a charm. However, when I create the new tape backup job for the file server cluster and configure it exactly the same as the other tape job, it says that there is nothing to back up, and sets a new backup date one month into the future.
A monthly media set does not copy backups to tape as soon as they are created; instead, the most suitable date is scheduled for the next run. You can right-click the newly created job and perform an ad-hoc backup for the current day.
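(Illustrative only: a monthly media set picks the next matching calendar date instead of running right away. A minimal sketch of that behaviour, assuming simple day-of-month scheduling rather than Veeam's actual scheduler:)

```python
from datetime import date, timedelta

def next_monthly_run(today, run_day):
    """Next occurrence of run_day; if it has already passed this month
    (or is today), schedule it for next month instead."""
    if today.day < run_day:
        return today.replace(day=run_day)
    # jump safely into next month, then pin the run day
    return (today.replace(day=1) + timedelta(days=32)).replace(day=run_day)

# A job created mid-month waits until next month's run day:
print(next_monthly_run(date(2021, 6, 15), 1))  # 2021-07-01
```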

Post by perjonsson1960 »

Hello,

It is a forward incremental with synthetic fulls. The .vbk files are approx. 17 TB.

PJ

Post by Dima P. »

PJ,

Thanks! Is the backup copy job that is used as the source for the tape job running in periodic or immediate copy mode? Any chance that GFS retention is enabled?

Post by perjonsson1960 »

The backup copy job is in periodic copy mode, and there is no GFS retention, only a default 7-day retention policy.

Post by perjonsson1960 »

Hmm... When I look at the .vbk files in the backup copy job, the file is 17 TB for one server and 17 TB for the other server. That seems to indicate that the same data is stored twice on disk, for some reason... Does that mean that the data will be stored twice on the tape as well?

Post by perjonsson1960 »

Could I get an answer about why the same data is stored twice on disk, and whether the data will be stored twice on tape as well?
HannesK
Product Manager
Posts: 14322
Liked: 2890 times
Joined: Sep 01, 2014 11:46 am
Full Name: Hannes Kasparick
Location: Austria
Contact:

Re: File server failover cluster data stored twice?

Post by HannesK » 1 person likes this post

Hello,
the reason the data is stored twice for the backup copy job is that you are using the per-machine backup files setting on the repository. It's a recommended setting in general. If you want to avoid that, you could create a separate repository for clusters with shared volumes.

I just tried it out with a regular tape job (standard media pool, not a GFS media pool):

If you take the backup copy job as source, the tape job will store 2x17 TB on tape.
If you take the backup job as source (forever forward incremental mode, as far as I understood), the tape job will also store 2x17 TB on tape (virtual synthetic full backup).
If you change your backup job to a synthetic full every week (which doesn't cost any disk space with ReFS / XFS), the tape job will copy only 1x17 TB to tape. The reason is that with synthetic fulls on the backup job, the tape job just copies the VBK file instead of synthesizing it.
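(The three cases above can be condensed into a toy model. The function name and logic here merely illustrate the observed outcomes; they are not Veeam's actual behaviour or API.)

```python
def full_tb_on_tape(data_tb, nodes, source, periodic_synthetic_fulls):
    """Size of a full written to tape for a per-machine cluster backup.

    Toy model of the three observed cases: the tape job synthesizes a
    full per node (duplicating the shared cluster disk) unless the
    source is a backup job that already produces periodic synthetic
    fulls, in which case the existing VBK is copied once.
    """
    if source == "backup" and periodic_synthetic_fulls:
        return data_tb          # existing VBK copied as-is
    return nodes * data_tb      # a full is synthesized per node

print(full_tb_on_tape(17, 2, "backup copy", False))  # 34
print(full_tb_on_tape(17, 2, "backup", False))       # 34 (forever incremental)
print(full_tb_on_tape(17, 2, "backup", True))        # 17
```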

Best regards,
Hannes

Post by Dima P. »

Hello, and sorry for the delay, Per. Hannes is right: the cluster disk is copied into each cluster node's independent backup, and that's why you get the content duplicated.

Post by perjonsson1960 »

Thanks for your replies!
The backup is forward incremental with a synthetic full made every week. Does that mean that if I use the backup as the source of the tape job, instead of the backup copy, it will only copy the latest .vbk file (17 TB) to tape?

Post by HannesK »

That's what I tried to describe, yes (not sure how I could have written it better).
The backup is forward incremental with a synthetic full made every week.
Ah, the "so there is one full backup and six incrementals" sounded like "forever forward incremental without synthetic fulls" to me.

Post by perjonsson1960 »

I was talking about the backup copy back then. The backup copy has a default 7-day retention policy, without GFS.
The backup, however, is a forward incremental with synthetic fulls, and a GFS policy with four monthly and four weekly fulls kept on disk.

So, if I use the backup as the source, only one copy of the data will be written to tape?

Post by HannesK »

Yes, as long as you use a standard media pool (not a GFS media pool).

Post by perjonsson1960 »

Thank you!

PJ
sjutras
Service Provider
Posts: 22
Liked: 1 time
Joined: Oct 14, 2009 4:23 am
Contact:

Re: File server failover cluster data stored twice?

Post by sjutras »

Hi,

In our case, the per-machine backup file option is not enabled, and the backup copy job still copies the cluster's data to the secondary repository twice (once per node).

We've tried sourcing from the job and from the backup, but either way it still copies the data twice.

The only workaround would be to backup copy only one node, but then if a failover occurs, the data will not be copied over.

A little bit out of ideas here ;-)

Post by Dima P. »

backup copy job still copies the data of the cluster to the secondary repository twice (once per node).
Check the target repository setting for this particular backup copy job. The source repo might not have per-VM backup files enabled, but the target might. Thanks!

Post by sjutras »

I double-checked both repositories (source & target) and I can assure you that the per-machine option is not enabled.

I opened a case, and it's being researched at the moment.

Post by sjutras » 1 person likes this post

If anyone is interested in this, support has told us that it is expected behaviour, and that there is currently a change request for it, but there is no ETA.

The workaround is to "backup copy" only one node of the cluster.
