Due to circumstances beyond my control, we won't have object storage until late 2020/2021, and in our enterprise we have literally thousands of guests requiring both daily retention and mandatory quarterly backups. Typically we've been doing monthly backups.
GFS on the primary job looks great, however:
A) It looks like the flagged fulls (whether synthetic or active) will remain on the performance tier regardless.
B) Using the secondary location will send more than just the VBKs over; we currently leverage Data Domain to host the monthly/quarterly content.
C) Working out retention order and priority once a backup copy job is added to the mix is even more challenging.
Is there something I'm missing, or are we still stuck maintaining two different pools of jobs:
1) daily jobs, going to disk
2) monthly jobs, going to data domain
Managing dual enrollment for guests is quite the pain, and we can't simply grab whole clusters, since their guest counts range far beyond what's recommended for a single job.
Re: Using GFS with secondary location
Hello,
I'm not sure whether I understand the question.
To maintain the 3-2-1 rule, two jobs are required without a capacity tier. The setup you mention (daily backup job + monthly backup copy job) is the easiest way; just keep in mind the maximum chain length on Data Domain if you need more than 60 months.
Without object storage, there is no capacity tier. If you only need, say, 14 days and 6 months, then you can also create a single backup job with GFS. Then there is no secondary location, no copy, and no tricky backup copy job retention.
In general, the backup copy job maintains a completely independent backup chain. It does not copy files; it copies only the relevant data.
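To make the single-job-with-GFS idea concrete, here is a toy Python sketch, not Veeam's actual retention engine and not its API; the `gfs_keep` helper and the "first backup of the month" selection rule are simplified assumptions for illustration only. It shows how one chain can satisfy both a short daily retention and a longer set of monthly GFS points:

[code]
# Conceptual sketch only -- NOT Veeam's actual retention logic.
# Illustrates how a single job with GFS can cover e.g.
# "14 daily restore points + 6 monthly fulls" without a second job.
from datetime import date, timedelta

def gfs_keep(restore_points, daily_keep=14, monthly_keep=6):
    """Return the restore points a simplified GFS policy would retain.

    restore_points: list of dates (one backup per day).
    daily_keep:     short-term retention in days.
    monthly_keep:   number of monthly GFS fulls to keep.
    """
    newest = max(restore_points)
    keep = set()

    # 1) Short-term retention: the newest N daily points.
    keep.update(d for d in restore_points if (newest - d).days < daily_keep)

    # 2) GFS: the first restore point of each month, newest months first.
    first_of_month = {}
    for d in sorted(restore_points):
        first_of_month.setdefault((d.year, d.month), d)
    keep.update(sorted(first_of_month.values(), reverse=True)[:monthly_keep])

    return sorted(keep)

# Example: a year of daily backups collapses to 14 dailies + 6 monthly fulls.
points = [date(2020, 1, 1) + timedelta(days=i) for i in range(365)]
print(len(gfs_keep(points)))  # 20 restore points kept
[/code]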
Best regards,
Hannes