Mattias
Influencer
Posts: 14
Liked: 3 times
Joined: Jan 21, 2015 1:18 pm
Contact:

Feature request for GFS

Post by Mattias »

I can't see that it is possible, so I'm asking. :-)

Has it been considered to add an option to the backup copy job's GFS settings to save GFS backups to different repositories? As an example, I want weekly GFS on one repository and monthly and yearly GFS on another repository.

Regards Mattias
veremin
Product Manager
Posts: 20415
Liked: 2302 times
Joined: Oct 26, 2012 3:28 pm
Full Name: Vladimir Eremin
Contact:

Re: Feature request for GFS

Post by veremin »

Is this dictated by specific company requirements or something? I'm just trying to understand the use case. For now, you can achieve your goal by using two backup copy jobs with different GFS settings, pointed to two different repositories.
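Roughly, with the Veeam PowerShell snap-in that workaround could look like the sketch below. The job and repository names are just examples, and the exact cmdlet parameters may differ between product versions; the weekly vs. monthly/yearly GFS retention itself is then enabled in each copy job's GFS settings.

Add-PSSnapin VeeamPSSnapin

# Source backup job and the two target repositories (names are examples)
$sourceJob   = Get-VBRJob -Name "Prod Backup"
$weeklyRepo  = Get-VBRBackupRepository -Name "Repo-Weekly"
$archiveRepo = Get-VBRBackupRepository -Name "Repo-MonthYear"

# One backup copy job per repository; weekly GFS retention is then
# enabled on the first job and monthly/yearly GFS on the second.
Add-VBRViBackupCopyJob -Name "Copy - Weekly GFS" -BackupJob $sourceJob -Repository $weeklyRepo
Add-VBRViBackupCopyJob -Name "Copy - Month and Year GFS" -BackupJob $sourceJob -Repository $archiveRepo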
Gostev
Chief Product Officer
Posts: 31814
Liked: 7302 times
Joined: Jan 01, 2006 1:01 am
Location: Baar, Switzerland
Contact:

Re: Feature request for GFS

Post by Gostev »

It makes sense even solely for disk space considerations (as no repository is large enough to keep a few years of GFS in a large environment).
Mattias
Influencer
Posts: 14
Liked: 3 times
Joined: Jan 21, 2015 1:18 pm
Contact:

Re: Feature request for GFS

Post by Mattias »

Gostev wrote: It makes sense even solely for disk space considerations (as no repository is large enough to keep a few years of GFS in a large environment).
Yes, exactly. I have customers that by "law" must archive data for 10 years, 11 months, 5 weeks and 7 days. When a full backup file approaches 5 TB - 10 TB (this is probably a small GFS compared to what I imagine large companies have), many customers start looking at offloading backups to other storage devices.

Also, if you have a storage repository and it starts to run out of space, it's not always possible to extend the current one. You might end up buying a new one but still want to use the old one for yearly GFS. As you can see, there are several different cases where it would be useful to be able to choose different repositories for your GFS.

It might be possible (depending on how big a GFS is), as said, to create different jobs with different GFS policies, but that means admin overhead and just makes things more complex, with more jobs to keep track of. There are often a lot of jobs already in medium-sized environments.
Tijz
Service Provider
Posts: 34
Liked: 4 times
Joined: Jan 20, 2012 10:03 am
Full Name: Mattijs Duivenvoorden
Contact:

[MERGED] Feature Request: Store GFS chain to different Repository

Post by Tijz »

Hi All,

Wouldn't it be great if it were possible to store the big, chunky full backups generated by the GFS schedule on a completely different repository than the normal backup copy chain? This way I could make full use of ReFS block cloning on our primary repository, making the copy job file merges much, much faster, and use Windows Storage Deduplication on our 'GFS repository' to dedupe the full backup files generated by GFS.

Or is there a way to do this now? Aside from using (PowerShell) scripting, of course.

-Mattijs
Rick.Vanover
Veeam Software
Posts: 712
Liked: 168 times
Joined: Nov 30, 2010 3:19 pm
Full Name: Rick Vanover
Location: Columbus, Ohio USA
Contact:

Re: Feature Request: Store GFS chain to different Repository

Post by Rick.Vanover »

Have you considered using the Scale-Out Backup Repository? Its performance placement policy explicitly lets you set a rule so that full backups go to a designated repository (or repositories) and incrementals go to another designated repository (or repositories). I just blogged this awesomeness recently: https://www.veeam.com/blog/scale-out-ba ... itory.html
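As a rough sketch of the PowerShell side (repository names are examples and cmdlet parameters may vary by version), creating such a scale-out repository with the performance placement policy could look like this; which extent receives full backup files and which receives incrementals is then assigned per extent in the SOBR settings.

Add-PSSnapin VeeamPSSnapin

# Two existing repositories that will become the SOBR extents (names are examples)
$refsRepo  = Get-VBRBackupRepository -Name "ReFS-Repo"
$dedupRepo = Get-VBRBackupRepository -Name "Dedup-Repo"

# Group them into a scale-out repository with the Performance placement policy
Add-VBRScaleOutBackupRepository -Name "SOBR-01" -PolicyType Performance -Extent $refsRepo, $dedupRepo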
Tijz
Service Provider
Posts: 34
Liked: 4 times
Joined: Jan 20, 2012 10:03 am
Full Name: Mattijs Duivenvoorden
Contact:

Re: Feature Request: Store GFS chain to different Repository

Post by Tijz »

Hi Rick,

Yes, I have, but I don't think it will work.
Like you say, using the performance policy, ALL full backup files will be put together on a separate repository from the incremental backups, including the full backup file of the 'primary backup chain', thus preventing use of the ReFS block cloning feature for file merge operations. Another downside is that using a deduplication appliance as the repository that stores the VBKs will introduce a severe performance penalty on all restore operations, because the VBK of the primary chain is also stored on that repository.

So that's the reason I made the request:

Make it possible to store the 'primary backup chain' (consisting of all incrementals AND the VBK) on one repository, and all VBKs generated by the GFS feature on a different repository, where you can implement a different storage solution like deduplication or whatever. I don't mind where this is configured (as part of the SOBR with a policy, or at the job level...).

I did look into another standard option that comes with Veeam: the File Copy Job. But this also won't work, because it's not possible to make selections based on wildcards. If that were possible, I could schedule a File Copy Job to run after the copy jobs to copy *M.VBK, or *.VBK files with a modification date older than 1 week, to another repository. But sadly that's not possible as far as I know.
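Until then, I could probably approximate it with a small scheduled PowerShell script instead of a File Copy Job. A rough sketch (the paths, file pattern and one-week threshold are just examples):

# Copy full backup files older than one week from the copy job repository
# to the archive repository (paths are examples)
$source      = "E:\BackupCopies"
$destination = "\\archive01\VeeamGFS"
$cutoff      = (Get-Date).AddDays(-7)

Get-ChildItem -Path $source -Filter *.vbk |
    Where-Object { $_.LastWriteTime -lt $cutoff } |
    Copy-Item -Destination $destination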

EDIT: But I also just read your blog, and if it is to be believed, some new features are coming soon, including some related to "data management and new locations". Might these be the ones I'm looking for? :)

-Mattijs