newfirewallman
Enthusiast
Posts: 35
Liked: 2 times
Joined: Jan 20, 2015 12:08 pm
Full Name: Blake Forslund
Contact:

GFS for primary backup jobs

Post by newfirewallman » 1 person likes this post

I love your product; it has so much to offer. One annoyance, though, is the relationship between the backup job and the backup copy job. I really like the retention portion of the backup copy job and how it relates to the backup job, but I don't like how the copy job retention forces me to have a minimum of 2 copies. In my scenario I have a lot of wasted space holding duplicate backup data just to accomplish a retention policy (in my setup, tape plays a part as well). I would love for that minimum to be zero, with data copied from the primary job only when the retention policy requires it (or add a retention policy to the primary job with a secondary repository location and settings). This would greatly improve the use of Veeam and reduce space.

The second feature that would be nice: on the GFS retention policy, have those backups be not full but incremental, with a transform/reverse depending on the GFS policy, to save space as well.

Third feature: under Backups > Disk and in a job's properties, the ability to remove a specific job from disk rather than all jobs. Sometimes there is a need to clean up and reclaim space, where I would like to manually remove a job or two, especially when a weekly and a monthly job fall on the same day or very close together. I would love to receive some real, honest feedback on these requests, as I feel they would provide outstanding value to an already great product.
foggy
Veeam Software
Posts: 21069
Liked: 2115 times
Joined: Jul 11, 2011 10:22 am
Full Name: Alexander Fogelson
Contact:

Re: Feature Request and Review

Post by foggy »

newfirewallman wrote:but I don't like how the copy job retention forces me to have a minimum of 2 copies. In my scenario I have a lot of wasted space holding duplicate backup data just to accomplish a retention policy (in my setup, tape plays a part as well).
But this is still a backup copy job that is meant to create a copy of the VM backup in some secondary location.
newfirewallman wrote:I would love for that minimum to be zero, with data copied from the primary job only when the retention policy requires it (or add a retention policy to the primary job with a secondary repository location and settings).
So basically you're talking about GFS retention for the regular backup jobs here, right?

Anyway, thank you for the feedback, always appreciated!
Bunce
Veteran
Posts: 259
Liked: 8 times
Joined: Sep 18, 2009 9:56 am
Full Name: Andrew
Location: Adelaide, Australia
Contact:

Re: Feature Request and Review

Post by Bunce » 1 person likes this post

Proper GFS rotation, as implemented in a number of other products, was requested for years and avoided, partly due to Veeam's use of file-based storage, which makes it difficult to implement on the primary copy.

Very disappointing that when it was finally brought in, it forced us to use a second copy. The continued 'we know better than your business - you must keep 3 copies' attitude, while valid for some businesses, is simplistic, annoying, and some might say arrogant.

Let your customers decide how they wish to implement multiple copies and provide us the flexibility to implement it how we wish. GFS shouldn't be dependent on it.
newfirewallman
Enthusiast
Posts: 35
Liked: 2 times
Joined: Jan 20, 2015 12:08 pm
Full Name: Blake Forslund
Contact:

Re: Feature Request and Review

Post by newfirewallman »

Yes, I 100% agree. I would like GFS retention on the regular backup job as an option, or at a minimum the ability to create the GFS retention points from the original to a second repository, without having to create the data so many times.
In my example, I might have a 10TB file server that is backed up in a primary backup job, and I want to use GFS; now think how much disk space is required, or wasted, on duplicate data. If I have the original in the primary repository at 10TB, it will then create a minimum of 2 restore points in my GFS repository (10TB plus 1 incremental), and then, when it creates the first weekly or monthly, etc., another 10TB. That is a very inefficient use of space.
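To put rough numbers on that (purely illustrative; the ~5% daily change rate below is an assumption, not a measurement):

Code: Select all

# Illustrative storage math for the 10 TB file server scenario above.
# All figures are assumptions; real consumption depends on compression/dedup.
FULL_TB = 10.0
INCREMENT_TB = 0.05 * FULL_TB        # assumed ~5% daily change rate

primary = FULL_TB                    # original full in the primary repository
copy_chain = FULL_TB + INCREMENT_TB  # copy job minimum: full + 1 increment
first_gfs = FULL_TB                  # first weekly/monthly GFS full

total = primary + copy_chain + first_gfs
print(f"{total:.1f} TB on disk to protect {FULL_TB:.0f} TB of source data")
# -> 30.5 TB on disk to protect 10 TB of source data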
SE-1
Influencer
Posts: 22
Liked: 5 times
Joined: Apr 07, 2015 1:42 pm
Full Name: Dirk Slechten
Contact:

[MERGED] Feature Request

Post by SE-1 »

I have a request for a new feature.

Can the GFS retention scheme be included in normal backup jobs?

To have GFS retention you now have to work with Backup Copy Jobs.

Backup copy jobs are not optimal with NAS devices, as these devices have SATA drives, and some of them deduplicate & compress the data on top of that.
It seems more logical & faster to transfer a new active full from the primary production storage.

We have tier 1 storage with 96 SAS disks (48 10K disks & 48 15K disks) & 8 SSD drives, which we back up to a deduplication appliance with 27 SATA disks (50TB net).
We achieve very fast backups using multiple Veeam proxies in combination with a 10Gbit backbone, and we are very happy about it.
Also the synthetic fulls are performing great.

We have configured backup copy jobs to have a retention of 12 months, but this is slower, as it reads from the SATA disks, and does so in a sequential manner.
In case a deduplication appliance is used, it also needs to rehydrate & decompress the data before it is sent from backup target 1 to backup target 2.

Our production storage achieves much more read performance for an active full backup than a NAS device with SATA drives can ever achieve, which is logical, as the tier 1 storage has more & faster disks, plus SSD drives.

When we check the stats from our production storage & backup targets, we notice that our tier 1 storage is doing nothing during the weekend while our backup targets are working like crazy on the backup copy jobs.
The difference in snapshot sizes between an incremental and a full backup on our production virtual machines during the weekend is negligible.

We see the same for our customers.
More than 80-90% of our customers have very powerful tier 1 storage boxes, which see very low performance usage during the weekend.

Also, the deduplication appliances would only store (and replicate) the changed blocks during an active full backup.

Could the GFS retention scheme be built into normal jobs?

This way the customer can decide whether to use backup copy jobs to have less impact on the production storage, or to use GFS retention in normal jobs.
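To illustrate the read-performance point with a quick back-of-the-envelope calculation (the throughput figures below are purely assumed for illustration, not measurements of our arrays):

Code: Select all

# Back-of-the-envelope: time to read a 10 TB active full, with assumed
# (hypothetical) throughputs for tier 1 storage vs. a SATA dedup appliance
# that must rehydrate the data first.
size_gb = 10 * 1024       # 10 TB source, in GB
tier1_gbps = 2.0          # assumed read rate from the SAS/SSD tier (GB/s)
appliance_gbps = 0.3      # assumed rehydrated read rate from SATA dedup (GB/s)

print(f"from tier 1 storage:  {size_gb / tier1_gbps / 3600:.1f} h")      # ~1.4 h
print(f"from dedup appliance: {size_gb / appliance_gbps / 3600:.1f} h")  # ~9.5 h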

Thank you
newfirewallman
Enthusiast
Posts: 35
Liked: 2 times
Joined: Jan 20, 2015 12:08 pm
Full Name: Blake Forslund
Contact:

Re: Feature Request and Review

Post by newfirewallman »

I have been saying the same thing. Call it copy, retention, or normal backup, I don't care, but have it integrate better with GFS and another repository. Save space and IO.
marco.horstmann
Veeam Software
Posts: 594
Liked: 105 times
Joined: Dec 31, 2014 3:05 pm
Full Name: Marco Horstmann
Location: Hannover, Germany
Contact:

Re: Feature Request and Review

Post by marco.horstmann » 1 person likes this post

Hi,

Poul has published an article that could be a solution for you.

http://poulpreben.com/active-full-backu ... -copy-job/

It requests a new full backup from the primary backup job. Read the blog post; maybe it's what you're looking for.

Regards
Marco
Marco Horstmann
Senior System Engineer @ Veeam Software

@marcohorstmann
https://horstmann.in
VMware VCP
NetApp NCIE-SAN for 7-Mode and Clustered Ontap
newfirewallman
Enthusiast
Posts: 35
Liked: 2 times
Joined: Jan 20, 2015 12:08 pm
Full Name: Blake Forslund
Contact:

Re: Feature Request and Review

Post by newfirewallman »

That is almost what I want... almost.

Ideally, though, why even copy the full backup if it isn't needed for the archival GFS retention job? In my case this could be 10TB of data to copy over and then delete. This problem is compounded even more when it copies the 10TB from my primary repository to the secondary used for archival (with dedup), another 10TB, and then, if it is time to create a GFS restore point, it seems to copy it all again while still keeping the 10TB copy job chain. This means I always need to have a large excess of free space, plus a lot of "extra" IO that isn't needed.

At least with this script it can save some of the IO from merging the jobs, which is nice, but it would be really nice if it could be handled via the GUI and without the extra write and copy... I would greatly prefer it to just come from the source job/repository when needed.
timmi2704
Expert
Posts: 100
Liked: 5 times
Joined: Jan 14, 2014 10:41 am
Full Name: Timo Brandt
Contact:

[MERGED] Feature Request - Keep Active fulls as GFS

Post by timmi2704 »

Hi again :D

I see the point in having a backup copy job which maintains the GFS retention policy based on restore points instead of backup files.

But under certain circumstances, I see a possibility of optimizing this process that would be of significant benefit for a job setup like mine.
Each backup job is set to perform active fulls on a weekly basis. After trying some different job settings, this is the one that seems to suit our needs the most.
I would really like to have the possibility to automatically keep some of the weekly active fulls as my "weekly" or "monthly" backups, as long as they were successful.
In case of unsuccessful weeklies, the following incrementals might be kept as well, so that there is a successful backup of each VM in the job.
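As a hypothetical sketch of the selection logic I have in mind (the names and structure below are invented for illustration; this is not how Veeam implements anything):

Code: Select all

# Hypothetical selection: keep a weekly active full; if it failed for some
# VMs, also keep the following incrementals until every VM has a good point.
from dataclasses import dataclass

@dataclass
class RestorePoint:
    kind: str        # "full" or "incremental"
    succeeded: set   # VMs backed up successfully in this point

def points_to_keep(chain, all_vms):
    keep, covered = [], set()
    for p in chain:
        if p.kind == "full":
            keep.append(p)
            covered = set(p.succeeded)
        elif covered < all_vms:      # some VMs still lack a successful point
            keep.append(p)
            covered |= p.succeeded
    return keep

vms = {"web01", "db01", "app01"}
chain = [
    RestorePoint("full", {"web01", "db01"}),         # app01 failed in the full
    RestorePoint("incremental", {"app01"}),          # kept: completes coverage
    RestorePoint("incremental", {"web01", "db01"}),  # not needed for GFS
]
for p in points_to_keep(chain, vms):
    print(p.kind, sorted(p.succeeded))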

Is this far-fetched or would this be of use for anyone else as well? :)

Thanks for having a look at this.
foggy
Veeam Software
Posts: 21069
Liked: 2115 times
Joined: Jul 11, 2011 10:22 am
Full Name: Alexander Fogelson
Contact:

Re: Feature Request and Review

Post by foggy »

Yep, we've already seen similar requests, thanks for the feedback!
Shestakov
Veteran
Posts: 7328
Liked: 781 times
Joined: May 21, 2014 11:03 am
Full Name: Nikita Shestakov
Location: Prague
Contact:

Re: Feature Request and Review

Post by Shestakov »

Hi Timo!
timmi2704 wrote:In case of unsuccessful weeklies, the following incrementals might be kept as well, so that there is a successful backup of each VM in the job.
If there is an unsuccessful active full, there is no need to keep making increments.
Thanks!
timmi2704
Expert
Posts: 100
Liked: 5 times
Joined: Jan 14, 2014 10:41 am
Full Name: Timo Brandt
Contact:

Re: Feature Request and Review

Post by timmi2704 »

Thanks, foggy, for merging my request :D
Shestakov wrote: If there is an unsuccessful active full, there is no need to keep making increments.
Hi Nikita.
I was talking about some specific VMs which failed in the active full and its retries but were successful in the next incremental backup. In this case, the weekly full and the incremental backup would be needed in order to have a successful "full backup" for all VMs. Correct me if I'm wrong :)
foggy
Veeam Software
Posts: 21069
Liked: 2115 times
Joined: Jul 11, 2011 10:22 am
Full Name: Alexander Fogelson
Contact:

Re: Feature Request and Review

Post by foggy »

You're correct, Timo.
VladV
Expert
Posts: 224
Liked: 25 times
Joined: Apr 30, 2013 7:38 am
Full Name: Vlad Valeriu Velciu
Contact:

[MERGED] [Feature Request] GFS in simple Backup Job

Post by VladV »

I think I've seen this requested some time ago, but I am having difficulty finding it.

Anyway, it would be nice to have the GFS functionality from Backup Copy in a simple Backup Job.

Using Server 2012 R2 deduplication, we don't need 2 onsite storage targets (one for fast recoveries and one for long-term storage). We can combine them into one and schedule dedup only for older files. This way we manage to have a 14.5TB RAID10 (16-disk) storage volume hold 37TB of data and still keep the latest month undeduplicated. We currently have 90 restore points per job, one per day, and one active full per week (classic backup). If we had GFS in the simple backup job, we could have spared more space by eliminating incremental restore points (which have a low dedup ratio, being incremental :)).

If you guys also believe this could be useful in other scenarios, please consider including it in a future release.

Thanks,
Vlad
Shestakov
Veteran
Posts: 7328
Liked: 781 times
Joined: May 21, 2014 11:03 am
Full Name: Nikita Shestakov
Location: Prague
Contact:

Re: Feature Request and Review

Post by Shestakov »

Hello Vlad,
The best practice is to have at least 2 copies of backups: one of them onsite, made by a backup job, and another offsite or on tapes, made by a backup copy or backup-to-tape job. There is GFS for backup copy jobs, and we are adding a GFS option for backup-to-tape jobs in v9.
Since the basic backup job is not considered a job for making historical backups, GFS is not going to be an option for it.

Do you make copies of your backups?
Thanks!
VladV
Expert
Posts: 224
Liked: 25 times
Joined: Apr 30, 2013 7:38 am
Full Name: Vlad Valeriu Velciu
Contact:

Re: Feature Request and Review

Post by VladV »

Shestakov wrote:Do you make copies of your backups?
Sure, we have a primary storage location onsite for the backup job and a robocopy job at the end of the week to a different storage medium. We also have a backup copy job for DR purposes at an offsite location. Not to mention the replication part.

Being able to smartly manage the onsite backups (with GFS) is helpful, first, for being able to cram in more restore points (with or without dedup), and second, for having those restore points close by, with fast restore speeds compared to offsite backups.

We consider it not a good option to have a backup copy job do the GFS part on the same volume; it creates unnecessary overhead in management and resources to reprocess restore points and extract VBKs and VIBs at each cycle. For a simple backup job (forward incremental), automatically deleting the increments according to a GFS policy and keeping the VBKs (e.g. 1 per previous year, 12 in the current year, and 4 in the current month) frees more space, with the added benefit, like I said above, of having a properly managed chain close to the restore location.
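As a hypothetical sketch of such a pruning policy (the dates, counts, and function below are invented for illustration; this is not an actual Veeam feature):

Code: Select all

# Hypothetical GFS pruning for weekly fulls: keep up to 4 in the current
# month, the last full of each month in the current year, and the last
# full of each previous year. Purely illustrative.
from datetime import date, timedelta

def fulls_to_keep(full_dates, today):
    weekly = sorted(d for d in full_dates
                    if (d.year, d.month) == (today.year, today.month))[-4:]
    monthly, yearly = {}, {}
    for d in sorted(full_dates):
        if d.year == today.year:
            monthly[d.month] = d   # overwritten, so the last full of the month wins
        else:
            yearly[d.year] = d     # the last full of each previous year wins
    return sorted(set(weekly) | set(monthly.values()) | set(yearly.values()))

fulls = [date(2015, 1, 4) + timedelta(weeks=i) for i in range(80)]
kept = fulls_to_keep(fulls, today=date(2016, 7, 15))
print(f"{len(fulls)} weekly fulls on disk -> {len(kept)} kept under GFS")
# -> 80 weekly fulls on disk -> 9 kept under GFS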

A thing to mention is that backup copy jobs with the forever forward incremental scheme (I believe that is what it's called) are not very good with dedup. Constantly modifying the VBK decreases the dedup ratio. I'll give you an example using our two backup repositories (onsite: simple backup job; offsite: backup copy job):

- the backup job has a restore point number set to 60 - daily and an active full on weekends
- the backup copy job has a restore point number set to 60 - every 3 days

- Onsite dedup performance: 30TB savings with 78% dedup rate
- Offsite dedup performance: 4.4TB savings with 49% dedup rate

I understand that the basic backup job is not considered to be a historical backup solution, but maybe it should be: not at the same level of complexity as the backup copy job, but much simpler, like I mentioned above.
Shestakov
Veteran
Posts: 7328
Liked: 781 times
Joined: May 21, 2014 11:03 am
Full Name: Nikita Shestakov
Location: Prague
Contact:

Re: Feature Request and Review

Post by Shestakov »

Vlad, thanks for the detailed feedback!
VladV wrote:Being able to smartly manage the onsite backups (with GFS) is helpful, first, for being able to cram in more restore points (with or without dedup), and second, for having those restore points close by, with fast restore speeds compared to offsite backups.
Agreed. Also, you are right that pointing a backup copy job to the same repository doesn't make a lot of sense.
However, it seems more logical to keep short-term backups onsite on the faster repository and historical ones offsite on the more reliable repository.
VladV wrote:A thing to mention is that backup copy jobs with the forever forward incremental scheme are not very good with dedup. Constantly modifying the VBK decreases the dedup ratio.
We are planning to provide an option of active full for backup copy jobs in the upcoming version, so that will not be an issue.

Anyway, your request is counted. Thanks!
SyNtAxx
Expert
Posts: 149
Liked: 15 times
Joined: Jan 02, 2015 7:12 pm
Contact:

[MERGED] *Feature Request*

Post by SyNtAxx »

GFS support on 'regular' cycle backups.

-Nick
SyNtAxx
Expert
Posts: 149
Liked: 15 times
Joined: Jan 02, 2015 7:12 pm
Contact:

Re: GFS for primary backup jobs

Post by SyNtAxx »

So, after reading the thread, I still don't understand why we don't have GFS on standard backups. There are many reasons it could be useful. Is Veeam trying to protect us from ourselves, I guess?
Gostev
Chief Product Officer
Posts: 31459
Liked: 6648 times
Joined: Jan 01, 2006 1:01 am
Location: Baar, Switzerland
Contact:

Re: GFS for primary backup jobs

Post by Gostev »

The main reason is that GFS on the primary backup repository goes against our reference architecture. And yes, at the same time this also helps to protect inexperienced backup admins from sticking with a single copy of backups.
SyNtAxx
Expert
Posts: 149
Liked: 15 times
Joined: Jan 02, 2015 7:12 pm
Contact:

Re: GFS for primary backup jobs

Post by SyNtAxx » 1 person likes this post

Why can't I do what is best for my data and make my own decisions? I feel not having GFS on regular backups is really inflexible and forces me to shuttle data all over my data center in order to maintain proper retention periods, which is really inefficient, time consuming, a waste of network bandwidth, and so on. My company has a significant investment in Veeam (Enterprise Plus, over 100 sockets); sure, we may not be the largest install out there, but forcing customers to your ideals is silly in my opinion. We all know we should have multiple copies of data, but let me achieve that in a manner that fits my needs. Right now all I can cover is basic 30-day retention without having to shuffle HUNDREDS of TB around hoping to get the retention I need. Your product has a lot of great features over other platforms I've used in the past. We left HP Data Protector because it was clunky when it came to protecting VMs, but I'll take that and sleep safe knowing I can easily set retention on any backup at any point in time to any length of time I desire at a moment's notice.
Gostev
Chief Product Officer
Posts: 31459
Liked: 6648 times
Joined: Jan 01, 2006 1:01 am
Location: Baar, Switzerland
Contact:

Re: GFS for primary backup jobs

Post by Gostev »

SyNtAxx wrote:We all know we should have multiple copies of data, but let me achieve that in a manner that fits my needs. Right now all I can cover is basic 30-day retention without having to shuffle HUNDREDS of TB around hoping to get the retention I need.
Now I am curious how you can achieve multiple copies of data without physically copying it; can you share? I assume you used GFS for primary with HP Data Protector, so what was your way of creating an additional copy of backups back then? Thanks.
SyNtAxx
Expert
Posts: 149
Liked: 15 times
Joined: Jan 02, 2015 7:12 pm
Contact:

Re: GFS for primary backup jobs

Post by SyNtAxx »

Simply put, we didn't. But that is the risk *we* decided/needed to take given our situation then. We also didn't have the same number of machines to protect or the same gear we currently enjoy. You're missing the point: I should be able to make copies of my data, or not, in the manner I see fit. Maybe we don't have the storage required to keep 3 copies of data around. The way it is now, I can't even keep one full copy (the original) of data for 2 years. I can't go in and manually set my data to 'never expire' based on a current and breaking legal hold (like I could in DP, etc.).

In my opinion that is inflexible. In addition, I opened a case today for GFS inquiries. There is no exception logic in your GFS retention. The example I presented above (a few failed VMs, rerun manually, but past the GFS trigger time/date) does not make it into the GFS retention policy. The solution proposed by Veeam is to go in and manually *change* the month-end date (or whatever applies to you) on the failed copy job(s) to a date that will trigger a new month-end job copy. The issue there is, firstly, that is a lot of administrative overhead (to change and revert for the next month, multiplied by your job count). Secondly, it creates additional redundant copies on storage, and I may not have the space to hold an entire extra job, potentially terabytes. I'm sorry, I don't find that to be an enterprise-level solution. There needs to be more flexibility in retention and in how and where it is applied. I am open to suggestions, of course, and willing to provide any additional feedback that might be required to create a solution.
werten
Influencer
Posts: 15
Liked: never
Joined: Apr 14, 2016 3:16 pm
Full Name: werten
Contact:

[MERGED] Suggestion for a different way of handling backups/

Post by werten »

I think it's a real shame: I really believe (and others as well, apparently, with convincing arguments) that this would be very beneficial for many users in different scenarios. It would add flexibility and intuitiveness to an already great and reliable backup system. I realize that it is sometimes difficult to step away from how things have been working for a long time, but since the basic idea is already implemented in backup copy and would not hinder or prohibit current backup schemes, why not take the additional step and add it to the primary backup jobs as well? At least consider this. I believe many, certainly new and novice users, would greatly appreciate it, as it would make creating backup tasks much easier and more understandable, and fewer backup jobs would be needed in many cases.
werten
Influencer
Posts: 15
Liked: never
Joined: Apr 14, 2016 3:16 pm
Full Name: werten
Contact:

Re: Suggestion for a different way of handling backups/versi

Post by werten »

The weekly, monthly and yearly retention settings (as they are implemented in the backup copy job) could simply be added to the Storage page of the Edit Backup Job dialog, or to the Advanced settings one level below that page...
jazzoberoi
Enthusiast
Posts: 96
Liked: 23 times
Joined: Oct 08, 2014 9:07 am
Full Name: Jazz Oberoi
Contact:

Re: Suggestion for a different way of handling backups/versi

Post by jazzoberoi »

+1 to adding the GFS retention settings to the primary backup job as well. This way, the backup copy job would be used only to "COPY" the backup files to a secondary location, as its name implies. Having a primary backup job and then creating a secondary backup copy job only to avail of the GFS retention seems unnecessary, in my opinion.
foggy
Veeam Software
Posts: 21069
Liked: 2115 times
Joined: Jul 11, 2011 10:22 am
Full Name: Alexander Fogelson
Contact:

Re: Suggestion for a different way of handling backups/versi

Post by foggy »

We appreciate your feedback. The main considerations behind the current implementation were explained here.
mkaec
Veteran
Posts: 462
Liked: 133 times
Joined: Jul 16, 2015 1:31 pm
Full Name: Marc K
Contact:

Re: GFS for primary backup jobs

Post by mkaec » 1 person likes this post

+1 for adding GFS retention to regular backup jobs. We work around this by having the primary repository and the copy repository on the same volume. Dedup minimizes the storage impact of the copies, but the setup is fraught with inefficiencies. As far as 3-2-1 goes, the appliance replicates the backup files to an off-site appliance, so we are good there. We could save wasted time and compute, and reduce complexity, if GFS were available in standard backup jobs.
ginux
Novice
Posts: 3
Liked: 1 time
Joined: Jul 30, 2012 10:05 am
Full Name: Gino Calzavara
Contact:

Re: GFS for primary backup jobs

Post by ginux » 1 person likes this post

+1 for GFS retention on regular backup jobs... it would be a great feature to add, and I agree with all of the above considerations. I hope it will be released as soon as possible (v10?); I'm going crazy moving 20TB of data for monthly and yearly vaulting.