mdiver
Veeam Legend
Posts: 201
Liked: 33 times
Joined: Nov 04, 2009 2:08 pm
Location: Heidelberg, Germany

SOBR placement policy disregarded for GFS

Post by mdiver »

With "Data Locality" as the placement policy, a SOBR causes a backup job to fail when the extent holding the chain is not available.

This is expected and crucial when trying to leverage the advantages of ReFS repositories (Fast-Clone).

Unfortunately, this is not the case for the creation of GFS restore points.
If an extent happens to be saturated during backup (all of its allowed task slots in use), a GFS point can be generated on a less utilized extent.
This of course invalidates the space efficiency of ReFS: the new GFS point is fully independent and consumes its full size. You also end up with a "partial fast clone" in your jobs, losing not only space but also time during synthetic operations.
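
For illustration, here is a rough sketch (plain Python, all names invented, not actual Veeam logic) of what I understand the Data Locality placement decision to be, and why spilling a full onto another extent defeats Fast Clone:

Code:

from dataclasses import dataclass

@dataclass
class Extent:
    name: str
    online: bool
    free_task_slots: int
    free_bytes: int

class RequiredExtentMissingError(Exception):
    """Stand-in for Veeam's RequiredExtentMissingException."""

def pick_extent(extents, chain_extent):
    # Data Locality: the whole chain stays on one extent so ReFS block
    # cloning (Fast Clone) can reference blocks of the earlier files.
    if chain_extent is None:
        # First file of a new chain: free choice among available extents.
        candidates = [e for e in extents if e.online and e.free_task_slots > 0]
        return max(candidates, key=lambda e: e.free_bytes, default=None)
    if not chain_extent.online:
        # Expected behavior: fail the job rather than break locality.
        raise RequiredExtentMissingError(chain_extent.name)
    # A saturated extent should mean "wait for a task slot", not "spill
    # elsewhere": a full written on another extent cannot Fast-Clone and
    # inflates to its full size.
    return chain_extent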

We just closed case #04257980 on this behavior. It seems to be by design.
To me it would be more logical to wait for the extent to become less utilized before generating the GFS point, or even to fail the generation, instead of inflating the GFS point to full size on another extent.

Thanks,
Mike
Gostev
Chief Product Officer
Posts: 31561
Liked: 6725 times
Joined: Jan 01, 2006 1:01 am
Location: Baar, Switzerland

Re: SOBR placement policy disregarded for GFS

Post by Gostev »

Hi, Mike.

This makes absolutely no sense to me, as GFS backups are regular full backups; they are not special in any way. They are regular periodic fulls that are marked so the retention policy does not remove them until a certain date, but this marking happens after the backup file is created. In other words, we don't have a special function that says "create a GFS backup now" or anything like that.

As such, there's absolutely NO possibility that GFS backups can be treated differently from regular periodic fulls, because they ARE regular periodic fulls. Moreover, the resource scheduler does not even know whether a given full backup will become a GFS full in the first place. So any "special" placement logic for them is simply impossible, even in theory.
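
To illustrate the flow (a simplified sketch in Python, not our actual code; all names are made up):

Code:

from dataclasses import dataclass
from datetime import date

@dataclass
class FullBackup:
    created: date
    gfs_keep_until: date | None = None  # set only AFTER the file exists

def run_periodic_full(today: date, gfs_due: bool) -> FullBackup:
    # Step 1: a regular periodic full is created; the resource scheduler
    # places it without knowing whether it will become a GFS point.
    vbk = FullBackup(created=today)
    # Step 2: only after creation is the file marked so that retention
    # does not remove it until a certain date. Placement is already done.
    if gfs_due:
        vbk.gfs_keep_until = date(today.year + 1, today.month, 1)
    return vbk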

I suggest you re-open the case and have them continue researching this.

Thanks!
Gostev
Chief Product Officer
Posts: 31561
Liked: 6725 times
Joined: Jan 01, 2006 1:01 am
Location: Baar, Switzerland

Re: SOBR placement policy disregarded for GFS

Post by Gostev »

Note: I was talking about primary backup job GFS above. But I just realized you may be talking about Backup Copy GFS. That one has quite different behavior today, which is very complex, and this may result in some weird effects like the above.

We're actually removing all this complexity in v11 and making Backup Copy GFS retention exactly the same as primary backup job GFS retention in v10: time-based, simple and straightforward. Basically, we got really tired of all the weird aftershocks of the current logic :D we could not stabilize it in 5 years and are still finding bugs.
mdiver
Veeam Legend
Posts: 201
Liked: 33 times
Joined: Nov 04, 2009 2:08 pm
Location: Heidelberg, Germany

Re: SOBR placement policy disregarded for GFS

Post by mdiver »

Hi Anton.

Your second post is exactly what I meant. :)

GFS backups should never be treated differently. That was also my point.

But we've seen this in a larger environment with a SOBR across 8 extents on several Windows Server 2016 machines with ReFS, using per-VM backup chains.
After months of correct Fast Clone operation, the job suddenly switched to "Partial Fast Clone", which we investigated with Veeam support.

According to support, a bug in the resource scheduler led to the new VBK being generated on another extent.

Code:

[15.06.2020 21:37:34] <34> Error    Unable to find scale-out repository extent with previous backup files. (for storage [79046814-6d55-4e50-a028-3c16901cdb41]) (Veeam.Backup.Common.Sources.Exceptions.RequiredExtentMissingException)
Even then, I would have expected the job to fail because of Data Locality. Instead, it just generated a VBK on another extent and later continued the chain there.
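
In hindsight, a quick script like this (hypothetical extent paths, adjust to your repositories) would have caught the split chain earlier:

Code:

import os
from collections import defaultdict

# Map each SOBR extent to its backup root (hypothetical paths).
EXTENT_PATHS = {
    "extent1": r"E:\Backups",
    "extent2": r"F:\Backups",
    # ... one entry per extent
}

def files_per_extent(job_name):
    # Collect the job's backup files per extent to spot a chain that
    # has been split across extents (which breaks Fast Clone).
    found = defaultdict(list)
    for extent, root in EXTENT_PATHS.items():
        for dirpath, _dirs, files in os.walk(root):
            for f in files:
                if job_name in f and f.endswith((".vbk", ".vib")):
                    found[extent].append(os.path.join(dirpath, f))
    return found

if __name__ == "__main__":
    placement = files_per_extent("MyJob")
    if len(placement) > 1:
        print("WARNING: chain split across extents:", sorted(placement))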

The mentioned resource scheduler problem (delays) is said to be fixed with P2.

To prevent this from happening again, we also changed SOBRSyntheticFullCompressRate to 35 and enabled SobrForceExtentSpaceUpdate.
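
For reference, a minimal sketch of those registry changes (Python on the backup server; the key path is the standard Veeam B&R hive as far as I know; check with support before touching these):

Code:

import winreg

VEEAM_KEY = r"SOFTWARE\Veeam\Veeam Backup and Replication"

with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, VEEAM_KEY,
                        0, winreg.KEY_SET_VALUE) as key:
    # Assume synthetic fulls compress to ~35% when estimating the free
    # space needed on an extent (value per our support case).
    winreg.SetValueEx(key, "SOBRSyntheticFullCompressRate", 0,
                      winreg.REG_DWORD, 35)
    # Force a fresh free-space rescan of the extents before placement.
    winreg.SetValueEx(key, "SobrForceExtentSpaceUpdate", 0,
                      winreg.REG_DWORD, 1)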

Thanks,
Mike
