-
- Enthusiast
- Posts: 57
- Liked: 8 times
- Joined: Jul 13, 2009 12:50 pm
- Full Name: Mark
- Location: The Netherlands
- Contact:
SoBR backup policies useless due to load control
Hello,
We're running a SOBR with two Windows Server 2016 ReFS extents (2x 40 TB), and are running synthetic fulls (fast clone). There is at least 10 TB free on both extents. The placement policy is "data locality".
ReFS savings are great, and it has been running fantastically for two years.
Data has grown a little, but not by huge amounts.
However, lately we have issues. What we experience sometimes (more on that later) is that from one day to the next both extents are filled to the max and a lot of jobs fail due to no space left.
We have two large VMs (7 TB each) and a few smaller ones (2-5 TB), but most are much smaller (30-100 GB).
What seems to happen is that Veeam all of a sudden (I now know why) places VMs (large ones) on the other extent. That extent fills to the max quickly (no ReFS or other compression savings). Because that extent is full, all backups that would normally go to it land on the first extent instead. Eventually everything is full and stuck.
The only way to get things going again is to delete entire VMs from backup.
So I opened a call with Veeam support (case # 03444355), the problem being: the "data locality" policy is sometimes not honored/working.
They quickly found the cause: Veeam switches to the other extent because of the "Limit maximum concurrent tasks" load control setting. This is set to 6 for both extents. Task number 7 does not wait for the "correct" (data locality) extent but simply uses the other one.
This is by design, meaning that for the last two years I was lucky it worked the way it did (or the backup jobs just happened to finish a little sooner).
It also means that the "data locality" policy does not really work for "forever" fulls. The solution from Veeam is to not use SOBR (back to the days when I was moving files around because of unevenly filled repos; that is especially going to hurt with ReFS) or to not use full backups, and thus not use ReFS in any useful manner. Also, I don't have the luxury (storage space) to just start fresh. Those backups are important.
With SOBR, the "data locality" policy and (synthetic) fulls, the only option I have is to make sure (myself) that there are never more than 6 tasks running at any given moment. I'm not sure how I'm going to do that in our case.
Now, in my opinion this is not a "feature" but a bug: "data locality" should not be ignored because of throttling.
Does anyone have a smart idea on how to solve this? Thank you in advance!
--Mark
-
- Veteran
- Posts: 528
- Liked: 144 times
- Joined: Aug 20, 2015 9:30 pm
- Contact:
Re: SoBR backup policies useless due to load control
I guess the workaround would be to remove the task limit on the backup repository. As long as your backup jobs don't start showing the target as the bottleneck, you should be good.
-
- Enthusiast
- Posts: 57
- Liked: 8 times
- Joined: Jul 13, 2009 12:50 pm
- Full Name: Mark
- Location: The Netherlands
- Contact:
Re: SoBR backup policies useless due to load control
Thank you for the response. Yes, that is one thing I could do. However, as I have a lot of jobs, I risk overloading the repository servers.
Maybe I could throttle it a bit with the backup proxies.
-
- Veteran
- Posts: 528
- Liked: 144 times
- Joined: Aug 20, 2015 9:30 pm
- Contact:
Re: SoBR backup policies useless due to load control
Yes, that's what I do. Though I agree that in the case of ReFS, SOBR should have the option of never violating data locality, even when an extent is full or otherwise unavailable.
-
- Influencer
- Posts: 11
- Liked: never
- Joined: Mar 19, 2019 1:44 am
- Full Name: Ian Jackson
- Contact:
Re: SoBR backup policies useless due to load control
I'm having a similar issue, with the additional problem that the space-requirement heuristic does not seem to take into account that I am using a ReFS volume.
The SOBR with ReFS functionality seems pretty broken at the moment.
-
- Enthusiast
- Posts: 57
- Liked: 8 times
- Joined: Jul 13, 2009 12:50 pm
- Full Name: Mark
- Location: The Netherlands
- Contact:
Re: SoBR backup policies useless due to load control
Yes IanJ, this is also happening in my case. Not at first (that seems to be the throttling issue), but after a while all jobs fail at a moment when there is more than enough space (except for the VMs that it dumps on the wrong extent).
It's probably working as designed, but the design is indeed broken, especially for ReFS/SOBR.
If I have two choices, "data locality" and "performance", and I do not want performance (where throttling is very relevant) but do want data locality, then throttling should not be a consideration. But it is.
It would be nice if Veeam saw this as a bug (or missing feature), or explained why they chose to design it like this.
If I look at the documentation here: https://helpcenter.veeam.com/docs/backu ... 4#locality
there is no mention of ReFS or synthetic fulls, which is strange, because it's a different beast than just active fulls or incrementals.
-
- Veeam Software
- Posts: 21138
- Liked: 2141 times
- Joined: Jul 11, 2011 10:22 am
- Full Name: Alexander Fogelson
- Contact:
Re: SoBR backup policies useless due to load control
Actually there is some ReFS-specific logic in extent selection. In the case of ReFS, Veeam B&R selects the same extent where the previous full is stored, ignoring the fact that there's another one with more free space available. However, the heuristics cannot estimate what the full will actually take due to FastClone, so if it thinks there's not enough space for the full, the policy is still violated. The workaround here is to reduce the percentage of VM size used for space estimation with the help of the SOBRSyntheticFullCompressRate registry value (the default is 100).
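For anyone following along, the estimation arithmetic described above can be sketched like this (an illustrative Python sketch only; the function name and exact formula are assumptions based on the documented "full backup = 50% of source VM data" heuristic, not Veeam's actual code):

```python
def estimated_full_size_gb(vm_size_gb, compress_rate=100):
    """Sketch of the free-space estimate for a synthetic full.

    Veeam's documented heuristic assumes a full backup takes ~50% of
    source VM data; SOBRSyntheticFullCompressRate (default 100) scales
    that estimate further. Purely illustrative.
    """
    return vm_size_gb * 0.50 * (compress_rate / 100.0)

# With the default rate of 100, a 7 TB (7168 GB) VM is estimated at 3584 GB.
# Lowering the rate to 50 halves the estimate to 1792 GB, making it more
# likely the synthetic full is considered to "fit" on the extent that
# already holds the chain (where fast clone makes it nearly free anyway).
```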
-
- Enthusiast
- Posts: 57
- Liked: 8 times
- Joined: Jul 13, 2009 12:50 pm
- Full Name: Mark
- Location: The Netherlands
- Contact:
Re: SoBR backup policies useless due to load control
Thanks foggy, that looks promising.
If I understand correctly, the documentation for backup size estimation says: "The size of a full backup file is equal to 50% of source VM data." So setting SOBRSyntheticFullCompressRate to 50 would mean that, in my case, it estimates the space needed (for all synthetic fulls) at 25% of source VM data?
Or do I not understand it correctly?
-
- Veeam Software
- Posts: 21138
- Liked: 2141 times
- Joined: Jul 11, 2011 10:22 am
- Full Name: Alexander Fogelson
- Contact:
Re: SoBR backup policies useless due to load control
Hi Mark, your understanding is correct.
-
- Influencer
- Posts: 11
- Liked: never
- Joined: Mar 19, 2019 1:44 am
- Full Name: Ian Jackson
- Contact:
Re: SoBR backup policies useless due to load control
Hi Foggy,
Thanks for the heads-up. I am unable to find any reference to the registry key SOBRSyntheticFullCompressRate in any of the Veeam documentation (maybe site search is broken).
Google is also drawing a blank when searching for this.
Is there a support document we can reference that talks about using this?
In the absence of one, can I confirm a couple of things:
On your Veeam Backup & Replication server, create a DWORD SOBRSyntheticFullCompressRate under HKLM\Software\Veeam\Veeam Backup and Replication.
Change its value to tell Veeam what percentage of the current backup size / 2 to use when estimating the size of a new synthetic full.
Can I confirm the name of the DWORD is SOBRSyntheticFullCompressRate and not SobrSyntheticFullCompressRate?
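The registry change described above could be scripted on the VBR server like this (a config-fragment sketch using Python's standard `winreg` module, run elevated on Windows; the key path and value name are taken from this thread, so verify them with support before relying on this):

```python
import winreg  # Windows-only standard library module

# Create/update the DWORD on the Veeam B&R server (run as administrator).
# Key path and value name as discussed in this thread; the value 50 tells
# Veeam to halve its synthetic-full size estimate.
key_path = r"SOFTWARE\Veeam\Veeam Backup and Replication"
with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, key_path, 0,
                    winreg.KEY_SET_VALUE) as key:
    winreg.SetValueEx(key, "SOBRSyntheticFullCompressRate", 0,
                      winreg.REG_DWORD, 50)
```

Registry value names are not case-sensitive on Windows, so the exact casing should not matter in practice, but matching the spelling given by Veeam is safest.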
-
- Enthusiast
- Posts: 57
- Liked: 8 times
- Joined: Jul 13, 2009 12:50 pm
- Full Name: Mark
- Location: The Netherlands
- Contact:
Re: SoBR backup policies useless due to load control
I set the value to 50 (decimal) and rebooted the VBR server.
As I normally have enough free space, I can't tell the difference right now, but I think this is a better value for ReFS/fast clone.
-
- Enthusiast
- Posts: 57
- Liked: 8 times
- Joined: Jul 13, 2009 12:50 pm
- Full Name: Mark
- Location: The Netherlands
- Contact:
Re: SoBR backup policies useless due to load control
Unable to edit my last post, but I wanted to say that as a customer I really appreciate this forum.
-
- Influencer
- Posts: 11
- Liked: never
- Joined: Mar 19, 2019 1:44 am
- Full Name: Ian Jackson
- Contact:
Re: SoBR backup policies useless due to load control
Hi Guys,
I did some testing of this last night, and the above value doesn't appear to apply to backup copy jobs, specifically when they are doing GFS merges.
Veeam still tries to put the merge file on a different extent.
Slightly different issue, I know, but the same root cause (ReFS and Veeam's space heuristics). Just a heads-up for anyone else dealing with this.
I will update this thread if support can give me a solution.
-
- Veeam Software
- Posts: 21138
- Liked: 2141 times
- Joined: Jul 11, 2011 10:22 am
- Full Name: Alexander Fogelson
- Contact:
Re: SoBR backup policies useless due to load control
GFS is a different story; the value works for synthetic full backups only. With GFS retention, though, the full backup shouldn't be placed on another extent: it should always be kept along with the chain it belongs to. If you see different behavior, I recommend contacting support.
-
- Influencer
- Posts: 11
- Liked: never
- Joined: Mar 19, 2019 1:44 am
- Full Name: Ian Jackson
- Contact:
Re: SoBR backup policies useless due to load control
Thanks Foggy.
I am seeing the merges happen on a different extent (so no fast clone), or merges failing because Veeam thinks none of the extents have enough free space.
Again, it's just not taking the ReFS space savings into account.
I am working around it by manually moving chains between extents, and will add an additional extent to the SOBR sometime today; however, I can only do this temporarily.
I'm struggling to get your level 1 guys to understand the issue; hopefully we make some progress today.
Cheers
Ian
-
- Veeam Software
- Posts: 21138
- Liked: 2141 times
- Joined: Jul 11, 2011 10:22 am
- Full Name: Alexander Fogelson
- Contact:
Re: SoBR backup policies useless due to load control
"I am seeing the merges happen on a different extent (so no fast clone)"
How do you tell this? Do you see a GFS restore point created on a different extent from the one where the rest of the chain is stored?
-
- Influencer
- Posts: 11
- Liked: never
- Joined: Mar 19, 2019 1:44 am
- Full Name: Ian Jackson
- Contact:
Re: SoBR backup policies useless due to load control
The first indicator is that the merge operation reports partial fast clone.
If you look at the extents, you will then see .temp files being created on extents that don't contain the existing files in the chain.
For example, everything is on H:, but it creates the .temp merge file on I:.
It's literally the exact same problem with ReFS and backup jobs that we have been discussing: the ReFS space savings aren't being taken into account by the backup copy job when it chooses its extent.
You can get around it by waiting until all the actual fast clones complete, disabling the backup copy job (this causes any merge operations still in progress to abort and their .temp files to be deleted), re-enabling it so the aborted/failed merges try again, waiting until all the fast clones complete, and repeating until everything actually fast clones.
So it looks like SobrForceExtentSpaceUpdate doesn't apply to backup copy jobs, and you'd need something like a SOBRGFCMergeCompressRate registry key as well.
-
- Veeam Software
- Posts: 21138
- Liked: 2141 times
- Joined: Jul 11, 2011 10:22 am
- Full Name: Alexander Fogelson
- Contact:
Re: SoBR backup policies useless due to load control
"If you look at the extents you will then see .temp files that are being created on extents that don't contain the existing files in the chain."
This shouldn't be the case. Could you please open a case so we can take a closer look? Thanks!
-
- Influencer
- Posts: 11
- Liked: never
- Joined: Mar 19, 2019 1:44 am
- Full Name: Ian Jackson
- Contact:
Re: SoBR backup policies useless due to load control
I fixed it myself, so it's no longer occurring.
I added an additional extent, moved some backup files onto it, forced it to do fast clones with the method described above, and then offloaded all I could into Azure blob storage.
The merge happened last night, and everything fast cloned and stayed on the same extent.
I did try to engage your support, but I don't think your level 1 guys understand ReFS, and unfortunately I didn't have time to talk them through it.
The issue is, however, 100% that when the backup copy job comes to do the merges, it thinks it doesn't have enough space on the extent to create them, so it puts them on a different extent.
You should be able to reproduce it in your labs.
There are other ways to work around it that weren't available to me:
Easiest, extend your LUN (if using a SAN for your storage) with thin provisioning, so the extent appears larger to Veeam.
Or split up your backup copy jobs; don't put a number of large VMs in the same backup copy job.
As an aside, this is about 20 TB of backups a week. I'm in the process of taking around 10 TB out of this cycle by shifting the storage to cloud for those servers, and then moving the Veeam datastore onto a different SAN that allows me to thin provision larger LUNs, so this will no longer be a problem.
-
- Enthusiast
- Posts: 57
- Liked: 8 times
- Joined: Jul 13, 2009 12:50 pm
- Full Name: Mark
- Location: The Netherlands
- Contact:
Re: SoBR backup policies useless due to load control
I can say that SOBRSyntheticFullCompressRate plus limiting tasks on the backup proxies does have the desired effect. I've never had any issues since the changes. Thanks for that!
But Veeam B&R could be better if it treated ReFS/synthetic fulls/fast clone as a separate beast and not just like any other full. Fixing the data locality policy (which really does not enforce data locality, but chooses performance over it when the concurrent task slots are full) would not hurt either.
-
- Veeam Software
- Posts: 21138
- Liked: 2141 times
- Joined: Jul 11, 2011 10:22 am
- Full Name: Alexander Fogelson
- Contact:
Re: SoBR backup policies useless due to load control
Glad it helped and thanks for the feedback - I admit there's some room for improvement in this area.