-
- Novice
- Posts: 6
- Liked: 1 time
- Joined: Oct 27, 2018 6:57 pm
- Full Name: Null Dev
- Contact:
Reverse Incremental takes too long?
Hello!
We have Veeam in a very small setup at one location: just 1 ESXi 6 host and about 10 VMs to back up.
Veeam Backup & Replication v9.5 runs on one of the VMs, because there is no other host at that location.
The backup target is a small QNAP NAS with a CIFS share.
We chose reverse incremental backup.
It ran fine for 5 or 6 backup cycles.
Then the backup of one of the VMs started causing problems. It is the largest one (a file server with several TB of data).
The backup took longer with each cycle, until one day it ran so long that it started to overlap with the next backup cycle.
It seems that is because reverse incremental also has to rework the previous restore points, and the more restore points you have, the longer it takes. At least that's my possible explanation.
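If I read the docs right, the per-cycle target I/O for the two modes looks roughly like this (a toy sketch; the counts are the commonly cited 1x vs 3x ratio, not measurements from our setup):

```python
# Rough sketch of per-cycle target I/O for the two backup modes, as I
# understand them from the Veeam docs. Block counts are illustrative.

def target_io(changed_blocks, mode):
    """Approximate read/write operations on the backup repository."""
    if mode == "forward":
        # Changed blocks are appended to a new incremental (.vib) file:
        # one mostly sequential write per block.
        return {"reads": 0, "writes": changed_blocks, "pattern": "sequential"}
    if mode == "reverse":
        # Each changed block is read out of the full (.vbk), written into
        # the rollback (.vrb) file, then overwritten inside the .vbk:
        # 1 read + 2 writes per block, largely random.
        return {"reads": changed_blocks,
                "writes": 2 * changed_blocks,
                "pattern": "random"}
    raise ValueError(f"unknown mode: {mode}")

print(target_io(1000, "forward"))
# {'reads': 0, 'writes': 1000, 'pattern': 'sequential'}
print(target_io(1000, "reverse"))
# {'reads': 1000, 'writes': 2000, 'pattern': 'random'}
```

So the reverse mode puts roughly three times the I/O on the target, and it's random I/O, which is exactly what a small NAS handles worst.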
What would you recommend for such a small setup?
I've read that NFS is supposedly preferred over, or faster than, SMB/CIFS. We can and will change that, but I guess it won't resolve the problem in this case.
I'm not sure which setting to pick here.
I've read a lot of threads now, and it seems that whether you use reverse incremental or incremental with synthetic fulls/transform, you always end up with heavy I/O on the backup target.
Maybe it's best to do forward incremental forever, with a full now and then? But that sounds pretty wasteful to me. It's a lot of data, and the initial full backup took more than 3 days. Should I avoid regular fulls, or are they my only option here?
Thx in advance!
ND.
-
- VP, Product Management
- Posts: 27375
- Liked: 2799 times
- Joined: Mar 30, 2009 9:13 am
- Full Name: Vitaliy Safarov
- Contact:
Re: Reverse Incremental takes too long?
Hello,
Let me guess - the reported bottleneck from the backup job is the target, right? Selecting the NFS protocol would not help here, but switching to forward incremental with periodic active fulls should put less stress on your target repository, so you're correct there.
Not sure if you've read them or not, but here is a list of existing topics with a similar issue:
Bottleneck Target Qnap ts-210 very slowly
Reverse incremental Vs forever forward with synthetic fulls
Thank you!
-
- Novice
- Posts: 6
- Liked: 1 time
- Joined: Oct 27, 2018 6:57 pm
- Full Name: Null Dev
- Contact:
Re: Reverse Incremental takes too long?
Hello Vitaliy!
Thanks for the info!
Yes, you are right - the bottleneck is the target/IO in this case, not the network bandwidth or the source.
In this setup, there is one more thing to consider:
The target's capacity is around 10 TB, and the large source VM is 3-4 TB right now.
It might grow, so we want to avoid having multiple fulls on the target. One full is enough; otherwise we might run out of space some day.
I have switched to forever forward now, without synthetic fulls.
I will try this for a while; maybe it resolves the problem.
As your link said, reverse incremental uses random I/O heavily.
I chose no synthetic fulls because of the space: if I interpret the Veeam documentation correctly, a synthetic full is ADDED without removing the previous full in the same step.
That's why I chose forever forward.
I think forever forward is possibly best for people who want to minimize I/O and source traffic.
I still wonder how long it will take the target QNAP to merge the oldest restore point into the full once retention is exceeded, but I will find that out.
It should be faster than a synthetic full, I guess, and hopefully faster than reverse incremental.
Regards,
ND
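My mental model of that retention merge, as a toy sketch (the dict-as-block-map and function name are purely my illustration, not Veeam's on-disk format):

```python
# Toy model of the forever-forward retention merge: once the chain exceeds
# the retention limit, the oldest increment's changed blocks are injected
# into the full, and only those blocks get rewritten.

def apply_retention(full, increments, retention):
    """full: block_id -> data; increments: oldest-first list of block maps."""
    while 1 + len(increments) > retention:   # the full counts as one point
        oldest = increments.pop(0)
        full.update(oldest)   # rewrite only the changed blocks in the .vbk
    return full, increments

full = {0: "a0", 1: "b0", 2: "c0"}
incs = [{1: "b1"}, {2: "c1"}, {1: "b2"}]
full, incs = apply_retention(full, incs, retention=3)
print(full)       # {0: 'a0', 1: 'b1', 2: 'c0'}
print(len(incs))  # 2
```

If that model is right, the merge cost scales with the size of the oldest increment, not with the size of the full - which is why I hope it stays fast here.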
-
- Chief Product Officer
- Posts: 31806
- Liked: 7299 times
- Joined: Jan 01, 2006 1:01 am
- Location: Baar, Switzerland
- Contact:
Re: Reverse Incremental takes too long?
Small NAS devices simply don't have sufficient IOPS to perform any synthetic transformations. To get reasonable performance, you should use active full backups - this turns the backup workload into a sequential one. It also gives you more independent restore points in case one of your backup files gets corrupted (which happens way too often with low-end NAS devices when writing to them via CIFS). And in all honesty, if I were you, I would worry about this reliability issue rather than about slow transformation performance or the extra disk space of periodic fulls.
-
- Novice
- Posts: 6
- Liked: 1 time
- Joined: Oct 27, 2018 6:57 pm
- Full Name: Null Dev
- Contact:
Re: Reverse Incremental takes too long?
But there are a few things to consider about active fulls:
The disk space on the NAS might not be sufficient to hold 2 fulls at the same time.
An active full takes around 4 days, which means it overlaps with 3 other backup cycles.
If the only reason for active fulls is to guard against possible file corruption on the target, wouldn't it be better to use "Perform health checks periodically" now and then?
They don't take as long - in our tests they were about 8 times faster than doing a full.
-
- VP, Product Management
- Posts: 27375
- Liked: 2799 times
- Joined: Mar 30, 2009 9:13 am
- Full Name: Vitaliy Safarov
- Contact:
Re: Reverse Incremental takes too long?
I guess Anton was trying to say that any synthetic transformation, even in a forever forward backup chain, will have an impact on your target, so an active full backup is recommended to reduce it. Thanks!
-
- Novice
- Posts: 6
- Liked: 1 time
- Joined: Oct 27, 2018 6:57 pm
- Full Name: Null Dev
- Contact:
Re: Reverse Incremental takes too long?
I understand your point, and I'm sure that's correct in many setups and cases.
But I believe it depends on the amount of data that changed since the last backup cycle, and on your setup and target devices.
In my case, the full backup takes 4 days, which is much longer than any transform operation, including reverse incremental. We know because we tested it.
Maybe that's because only a few percent of the data changes between restore points - perhaps 20 to 60 GB of changed blocks. That's almost nothing compared to the size of an active full.
It also depends on what Veeam actually does during transform operations. The documentation is good and easy to read, but it doesn't really go into detail about that.
If the forever forward transform just injects the changed blocks into the full file and leaves the rest of that huge file untouched and unread, then maybe not that many IOPS happen in our case, because not many blocks changed. The documentation also doesn't say whether the full backup file has to be read through completely, block by block, during transformation, which would also consume a lot of time.
We also tested that a health check of this full backup takes 6 hours. With additional restore points it would have been somewhat longer, of course.
But forever forward plus periodic health checks should still be OK, I guess, considering the time it needs.
Doing a full backup is an emergency measure here, because it means we might have to delete the previous full first to free disk space, and then wait 4 days - during which we have no backups - for the new full to complete.
But I do understand your point: during any type of transform, read AND write IOPS happen at the same time, which is normally slow on a low-end NAS with slower spinning disks.
But I believe that doesn't mean an active full is always faster, if the active full is really huge.
ND.
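As a back-of-the-envelope check of that argument (the throughput numbers are pure guesses for a NAS like this, roughly 12 MB/s sequential and 5 MB/s random, not measurements):

```python
# Back-of-the-envelope time comparison: writing a ~4 TB active full at an
# assumed sequential rate vs merging ~60 GB of changed blocks (read plus
# rewrite) at an assumed random-I/O rate. Throughputs are guesses.

def hours(gigabytes, mb_per_sec):
    """Time in hours to move the given amount of data at the given rate."""
    return gigabytes * 1024 / mb_per_sec / 3600

active_full_h = hours(4 * 1024, 12)  # ~4 TB written sequentially
merge_h = hours(60 * 2, 5)           # 60 GB changed, read + written back

print(f"active full: ~{active_full_h:.0f} h, merge: ~{merge_h:.1f} h")
# active full: ~97 h, merge: ~6.8 h
```

With guesses in that ballpark, the active full takes roughly 4 days while the merge stays in the hours range, which matches what we are seeing.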
-
- VP, Product Management
- Posts: 27375
- Liked: 2799 times
- Joined: Mar 30, 2009 9:13 am
- Full Name: Vitaliy Safarov
- Contact:
Re: Reverse Incremental takes too long?
Yes, in this case you can try switching from reverse incremental mode to forward mode and see how the backup job performance improves. If you could post the results of your comparison for other readers, that would be appreciated. Thanks!
-
- Novice
- Posts: 6
- Liked: 1 time
- Joined: Oct 27, 2018 6:57 pm
- Full Name: Null Dev
- Contact:
Re: Reverse Incremental takes too long?
I will post the results.
But first I have to wait until the restore points reach the limit at which transform operations start happening in the forever forward chain.
Regards, ND
-
- Novice
- Posts: 6
- Liked: 1 time
- Joined: Oct 27, 2018 6:57 pm
- Full Name: Null Dev
- Contact:
Re: Reverse Incremental takes too long?
I promised to post the results. Here they are:
In my case, the transform operation included in the forever forward process is faster than reverse incremental.
Reverse incremental took longer and longer with each backup cycle, while forever forward with the transform takes approximately the same amount of time each cycle.
With this small setup, I will just stick with forever forward for now.
Thanks for the help.
Regards,
ND.