- Influencer
- Posts: 24
- Liked: never
- Joined: Jan 01, 2006 1:01 am
Block Statistics
In Veeam Backup 2.0 I could get the block statistics from a replication job, but I don't see this feature in 3.0.
I would like to be able to know how many total blocks were processed, how many blocks had changed and were therefore replicated, how many blocks had not changed and were therefore skipped, and finally how many blocks were ignored for being zero length.
I used to be able to get that from the job log under an entry called <Blocks_Stat>. Has this been moved or removed?
This is very useful data for planning replication jobs as well as analyzing under-performing jobs.
Thanks,
Brett
- Chief Product Officer
- Posts: 31748
- Liked: 7251 times
- Joined: Jan 01, 2006 1:01 am
- Location: Baar, Switzerland
Re: Block Statistics
Brett, we've made some logging enhancements and optimizations in 3.0, which is why the structure has changed.
Look for the "Collecting replica statistic" lines for the same information - it should be there.
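For example, here is a quick way to pull those lines out of a job log with PowerShell (the path is only an example - point it at your own job log file):
Code:
# List every "Collecting replica statistic" line in a Veeam Backup job log.
Select-String -Path "$env:userprofile\locals~1\applic~1\Veeam\Backup\Job_TestJob.log" -Pattern "Collecting replica statistic" -SimpleMatch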
- Influencer
- Posts: 24
- Liked: never
- Joined: Jan 01, 2006 1:01 am
Re: Block Statistics
Thanks, Gostev, for the quick reply.
I found the section you are referring to, but I am having a hard time understanding it.
Here is a log snippet from a replica job. This VM has two VMDKs: the first is 40 GB and the second is 250 GB.
Code:
[24.03.2009 19:26:45] <04> Info (Client) Service output: >\n
[24.03.2009 19:26:45] <01> Info Collecting replica statistic, replicaFile "/vmfs/volumes/4938210e-d5e86a60-e968-001b21114ef9/VeeamBackup/vsql01(vm-2981)/2009-03-24T131132.vrb"
[24.03.2009 19:26:45] <01> Info Agent command: "stat\n/vmfs/volumes/4938210e-d5e86a60-e968-001b21114ef9/VeeamBackup/vsql01(vm-2981)/2009-03-24T131132.vrb\n"
[24.03.2009 19:26:46] <04> Info (Client) Service output: 8975257509\n
[24.03.2009 19:26:46] <04> Info (Client) Service output: 8974074218\n
[24.03.2009 19:26:46] <04> Info (Client) Service output: 0\n
[24.03.2009 19:26:46] <04> Info (Client) Service output: 0\n
[24.03.2009 19:26:46] <04> Info (Client) Service output: >\n
[24.03.2009 19:26:46] <01> Info Got text: 8975257509\n8974074218\n0\n0\n
[24.03.2009 19:26:46] <01> Info Collecting replica statistic, replicaFile "/vmfs/volumes/4938210e-d5e86a60-e968-001b21114ef9/VeeamBackup/vsql01(vm-2981)/replica.vbk"
[24.03.2009 19:26:46] <01> Info Agent command: "stat\n/vmfs/volumes/4938210e-d5e86a60-e968-001b21114ef9/VeeamBackup/vsql01(vm-2981)/replica.vbk\n"
[24.03.2009 19:26:52] <04> Info (Client) Service output: 11045723\n
[24.03.2009 19:26:52] <04> Info (Client) Service output: 311386512083\n
[24.03.2009 19:26:52] <04> Info (Client) Service output: 100\n
[24.03.2009 19:26:52] <04> Info (Client) Service output: 95\n
[24.03.2009 19:26:52] <04> Info (Client) Service output: >\n
[24.03.2009 19:26:52] <01> Info Got text: 11045723\n311386512083\n100\n95\n
[24.03.2009 19:26:52] <01> Info Disposing client from thread 1
[24.03.2009 19:26:52] <04> Info (Client) Service: closed
So, the numbers that stick out at me are 8975257509 and 8974074218 in the first set (presumably the first hard drive?), and then 11045723, 311386512083, 100 and 95 in the second set.
311386512083 = bytes? That would be about 290 GB, which is the total capacity of both hard disks together.
What are the other numbers, then? Is there a key somewhere?
- Chief Product Officer
- Posts: 31748
- Liked: 7251 times
- Joined: Jan 01, 2006 1:01 am
- Location: Baar, Switzerland
Re: Block Statistics
Yes, you are correct, these are the numbers you are looking for.
1. Backup size in bytes (VBK size if full pass, VRB size if incremental pass).
2. Source data size in bytes (source VM size if full pass, changed source data size if incremental pass).
3. Dedup ratio in percent.
4. Compression ratio in percent.
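For example, plugging the second set of numbers from the log above into that key (just a rough sketch - the variable names and the 1024-based unit conversions are mine):
Code:
# Interpret the four "stat" values reported for replica.vbk in the log snippet above.
$backupBytes, $sourceBytes, $dedupPct, $compressPct = 11045723, 311386512083, 100, 95

"Backup size : {0:N1} MB" -f ($backupBytes / 1MB)   # ~10.5 MB
"Source size : {0:N1} GB" -f ($sourceBytes / 1GB)   # ~290 GB - both disks together, as you worked out
"Dedup       : {0}%" -f $dedupPct
"Compression : {0}%" -f $compressPct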
Hope this helps!
- Influencer
- Posts: 24
- Liked: never
- Joined: Jan 01, 2006 1:01 am
Re: Block Statistics
Yes, that is helpful. Thanks.
The key I was missing was that there are two separate sets of statistics: one for the VRB and one for the VBK. It was confusing to see VBK statistics every time, even though it was a differential pass.
One question: are the statistics for the VBK meaningful at all on a differential pass? It looks like it's always around 10 MB in my case, and my understanding of replication is that the full backup is actually integrated into the VMDK(s) rather than into a VBK file anyway.
Also, in 2.0 I was able to see the blocks_stat for each volume separately. Now it seems I can only see a grand total in bytes. For my current project I think that will be fine, but I could see it being useful to see the changes by volume.
Lastly, a feature request. I'd like to see a detailed report for replication, much like the one for backup. It would list each replication pass with its start and stop times, total bytes processed, deduplicated bytes and compressed bytes.
- Chief Product Officer
- Posts: 31748
- Liked: 7251 times
- Joined: Jan 01, 2006 1:01 am
- Location: Baar, Switzerland
Re: Block Statistics
Brett, you can disregard the replica.vbk statistics for an incremental pass. This file contains only replica metadata.
The feature you are asking for is quite commonly requested - we will definitely add such statistics as we enhance our replication.
- Influencer
- Posts: 24
- Liked: never
- Joined: Jan 01, 2006 1:01 am
Re: Block Statistics
Thanks, Gostev, for the information.
I have a few observations I'd like to get confirmed.
It looks like on backups and full replication passes, the compression setting can make a big difference in file size.
However, on incremental replication passes it appears to have no real effect. Is this by design?
Why wouldn't I want to be able to compress the changed bytes before sending them to the target host?
On another note, as I mentioned above, there aren't a lot of useful statistics, and what data there is can be fairly hard to get to. To help in sorting through log files, I wrote a little PowerShell script that extracts all of the statistics sections from a supplied log file.
I'll include it here in case anyone else finds it useful.
This is geared toward finding stats for incremental replication passes and therefore ignores VBK file statistics. That can easily be changed by replacing $_.Contains(".vrb") with $_.Contains(".vbk"), or by omitting the filter entirely to get both sets of statistics together (if they exist).
Code:
################################
# Log reader for Veeam log files
################################
#
# This PowerShell script takes a Veeam Backup log file (via the -logfile argument),
# scans it for replication statistics, and writes the relevant sections to the
# pipeline, where they can be displayed on the host or piped to other outputs.
#
# Recommended: Out-File
#
# Example:
# ./VeeamLogReader.ps1 -logfile "$env:userprofile\locals~1\applic~1\Veeam\Backup\Job_TestJob.log" | Out-File VeeamLogStatistics.txt
#
param([string]$logfile)

$grab = 0   # flag: currently inside a statistics section
$cnt  = 0   # lines emitted from the current section

Get-Content $logfile | ForEach-Object {
    if ($_.Contains("Collecting replica statistic") -and $_.Contains(".vrb")) {
        # Found a statistics section; emit this line and the five that follow it
        $grab = 1
        $cnt  = 0
    }
    if (($grab -eq 1) -and ($cnt -lt 6)) {
        $out = $_ -replace "\\n", ""   # strip the literal "\n" markers
        Write-Output $out
        $cnt = $cnt + 1
    }
}
**Note that this script is provided as is. Use at your own risk.
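A possible variation (an untested sketch of mine - it needs a newer PowerShell for [pscustomobject], and the property names are made up) that emits each statistics section as an object, using the field meanings Gostev listed above:
Code:
# Sketch: emit one object per "Collecting replica statistic" section,
# mapping the four numbers to the fields described earlier in the thread.
param([string]$logfile)

$inSection = $false
$stats = @()

Get-Content $logfile | ForEach-Object {
    if ($_.Contains("Collecting replica statistic")) {
        $inSection = $true
        $stats = @()
    }
    elseif ($inSection -and $_ -match 'Service output: (\d+)\\n') {
        $stats += [long]$Matches[1]
        if ($stats.Count -eq 4) {
            [pscustomobject]@{
                BackupBytes        = $stats[0]
                SourceBytes        = $stats[1]
                DedupPercent       = $stats[2]
                CompressionPercent = $stats[3]
            }
            $inSection = $false
        }
    }
}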
- Chief Product Officer
- Posts: 31748
- Liked: 7251 times
- Joined: Jan 01, 2006 1:01 am
- Location: Baar, Switzerland
Re: Block Statistics
Brett, since incremental passes pick up only changed blocks, those blocks in most cases contain actual data and no white space, so compression does not give results as good as during a full pass, when the processed data has a lot of white space. As for compression on the source before sending data to the target - we actually do this.
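A quick way to see that effect for yourself (just an illustration with gzip, nothing to do with the product internals): compress a zero-filled buffer and a same-sized random buffer and compare the results.
Code:
# Tiny demonstration: a block of zeroes ("white space") compresses almost
# completely, while a block of random data barely compresses at all.
function Get-GzipSize([byte[]]$data) {
    $ms = New-Object System.IO.MemoryStream
    $gz = New-Object System.IO.Compression.GZipStream($ms, [System.IO.Compression.CompressionMode]::Compress)
    $gz.Write($data, 0, $data.Length)
    $gz.Close()                  # flush and close (also closes $ms)
    $ms.ToArray().Length         # ToArray still works on a closed MemoryStream
}

$zeroes = New-Object 'byte[]' 1MB     # stands in for empty blocks
$random = New-Object 'byte[]' 1MB     # stands in for changed blocks full of real data
(New-Object System.Random).NextBytes($random)

"Zero-filled 1 MB gzips to {0:N0} bytes" -f (Get-GzipSize $zeroes)
"Random-data 1 MB gzips to {0:N0} bytes" -f (Get-GzipSize $random)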