- Influencer
- Posts: 10
- Liked: 2 times
- Joined: Aug 07, 2020 6:25 am
Restore percentage calculation is wrong
I'm currently restoring 13.9TB of data from a backup that was taken of a 32TB volume, onto another drive that is only 28TB in size.
The restore process is nearly done, with 13.6TB restored, but it reports being only 42% done. Of the three possible calculations, this is the WORST one and makes zero sense.
Options:
1) Data restored / data total (this is the only formula that should have been used: 13.9TB written = 100% = done).
2) Data restored / shrunken partition size (it's wrong and almost useless: 13.9TB written = ~50% done = stupid).
3) Data restored / original volume size (this is the current formula, which makes zero sense: when the last byte is written, it'll jump from ~43% to 100%).
Having the wrong formula misleads the user and prevents them from easily seeing approximately how much time is left; they have to notice the bug and do the math in their head. With the right formula, when I first started the restore I would have known the process would take less than 28 hours, not more than 50.
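To make it concrete, here's a quick Python sketch of how the three formulas behave at this point in my restore (numbers from above; decimal TB assumed, purely illustrative):

```python
# How each candidate formula reports progress at this point in the restore
# (numbers from this thread; decimal TB assumed, purely illustrative).
restored    = 13.6   # TB written so far
data_total  = 13.9   # TB of real data in the backup
dest_volume = 28.0   # TB, shrunken destination partition
orig_volume = 32.0   # TB, original source volume

print(f"1) restored/data_total:  {restored / data_total:.1%}")   # 97.8% -> nearly done
print(f"2) restored/dest_volume: {restored / dest_volume:.1%}")  # 48.6%
print(f"3) restored/orig_volume: {restored / orig_volume:.1%}")  # 42.5% -> the ~42% shown
```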
I'm curious why backup/restore software has so many problems with backup/restore percentages (Acronis has had wildly inaccurate estimates for years and is still bad). I would have expected a QA person to spot this and have it fixed, since it's glaringly obvious and a much-used feature of the product. In this case it's literally supposed to be as simple as (data_written / total_data_to_be_written), nothing complicated.
Best regards
- Influencer
- Posts: 10
- Liked: 2 times
- Joined: Aug 07, 2020 6:25 am
Re: Restore percentage calculation is wrong
Ok, I don't understand how/what Veeam is currently restoring. [Note: having read the notice about needing to open a case before opening a thread, I think this falls under a user asking a question for understanding, as this appears to be how the Veeam restore operates...]
I have approximately 14TB of data being restored to a drive shrunk by Veeam during the restore from 32TB (original) to 28TB (new). When selecting the backup file, Veeam showed it as a 13.9TB backup. In Windows, Size is 13.8TB, or 15.175TB on disk. So I think we can rule out the different ways of measuring bytes as a conversion issue.
When I started the restore, DU Meter showed the NIC doing 1.2Gbps (on 10Gb NICs). That jibes with the ~156MB/s transfer speed shown by Veeam. By that calculation it's roughly 0.5TB/hour, so roughly 28 hours. Today I noticed the transfer has mostly sat right at 1Gbps, with some stretches around 600Mbps. It's around 800Mbps at the time of this post, while Veeam shows 148MB/s (it doesn't show the current speed; it looks like the average speed over the entire transfer so far).
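For reference, the back-of-the-envelope math as a quick sketch (decimal units assumed; this is my estimate, not Veeam's calculation):

```python
# The back-of-the-envelope ETA, assuming only the real data is transferred
# (decimal units; a rough estimate, not Veeam's calculation).
mb_per_s = 156                         # average speed reported by Veeam
tb_per_hour = mb_per_s * 3600 / 1e6    # ~0.56 TB/hour, call it ~0.5
print(f"{tb_per_hour:.2f} TB/hour")
print(f"{13.9 / 0.5:.0f} hours to restore 13.9TB")   # roughly 28 hours
```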
What I don't understand is that it says it has restored 15.8TB at 148MB/s (49% done) over a duration of 31 hours, AND IT HASN'T FINISHED.
1. Why didn't it stop at 13.9TB? My expectation was that it would stop after about 14TB (allowing for, e.g., VSS changes during the backup), mark the partition active, and show something like 14TB of free space.
2. If this were a sector-by-sector restore, I would expect the backup file to be 32TB and a shrunken 28TB partition not to be allowed. I know it's not a sector-by-sector restore; I'm just answering before someone mentions it.
Since the volume being restored is currently showing as "RAW" in Disk Management, I don't know how much data has actually been written back to disk. But it's past the point when it should have been done, and I don't know what it's transferring, at nearly a gigabit, ABOVE the 13.9TB of real data.
tl;dr it's looking like it will take 60 hours instead of 28, BUT I DON'T KNOW WHY IT SHOULD TAKE MORE THAN 28 HOURS. My biggest concern is that in 30 hours it will error out, I'll have wasted a week, and I'll have to switch to painstakingly restoring folder by folder.
Thanks in advance!
- Product Manager
- Posts: 14844
- Liked: 3086 times
- Joined: Sep 01, 2014 11:46 am
- Full Name: Hannes Kasparick
- Location: Austria
Re: Restore percentage calculation is wrong
Hello,
> If this was a sector by sector restore, I would expect the backup file to be 32TB

The software does block-level backup (by default), and volume-level restore is also block-based. The backup file is smaller because of the automatically applied compression.

From the screenshot, the percent value looks okay to me: 15.9TB of 32TB restored is pretty close to 49%.

Best regards,
Hannes
- Influencer
- Posts: 10
- Liked: 2 times
- Joined: Aug 07, 2020 6:25 am
Re: Restore percentage calculation is wrong
(There wasn't much compression; these were mostly backups, videos, and programs, and the backup is the same 13.8TB that Windows reports for the volume. I regret not changing the setting to no compression, but I thought those file types get analyzed so that compression isn't wasted on them anyway.)

Ok, I was under the impression that other software also does block-level backup but only needs to write the actual data. That's why you can skip the recycling bin and apply filters and have certain file types/extensions not copied. I must be mistaken. Either way, that's a huge negative and speed penalty if I only had 1TB of data on a 32TB partition and the restore still took 35 hours. That's just not ideal. I still don't understand what data it would be copying after the 13.9TB of actual data, data which doesn't pad the backup file out to a larger size yet still requires network transfer... But thanks for responding, much appreciated.
It finished after 35 hours, 11 minutes and 26 seconds.
27.3TB restored at 226MB/s (100% done).
- Product Manager
- Posts: 14844
- Liked: 3086 times
- Joined: Sep 01, 2014 11:46 am
- Full Name: Hannes Kasparick
- Location: Austria
Re: Restore percentage calculation is wrong
Hello,
> That's why you can skip the recycling bin and apply filters and have certain file types/extensions not copied.

That applies to file-based backup, which is not the default setting and would prevent volume-level restore; with file-level backup, only file-level restore would be available.

> Either way, that's a huge negative and speed penalty if I only had 1TB of data on a 32TB partition and the restore still took 35 hours.

Did you see something like that (if yes, what's the Veeam support case number)? Restoring zeroes is faster, and the percentage value goes up faster. The screenshot below shows a restore of 4GB of data to a 1TB disk, where almost everything is empty; my environment is by far too slow to really do 5GByte/s. That's the "zero restore".
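To illustrate the effect (a rough sketch; the elapsed time is an assumed number, not something from the screenshot):

```python
# Why the displayed rate can exceed the hardware: empty space counts
# toward bytes "restored" but is processed almost instantly. Numbers
# mirror the 4GB-on-a-1TB-disk example; the elapsed time is assumed.
GB = 1000**3
real_data = 4 * GB      # actual data sent over the network
volume    = 1000 * GB   # whole volume counted by the progress display
elapsed   = 200         # assumed seconds of wall-clock restore time

print(f"shown:  {volume / elapsed / GB:.1f} GByte/s")    # ~5.0 GByte/s
print(f"actual: {real_data / elapsed / GB:.2f} GByte/s") # ~0.02 GByte/s
```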
> It finished after 35 hours, 11 minutes and 26 seconds.

Good to hear!

Summary from my side: the percentage calculations are correct (as shown in your screenshot and also in mine). If you see it differently, I need more explanation.

Best regards,
Hannes
- Influencer
- Posts: 10
- Liked: 2 times
- Joined: Aug 07, 2020 6:25 am
Re: Restore percentage calculation is wrong
Why would it be writing zeroes??? It's free space; the OS doesn't care if it's zeroes, FFs, or 55s. It's unused, it's not referenced, and it would likely already be zeroes on a virgin drive regardless. Like I said, this isn't sector-by-sector copying. I can't say I've used every other backup/restore app (maybe 5-6), but I pretty much only ever do volume backups/restores, and this is the first time I've ever had one write more data than the actual data. If it's not data, there is no point in writing it. That's why I'm confused: what was being transferred over the network after my real data was transferred? And if it was transferring highly compressed zeroes, the throughput would have gone up quite a bit from the reported number, which stayed similar to what was being transferred across the network (it would also have increased the network transfer speed, because it would be reading from RAM/cache rather than disk, which it didn't; it was transferring slower than at the start).
So it sounds like it's showing how it works, and I just don't like how it works and therefore how it displays the data. If I understood what it had to write to the entire disk, and why, instead of just writing the actual data, writing out the partition tables, and being done, maybe the progress numbers would make more sense. But at the very least it's going by the original volume size, not the destination size, so that is funky. /shrug
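My mental model of what's being described (a hypothetical sketch of a block-based volume restore, not anything from Veeam's actual code):

```python
# Hypothetical block-based volume restore: every block of the volume gets
# processed (allocated blocks come from the backup; empty ones are cheap),
# so the natural progress denominator is the volume size, not the data size.
def restore_volume(blocks_total, allocated):
    for blk in range(blocks_total):
        if blk in allocated:
            pass  # read from backup, decompress, write to disk (slow)
        else:
            pass  # empty block: handled cheaply, no network transfer (fast)
        yield (blk + 1) / blocks_total  # progress tracks the volume

progress = list(restore_volume(blocks_total=32, allocated={0, 1, 2}))
print(f"{progress[13]:.0%}")  # ~44%, though all real data was written early
```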
tl;dr the performance wasn't as good as I expected and it took longer than expected.
- Product Manager
- Posts: 14844
- Liked: 3086 times
- Joined: Sep 01, 2014 11:46 am
- Full Name: Hannes Kasparick
- Location: Austria
Re: Restore percentage calculation is wrong
Agreed, my wording above around the "zero restore" was bad. The software does not write zeroes; as I mentioned above, my hardware cannot write at 5GByte/s.
The % values are calculated from the size of the volume. That sounds logical to me, and I cannot find complaints from others about doing the calculation based on the volume size (there are complaints about other values we show), so it looks like most customers are fine with that calculation.
On performance: usually speed is limited by the hardware, and fragmentation of the file system on the backup storage also plays a role. I have seen a customer doing a volume restore at around 10Gbit/s (a network limit, maybe also a storage limit) some years ago, so the software in general is able to write faster. It's possible to ask Veeam support to check on performance issues.
> what was being transferred over the network after my real data was transferred?

Only real data is transferred over the network. The 226MByte/s value is the average over real data and "zeroes": 226 × 3600 × 35 / 1024² is about the 27TB. It all sounds logical to me, and the % values fit in all screenshots. Not sure how I can help more.
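As a quick cross-check of that arithmetic (just a sketch replaying the numbers above):

```python
# Replaying the arithmetic above: the average MB/s times the full
# duration, converted with binary prefixes, lands on the ~27.3TB reported.
hours = 35 + 11/60 + 26/3600             # 35h 11m 26s
total_tib = 226 * 3600 * hours / 1024**2
print(f"{total_tib:.1f} TiB")            # ~27.3
```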