-
- Veteran
- Posts: 411
- Liked: 31 times
- Joined: Nov 21, 2014 10:05 pm
- Contact:
feature request: split vbk
Hi!
Would it be possible to have an option to split huge VBK files into smaller ones, so they could be picked up by Windows deduplication?
Right now I have to split the backup copy jobs into smaller ones to keep the VBKs below 1 TB.
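For illustration only, the kind of size-capped splitting this request describes could be sketched in Python. Veeam offers nothing like this natively; the `.part` naming, the `split_file` helper, and the 1 TB target are purely hypothetical:

```python
# Hypothetical sketch of the requested behavior: split one large backup
# file into sequentially numbered parts, each at most part_size bytes,
# so every output file stays under a dedup-friendly limit.
import os

def split_file(src_path, part_size, block=16 * 1024 * 1024):
    """Split src_path into parts of at most part_size bytes; return their paths."""
    parts = []
    with open(src_path, "rb") as src:
        index = 0
        while True:
            part_path = f"{src_path}.part{index:04d}"
            remaining = part_size
            written = 0
            with open(part_path, "wb") as dst:
                # Stream in small blocks so a 1 TB part never sits in memory.
                while remaining > 0:
                    chunk = src.read(min(block, remaining))
                    if not chunk:
                        break
                    dst.write(chunk)
                    written += len(chunk)
                    remaining -= len(chunk)
            if written == 0:
                os.remove(part_path)  # nothing left to read; drop the empty part
                break
            parts.append(part_path)
            index += 1
    return parts
```

Of course, a VBK split this way would no longer be readable by Veeam; the sketch only shows the mechanics of size-capped splitting, which is what the feature request asks Veeam to do natively.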
Bed?! Beds for sleepy people! Lets get a kebab and go to a disco!
MS MCSA, MCITP, MCTS, MCP
VMWare VCP5-DCV
Veeam VMCE
-
- VeeaMVP
- Posts: 6166
- Liked: 1971 times
- Joined: Jul 26, 2009 3:39 pm
- Full Name: Luca Dell'Oca
- Location: Varese, Italy
- Contact:
Re: feature request: split vbk
Hi,
have you checked per-VM backup chains?
https://helpcenter.veeam.com/backup/vsp ... files.html
You can have multiple backup files without splitting the backup job.
Luca Dell'Oca
Principal EMEA Cloud Architect @ Veeam Software
@dellock6
https://www.virtualtothecore.com/
vExpert 2011 -> 2022
Veeam VMCE #1
-
- Veteran
- Posts: 411
- Liked: 31 times
- Joined: Nov 21, 2014 10:05 pm
- Contact:
Re: feature request: split vbk
Wouldn't that effectively disable VBR's inline deduplication?
Bed?! Beds for sleepy people! Lets get a kebab and go to a disco!
MS MCSA, MCITP, MCTS, MCP
VMWare VCP5-DCV
Veeam VMCE
-
- VeeaMVP
- Posts: 6166
- Liked: 1971 times
- Joined: Jul 26, 2009 3:39 pm
- Full Name: Luca Dell'Oca
- Location: Varese, Italy
- Contact:
Re: feature request: split vbk
I wrote a post on this topic:
http://www.virtualtothecore.com/en/veea ... up-chains/
Luca Dell'Oca
Principal EMEA Cloud Architect @ Veeam Software
@dellock6
https://www.virtualtothecore.com/
vExpert 2011 -> 2022
Veeam VMCE #1
-
- Veteran
- Posts: 411
- Liked: 31 times
- Joined: Nov 21, 2014 10:05 pm
- Contact:
Re: feature request: split vbk
So... the post basically says "yes, you will lose VBR's deduplication, but it doesn't matter, since it's practically useless"?
Bed?! Beds for sleepy people! Lets get a kebab and go to a disco!
MS MCSA, MCITP, MCTS, MCP
VMWare VCP5-DCV
Veeam VMCE
-
- VeeaMVP
- Posts: 6166
- Liked: 1971 times
- Joined: Jul 26, 2009 3:39 pm
- Full Name: Luca Dell'Oca
- Location: Varese, Italy
- Contact:
Re: feature request: split vbk
No, the opposite. We say that most data reduction comes from source-side deduplication and compression, so switching to per-VM chains will not affect the final data reduction much.
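A toy illustration of why this is plausible (not Veeam's actual format or block size; the helper and the 1 KB block are assumptions): if block-level dedup is computed per backup file, and most duplicate blocks occur *within* one VM's own data, then splitting a job into per-VM files loses little of the savings.

```python
# Toy block-level dedup: ratio of logical size to unique-block size
# for a single backup file. Computing this per VM versus for a whole
# job shows how much cross-VM duplication actually contributed.
from hashlib import sha256

def dedup_ratio(data: bytes, block: int = 1024) -> float:
    """Logical block count divided by unique block count for one file."""
    blocks = [data[i:i + block] for i in range(0, len(data), block)]
    unique = {sha256(b).digest() for b in blocks}
    return len(blocks) / len(unique)
```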
Luca Dell'Oca
Principal EMEA Cloud Architect @ Veeam Software
@dellock6
https://www.virtualtothecore.com/
vExpert 2011 -> 2022
Veeam VMCE #1
-
- Veteran
- Posts: 411
- Liked: 31 times
- Joined: Nov 21, 2014 10:05 pm
- Contact:
Re: feature request: split vbk
OK... I'm a bit confused here.
In my case, I'm close to a 2x dedupe ratio on VBR (mainly Windows servers). How much more space should I reserve for my backups if I enable per-VM backup chains?
E: I think I got it. So with per-VM backup chains, the deduplication is similar to having only one VM per job? Then I basically just have to test it to see how much more space it requires.
Anyway, this still does not solve the problem where a single VM's backup is larger than 1 TB.
Bed?! Beds for sleepy people! Lets get a kebab and go to a disco!
MS MCSA, MCITP, MCTS, MCP
VMWare VCP5-DCV
Veeam VMCE
-
- Veeam Software
- Posts: 21138
- Liked: 2141 times
- Joined: Jul 11, 2011 10:22 am
- Full Name: Alexander Fogelson
- Contact:
Re: feature request: split vbk
hyvokar wrote: In my case, I'm close to a 2x dedupe ratio on VBR (mainly Windows servers).
Do you perhaps mean overall data reduction compared to the size of the source VMs?
hyvokar wrote: E: I think I got it. So with per-VM backup chains, the deduplication is similar to having only one VM per job? Then I basically just have to test it to see how much more space it requires.
That's correct.
-
- Veteran
- Posts: 411
- Liked: 31 times
- Joined: Nov 21, 2014 10:05 pm
- Contact:
Re: feature request: split vbk
foggy wrote: Do you perhaps mean overall data reduction compared to the size of the source VMs?
The dedupe ratio from the backup report; compression not included.
Bed?! Beds for sleepy people! Lets get a kebab and go to a disco!
MS MCSA, MCITP, MCTS, MCP
VMWare VCP5-DCV
Veeam VMCE
-
- Expert
- Posts: 149
- Liked: 15 times
- Joined: Jan 02, 2015 7:12 pm
- Contact:
Re: feature request: split vbk
I would also like the ability to chop up .vbk files (I use per-VM chains) into smaller files. I have VMs that are multiple TBs in size, and this causes issues when replicating those files. While per-VM chains greatly reduced the problem of not all files being replicated in a timely manner, those massive VMs still cause issues. It would be nice to split the .vbk files for those large VMs into smaller, more manageable bites (pardon the pun) in order to replicate them more easily. For example: a setting in the backup job to split backup files larger than 1 TB into some user-configurable sub-grouping, say 250 GB. This is a good idea.
-Nick
-
- Veeam Software
- Posts: 21138
- Liked: 2141 times
- Joined: Jul 11, 2011 10:22 am
- Full Name: Alexander Fogelson
- Contact:
Re: feature request: split vbk
Would per-disk backup files address this to some extent?
-
- Veteran
- Posts: 411
- Liked: 31 times
- Joined: Nov 21, 2014 10:05 pm
- Contact:
Re: feature request: split vbk
foggy wrote: Would per-disk backup files address this to some extent?
No, not really. Often the VM has a 50-60 GB OS disk and a huge (multi-TB) data disk.
Bed?! Beds for sleepy people! Lets get a kebab and go to a disco!
MS MCSA, MCITP, MCTS, MCP
VMWare VCP5-DCV
Veeam VMCE
-
- Veteran
- Posts: 411
- Liked: 31 times
- Joined: Nov 21, 2014 10:05 pm
- Contact:
Re: feature request: split vbk
foggy wrote: That's correct.
I did a test using 3 medium-sized VMs. I created two identical backup copy jobs and targeted them at different repositories, one "normal" and one using per-VM backup files.
The result was a bit surprising: the one with per-VM backup files was *smaller* than the one with a single huge backup file. No idea how that's possible.
Bed?! Beds for sleepy people! Lets get a kebab and go to a disco!
MS MCSA, MCITP, MCTS, MCP
VMWare VCP5-DCV
Veeam VMCE
-
- Chief Product Officer
- Posts: 31806
- Liked: 7299 times
- Joined: Jan 01, 2006 1:01 am
- Location: Baar, Switzerland
- Contact:
Re: feature request: split vbk
Surprising indeed.
-
- Veteran
- Posts: 361
- Liked: 109 times
- Joined: Dec 28, 2012 5:20 pm
- Full Name: Guido Meijers
- Contact:
Re: feature request: split vbk
Per-VM backup chains work beautifully. Combined with scale-out repositories, fulls can now be separated from the incrementals, which in a lot of cases means the Windows dedupe engine has more than enough time to complete its dedupe cycle on the fulls, since that repository will now only be used once a week / month. However, it's still no solution for VMs with big disks. Although it is sometimes a big pain, we have now started doing complicated things to split up data on individual VMs, not only for backup purposes but especially for restore. Ever tried to instant-recover and then vMotion a 3 TB+ VM? I can assure you, you will be sweating all day.
-
- Influencer
- Posts: 17
- Liked: 2 times
- Joined: Jul 23, 2012 4:28 am
- Full Name: Mike Smith
- Contact:
Re: feature request: split vbk
Hello,
+1 on this feature request
I too would be keen on being able to split large multi-TB backup files. Several servers here have 2 TB+ data drives.
-
- Veteran
- Posts: 411
- Liked: 31 times
- Joined: Nov 21, 2014 10:05 pm
- Contact:
Re: feature request: split vbk
hyvokar wrote: I did a test using 3 medium-sized VMs. I created two identical backup copy jobs and targeted them at different repositories, one "normal" and one using per-VM backup files.
The result was a bit surprising: the one with per-VM backup files was *smaller* than the one with a single huge backup file. No idea how that's possible.
Another test, in a production environment: the compacted backup file size was 1.48 TB. After enabling per-VM backup files, the combined size of the new backup files was 1.35 TB. Makes you wonder if the compact really works.
Bed?! Beds for sleepy people! Lets get a kebab and go to a disco!
MS MCSA, MCITP, MCTS, MCP
VMWare VCP5-DCV
Veeam VMCE
-
- Enthusiast
- Posts: 86
- Liked: 7 times
- Joined: Sep 03, 2015 12:15 am
- Full Name: Patrick
- Contact:
Re: feature request: split vbk
+1 here too; this feature would be good, as dedup of large files is not possible.
We use per-VM backup chains, but we still have one VM with about 2.7 TB of data that can't ever go under 1 TB.
-
- Chief Product Officer
- Posts: 31806
- Liked: 7299 times
- Joined: Jan 01, 2006 1:01 am
- Location: Baar, Switzerland
- Contact:
Re: feature request: split vbk
Patrick, why do you need to go under 1 TB? What is the use case here?
-
- Enthusiast
- Posts: 86
- Liked: 7 times
- Joined: Sep 03, 2015 12:15 am
- Full Name: Patrick
- Contact:
Re: feature request: split vbk
Windows can't dedup files larger than 1 TB, not even with the new Windows Server 2016.
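As a quick way to see which backup files in a repository would run into that limit, a small scan could look like this (the repository path and the 1 TB threshold are assumptions; this only lists candidates, it does not query the dedup engine):

```python
# Illustrative sketch: walk a repository folder and report files at or
# above a size threshold, i.e. the ones Windows deduplication would
# (per this thread) handle poorly or skip outright.
import os

ONE_TB = 1 << 40  # bytes

def oversized_files(repo_path, limit=ONE_TB):
    """Yield (path, size) for every file of at least `limit` bytes."""
    for root, _dirs, files in os.walk(repo_path):
        for name in files:
            path = os.path.join(root, name)
            size = os.path.getsize(path)
            if size >= limit:
                yield path, size
```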
-
- Product Manager
- Posts: 6551
- Liked: 765 times
- Joined: May 19, 2015 1:46 pm
- Contact:
Re: feature request: split vbk
Hi,
Patrick wrote: Windows can't dedup files larger than 1 TB, not even with the new Windows Server 2016.
As far as I know, that's not a hard limit, is it?
Thanks
-
- Enthusiast
- Posts: 86
- Liked: 7 times
- Joined: Sep 03, 2015 12:15 am
- Full Name: Patrick
- Contact:
Re: feature request: split vbk
I thought so too, but it deduped every backup file except the one at 2.3 TB.
And the dedup job is stopped until new backup files reach the right age; this one is already 6 days old.
But I didn't find any evidence from Microsoft stating that this is true, only that files up to 1 TB are now highly performant.
On Windows 2016, I set the dedup type to "virtual backup server", as it sounded right. I didn't find much info about that either, so I was just guessing.
Thanks
Patrick
-
- Enthusiast
- Posts: 86
- Liked: 7 times
- Joined: Sep 03, 2015 12:15 am
- Full Name: Patrick
- Contact:
Re: feature request: split vbk
Anyway, it's never good to use dedup if Microsoft states it is "unsupported".
This is why I also request this feature, as Veeam doesn't have dedup of its own.
I just added more RAM to the server to see if it dedupes the file now; it seems that didn't make any change. I will go to the MS forum and see if I get an answer there as to why this file is skipped.
I also found a post about Windows 2016 and trouble with dedup of large files (haven't tested it myself):
https://blog.zoomik.pri.ee/uncategorize ... uge-files/
-
- Chief Product Officer
- Posts: 31806
- Liked: 7299 times
- Joined: Jan 01, 2006 1:01 am
- Location: Baar, Switzerland
- Contact:
Re: feature request: split vbk
Windows Server 2016 dedupe will dedupe files up to 4 TB based on our testing, and skip bigger files.
Also, the 1 TB limit is for changing files only, not for read-only files such as Veeam backups.
I actually discussed this directly with the dev team behind this tech some months ago...
The blog you shared uses some information from my earlier posts on this forum.
-
- Enthusiast
- Posts: 86
- Liked: 7 times
- Joined: Sep 03, 2015 12:15 am
- Full Name: Patrick
- Contact:
Re: feature request: split vbk
Thanks for your info.
Well, let's see what happens with the next 2.4 TB full backup, whether it gets deduped.
It was done yesterday, so I should see results in a few days. Anyway, this is off-topic then, if you say up to 4 TB should work.
In any case, do you already recommend using Server 2016 and dedup with Veeam?
-
- Chief Product Officer
- Posts: 31806
- Liked: 7299 times
- Joined: Jan 01, 2006 1:01 am
- Location: Baar, Switzerland
- Contact:
Re: feature request: split vbk
Universally, I never recommend jumping on any newly released technology at all.
When I feel an urge to do this myself, I usually start small and see how it goes.
-
- Enthusiast
- Posts: 86
- Liked: 7 times
- Joined: Sep 03, 2015 12:15 am
- Full Name: Patrick
- Contact:
Re: feature request: split vbk
Just as an update: we got the 2.4 TB file to dedup. I tried a restore afterwards; it had "read errors". Some files worked (guest files restore), others didn't.
So it's a bad idea to use dedup with Windows 2016, as it seems broken, at least for us.
I tested a smaller VM under 500 GB, which seemed to work fine.
We will now keep using Windows 2012 R2 without dedup, even though I don't know how big files would perform with 2012 R2; I never tested it myself.
Probably 9.5 with ReFS is the better idea, after some Windows 2016 and Veeam updates.
Anyway, this was off-topic, but I wanted to give feedback.
Hopefully splitting files will still come sometime in the future.
Thanks
Patrick
-
- Veteran
- Posts: 361
- Liked: 109 times
- Joined: Dec 28, 2012 5:20 pm
- Full Name: Guido Meijers
- Contact:
Re: feature request: split vbk
This discussion would be totally unnecessary if Veeam supported splitting backup files; we would all be able to sleep better, I assume.
I have a slight understanding of Veeam saying that having 5 files is more complicated than having 1 file, but I feel it's a bit of a joke.
Even so, one could create a very secret, really important, hidden registry switch! No pun intended, btw!
-
- Service Provider
- Posts: 372
- Liked: 120 times
- Joined: Nov 25, 2016 1:56 pm
- Full Name: Mihkel Soomere
- Contact:
Re: feature request: split vbk
I am the author of the above-mentioned blog post.
Current dedup implementations are... mixed. Basically you have to choose and accept the trade-offs.
WS2012R2 seems to have no hard file size limit, BUT:
* General throughput is slow (single-threaded).
* If data is already in the chunk store and it processes a secondary copy (for example a secondary full backup), throughput is better.
* 1 TB+ file processing gets slower, although it will eventually complete.
WS2016 is much faster, BUT:
* The 4 TB data limit means huge losses for large VMs.
* There is a data corruption issue with 1 TB+ files. I haven't poked the product team for a few weeks, but a hotfix was promised waaay back.
For maximum total savings you'd go for 64 TB volumes, but on WS2012R2 it would probably never finish processing if new data keeps pouring in. My experience shows that on WS2012R2, in realistic scenarios, it'd probably take at least a week to process (assuming no new data was written). Actually, I'd say that WS2012R2 is even better in these scenarios (where you have huge files and the goal is maximum data savings).
Either way, 1 TB is the supported maximum. Though the product team also confirmed that in a forward-incremental-style scenario (write once, never modify), NTFS dedup will work fine past the supported limits.
I'd happily forfeit Veeam dedup and compression if we'd get (for example) 1 TB VBK extents. Per-VM chains are great, but modern VMs do keep growing in size. I have 10 TB+ VBKs and probably soon I'll get some in the ~20 TB range - and that's already deduped data (NTFS dedup inside the VM). Some of them can be split up with some work, but others will have to stay as they are.
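As a back-of-the-envelope sketch of the suggested 1 TB extents (the backup sizes are the figures quoted in the post; the `extent_count` helper is hypothetical):

```python
# How many fixed-size extents would a backup file split into?
# Integer ceiling division avoids float rounding on very large sizes.
TB = 1 << 40  # one tebibyte in bytes

def extent_count(backup_bytes, extent_bytes=TB):
    """Number of extents of at most extent_bytes needed to hold the backup."""
    return (backup_bytes + extent_bytes - 1) // extent_bytes

# A 10 TB VBK at 1 TB extents becomes 10 files, a ~20 TB VBK becomes 20,
# each individually within the size range Windows dedup handles well.
```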
-
- Influencer
- Posts: 14
- Liked: 4 times
- Joined: Sep 13, 2014 5:41 am
- Full Name: Sam Boutros
- Location: KOP, PA
- Contact:
[MERGED] Can Veeam be configured to use a maximum file size?
Can Veeam be configured to have a maximum backup file size for .vbk and other Veeam backup files - such as 512 MB or 256 MB?
So that a 5 TB backup job would result in roughly 10,240 backup files of 512 MB each instead of a single 5 TB file?
Thank you
Sam Boutros, Senior Consultant, Software Logic, KOP, PA
http://superwidgets.wordpress.com
Powershell: Learn it before it's an emergency
http://technet.microsoft.com/en-us/scriptcenter/powershell.aspx