-
- Expert
- Posts: 184
- Liked: 18 times
- Joined: Feb 15, 2013 9:31 pm
- Full Name: Jonathan Barrow
- Contact:
If you had unlimited storage, would disabling compression..
If you had unlimited storage, would disabling compression be a way to speed up your job creation and recovery time? I mean, just like creating or decompressing a zip file. The higher the level of compression, the longer that process takes.
-
- Veteran
- Posts: 7328
- Liked: 781 times
- Joined: May 21, 2014 11:03 am
- Full Name: Nikita Shestakov
- Location: Prague
- Contact:
Re: If you had unlimited storage, would disabling compressio
Yes, that's why we recommend disabling in-job compression if you use a dedupe repository. But if you are eager to speed up your job, it's better to start with bottleneck analysis.
Thanks!
-
- VeeaMVP
- Posts: 6166
- Liked: 1971 times
- Joined: Jul 26, 2009 3:39 pm
- Full Name: Luca Dell'Oca
- Location: Varese, Italy
- Contact:
Re: If you had unlimited storage, would disabling compressio
I wouldn't be so sure that disabling compression improves performance... let me explain.
These days CPUs are really powerful, and compression algorithms like lz4 (the one used by default in Veeam) are light on the CPU. On the other hand, spinning disks are usually the slowest component in the entire datacenter, orders of magnitude slower than CPU, memory, or even the network.
So, by enabling compression, and assuming a 2x data reduction thanks to it, you write half the data to the slowest component of the infrastructure, lowering the load on it.
On restore, the same concept applies: you need to read half the amount of data from storage, and a fast CPU can decompress the blocks you are reading faster than the storage could have delivered twice the amount of uncompressed data.
Sending uncompressed data to dedupe devices is a corner case, since those devices work better with uncompressed data and can reach a better dedupe ratio.
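As a rough illustration of the trade-off, here is a back-of-envelope model. All of the numbers (disk speed, lz4 throughput, the 2x ratio) are illustrative assumptions rather than Veeam benchmarks, and the overlap between compressing and writing is simplified to "the slower stage dominates":
Code:
# Rough back-of-envelope model of backup time with and without in-job
# compression. All numbers are illustrative assumptions, not Veeam figures.

data_gb = 1000        # logical data to back up (assumed)
disk_mb_s = 200       # sequential write speed of the spinning-disk target (assumed)
lz4_mb_s = 2000       # lz4-class compression throughput per proxy core (assumed)
ratio = 2.0           # assumed compression ratio (2x, as in the post)

data_mb = data_gb * 1024

# Without compression: everything is written at disk speed.
t_uncompressed = data_mb / disk_mb_s

# With compression: the CPU compresses the stream, the disk only receives half
# the bytes. The stages overlap, so the slower of the two dominates.
t_compressed = max(data_mb / lz4_mb_s, (data_mb / ratio) / disk_mb_s)

print(f"uncompressed: {t_uncompressed / 60:.0f} min")
print(f"compressed:   {t_compressed / 60:.0f} min")
With these assumed figures the compressed run finishes in roughly half the time, because the disk, not the CPU, remains the bottleneck.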
Luca Dell'Oca
Principal EMEA Cloud Architect @ Veeam Software
@dellock6
https://www.virtualtothecore.com/
vExpert 2011 -> 2022
Veeam VMCE #1
-
- Chief Product Officer
- Posts: 31814
- Liked: 7302 times
- Joined: Jan 01, 2006 1:01 am
- Location: Baar, Switzerland
- Contact:
Re: If you had unlimited storage, would disabling compressio
Luca is almost correct, except for the last line.
Shestakov wrote: Yes, that's why we recommend disabling in-job compression if you use a dedupe repository. But if you are eager to speed up your job, it's better to start with bottleneck analysis. Thanks!
That is not correct. Even with a dedupe repository, our recommendation is to keep the default compression enabled in the job, and use the "decompress before saving" option on the repository instead.
jbarrow.viracoribt wrote: The higher the level of compression, the longer that process takes.
That is correct. A higher compression level will slow down the job. However, the default level is specifically optimized for low CPU usage and the fastest processing, so you essentially get a 2x reduction of the data that needs to be moved around almost "for free", which is why you do want to keep it enabled in all scenarios.
What's New in v7 wrote: Hardware-accelerated compression. A new default compression level with a proprietary algorithm implementation leverages advanced CPU instruction sets (SSE extensions). This reduces backup proxy CPU usage up to 10 times when compared to the previous default compression level.
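For anyone curious how cheap an lz4-class algorithm is on a modern CPU, here is a tiny sketch using the third-party Python lz4 package. Veeam's own implementation is proprietary and SSE-accelerated, so this only illustrates the class of algorithm; throughput will vary with your hardware and with how compressible your data is (a zero-filled buffer is a best case):
Code:
# Quick check of lz4-class compression throughput on this machine.
# Requires the third-party "lz4" Python package (pip install lz4).
import time
import lz4.frame

payload = bytes(64 * 1024 * 1024)   # 64 MiB of highly compressible sample data

start = time.perf_counter()
compressed = lz4.frame.compress(payload)
elapsed = time.perf_counter() - start

mb = len(payload) / (1024 * 1024)
print(f"compressed {mb:.0f} MiB to {len(compressed) / (1024 * 1024):.2f} MiB "
      f"in {elapsed:.3f} s ({mb / elapsed:.0f} MiB/s)")
On typical hardware this reports well over a gigabyte per second on a single core for a compressible buffer like this one, far more than a spinning-disk repository can absorb.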
-
- Veteran
- Posts: 361
- Liked: 109 times
- Joined: Dec 28, 2012 5:20 pm
- Full Name: Guido Meijers
- Contact:
Re: If you had unlimited storage, would disabling compressio
Gostev wrote: Shestakov wrote: Yes, that's why we recommend disabling in-job compression if you use a dedupe repository. But if you are eager to speed up your job, it's better to start with bottleneck analysis. Thanks!
That is not correct. Even with a dedupe repository, our recommendation is to keep the default compression enabled in the job, and use the "decompress before saving" option on the repository instead.
I can confirm that the default compression with "decompress before saving" gives the best performance and savings on MS 2012 R2 dedupe appliances.