-
- Expert
- Posts: 203
- Liked: 34 times
- Joined: Jul 26, 2012 8:04 pm
- Full Name: Erik Kisner
- Contact:
Block Size Question
Hi there,
As I'm doing block alignments on my VMs (not true block alignment; I'm repartitioning everything to use 32k blocks per our storage vendor's recommendation), it occurs to me that I don't know how Veeam pulls data from them. We use Hot Add at the moment, but I'm contemplating putting Veeam on a physical box and plugging it into the storage array for Direct SAN.
So, for both, a question: how does Veeam determine what block size it will use to pull data from the disk? It may be a stupid question, as it could well be information visible from either the LUN or the guest file system; I simply don't know.
And a follow-up question: if it doesn't determine it via the guest file system or the storage array, is it possible to tweak it to match what I know the LUN block size to be?
-
- Veeam Software
- Posts: 21138
- Liked: 2141 times
- Joined: Jul 11, 2011 10:22 am
- Full Name: Alexander Fogelson
- Contact:
Re: Block Size Question
Erik, the block size used to process VM data can be specified in the job settings.
-
- Expert
- Posts: 203
- Liked: 34 times
- Joined: Jul 26, 2012 8:04 pm
- Full Name: Erik Kisner
- Contact:
Re: Block Size Question
Thanks, but I was referring to pulling from the source, not writing to the destination.
For example, the VM vm01, which I want to back up, has one VMDK. That VMDK is stored on a LUN with a 32k block size; we'll say datastore_vm01 houses vm01.vmdk.
Veeam hot-adds the VMFS datastore and the VMDK to the proxy.
How does it know what block size to do its I/O with?
-
- Veeam Software
- Posts: 21138
- Liked: 2141 times
- Joined: Jul 11, 2011 10:22 am
- Full Name: Alexander Fogelson
- Contact:
Re: Block Size Question
That's exactly the block size that Veeam B&R uses to read data from the source datastore.
-
- Expert
- Posts: 203
- Liked: 34 times
- Joined: Jul 26, 2012 8:04 pm
- Full Name: Erik Kisner
- Contact:
Re: Block Size Question
Ah. Well in that case, is there any way to change it to a more applicable value? Particularly, 32k to 64k?
After a support call with our storage vendor, we identified that the array, which operates in 32k blocks (or 64k blocks for things like SQL data), gets a bit overwhelmed when a guest makes a lot of requests at small block sizes (specifically the default Windows block size of 4k). Since our jobs are all set up using Local target, this is 4x worse than even a 4k block size.
I recognize that dedup is affected by this, as it's harder to dedup larger blocks, but performance is more important to us than a bit of wasted disk space.
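The read amplification described above can be sketched with a little arithmetic. This is a hypothetical illustration under stated assumptions (each request pulls at least one full array stripe, and the array does not coalesce adjacent requests); nothing here is Veeam- or array-specific:

```python
ARRAY_BLOCK = 32 * 1024  # assumed array stripe size in bytes (32k)

def array_bytes_read(io_size: int, total_bytes: int) -> int:
    """Bytes the array must fetch to satisfy `total_bytes` of guest I/O
    issued as `io_size` requests, assuming every request causes at least
    one full array block to be read and no requests are coalesced."""
    requests = total_bytes // io_size
    blocks_per_request = -(-io_size // ARRAY_BLOCK)  # ceiling division
    return requests * max(1, blocks_per_request) * ARRAY_BLOCK

useful = 1024 * 1024  # 1 MiB of data the guest actually wants
print(array_bytes_read(4 * 1024, useful) / useful)   # 4k requests: 8x amplification
print(array_bytes_read(32 * 1024, useful) / useful)  # 32k requests: 1x, aligned
```

At 4k requests against a 32k stripe, the array reads eight times the useful data in this worst-case model, which matches the "overwhelmed array" complaint above.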
-
- VeeaMVP
- Posts: 6165
- Liked: 1971 times
- Joined: Jul 26, 2009 3:39 pm
- Full Name: Luca Dell'Oca
- Location: Varese, Italy
- Contact:
Re: Block Size Question
The underlying VMFS always creates blocks at 1MB each (8k sub-blocks are used for files smaller than 8k). The 32k block is the amount of data retrieved per single I/O, while "Local target" set at 1MB means final blocks are compared at this size for the dedupe and compression activities.
But if the storage is striped at 32k and Veeam retrieves data at 32k, I'd say the alignment between the two is correct. Or I'm missing something.
Luca
Luca Dell'Oca
Principal EMEA Cloud Architect @ Veeam Software
@dellock6
https://www.virtualtothecore.com/
vExpert 2011 -> 2022
Veeam VMCE #1
-
- Expert
- Posts: 203
- Liked: 34 times
- Joined: Jul 26, 2012 8:04 pm
- Full Name: Erik Kisner
- Contact:
Re: Block Size Question
I could be wrong, but I believe I read a document saying that the VMFS block size is more or less irrelevant; it will not pull a 1MB block for each guest I/O, but rather forward the guest I/O to the storage array as-is. A 32k I/O from a Windows guest, for example, will hit the storage array as a 32k I/O.
My question, and how I've interpreted the answers so far, is less about alignment (my fault for using that word) and more about matching block sizes.
It sounds like it reads the data according to the previously mentioned job settings. With a max block size of 8k, reads will not be misaligned (it's still a 1:1 read operation, with no single guest I/O requiring more than one storage I/O), yet the storage array will still retrieve 4x the size of each requested block.
This wouldn't be a huge issue with a traditional spindle-based storage array, but ours is a flash hybrid array, which makes it a bit of an issue. It caches data based on some algorithm. In the case of an 8k read from the B&R server, only 25% of what it caches is useful data; the other 75% of each block is junk.
If Veeam does 1MB of I/O at 8k, it will need 128 I/Os.
The storage array, conversely, does 4MB of I/O (128 I/Os × 32k blocks), caching 4MB into flash/memory so that Veeam can get its 1MB.
To make the numbers look scarier, consider Veeam doing 1TB of I/O at 8k; the example holds true at any size.
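For what it's worth, the arithmetic in the post above checks out. A minimal sketch using only the figures quoted there (the 8k proxy read size is the poster's assumption, not a confirmed Veeam value):

```python
VEEAM_IO = 8 * 1024      # assumed proxy read request size (8k)
ARRAY_BLOCK = 32 * 1024  # array block size (32k)
USEFUL = 1024 * 1024     # 1 MiB of data Veeam actually needs

ios_needed = USEFUL // VEEAM_IO        # number of read requests issued
array_read = ios_needed * ARRAY_BLOCK  # bytes the array fetches and caches
print(ios_needed)                      # 128 I/Os
print(array_read // (1024 * 1024))     # 4 MiB read to deliver 1 MiB
```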
-
- VeeaMVP
- Posts: 6165
- Liked: 1971 times
- Joined: Jul 26, 2009 3:39 pm
- Full Name: Luca Dell'Oca
- Location: Varese, Italy
- Contact:
Re: Block Size Question
Probably I need to re-quote this:
foggy wrote: That's exactly the block size that Veeam B&R uses to read data from the source datastore.
So, as I said, Veeam requests blocks at 32k. If the storage reads blocks at 32k, it's the same size, and there's no read amplification.
The 1MB block size of the Local target option is the size of the block Veeam uses for deduplication and compression, but that happens after the data is retrieved from the production storage.
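For reference, the job's storage optimization setting maps to a source data block size roughly as follows. These values are assumptions drawn from publicly documented Veeam B&R defaults of this era, not something confirmed in this thread; check the documentation for your specific version:

```python
# Commonly cited per-setting data block sizes (pre-compression).
# Assumed values from public Veeam B&R documentation; verify per version.
BLOCK_SIZES = {
    "WAN target": 256 * 1024,                    # 256 KB
    "LAN target": 512 * 1024,                    # 512 KB
    "Local target": 1024 * 1024,                 # 1 MB (the setting discussed above)
    "Local target (16 TB+)": 4 * 1024 * 1024,    # 4 MB in v9+; 8 MB in older versions
}

for name, size in BLOCK_SIZES.items():
    print(f"{name}: {size // 1024} KB")
```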
Luca Dell'Oca
Principal EMEA Cloud Architect @ Veeam Software
@dellock6
https://www.virtualtothecore.com/
vExpert 2011 -> 2022
Veeam VMCE #1
-
- Expert
- Posts: 203
- Liked: 34 times
- Joined: Jul 26, 2012 8:04 pm
- Full Name: Erik Kisner
- Contact:
Re: Block Size Question
Oh! I misunderstood then.
I was under the impression that if the job storage optimization settings were set to Local target (16 TB + backup files) it would read and de-dup in 8k chunks.
If Veeam is indeed reading at LUN block size, then that answers my question. Thank you Luca.
-
- Technology Partner
- Posts: 25
- Liked: 2 times
- Joined: May 11, 2015 11:51 am
- Full Name: Patrick Huber
- Contact:
[MERGED] VEEAM Performance during Read I/O
Hello all,
does anybody know how much read I/O Veeam produces on the source (production) storage systems?
I've only found information about repository performance and sizing; there's a nice guide from Luca on that...
But what about read I/O and performance?
Which mechanism decides what block sizes are used to perform a read operation from the production storage?
Is it the same block size I set up in the backup jobs for Local, LAN, or WAN target?
Additionally, how much I/O can Veeam pull out of the production storage during read operations? I know I can set Storage Latency Control, but that only refers to latency...
Maybe this depends on the backup mode (Direct SAN, LAN, or Hot Add)?
Simple example:
Say I set up a backup job with forward incremental, and we have 10 VMs with 2 VMDKs each,
at 100 GB per VMDK. So there is a source capacity of approximately 2,000 GB (2 TB).
I'd be happy for any information on this.
Thank you, folks.
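The sizing example above can be roughed out numerically. This assumes the 1MB "Local target" source block discussed earlier in the thread; actual read sizes depend on the job settings, transport mode, and version, so treat it as an illustration only:

```python
VMS, VMDKS_PER_VM = 10, 2        # figures from the example above
VMDK_BYTES = 100 * 1024**3       # 100 GiB per VMDK
BLOCK = 1024 * 1024              # assumed 1 MiB "Local target" read block

total = VMS * VMDKS_PER_VM * VMDK_BYTES  # total source data for a full backup
full_backup_reads = total // BLOCK       # read requests to walk it all once
print(total // 1024**3)                  # 2000 GiB of source capacity
print(full_backup_reads)                 # 2,048,000 read requests at 1 MiB each
```

Incremental runs would read far less, since CBT limits the reads to changed blocks only.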
VEEAM Enthusiast
Veeam certified Architect
-
- Veeam Software
- Posts: 21138
- Liked: 2141 times
- Joined: Jul 11, 2011 10:22 am
- Full Name: Alexander Fogelson
- Contact:
Re: Block Size Question
Hi Patrick, you can get some details from the thread above.
-
- Service Provider
- Posts: 248
- Liked: 28 times
- Joined: Dec 14, 2015 8:20 pm
- Full Name: Mehmet Istanbullu
- Location: Türkiye
- Contact:
Re: Block Size Question
Hi everyone,
I want to ask about write I/O size for target storage sizing. I created a Disk Magic report, but the write I/O size changes the throughput a lot.
For example, with 32k write I/O selected, storage throughput is 300 MiB;
with 64k write I/O selected, storage throughput is 700 MiB.
If Veeam accesses this storage directly through the SAN (ReFS or NTFS, 64k formatted), what is the exact I/O size?
Storage optimization option: Local target.
VMCA v12
-
- Chief Product Officer
- Posts: 31798
- Liked: 7297 times
- Joined: Jan 01, 2006 1:01 am
- Location: Baar, Switzerland
- Contact:
Re: Block Size Question
There's no exact I/O size, as Veeam does not work with the storage directly. Instead, all I/O goes through the OS cache, which does additional I/O grouping in order to issue bigger writes, as that helps performance. But in the worst-case scenario of a single block being updated in a backup file, the I/O size for the "Local target" option will be whatever 1MB of source data compresses to (around 512KB on average, as typical compression is 2x).
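That worst case can be sketched as follows. The 2x ratio is the "typical" average mentioned above; actual compression varies by data, so the function parameter is an assumption:

```python
SOURCE_BLOCK = 1024 * 1024  # "Local target" source block size (1 MB)

def worst_case_write(compression_ratio: float) -> int:
    """Approximate on-disk write size when a single source block changes
    and is compressed before being written to the backup file."""
    return int(SOURCE_BLOCK / compression_ratio)

print(worst_case_write(2.0) // 1024)  # ~512 KB at the typical 2x compression
print(worst_case_write(1.0) // 1024)  # 1024 KB if the data is incompressible
```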
-
- Service Provider
- Posts: 248
- Liked: 28 times
- Joined: Dec 14, 2015 8:20 pm
- Full Name: Mehmet Istanbullu
- Location: Türkiye
- Contact:
Re: Block Size Question
Thanks Anton.
Actually, a bigger I/O size is great. I was afraid I/O sizes were being broken down by the OS to 128k or 64k; backup throughput performance would be painful in that scenario. A worst case of 512k is great.
VMCA v12