Comprehensive data protection for all workloads
JRRW
Enthusiast
Posts: 76
Liked: 45 times
Joined: Dec 10, 2019 3:59 pm
Full Name: Ryan Walker

All Flash Repo - Adjust block size?

Post by JRRW »

While deploying an all-flash repository (custom-built, not a vendor-provided array such as a Pure), I have been trying to dial in performance and came to an interesting theory I'd like to put forth.

Generally speaking, 'local' repositories are configured with large block sizes, upwards of 1 MB. However, SSDs often perform better with smaller writes at deeper queue depths. Larger blocks help HDDs, especially with random I/O; on an SSD it's the opposite - random IOPS are far higher, and smaller blocks are fine there, or even better.

Therefore, would it not be better to set up jobs heading for a flash repository to use smaller blocks?

This came up while doing benchmark testing: the suggested diskspd.exe -c25G -b512K -w100 -Sh -d600 D:\testfile.dat made me realize that single-thread, single-queue writes can perform horribly on parity SSD arrays.
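As a rough sketch of what I mean (the thread count, queue depth, 128K block size and D: path below are just placeholders for my setup, not a recommendation), a sweep like this shows the gap between the single-queue case and deeper queues at smaller blocks:

# Baseline from the suggested command: one thread, shallow queue
.\diskspd.exe -c25G -b512K -w100 -Sh -d600 -t1 -o1 D:\testfile.dat

# Same 512K blocks, but 4 threads with 8 outstanding I/Os each
.\diskspd.exe -c25G -b512K -w100 -Sh -d600 -t4 -o8 D:\testfile.dat

# Smaller blocks at the same deeper queue, sequential and then random
.\diskspd.exe -c25G -b128K -w100 -Sh -d600 -t4 -o8 D:\testfile.dat
.\diskspd.exe -c25G -b128K -w100 -Sh -d600 -t4 -o8 -r D:\testfile.dat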

Thoughts?

Note: With QLC now competing with 10k and even NL-SAS on cost, I think it's naïve to keep pushing 'cheap and deep' when large repositories need so many spindles to handle the random I/O of transformations that NL-SAS becomes less cost effective. I literally priced it out: a 60-drive NL-SAS array came out at basically the same cost as a 24 x 7.68 TB QLC SSD array.
HannesK
Product Manager
Posts: 14322
Liked: 2890 times
Joined: Sep 01, 2014 11:46 am
Full Name: Hannes Kasparick
Location: Austria

Re: All Flash Repo - Adjust block size?

Post by HannesK » 1 person likes this post

Hello,
chances are low that you will get a significant performance gain by changing the block size. As your system is home-grown, the only option I see is to test it with real data.

Normally there are controller settings that allow you to configure the block size of the RAID stripes. 128-256 KB is usually the value we recommend. I have no idea whether your solution can set this value.
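Just as an illustration - the exact syntax depends on your controller. On a Broadcom/LSI controller with StorCLI, for example, the stripe size is chosen when the virtual drive is created, roughly like this (the controller ID, enclosure:slot range and RAID level are placeholders):

storcli64.exe /c0 add vd type=raid6 drives=252:0-23 strip=256

On most controllers the stripe size cannot be changed afterwards without recreating the virtual drive, so this is something to decide before loading data.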

There are also small 4KB metadata updates. So it's not only "large" blocks.

So from my point of view: nobody except you can make a serious estimate of performance tweaks on your custom system. We are not talking about a 2x gain from changing block sizes... maybe 10-20% in reality. But I don't remember anyone who has ever had the patience to test it with real data (diskspd is nice for a quick check, but it does not reflect the real workload).

Best regards,
Hannes
