Discussions specific to the VMware vSphere hypervisor
Nils
Influencer
Posts: 11
Liked: 2 times
Joined: Jun 18, 2013 8:12 am
Full Name: Nils Petersen
Contact:

SAN read buffer size

Post by Nils » Dec 11, 2015 2:22 pm

When testing our SMB storage with large I/Os I found that vDisk read performance could be increased quite a bit (1 Gbit/s -> 3 Gbit/s with 32+ MB buffers, measured simply with dd inside a guest).
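A minimal sketch of that in-guest measurement (GNU dd on Linux assumed; DEV is a placeholder that defaults to /dev/zero only so the commands run anywhere - point it at the actual vDisk block device and add iflag=direct there so the page cache doesn't mask the buffer-size effect):

```shell
# Sketch of the in-guest read test described above (GNU dd assumed).
# DEV is a placeholder: set it to the vDisk's block device (e.g. /dev/sdb)
# and add iflag=direct on a real device to bypass the page cache.
DEV=${DEV:-/dev/zero}

# 32 MiB reads, as in the run that reached ~3 Gbit/s:
dd if="$DEV" of=/dev/null bs=32M count=32 2>&1 | tail -n 1

# smaller reads for comparison:
dd if="$DEV" of=/dev/null bs=64K count=16384 2>&1 | tail -n 1
```

The `tail -n 1` just keeps dd's throughput summary line for easy comparison between the two block sizes.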

In order to decrease our backup window I was looking into increasing the SAN read buffer for the backup proxy, and all I could find was an ancient thread. The registry value VddkPreReadBufferSize mentioned there doesn't seem to do anything.

Watching VeeamAgent.exe with Process Monitor revealed the value VmfsSanPreReadBufSize, which does change "VMFS SAN PREREAD BUFFER SIZE" according to the log. Is this what I'm looking for? Is there any documentation for these registry options?
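For reference, setting such a value would look something like the following .reg fragment (a sketch, not from documentation: the key path is the usual Veeam Backup & Replication key on the proxy/backup server, and I'm assuming the DWORD is interpreted as a byte count - 0x02000000 = 32 MiB):

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Veeam\Veeam Backup and Replication]
"VMfsSanPreReadBufSize"=dword:02000000
```

The Veeam services would presumably need a restart to pick the value up; again, both the value's semantics and its unit are my assumptions here, not documented behavior.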

(In case you're interested: we're using a Dell MD3220i iSCSI array with 4x Gigabit to each host.)

Nils

Vitaliy S.
Product Manager
Posts: 23625
Liked: 1708 times
Joined: Mar 30, 2009 9:13 am
Full Name: Vitaliy Safarov
Contact:

Re: SAN read buffer size

Post by Vitaliy S. » Dec 11, 2015 5:46 pm

Nils, if you want to decrease your backup window, can you please tell us your current bottleneck stats for the backup job?

Nils
Influencer
Posts: 11
Liked: 2 times
Joined: Jun 18, 2013 8:12 am
Full Name: Nils Petersen
Contact:

Re: SAN read buffer size

Post by Nils » Dec 15, 2015 2:06 pm

In order to get a meaningful figure I deactivated CBT and ran an incremental job (VMfsSanPreReadBufSize = 32 MiB): 3.8 TB read, 80 GB written. Load: Source 98% > Proxy 50% > Network 16% > Target 0%.

Before I improved the setup, the proxy used to be the bottleneck (the backup server's Nehalem quad-core). I'm now using a virtual proxy with 8 K10 cores and 4x 1 Gb SAN access (the backup server has a single 1 Gb link). With an active full backup the new setup promptly overwhelmed the 1 Gb downlink to the repository, but that may change in a few months. Generally, I'd like to know how far our scenario can scale, and I wondered why these parameters are documented nowhere.

I'll repeat with the default VMfsSanPreReadBufSize (4 MiB) tonight and post the result.

rreed
Expert
Posts: 354
Liked: 72 times
Joined: Jun 30, 2015 6:06 pm
Contact:

Re: SAN read buffer size

Post by rreed » Dec 15, 2015 3:52 pm

Nils, out of curiosity, what is your 1Gb switch that you overwhelmed please? Were you killing the output buffers?
VMware 6
Veeam B&R v9
Dell DR4100's
EMC DD2200's
EMC DD620's
Dell TL2000 via PE430 (SAS)

Nils
Influencer
Posts: 11
Liked: 2 times
Joined: Jun 18, 2013 8:12 am
Full Name: Nils Petersen
Contact:

Re: SAN read buffer size

Post by Nils » Dec 15, 2015 4:15 pm

The switch isn't the problem - if you read at 4 Gbit/s and funnel the output through 1 Gbit/s, the load is bound to overrun any amount of buffering you can have. I used "overwhelmed" in the sense that our PCM threw warnings for exceeding its 98% utilization threshold. The connection itself runs fine and without problems. Note that it's not even an iSCSI connection but the SAN-proxy-to-repository link.

The backup server/repository is located ~2 km from the SAN and currently we've only got a single 1000BASE-LX link running for everything. (By the way, VoIP w/ QoS on its own VLAN worked 100% fine while running at 99.5%+ during the backup; flow control is deliberately turned off on the link.)
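The funnel is easy to put in numbers (a back-of-envelope sketch using the figures from this thread - 3.8 TB read per job, ~4 Gbit/s on the proxy's SAN side, 1 Gbit/s on the repository link; compression and protocol overhead ignored):

```shell
# Back-of-envelope: how long 3.8 TB takes at each side of the funnel.
# 4 Gbit/s = 500e6 bytes/s (SAN side), 1 Gbit/s = 125e6 bytes/s (downlink).
awk 'BEGIN {
  bytes = 3.8e12
  printf "read at 4 Gbit/s: %.1f h\n", bytes / 500e6 / 3600   # prints 2.1 h
  printf "push at 1 Gbit/s: %.1f h\n", bytes / 125e6 / 3600   # prints 8.4 h
}'
```

So even in the best case the downlink needs roughly four times as long as the SAN side can deliver, which is exactly the sustained 99%+ utilization the PCM complains about.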

rreed
Expert
Posts: 354
Liked: 72 times
Joined: Jun 30, 2015 6:06 pm
Contact:

Re: SAN read buffer size

Post by rreed » Dec 15, 2015 4:35 pm

Exactly, and you should try funneling 10Gb through a single 1Gb connection. :wink: Very nicely done, Nils.

Nils
Influencer
Posts: 11
Liked: 2 times
Joined: Jun 18, 2013 8:12 am
Full Name: Nils Petersen
Contact:

Re: SAN read buffer size

Post by Nils » Dec 16, 2015 3:02 pm

You've got to take what you get... :wink:

Running the incremental job with CBT turned off and the default 4 MiB preread buffer: Load: Source 98% > Proxy 51% > Network 15% > Target 0% (3.8 TB read, 60.6 GB written). Apparently the preread buffer setting has no impact on actual speed, so I'll stick to the default size.
