by gtaylor08 » Sat Mar 24, 2018 6:50 pm
I am on a quest to speed up our backup times without changing our backup targets (we just got the Synology targets last year). Below is a summary of the work so far. I definitely need more eyes on this, as I am killing myself and my wife has practically become a widow given the amount of time I am spending on it... feel free to review and provide feedback.
Performance results thus far (Morganton is our control site... no changes here):
Some observations based on the latest backup data (block size not considered at this point... more on that below):
•The largest improvement to date was changing from Reverse Incremental to Forward Incremental w/ weekly Synthetic fulls (we knew this would happen)
•Overall, throughput and processing rate do not appear to be affected much by RAID level, LUN provisioning (thin vs. thick), bond vs. MPIO, or jumbo frames on the Synology... however...
•Directly comparing Morganton (MOR1) to Chicago (CHI1), MPIO + jumbo frames + RAID 10 + thin provisioning appears to perform better (processing rate and throughput) most of the time, though there are some anomalies...
•Directly comparing Morganton (MOR1) to Forest Grove (VIN2), jumbo frames appear to improve both processing rate and throughput. Note: these facilities have very similar data types.
•Directly comparing Morganton (MOR1) to Perrysburg (PER1), jumbo frames appear to improve processing rate and the thick-provisioned volume increases throughput, but overall throughput still lags (possibly because we are sharing the iSCSI switch with the EqualLogic here)
•Directly comparing Morganton (MOR1) to Berlin (BER1), the thick-provisioned volume appears to increase both processing rate and throughput (the anomaly on 3-21 was caused by back-end storage changes).
•Based on the observations thus far, the biggest improvement aside from the backup type change is jumbo frames (a quick end-to-end verification sketch follows this list).
•Vineland (VIN1) is excluded from the above observations because the local configuration is dramatically different than at all other sites; however, below is where we start to account for block sizes... keep reading...
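Since jumbo frames keep showing up as the win, here is a minimal sketch for verifying them end-to-end from one of the Windows proxies (plain Python wrapping the stock Windows ping; the target IP is a placeholder, not one of our real addresses). The 8972-byte payload is a 9000-byte MTU minus 28 bytes of IP/ICMP headers, and -f sets Don't Fragment, so the ping only succeeds if every hop passes jumbo frames:

```python
# Sketch: verify jumbo frames end-to-end by sending a Don't-Fragment
# ping sized to fill a 9000-byte MTU frame (Windows ping flags).
# 9000 MTU - 20 (IP header) - 8 (ICMP header) = 8972-byte payload.
import subprocess

TARGET = "192.168.1.50"  # placeholder: the Synology iSCSI interface IP

result = subprocess.run(
    ["ping", "-f", "-l", "8972", "-n", "2", TARGET],
    capture_output=True, text=True,
)
if "needs to be fragmented" in result.stdout:
    print("Jumbo frames NOT working end-to-end (some hop has a smaller MTU)")
elif result.returncode == 0:
    print("Jumbo frames appear to work end-to-end")
else:
    print("Ping failed:\n" + result.stdout.strip())
```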
VMware:
VMFS-6 uses a 1 MB file system block size, but the actual allocation behind a VM depends on the disk type (FYI, we mainly use thin-provisioned disks):
From the VMware white paper:
•VMFS-6 introduces two new block sizes, referred to as small file block (SFB) and large file block (LFB).
•The SFB size can range from 64 KB to 1 MB for future use cases.
•VMFS-6 in vSphere 6.5 utilizes an SFB size of 1 MB only; the LFB size is set to 512 MB.
•Thin disks created on VMFS-6 are initially backed with SFBs.
•Thick disks created on VMFS-6 are allocated LFBs as much as possible.
•For the portion of the thick disk which does not fit into an LFB, SFBs are allocated (a worked sketch follows this list).
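To make the SFB/LFB split concrete, here is a small worked example using the block sizes quoted above (the 100 GiB disk size is just an illustration, not one of our VMs):

```python
# Sketch: how VMFS-6 carves a thick disk into LFBs and SFBs, using the
# sizes from the white paper above (SFB = 1 MiB, LFB = 512 MiB).
SFB = 1 * 1024**2    # small file block: 1 MiB
LFB = 512 * 1024**2  # large file block: 512 MiB

def vmfs6_thick_allocation(disk_bytes):
    """Return (lfb_count, sfb_count) for a thick disk of disk_bytes."""
    lfbs = disk_bytes // LFB             # as many LFBs as possible
    remainder = disk_bytes - lfbs * LFB  # leftover that can't fill an LFB
    sfbs = -(-remainder // SFB)          # remainder is backed by SFBs (round up)
    return lfbs, sfbs

# Example: a hypothetical 100 GiB thick disk divides evenly into LFBs.
print(vmfs6_thick_allocation(100 * 1024**3))  # -> (200, 0)
# A thin disk, by contrast, starts out backed entirely by 1 MiB SFBs.
```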
Production Storage (currently EqualLogic):
•Default RAID stripe size is 64 KB (We are using this)
•Default sector size is 512 B (we are using this; a quick alignment sketch follows)
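A quick way to sanity-check partition alignment against that 64 KB stripe (a minimal sketch; the offsets are examples, not values read from our arrays):

```python
# Sketch: check that a partition/LUN starting offset lands on a 64 KB
# stripe boundary, so NTFS clusters don't straddle RAID stripes.
STRIPE = 64 * 1024  # 64 KB stripe size (EqualLogic default, per above)
SECTOR = 512        # 512 B sector size (EqualLogic default, per above)

def is_stripe_aligned(offset_bytes):
    return offset_bytes % STRIPE == 0

print(is_stripe_aligned(1 * 1024**2))  # True: modern 1 MiB partition offset
print(is_stripe_aligned(63 * SECTOR))  # False: legacy 63-sector offset
```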
Backup Targets and Proxies:
Note: All backup targets are Synology arrays connected via iSCSI (in host) to a Windows VM, except at HQ where we are using a physical proxy, still connecting to the Synology via iSCSI.
So the 30-day Synology array has a fresh volume on it... I have not touched the advanced LUN settings yet because you can only set them once...
I believe we settled on 64 KB as optimal for backups, but when I formatted inside of Windows I didn't specify the allocation unit size... the NTFS default is 4 KB (4096 bytes)...
VIN1_30_Days (RS815RP+): Windows allocation unit = 4 KB (4096 B); Synology LUN at default, currently 8K (options range from 4K to 64K)
USVIN1-DR (RS3617xs+ with 1 RX1217rp expansion unit, 24 disks total across 2 RAID arrays): Windows allocation unit = 32K; Synology LUN at default, currently 4K (a cluster-per-block sketch follows)
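Since the jobs write 1024 KB Veeam data blocks to these NTFS volumes (see Storage Optimization below), here is a rough sketch of how many NTFS clusters each Veeam block spans at the allocation unit sizes in play (block sizes are pre-compression, so real writes are smaller, but the granularity relationship holds):

```python
# Sketch: NTFS clusters touched per Veeam data block for each candidate
# allocation unit size. Larger clusters mean fewer allocations/extents
# for NTFS to track on large sequential backup files.
VEEAM_BLOCK = 1024 * 1024  # local target: 1024 KB data blocks

for au_kb in (4, 8, 16, 32, 64):  # NTFS allocation unit sizes in KB
    clusters = VEEAM_BLOCK // (au_kb * 1024)
    print(f"{au_kb:>2} KB allocation unit -> {clusters:>3} clusters per block")
# 4 KB -> 256 clusters per block ... 64 KB -> 16 clusters per block
```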
NOTE: THE ADVANCED LUN SETTINGS BELOW ARE CURRENTLY HOW ALL OTHER SYNOLOGY ARRAYS ARE SET
Veeam:
Storage settings for ALL backup jobs: all data reduction options checked, compression set to Optimal, storage optimization set to Local target
Backup Types: Forward incremental w/ Synthetics on Sundays
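For anyone wondering why the switch away from Reverse Incremental helped so much: as commonly described in Veeam's best-practice material, reverse incremental costs roughly three repository I/Os per changed block, while forward incremental costs one. A toy sketch of that arithmetic (the 50 GB nightly change rate is a made-up number):

```python
# Sketch: repository I/O per run. Reverse incremental does ~3 I/Os per
# changed block (read old block from the full, write new block into the
# full, write old block to a rollback file); forward incremental just
# appends to the new increment file. Synthetic fulls defer the heavy
# random I/O to Sundays instead of paying it on every run.
changed_gb = 50  # hypothetical nightly change rate
for mode, ios_per_block in (("reverse incremental", 3),
                            ("forward incremental", 1)):
    print(f"{mode}: ~{changed_gb * ios_per_block} GB of repository I/O")
```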
Storage Optimization options:
•Local Target (16 TB + backup files): 4096 KB data blocks
•Local Target: 1024 KB data blocks <---- we are using this; see below
•LAN Target: 512 KB data blocks
•WAN Target: 256 KB data blocks
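To see what this choice means in practice, here is a rough sketch of how many data blocks each setting cuts a given source into (the 500 GB VM is a made-up example; blocks are compressed after being cut at this size). Fewer, larger blocks mean less metadata and faster sequential writes; smaller blocks mean finer-grained incrementals:

```python
# Sketch: Veeam data blocks produced per storage optimization setting
# for a given source size (sizes are pre-compression).
SETTINGS_KB = {
    "Local (16 TB+)": 4096,
    "Local target":   1024,  # <- what we are using
    "LAN target":      512,
    "WAN target":      256,
}

source_kb = 500 * 1024**2  # hypothetical 500 GB VM
for name, block_kb in SETTINGS_KB.items():
    print(f"{name:>15}: {source_kb // block_kb:,} blocks of {block_kb} KB")
```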
Note: Backup Copy jobs use the local backups as a source, so I believe they inherit the same block settings, with the exception of compression level (set to Optimal; we may want to change this to enhance Silver Peak de-dupe...)