Comprehensive data protection for all workloads
gtaylor08
Novice

Backup Optimization: Storage, block sizes, RAID, backups...

Post by gtaylor08 »

I am on a quest to speed up our backup times without changing our backup targets (we just got the Synology targets last year). Below is a summary thus far. I definitely need more eyes on this, as I am killing myself and my wife has practically become a widow with the amount of time I am spending on it... feel free to review and provide feedback.

Performance results thus far (Morganton is our control site; no changes there):

Some observations based on the latest backup data (block size not considered at this point... more on that below):
•The largest improvement to date was changing from reverse incremental to forward incremental w/ weekly synthetics (we knew this would happen).
•Overall throughput and processing rate do not appear to be affected much by RAID level, LUN provisioning (thin vs. thick), bond vs. MPIO, or JumboFrame on the Synology... however...
•Directly comparing Morganton (MOR1) to Chicago (CHI1), MPIO + JumboFrame + RAID 10 + thin appears to perform better (processing rate and throughput) most of the time, though there are some anomalies.
•Directly comparing Morganton (MOR1) to Forest Grove (VIN2), JumboFrame appears to improve processing rate and throughput. Note: these facilities have very similar data types.
•Directly comparing Morganton (MOR1) to Perrysburg (PER1), JumboFrame appears to improve processing rate and a thick-provisioned volume increases throughput, yet throughput is still not quite as good (possibly because we are sharing the iSCSI switch with the EqualLogic here).
•Directly comparing Morganton (MOR1) to Berlin (BER1), a thick-provisioned volume appears to increase both processing rate and throughput (the anomaly on 3-21 was caused by back-end storage changes).
•Based on the observations thus far, the biggest improvement aside from the backup type change is JumboFrame (a quick end-to-end check is sketched right after this list).
•Vineland (VIN1) is excluded from the above observations because its local configuration is dramatically different from all other sites; however, this is where we start to account for block sizes... keep reading.
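Side note on JumboFrame: before trusting any throughput comparison, it is worth confirming that jumbo frames actually pass end-to-end (proxy NIC, switch, and Synology all at MTU 9000), since a single mismatched hop silently falls back to fragmentation. A minimal sketch of the check, assuming a Windows proxy and a hypothetical Synology address (9000-byte MTU minus 28 bytes of IP/ICMP headers = 8972-byte payload):

Code: Select all

import subprocess

# Hypothetical Synology iSCSI address -- substitute your own target.
TARGET = "192.168.50.10"

# Windows ping: -f sets Don't Fragment, -l sets payload bytes, -n is count.
# 8972 = 9000-byte MTU - 20 (IP header) - 8 (ICMP header).
result = subprocess.run(
    ["ping", "-f", "-l", "8972", "-n", "2", TARGET],
    capture_output=True, text=True,
)
if "needs to be fragmented" in result.stdout:
    print("Jumbo frames do NOT pass end-to-end; check NIC/switch/NAS MTU.")
elif "TTL=" in result.stdout:
    print("Jumbo frames appear to pass end-to-end.")
else:
    print(result.stdout)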

VMware:

VMFS-6 uses a 1 MB file system block size, but the actual block allocation for a VM depends on provisioning (FYI, we mainly use thin-provisioned disks):

From the VMware white paper:

•VMFS-6 introduces two new block sizes, referred to as small file block (SFB) and large file block (LFB).
•While the SFB size can range from 64 KB to 1 MB for future use cases, VMFS-6 in vSphere 6.5 uses an SFB size of 1 MB only. The LFB size is set to 512 MB.
•Thin disks created on VMFS-6 are initially backed with SFBs.
•Thick disks created on VMFS-6 are allocated LFBs as much as possible.
•For the portion of the thick disk which does not fit into an LFB, SFBs are allocated.

Production Storage (currently EqualLogic):
•Default RAID stripe size is 64 KB (we are using this)
•Default sector size is 512 B (we are using this)

Backup Targets and Proxies:

Note: All backup targets are Synology arrays connected via iSCSI (in host) to a Windows VM, except at HQ where we are using a physical proxy that still connects to the Synology via iSCSI.
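For the bond vs. MPIO comparisons, it can also be worth confirming which load-balance policy Windows is actually applying to the iSCSI disks; a minimal sketch using the built-in mpclaim tool (present once the Windows MPIO feature is installed):

Code: Select all

import subprocess

# mpclaim ships with the Windows MPIO feature; "-s -d" lists MPIO disks
# and their load-balance policy (Round Robin, Fail Over Only, etc.).
out = subprocess.run(["mpclaim", "-s", "-d"], capture_output=True, text=True)
print(out.stdout or out.stderr)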

So the 30-day Synology array has a fresh volume on it... I have not touched the advanced LUN settings yet, because you can only set them once...

I believe we settled on 64 KB as the optimal allocation unit for backups, but when I formatted inside of Windows I didn't specify a block size... the NTFS default is 4 KB (4096 bytes)... (a quick way to check what existing volumes are using is sketched below).

VIN1_30_Days (RS815RP+): Windows allocation unit = 4096 bytes; Synology at default, currently 8K; options range from 4K to 64K

USVIN1-DR (RS3617xs+ with 1 RX1217rp expansion unit, 24 disks total across 2 RAID arrays): Windows allocation unit = 32K; Synology at default, currently 4K
Note: the advanced LUN settings below are how all other Synology arrays are currently set.
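For anyone wanting to verify what a repository volume was actually formatted with, NTFS reports the allocation unit as "Bytes Per Cluster"; a minimal sketch (the drive letter is a placeholder):

Code: Select all

import subprocess

# Placeholder drive letter for the Synology-backed repository volume.
VOLUME = "E:"

# fsutil typically needs an elevated prompt.
out = subprocess.run(
    ["fsutil", "fsinfo", "ntfsinfo", VOLUME],
    capture_output=True, text=True,
).stdout

for line in out.splitlines():
    if "Bytes Per Cluster" in line:
        # e.g. "Bytes Per Cluster : 4096" = 4 KB allocation unit
        print(line.strip())

Reformatting to 64 KB (format E: /FS:NTFS /A:64K) is destructive, so it only makes sense before seeding a fresh volume.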

Veeam:

Storage settings for ALL backup jobs: all data reduction options checked, compression set to Optimal, storage optimization set to Local target.

Backup Types: Forward incremental w/ Synthetics on Sundays

Storage Optimization:

Options:
•Local Target (16 TB + backup files): 4096 KB data blocks
•Local Target: 1024 KB data blocks <---- we are using this; see the sketch below
•LAN Target: 512 KB data blocks
•WAN Target: 256 KB data blocks
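For context on what the block size choice trades off (my own back-of-the-envelope, not from Veeam documentation): data is processed in blocks of the selected size, so small scattered guest writes tend to get rounded up to whole blocks in the incremental. A rough sketch with made-up numbers:

Code: Select all

import random

# Back-of-the-envelope: data an incremental must process (pre-compression)
# when scattered 4 KB guest writes land on a 500 GB disk, for each Veeam
# storage-optimization block size. All figures are illustrative only.
DISK_BYTES = 500 * 1024**3
WRITES = 20_000  # hypothetical count of random 4 KB guest writes

for label, block in [("Local (16 TB+)", 4096 * 1024),
                     ("Local",          1024 * 1024),
                     ("LAN",             512 * 1024),
                     ("WAN",             256 * 1024)]:
    total_blocks = DISK_BYTES // block
    # Each write dirties the whole block containing it; count distinct blocks.
    dirtied = len({random.randrange(total_blocks) for _ in range(WRITES)})
    print(f"{label:>15}: ~{dirtied * block / 1024**3:5.2f} GB in the incremental")

Larger blocks mean less metadata and faster sequential processing, at the cost of fatter incrementals; that is roughly the trade behind picking Local target (1024 KB) for direct-attached repositories.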

Note: Backup copy jobs use the local backups as their source, so I believe they inherit the same block size settings as the jobs above, with the exception of compression level (set to Optimal; may want to change this to enhance Silver Peak de-dupe...).
foggy
Veeam Software

Re: Backup Optimization: Storage, block sizes, RAID, backups

Post by foggy » 1 person likes this post

Hi Greg, in case you're after improving your jobs' performance, the first place to look is the job bottleneck stats.
gtaylor08
Novice

Re: Backup Optimization: Storage, block sizes, RAID, backups

Post by gtaylor08 »

Thanks!
tsightler
VP, Product Management

Re: Backup Optimization: Storage, block sizes, RAID, backups

Post by tsightler »

You posted a lot of information but, unless I'm missing it, I didn't see anything mentioned regarding what throughput you are seeing today or which operations are slow. Also, nothing about your use of per-VM chains vs. full job chains, or anything else that would really help us determine whether you have things set up in an optimal way. And, as mentioned above, no bottleneck stats. Are you trying to improve full backup speeds, incremental speeds, or both? What type of proxy are you using?

Overall, everything you state seems to be configured in a reasonable way, and I wouldn't go tweaking block sizes or other settings at this point. Jumbo frames' biggest benefit is lowering overall CPU usage for the iSCSI initiator and the storage system, which can lead to better peak throughput.
gtaylor08
Novice

Re: Backup Optimization: Storage, block sizes, RAID, backups

Post by gtaylor08 »

I actually have a spreadsheet detailing what you are asking for, but I cannot post it here; I could email it to you, though, if you are interested in helping further.
gtaylor08
Novice

Re: Backup Optimization: Storage, block sizes, RAID, backups

Post by gtaylor08 »

Just to answer a few questions: we use full job chains, and the bottleneck at this point on all backups except copy jobs is Source. All proxies are hot-add, except at our primary DC, where we are using a physical Direct SAN server plus hot-add backup. I'm just looking to improve overall backup speeds, which have already improved by moving to forward incremental w/ weekly synthetics. The weekly synthetics still run longer than I would like.

Here is last night's info:

Backup Job   | Backup Type | Transferred | Backup Time | Proxy Type | Processing Rate | Throughput | Bottleneck | RAID Type | LUN              | Block Size | Network                                      | Config Settings
BER1 30 Days | Incremental | 6.2 GB      | 13:47       | HotAdd     | 131 MB/s        | 162.1 MB/s | Source     | 5         | File-Level/Thin  | 4K         | Bond, no JumboFrame                          | TOR: No iSCSI, No JumboFrame
CHI1 30 Days | Incremental | 9.6 GB      | 15:11       | HotAdd     | 165 MB/s        | 238 MB/s   | Source     | 10        | File-Level/Thin  | 4K         | MPIO, JumboFrame                             | TOR: No iSCSI, JumboFrame
MOR1 30 Days | Incremental | 22.1 GB     | 19:09       | HotAdd     | 125 MB/s        | 212.4 MB/s | Source     | 5         | File-Level/Thick | 4K         | Bond, no JumboFrame                          | TOR: No iSCSI, No JumboFrame
PER1 30 Days | Incremental | 2.3 GB      | 10:30       | HotAdd     | 175 MB/s        | 87.4 MB/s  | Source     | 5         | File-Level/Thin  | 4K         | Bond, JumboFrame                             | iSCSI: iSCSI, JumboFrame
VIN1 30 Days | Incremental | 98.6 GB     | 36:04       | SAN/HotAdd | 114 MB/s        | 398.3 MB/s | Source     | 10        | File-Level/Thick | 8K         | MPIO (30 day - JumboFrame), MPIO on EQL side | Direct Attached to Server
VIN2 30 Days | Incremental | 17.8 GB     | 13:25       | HotAdd     | 166 MB/s        | 347 MB/s   | Source     | 5         | File-Level/Thick | 4K         | Bond, JumboFrame                             | TOR: No iSCSI, JumboFrame
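One way to read the table (my own arithmetic, assuming the Backup Time column is mm:ss): because the jobs transfer very different amounts of data, the effective rate (transferred size over wall-clock time) is a fairer cross-site comparison than the processing rate, which is dominated by how much unchanged data gets skipped:

Code: Select all

# Effective transfer rate per job from the rows above (transferred GB
# divided by wall-clock duration, with Backup Time read as mm:ss).
jobs = {
    "BER1": (6.2,  "13:47"),
    "CHI1": (9.6,  "15:11"),
    "MOR1": (22.1, "19:09"),
    "PER1": (2.3,  "10:30"),
    "VIN1": (98.6, "36:04"),
    "VIN2": (17.8, "13:25"),
}

for site, (gb, duration) in jobs.items():
    m, s = map(int, duration.split(":"))
    minutes = m + s / 60
    print(f"{site}: {gb / minutes:5.2f} GB/min over {minutes:4.1f} min")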
foggy
Veeam Software

Re: Backup Optimization: Storage, block sizes, RAID, backups

Post by foggy »

gtaylor08 wrote:we use full job chains, bottle neck at this point on all backups except copy jobs is source.
This means the source storage cannot provide data any faster.
gtaylor08 wrote:The weekly synthetics still run longer than I would like.
Weekly synthetics are another story: enabling per-VM backup chains allows synthetic activity for different VMs to run in parallel. But keep in mind that it is very I/O intensive, so the benefit depends on the target storage capabilities.
ITP-Stan
Service Provider

Re: Backup Optimization: Storage, block sizes, RAID, backups

Post by ITP-Stan »

Using ReFS with a 64 KB cluster size instead of NTFS will be a big benefit for weekly synthetics and also for merging incrementals.

If you want to test, you have to start a new job/chain.
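To put rough numbers on that (an illustration with made-up figures, not a benchmark): on NTFS a synthetic full has to read the chain and write a brand-new full file at the target, while ReFS fast clone builds the full out of references to existing blocks:

Code: Select all

# Rough illustration of synthetic-full I/O at the target. All figures are
# hypothetical; ReFS fast clone still writes metadata, treated as ~0 here.
FULL_GB = 2000       # hypothetical size of one synthetic full
TARGET_MB_S = 200    # hypothetical sustained target throughput

# NTFS: read existing chain blocks + write the new full = ~2x the full size.
ntfs_io_gb = FULL_GB * 2
hours = ntfs_io_gb * 1024 / TARGET_MB_S / 3600
print(f"NTFS synthetic full: ~{ntfs_io_gb} GB of target I/O, ~{hours:.1f} h at {TARGET_MB_S} MB/s")

# ReFS 64K: blocks are cloned by reference, so data I/O is near zero.
print("ReFS fast clone: metadata-only updates -- typically minutes, not hours")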
yasuda
Enthusiast

Re: Backup Optimization: Storage, block sizes, RAID, backups

Post by yasuda »

ITP-Stan wrote: Using ReFS with a 64 KB cluster size instead of NTFS will be a big benefit for weekly synthetics and also for merging incrementals.
Is block cloning okay to use on iSCSI drives on Synology? MS at one time said it's only "supported" on bare drives, but I have not kept up.
foggy
Veeam Software

Re: Backup Optimization: Storage, block sizes, RAID, backups

Post by foggy »

yasuda wrote:Is block cloning okay to use on iSCSI drives on Synology? MS at one time said it's only "supported" on bare drives, but I have not kept up.
Here's more on that.
yasuda
Enthusiast

Re: Backup Optimization: Storage, block sizes, RAID, backups

Post by yasuda »

Thanks! The TL;DR, as of this post (2/26/2018):

jja wrote (Mon Feb 26, 2018): We have gotten official information from Microsoft via a Microsoft partner that did an advisory case.
ReFS on iSCSI is NOT supported.

-Jannis
ITP-Stan
Service Provider

Re: Backup Optimization: Storage, block sizes, RAID, backups

Post by ITP-Stan » 1 person likes this post

This has just changed according to the weekly digest by Gostev:

ReFS on an iSCSI LUN is now supported if the hardware is on the HCL.
