After I wasn't able to find a clear cause of the issue, I contacted support to help analyse it and/or point me to the cause. Unfortunately, that hasn't been very helpful so far; to be honest, I'm pretty unsatisfied with the whole case (# 03666072). So I'm hoping to get more information here from people with similar experiences.
Tape Proxy:
Dell R620
2x E5-2630
192 GB RAM
8 Gb/s Fibre Channel
Tape Library:
Quantum Scalar i3
IBM Ultrium 8 HH
8 Gb/s Fibre Channel
Backup Storage:
NetApp E2860
20 x 4 TB (7200 rpm)
8 Gb/s Fibre Channel
Windows Server 2016
30 TB ReFS 64k repository
I'm using LTO-7 tapes formatted as type M (M8). So I would expect a throughput of around 300 MB/s when the tape drive (target) is the bottleneck.
What I'm actually getting is an average throughput of 150 MB/s, with the source reported as the bottleneck (~87%).
In contrast, there is one File to Tape job that backs up files from an NTFS partition residing on the same storage system, and that job is always fast (300 MB/s, bottleneck: target).
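Just to put that into perspective: an M8 tape holds 9 TB native, so at the rated 300 MB/s it fills in roughly 9,000,000 MB / 300 MB/s ≈ 8.3 hours, while at 150 MB/s the same tape needs about 16-17 hours. So the slowdown effectively doubles the time per tape.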
During the case I was asked to perform a benchmark on the partition that the source jobs read from, and the result looks OK:
Code:
>diskspd.exe -c1G -b512K -w0 -r4K -Sh -d600 H:\testfile.dat
Command Line: diskspd.exe -c1G -b512K -w0 -r4K -Sh -d600 H:\testfile.dat
Input parameters:
timespan: 1
-------------
duration: 600s
warm up time: 5s
cool down time: 0s
random seed: 0
path: 'H:\testfile.dat'
think time: 0ms
burst size: 0
software cache disabled
hardware write cache disabled, writethrough on
performing read test
block size: 524288
using random I/O (alignment: 4096)
number of outstanding I/O operations: 2
thread stride size: 0
threads per file: 1
using I/O Completion Ports
IO priority: normal
System information:
computer name: veeam-san
start time: 2019/08/05 13:22:37 UTC
Results for timespan 1:
*******************************************************************************
actual test time: 600.00s
thread count: 1
proc count: 24
CPU | Usage | User | Kernel | Idle
-------------------------------------------
0| 14.64%| 0.51%| 14.14%| 85.36%
1| 0.38%| 0.11%| 0.27%| 99.63%
2| 0.48%| 0.07%| 0.41%| 99.52%
3| 0.30%| 0.10%| 0.20%| 99.70%
4| 0.41%| 0.05%| 0.35%| 99.59%
5| 0.08%| 0.06%| 0.02%| 99.92%
6| 0.63%| 0.13%| 0.51%| 99.37%
7| 9.74%| 3.22%| 6.52%| 90.26%
8| 0.85%| 0.18%| 0.67%| 99.15%
9| 0.11%| 0.07%| 0.04%| 99.89%
10| 0.30%| 0.10%| 0.20%| 99.70%
11| 0.20%| 0.04%| 0.16%| 99.80%
12| 1.06%| 0.12%| 0.93%| 98.94%
13| 1.12%| 0.08%| 1.04%| 98.88%
14| 0.33%| 0.10%| 0.23%| 99.67%
15| 0.05%| 0.05%| 0.00%| 99.95%
16| 2.70%| 1.82%| 0.88%| 97.30%
17| 0.24%| 0.06%| 0.18%| 99.76%
18| 0.25%| 0.09%| 0.16%| 99.75%
19| 0.07%| 0.06%| 0.01%| 99.93%
20| 0.07%| 0.05%| 0.02%| 99.93%
21| 0.04%| 0.04%| 0.00%| 99.96%
22| 0.05%| 0.03%| 0.02%| 99.95%
23| 0.23%| 0.04%| 0.19%| 99.77%
-------------------------------------------
avg.| 1.43%| 0.30%| 1.13%| 98.57%
Total IO
thread | bytes | I/Os | MiB/s | I/O per s | file
------------------------------------------------------------------------------
0 | 350717739008 | 668941 | 557.45 | 1114.90 | H:\testfile.dat (1024MiB)
------------------------------------------------------------------------------
total: 350717739008 | 668941 | 557.45 | 1114.90
Read IO
thread | bytes | I/Os | MiB/s | I/O per s | file
------------------------------------------------------------------------------
0 | 350717739008 | 668941 | 557.45 | 1114.90 | H:\testfile.dat (1024MiB)
------------------------------------------------------------------------------
total: 350717739008 | 668941 | 557.45 | 1114.90
Write IO
thread | bytes | I/Os | MiB/s | I/O per s | file
------------------------------------------------------------------------------
0 | 0 | 0 | 0.00 | 0.00 | H:\testfile.dat (1024MiB)
------------------------------------------------------------------------------
total: 0 | 0 | 0.00 | 0.00
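Looking at it again, though, I'm not sure this benchmark tells me much: it is random 512K reads at a queue depth of 2 against a freshly created 1 GB file, which is laid out contiguously and can probably be served largely from the E2860's controller cache. That is quite different from a tape job reading a large, possibly fragmented .vbk sequentially. If it helps, I could rerun something like the following (the paths and the -o value are just examples I would pick, without -c since the files already exist, and only while no jobs are running): once against a real backup file on the ReFS repository and once against a file on the NTFS partition that the fast File to Tape job reads.
Code:
>diskspd.exe -b512K -w0 -Sh -o8 -d300 "H:\Backups\JobName\JobName.vbk"
>diskspd.exe -b512K -w0 -Sh -o8 -d300 "I:\FileShare\SomeLargeFile.dat"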
- Is fragmentation the cause of poor performance in my case?
- How can I confirm this?
- What can be done to increase performance, while staying on ReFS? Active fulls are not an option for me.
- Or is the storage system simply not powerful enough to deliver 300 MB/s once fragmentation is high?
- Would more (7k) disks on the back end increase performance?
- What else could be the cause in my case?
Stephan