NetApp Backup Target, SMB or Blockstorage?


by pirx » Mon Oct 10, 2016 1:13 pm

Hi,

We are currently planning the implementation of our new backup environment. We plan to use two NetApp FAS systems as backup and backup copy targets for ~1200 VMs. Is there a huge disadvantage in using the FAS systems as NAS/SMB targets? W2K16 with ReFS 3 is currently not an option for us, and we need to finish the implementation within the next two months.

Re: NetApp Backup Target, SMB or Blockstorage?

by dellock6 » Tue Oct 11, 2016 7:08 am

Hi,
if leveraging ReFS is in your plans, I'd suggest going with block volumes. Regardless of whether you use block volumes or SMB, you will have to place a server in front of the storage anyway, since you will need an SMB gateway in Veeam, for example. You can plan to start with 2012 R2 and then upgrade it to 2016 once VBR 9.5 is out. Then, if you have multiple volumes on the two arrays, you can migrate them from NTFS to ReFS 3.0 and start using the new integration.
Luca Dell'Oca
EMEA Cloud Architect @ Veeam Software

@dellock6
http://www.virtualtothecore.com
vExpert 2011-2012-2013-2014-2015-2016
Veeam VMCE #1

Re: NetApp Backup Target, SMB or Blockstorage?

by pirx » Wed Oct 12, 2016 10:12 am

Our initial plan was to use SMB for the NetApps. We don't use them as block storage anywhere else, so there would have to be good reasons for us to use block storage, but I cannot find a clear answer or recommendation on this.

Re: NetApp Backup Target, SMB or Blockstorage?

by csinetops » Wed Oct 12, 2016 2:01 pm

We use NetApp FAS as our primary target, with block storage attached to VMs as big disks (Veeam repositories), and it works really well. I tried using our FAS systems as CIFS servers a few years back because it looked easy, and it didn't work well; not sure if SMB would be any better. The FAS systems we have just didn't have enough power to roll jobs up into synthetic fulls. Again, not sure if you will be using synthetic fulls.

We do use a NetApp AltaVault for our copy jobs, which use SMB, but there is no synthetic creation going on; it's set to copy the fulls from the source repository. This works out just fine.

I'd say it depends on your environment; if you are going to use synthetic vs. active fulls etc., you'll need to set it up and test.

Re: NetApp Backup Target, SMB or Blockstorage?

by dellock6 » Wed Oct 12, 2016 3:15 pm

Thanks for chiming in. Just a note, however: every synthetic operation is executed by the repository server, not the storage array. Even if you use SMB and leave gateway selection to automatic, one of the proxies is also selected as the gateway, and that is the machine doing the synthetic operations.
That said, I've always seen SMB (except maybe SMB3 in the latest Windows editions) as a chatty and inefficient protocol, and thus slower even when used over the same storage.
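To illustrate why the gateway's network link matters so much here, a rough back-of-the-envelope calculation: a synthetic full has to read the existing backup data and write a new full file through the gateway, so roughly twice the full size crosses its link. The throughput figures below are illustrative assumptions, not measurements from this thread.

```python
# Rough estimate of synthetic full duration through the gateway server.
# Link throughputs are illustrative assumptions (effective, not line rate).

def synthetic_full_hours(full_size_gb, link_mb_s):
    """A synthetic full reads existing backup data and writes a new
    full file, so roughly 2x the full size crosses the gateway link."""
    total_mb = full_size_gb * 1024 * 2
    return total_mb / link_mb_s / 3600

# 1 TB full over ~110 MB/s effective 1 GbE vs ~1100 MB/s effective 10 GbE
print(round(synthetic_full_hours(1024, 110), 1))   # ~5.3 hours
print(round(synthetic_full_hours(1024, 1100), 1))  # ~0.5 hours
```

Under these assumptions, a single 1 TB synthetic full over a 1 GbE gateway already eats hours of the backup window, which matches the bottleneck observations later in this thread.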

Re: NetApp Backup Target, SMB or Blockstorage?

by vClintWyckoff » Wed Oct 12, 2016 6:20 pm

Also, if you use the CIFS share method you mentioned above, you will get zero benefit from the ReFS integration, so if you deem the benefits of ReFS with Veeam valuable for your deployment, I would recommend using block. It is also worth noting that Windows Server 2016 just officially went GA for download: https://www.microsoft.com/en-us/evalcenter/evaluate-windows-server-2016. Probably the best course of action would be to test ReFS 3.1 in your environment.

Re: NetApp Backup Target, SMB or Blockstorage?

by csinetops » Wed Oct 12, 2016 6:32 pm

dellock6 wrote: Thanks for chiming in. Just a note, however: every synthetic operation is executed by the repository server, not the storage array. Even if you use SMB and leave gateway selection to automatic, one of the proxies is also selected as the gateway, and that is the machine doing the synthetic operations.
That said, I've always seen SMB (except maybe SMB3 in the latest Windows editions) as a chatty and inefficient protocol, and thus slower even when used over the same storage.


That makes sense. The CIFS shares in our case were running over 1 GbE while block was backed by 8 Gb FC; I'm 99% sure the 1 GbE CIFS link was our bottleneck and caused our issues. I'm 100% with you, I'd pick block over SMB any day.

Re: NetApp Backup Target, SMB or Blockstorage?

by sandsturm » Thu Oct 13, 2016 6:15 pm

Hi all,
Does anyone have experience with ReFS LUNs hosted on a NetApp FAS or AFF, presented as iSCSI or FC LUNs mounted into a Windows guest that is used as a Veeam repository server? I'm not sure whether a ReFS-formatted LUN performs well on top of the NetApp's underlying WAFL.

Thanks a lot,
sandsturm

Re: NetApp Backup Target, SMB or Blockstorage?

by pirx » Sat Oct 15, 2016 9:05 am

We have decided to start with SMB shares and see how that works, because ReFS 3 and W2K16 are not something we can use in production in the near future.

Re: NetApp Backup Target, SMB or Blockstorage?

by dellock6 » Sat Oct 15, 2016 12:21 pm

Also, remember that our advanced integration with ReFS volumes coming in VBR 9.5 will only work if the volume is exposed by a Windows Server 2016 machine; ReFS emulation done by third parties may not reproduce this feature in their code.

Re: NetApp Backup Target, SMB or Blockstorage?

by gairys » Tue Oct 18, 2016 11:39 am

The main selling point of using SMB 3.0 would be at the application layer, such as a backend for Hyper-V, where the servers can utilize RDMA. In our environment, we utilize NetApp LUNs to provide CSV storage to our HV clusters, which use ReFS, and we are still using Veeam B&R. As for our backups, we have multiple Server 2012 R2 HAFS clusters (one at each datacenter) with JBODs attached and Storage Spaces running. By using commodity disks, we were able to maintain three copies of our VMs at multiple sites for the same price as a FAS system.

We ran production loads (Veeam) using both the CIFS and Windows Server repository options, and the Windows Server option was the optimal choice in our environment. During all of our backup and copy jobs, the target was always the bottleneck when using CIFS. We are a hybrid shop with roughly 100 HV VMs and 400 VMware VMs on a 10 GbE network, with FCoE storage on the NetApp.

With that being said, when you utilize CIFS/SMB directly off the NetApp, you are presenting network storage to the Veeam servers over the CIFS/SMB protocol, but you are still using WAFL on the NetApp volume. If ReFS is a requirement, the only way to use it with the NetApp is block storage (FC/FCoE/iSCSI).

Re: NetApp Backup Target, SMB or Blockstorage?

by vClintWyckoff » Tue Oct 18, 2016 1:10 pm

Good explanation, gairys. Thanks for sharing with the community.

Re: NetApp Backup Target, SMB or Blockstorage?

by sandsturm » Thu Oct 20, 2016 6:30 pm

Yes, of course a block-based approach (iSCSI, FC) is required to be able to use ReFS on a NetApp system. My idea is to have a virtual machine act as both Veeam proxy and repository. This machine will have one iSCSI LUN presented from a NetApp system directly into the VM (iSCSI initiator in the VM, not via the hypervisor). The LUN will be formatted as a ReFS volume and used to store Veeam backup files.
Will such a configuration be an improvement over a NetApp LUN (provided in the same manner) formatted as NTFS (64K block size), keeping in mind that both sit on the underlying WAFL filesystem?
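For reference, preparing such an in-guest LUN as a ReFS repository volume might look like the sketch below. The drive letter, label, and 64K allocation unit size are assumptions for illustration, not recommendations from this thread; test against your own workload.

```powershell
# Sketch: initialize a freshly presented iSCSI LUN and format it as ReFS.
# Drive letter, label, and 64K allocation unit size are assumed values.
Get-Disk | Where-Object PartitionStyle -eq 'RAW' |
    Initialize-Disk -PartitionStyle GPT -PassThru |
    New-Partition -DriveLetter R -UseMaximumSize |
    Format-Volume -FileSystem ReFS -AllocationUnitSize 65536 -NewFileSystemLabel 'VeeamRepo'
```

For an NTFS comparison run, the same pipeline with `-FileSystem NTFS` would give the 64K NTFS volume described above.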

Re: NetApp Backup Target, SMB or Blockstorage?

by pirx » Thu Nov 03, 2016 8:31 am

I did some early tests with our new FAS8060 and two SMB3 shares. The proxy is a DL380 G8 server with 2 x 10 GbE interfaces, no LACP, just W2K12 R2 dynamic teaming. Not much tuning yet, and I'm not sure everything on the NetApp side is finished. I think the NetApp has >100 6 TB SATA drives.

I'm not sure what to make of the numbers. I know that reverse incremental can be much slower than the other methods, but the numbers don't look very promising to me.

Forever Forward, NetApp dedup enabled:

Code:
forward: (groupid=0, jobs=2): err= 0: pid=11192: Wed Nov 2 17:09:18 2016
  mixed: io=102400MB, bw=433999KB/s, iops=847, runt=241608msec
    slat (usec): min=35, max=4488, avg=794.01, stdev=613.49
    clat (usec): min=0, max=31108, avg=1533.50, stdev=752.62
     lat (usec): min=1227, max=32266, avg=2327.50, stdev=612.07
    clat percentiles (usec):
     |  1.00th=[   50],  5.00th=[  708], 10.00th=[  844], 20.00th=[  988],
     | 30.00th=[ 1080], 40.00th=[ 1176], 50.00th=[ 1304], 60.00th=[ 1528],
     | 70.00th=[ 2008], 80.00th=[ 2160], 90.00th=[ 2352], 95.00th=[ 2576],
     | 99.00th=[ 3664], 99.50th=[ 4896], 99.90th=[ 6944], 99.95th=[ 8160],
     | 99.99th=[11456]
    lat (usec) : 2=0.01%, 4=0.01%, 10=0.01%, 20=0.32%, 50=0.66%
    lat (usec) : 100=0.46%, 250=0.44%, 500=0.39%, 750=3.91%, 1000=15.12%
    lat (msec) : 2=48.63%, 4=29.23%, 10=0.79%, 20=0.02%, 50=0.01%
  cpu          : usr=0.42%, sys=31.02%, ctx=0, majf=0, minf=0
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued    : total=r=204800/w=0/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
     latency   : target=0, window=0, percentile=100.00%, depth=1
transform: (groupid=1, jobs=2): err= 0: pid=17820: Wed Nov 2 17:09:18 2016
  mixed: io=85101MB, bw=145238KB/s, iops=283, runt=600002msec
    slat (usec): min=32, max=6127, avg=319.25, stdev=310.60
    clat (usec): min=18, max=318452, avg=6728.74, stdev=8902.39
     lat (msec): min=1, max=318, avg= 7.05, stdev= 8.76
    clat percentiles (usec):
     |  1.00th=[  620],  5.00th=[  828], 10.00th=[  916], 20.00th=[ 1012],
     | 30.00th=[ 1144], 40.00th=[ 1400], 50.00th=[ 1704], 60.00th=[ 7136],
     | 70.00th=[10432], 80.00th=[11968], 90.00th=[16064], 95.00th=[20352],
     | 99.00th=[36096], 99.50th=[48384], 99.90th=[84480], 99.95th=[92672],
     | 99.99th=[148480]
    lat (usec) : 20=0.01%, 50=0.01%, 100=0.01%, 250=0.01%, 500=0.37%
    lat (usec) : 750=2.30%, 1000=16.34%
    lat (msec) : 2=35.15%, 4=3.96%, 10=9.84%, 20=26.78%, 50=4.77%
    lat (msec) : 100=0.43%, 250=0.03%, 500=0.01%
  cpu          : usr=0.00%, sys=4.08%, ctx=0, majf=0, minf=0
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
    submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued    : total=r=170201/w=0/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
     latency   : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
  MIXED: io=102400MB, aggrb=433998KB/s, minb=433998KB/s, maxb=433998KB/s, mint=241608msec, maxt=241608msec

Run status group 1 (all jobs):
  MIXED: io=85101MB, aggrb=145237KB/s, minb=145237KB/s, maxb=145237KB/s, mint=600002msec, maxt=600002msec


Forever Forward, NetApp dedup disabled:

Code:
forward: (g=0): rw=write, bs=512K-512K/512K-512K/512K-512K, ioengine=windowsaio, iodepth=1
...
transform: (g=1): rw=randrw, bs=512K-512K/512K-512K/512K-512K, ioengine=windowsaio, iodepth=1
...
fio-2.15
Starting 4 threads
forward: Laying out IO file(s) (1 file(s) / 51200MB)
transform: Laying out IO file(s) (2 file(s) / 102400MB)
Jobs: 2 (f=4): [_(2),m(2)] [65.9% done] [109.7MB/0KB/0KB /s] [219/0/0 iops] [eta 06m:59s]
forward: (groupid=0, jobs=2): err= 0: pid=16548: Wed Nov 2 17:33:12 2016
  mixed: io=102400MB, bw=497611KB/s, iops=971, runt=210722msec
    slat (usec): min=28, max=5459, avg=467.18, stdev=346.42
    clat (usec): min=18, max=504842, avg=1575.11, stdev=3260.65
     lat (msec): min=1, max=505, avg= 2.04, stdev= 3.25
    clat percentiles (usec):
     |  1.00th=[  628],  5.00th=[  908], 10.00th=[ 1020], 20.00th=[ 1128],
     | 30.00th=[ 1224], 40.00th=[ 1304], 50.00th=[ 1416], 60.00th=[ 1560],
     | 70.00th=[ 1736], 80.00th=[ 1928], 90.00th=[ 2224], 95.00th=[ 2480],
     | 99.00th=[ 3376], 99.50th=[ 4256], 99.90th=[ 6560], 99.95th=[ 7264],
     | 99.99th=[12864]
    lat (usec) : 20=0.01%, 50=0.06%, 100=0.03%, 250=0.04%, 500=0.28%
    lat (usec) : 750=1.62%, 1000=6.80%
    lat (msec) : 2=74.38%, 4=16.19%, 10=0.55%, 20=0.01%, 500=0.01%
    lat (msec) : 750=0.01%
  cpu          : usr=0.48%, sys=21.97%, ctx=0, majf=0, minf=0
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued    : total=r=204800/w=0/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
     latency   : target=0, window=0, percentile=100.00%, depth=1
transform: (groupid=1, jobs=2): err= 0: pid=19656: Wed Nov 2 17:33:12 2016
  mixed: io=84144MB, bw=143602KB/s, iops=280, runt=600017msec
    slat (usec): min=32, max=4855, avg=271.68, stdev=266.97
    clat (usec): min=19, max=335026, avg=6856.61, stdev=9855.18
     lat (msec): min=1, max=335, avg= 7.13, stdev= 9.75
    clat percentiles (usec):
     |  1.00th=[  660],  5.00th=[  860], 10.00th=[  948], 20.00th=[ 1064],
     | 30.00th=[ 1272], 40.00th=[ 1528], 50.00th=[ 1848], 60.00th=[ 7264],
     | 70.00th=[10432], 80.00th=[12096], 90.00th=[15936], 95.00th=[20096],
     | 99.00th=[37632], 99.50th=[53504], 99.90th=[86528], 99.95th=[105984],
     | 99.99th=[305152]
    lat (usec) : 20=0.01%, 50=0.01%, 100=0.01%, 250=0.01%, 500=0.30%
    lat (usec) : 750=1.65%, 1000=13.46%
    lat (msec) : 2=37.01%, 4=5.62%, 10=9.88%, 20=27.05%, 50=4.45%
    lat (msec) : 100=0.51%, 250=0.04%, 500=0.02%
  cpu          : usr=0.00%, sys=3.50%, ctx=0, majf=0, minf=0
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued    : total=r=168288/w=0/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
     latency   : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
  MIXED: io=102400MB, aggrb=497611KB/s, minb=497611KB/s, maxb=497611KB/s, mint=210722msec, maxt=210722msec

Run status group 1 (all jobs):
  MIXED: io=84144MB, aggrb=143601KB/s, minb=143601KB/s, maxb=143601KB/s, mint=600017msec, maxt=600017msec


Reverse, NetApp dedup enabled:

Code:
reversed: (g=0): rw=randrw, bs=512K-512K/512K-512K/512K-512K, ioengine=windowsaio, iodepth=1
...
fio-2.15
Starting 2 threads
reversed: Laying out IO file(s) (2 file(s) / 102400MB)
Jobs: 1 (f=2): [_(1),m(1)] [100.0% done] [99.31MB/0KB/0KB /s] [198/0/0 iops] [eta 00m:00s]
reversed: (groupid=0, jobs=2): err= 0: pid=11628: Wed Nov 2 18:08:33 2016
  mixed: io=204800MB, bw=185361KB/s, iops=362, runt=1131386msec
    slat (usec): min=31, max=7259, avg=415.27, stdev=353.77
    clat (usec): min=18, max=316195, avg=5099.12, stdev=8254.78
     lat (msec): min=1, max=316, avg= 5.51, stdev= 8.11
    clat percentiles (usec):
     |  1.00th=[  572],  5.00th=[  764], 10.00th=[  828], 20.00th=[  988],
     | 30.00th=[ 1128], 40.00th=[ 1288], 50.00th=[ 1464], 60.00th=[ 1720],
     | 70.00th=[ 2320], 80.00th=[10432], 90.00th=[14144], 95.00th=[19328],
     | 99.00th=[35072], 99.50th=[45824], 99.90th=[82432], 99.95th=[94720],
     | 99.99th=[142336]
    lat (usec) : 20=0.01%, 50=0.04%, 100=0.01%, 250=0.02%, 500=0.45%
    lat (usec) : 750=3.99%, 1000=16.24%
    lat (msec) : 2=45.33%, 4=7.00%, 10=5.80%, 20=16.56%, 50=4.16%
    lat (msec) : 100=0.36%, 250=0.04%, 500=0.01%
  cpu          : usr=0.22%, sys=7.21%, ctx=0, majf=0, minf=0
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued    : total=r=409600/w=0/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
     latency   : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
  MIXED: io=204800MB, aggrb=185361KB/s, minb=185361KB/s, maxb=185361KB/s, mint=1131386msec, maxt=1131386msec



Reverse, NetApp Dedup disabled:

Code:
reversed: (g=0): rw=randrw, bs=512K-512K/512K-512K/512K-512K, ioengine=windowsaio, iodepth=1
...
fio-2.15
Starting 2 threads
reversed: Laying out IO file(s) (2 file(s) / 102400MB)
Jobs: 1 (f=2): [_(1),m(1)] [100.0% done] [107.0MB/0KB/0KB /s] [214/0/0 iops] [eta 00m:00s]
reversed: (groupid=0, jobs=2): err= 0: pid=13744: Wed Nov 2 18:40:14 2016
  mixed: io=204800MB, bw=185332KB/s, iops=361, runt=1131565msec
    slat (usec): min=40, max=6326, avg=448.71, stdev=378.92
    clat (usec): min=19, max=326650, avg=5053.91, stdev=8384.07
     lat (msec): min=1, max=326, avg= 5.50, stdev= 8.22
    clat percentiles (usec):
     |  1.00th=[  540],  5.00th=[  724], 10.00th=[  788], 20.00th=[  892],
     | 30.00th=[ 1032], 40.00th=[ 1176], 50.00th=[ 1384], 60.00th=[ 1656],
     | 70.00th=[ 2256], 80.00th=[10304], 90.00th=[14016], 95.00th=[19328],
     | 99.00th=[35584], 99.50th=[46336], 99.90th=[84480], 99.95th=[91648],
     | 99.99th=[140288]
    lat (usec) : 20=0.01%, 50=0.06%, 100=0.01%, 250=0.03%, 500=0.60%
    lat (usec) : 750=5.70%, 1000=20.66%
    lat (msec) : 2=39.73%, 4=6.32%, 10=5.87%, 20=16.50%, 50=4.10%
    lat (msec) : 100=0.38%, 250=0.04%, 500=0.01%
  cpu          : usr=0.22%, sys=7.72%, ctx=0, majf=0, minf=0
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued    : total=r=409600/w=0/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
     latency   : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
  MIXED: io=204800MB, aggrb=185331KB/s, minb=185331KB/s, maxb=185331KB/s, mint=1131565msec, maxt=1131565msec
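For context, a fio job file along these lines would produce roughly the runs above. This is reconstructed from the output headers only (windowsaio engine, 512K blocks, iodepth 1, two threads per group, sequential write followed by a 600-second random read/write "transform" phase); the exact file used is not shown in the thread, so treat it as an approximation.

```ini
; Approximate reconstruction of the "Forever Forward" fio job
; (inferred from the output headers above, not the original file).
[global]
ioengine=windowsaio
thread
bs=512k
iodepth=1
numjobs=2

; Sequential write, simulating the incremental backup write pattern
[forward]
rw=write
size=50g

; Random mixed I/O, simulating the forever-forward merge (transform)
[transform]
stonewall
rw=randrw
nrfiles=2
size=100g
runtime=600
```

The "Reverse" runs correspond to a single `rw=randrw` group of the same shape, which matches the higher latencies seen there.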

Re: NetApp Backup Target, SMB or Blockstorage?

by Didi7 » Thu Nov 03, 2016 9:01 am

Really? Doesn't surprise me!

Unfortunately, we have different NetApp systems in use: old ones like the FAS2040, mid-range NetApps, and FAS8xxx series systems in a datacenter as well. We use them as source storage under vSphere, and performance over NBD is always limited to around 100 MB/s, with even slower transfer rates on the FAS2040.

We still have HP MSA2000 G3 generation storage from 2009 in one environment, and I get far better performance from that old device than from the 2011 FAS2040 from NetApp. EMC CX4, MSA60 and MSA2040 are faster still.

In our datacenter, all vSphere hosts in our cluster are connected to FAS8xxx series storage with 10 Gbit/s NFS and a 10 Gbit/s vSwitch for the management network, yet traffic is limited to around 80 MB/s, even though NBD should theoretically be able to provide 300-400 MB/s (NBD traffic is limited to around 30-40% of the link). We see these poor performance results with both Veeam VBR and Veritas NetBackup.

I have found so many threads here regarding NetApp storage system used in VBR as a source or target device and the performance reports are always bad.

A couple of months ago, I talked to a NetApp system engineer, and he told me that traffic over one thread is limited to 100 MB/s on a FAS2040; I suppose this is also the limit on more recent FAS series.

Using parallel processing in VBR doesn't help much.

Using EMC or HP storage delivers far higher transfer speeds than NetApp.

I still wonder why the vStorage API used in VBR and other products limits traffic that much on NetApps, even when more than one VMDK is transferred at once. Performance can only be lifted slightly by using multiple backup jobs in VBR.

In my opinion, if you have the chance to use storage other than NetApp as the source or target device (it is always the bottleneck in VBR), then switch to another vendor. I cannot recommend NetApp; unfortunately, I am forced to use it in some environments here.

In all those different threads, I have always wondered why Veeam (which works closely with storage vendors to enable storage snapshots in VBR) never took a position to explain the poor performance of NetApp devices with VBR.

Some users even switched to direct SAN access with 10 Gbit/s SAN HBAs, but performance was still limited to 100 MB/s. Really, I can't understand why other vendors significantly exceed those transfer rates; some users here report nearly 1000 MB/s over SAN.

Regards,
Didi7
