VBR 9.5 - REFS

by ksl28 » Mon Oct 31, 2016 7:45 am

Hello,

We are planning to establish a new VBR disaster recovery backup solution and will use VBR 9.5 when it is released (hopefully within days).

The setup will be as follows:
1 all-in-one VBR 9.5 server (proxy, storage repository, etc.) - will be physical
1 iSCSI-based Synology server, providing the storage to the VBR server.

The VBR server will be in the same Layer 2 network as the Hyper-V and VMware servers, so there shouldn't be any firewall / ACL bottlenecks.

I've read that using Veeam with backup repositories based on ReFS should greatly increase backup and restore performance!
veeam-backup-replication-f2/question-re-refs-3-speed-benefits-and-raid6-t38331.html#p213647

If we want to get the benefits of ReFS on Windows Server 2016, will we need to format the iSCSI disk as ReFS - or what do we need to do to accomplish this?


Also, another minor question that I'm sure has an easy answer, but I haven't been able to find it.
When installing a VBR server in a VMware environment, I believe that Veeam uses the HotAdd function in VMware to speed up the process.
If we install the VBR server as a physical server, how will the VBR server then be able to HotAdd the disks - or is this not relevant?

I hope someone will take the time to respond to the questions, and maybe provide some best practices and such :)

Thanks in advance!
ksl28
Novice
 
Posts: 5
Liked: never
Joined: Wed Sep 21, 2016 8:31 am
Full Name: Kristian Leth

Re: VBR 9.5 - REFS

by PTide » Mon Oct 31, 2016 11:40 am

Hi,
ksl28 wrote: If we want to get the benefits of ReFS on Windows Server 2016, will we need to format the iSCSI disk as ReFS - or what do we need to do to accomplish this?
Correct. You just mount the storage via iSCSI on the Win2016 server and format it as ReFS. I believe this link is also worth checking.
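For reference, here's a rough PowerShell sketch of that workflow on the Win2016 box; the portal address, disk selection and drive letter below are just placeholders for this example, and a 64 KB allocation unit is what is commonly suggested for ReFS backup repositories:

Code:
# Connect the Synology iSCSI target (address and drive letter are examples only)
New-IscsiTargetPortal -TargetPortalAddress "192.168.1.50"
$target = Get-IscsiTarget | Where-Object { -not $_.IsConnected }
Connect-IscsiTarget -NodeAddress $target.NodeAddress -IsPersistent $true

# Initialize the newly presented iSCSI disk and format it as ReFS with 64 KB clusters
$disk = Get-Disk | Where-Object { $_.BusType -eq 'iSCSI' -and $_.PartitionStyle -eq 'RAW' }
Initialize-Disk -Number $disk.Number -PartitionStyle GPT
New-Partition -DiskNumber $disk.Number -UseMaximumSize -DriveLetter R |
    Format-Volume -FileSystem ReFS -AllocationUnitSize 65536 -NewFileSystemLabel "VeeamRepo"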

ksl28 wrote: If we install the VBR server as a physical server, how will the VBR server then be able to HotAdd the disks - or is this not relevant?
You need to have a VM proxy on the host. Please check the requirements for details.

Thank you.
PTide
Veeam Software
 
Posts: 2860
Liked: 231 times
Joined: Tue May 19, 2015 1:46 pm

Re: VBR 9.5 - REFS

by ksl28 » Mon Oct 31, 2016 1:23 pm

Hi,

Thank you so much for clearing this up for me :)
I've already checked the first link, but it seemed too simple to be true - but I guess it doesn't have to be complicated :)

The second link was very interesting; it gave me a lot of new info relevant to this project :)

Just to clarify, we don't need a proxy server for backing up Hyper-V machines - we can use our physical VBR server for this - correct?
ksl28
Novice
 
Posts: 5
Liked: never
Joined: Wed Sep 21, 2016 8:31 am
Full Name: Kristian Leth

Re: VBR 9.5 - REFS

by PTide » Mon Oct 31, 2016 3:03 pm

It's a little bit of a different story with proxies in a Hyper-V environment; there are two types - on-host proxy and off-host proxy. By default, the Hyper-V host where your VMs reside is used as the proxy; this is called an "on-host proxy". You can also assign the proxy role to another server; this is called an "off-host proxy". Please check this article for more info.

Thanks
PTide
Veeam Software
 
Posts: 2860
Liked: 231 times
Joined: Tue May 19, 2015 1:46 pm

Re: VBR 9.5 - REFS

by ksl28 » Tue Nov 01, 2016 7:07 am

Hello,

Thanks for the article regarding on-host vs off-host proxy, it gave me a lot of info :)

From what I understood from the article, Veeam does not recommend using a VM to act as an off-host proxy node.
Since we are running a fairly small Hyper-V setup (7 hosts) and only need to back up approx. 5-10% of the VMs with Veeam, I believe the on-host proxy is the best solution - agree?

I can easily understand why Veeam doesn't want a VM to be an off-host proxy, but it's simply way too pricey to buy a new dedicated server to run the off-host proxy on - especially when we are talking about moving approx. 5-10 GB of data each day :)

Thanks again for taking the time to provide me with useful information :)
ksl28
Novice
 
Posts: 5
Liked: never
Joined: Wed Sep 21, 2016 8:31 am
Full Name: Kristian Leth

Re: VBR 9.5 - REFS

by PTide » Tue Nov 01, 2016 7:49 am

It's not only the amount of data to be transferred that matters, but also the number of jobs and virtual drives per job. Please check the system requirements in order to get a clear view of how many resources you might need. An on-host proxy should work fine for 5-10 GB of daily data transfer as long as you have enough resources to process all the disks in the jobs.

Thanks.
PTide
Veeam Software
 
Posts: 2860
Liked: 231 times
Joined: Tue May 19, 2015 1:46 pm

Re: VBR 9.5 - REFS

by ashleyw » Wed Nov 02, 2016 5:26 am

Hi, we are busy refreshing our backup infrastructure ready for 9.5.
We are running a single-node hyperconverged unit running VMware with 23 x 3 TB drives + a 256 GB SSD and 96 GB RAM.
We run our main Veeam controller and 4 proxies on there - the proxies pull from separate FC-connected primary storage.
Currently we present storage to Veeam via CIFS from our ZFS (OmniOS) VM.
We have now stood up a Windows 2016 Server VM on the host and used iSCSI (via OmniOS) on a loopback adapter to present a 60 TB block device to the Windows 2016 server, which we can then present as a ReFS share. Initial IOMeter tests look very promising.
In the settings of 9.5, is it the backup repository that would need to be changed (and then obviously the job mappings to the repository)?
Currently we only have options for Microsoft Windows Server, Linux Server, Shared Folder, or Deduplicating Storage Appliance.
Will there be a separate option for ReFS, or are ReFS connections going to be handled differently or detected through the "Microsoft Windows Server" option?

thanks.
ashleyw
Service Provider
 
Posts: 137
Liked: 16 times
Joined: Thu Oct 28, 2010 10:55 pm
Full Name: Ashley Watson

Re: VBR 9.5 - REFS

by Mike Resseler » Wed Nov 02, 2016 7:09 am 1 person likes this post

Ashley,

It will be Microsoft Windows Server or (I believe) even a Shared Folder. B&R will detect ReFS automatically and make sure that everything is being done as planned.

I haven't seen the latest documentation yet, but if I am not mistaken, B&R will even automatically turn on integrity streams (which is important for detecting corruption, but it also changes the behavior of ReFS, in a good way ;-)).
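If you want to see for yourself whether integrity streams are on for a given repository path, the Windows Storage module exposes this directly; a small sketch, assuming the repository lives under R:\Backups (an example path only):

Code:
# Show the integrity-stream setting of existing backup files (example path)
Get-ChildItem 'R:\Backups' -Recurse -File | Get-FileIntegrity

# Enable integrity streams on the repository folder itself, so newly created
# files inherit the setting (only if you want to manage it manually)
Set-FileIntegrity -FileName 'R:\Backups' -Enable $true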

Hope it helps

Mike
Mike Resseler
Veeam Software
 
Posts: 2795
Liked: 343 times
Joined: Fri Feb 08, 2013 3:08 pm
Location: Belgium, the land of the fries, the beer, the chocolate and the diamonds...
Full Name: Mike Resseler

Re: VBR 9.5 - REFS

by tdewin » Wed Nov 02, 2016 8:43 am

You will be able to check detection of ReFS by going to the advanced settings of the repository. If ReFS is detected, the "Align backup file data blocks" checkbox and text should be grayed out.
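As a complementary check from the repository server itself, the volume can also be inspected in PowerShell (the drive letter is just an example):

Code:
# Confirm the repository volume is ReFS and check its cluster size
Get-Volume -DriveLetter R |
    Select-Object DriveLetter, FileSystemType, AllocationUnitSize, Size, SizeRemaining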
tdewin
Veeam Software
 
Posts: 1001
Liked: 345 times
Joined: Fri Mar 02, 2012 1:40 pm
Full Name: Timothy Dewin

Re: VBR 9.5 - REFS

by PTide » Wed Nov 02, 2016 10:59 am

Hi Ashley,

Did I get it right - you want to share a ZFS LUN from a UNIX VM via iSCSI to a Windows VM running on the same host, format it as ReFS, share it to your Veeam VM (which is another VM) via CIFS, and use it as a repo?
PTide
Veeam Software
 
Posts: 2860
Liked: 231 times
Joined: Tue May 19, 2015 1:46 pm

Re: VBR 9.5 - REFS

by ashleyw » Wed Nov 02, 2016 10:55 pm 2 people like this post

Well, there are some problems with using Windows Storage Spaces as a backup target...
Windows does not allow striping across multiple storage pools within the same OS instance, so performance is limited to a single storage pool.
Also, the concept of global hot spares shared across multiple storage pools is missing from Windows Storage Spaces - the spares need to be dedicated to a storage pool.
ZFS allows one or more SSDs to be used as an L2ARC cache on a zpool (i.e. a read cache), and allows striping across multiple vdevs (each of which is in effect a RAID group).
So to overcome these limitations, what we have is the following:
- a single Supermicro chassis with 23 x 3 TB SATA disks + 1 x 256 GB SSD, with a dual-socket motherboard and about 128 GB RAM. All drives are on a single LSI 2008 controller; there are 2 rear-facing 128 GB SSDs.
- we install VMware onto the mirrored set of the 2 rear-facing SSDs.
- we run a virtual machine for OmniOS with the 23 x 3 TB drives and the SSD as raw device mappings.
- we create a zpool under OmniOS (an OpenSolaris fork now based on the illumos kernel; we use napp-it for an easy UI) in the following format:
BackupPool
-vdev: raidz1-0, 5 x 3TB SATA
-vdev: raidz1-1, 5 x 3TB SATA
-vdev: raidz1-2, 4 x 3TB SATA
-vdev: raidz1-3, 4 x 3TB SATA
-vdev: raidz1-4, 4 x 3TB SATA
-l2arc cache drive: 256GB SSD
- 1 global spare.

Each vdev is configured as a raidz1 (similar to RAID 5).
The performance of BackupPool is equivalent to 5 vdevs, as writes are striped over the vdevs.
We have sync disabled and a couple of other tweaks, which means writes are cached in RAM for maximum performance.
- We expose a virtual 60 TB block device via the COMSTAR iSCSI interface of OmniOS over a loopback connection to other VMs on the same VMware host.
- On the same VMware host we run a Windows Server 2016 VM with the iSCSI initiator, so that we can see the 60 TB block device inside the OS and then format it as a ReFS file system in preparation for Veeam 9.5 (see the PowerShell sketch further down).
- On the same VMware host we also run the Veeam controller VM and 4 separate Veeam proxies, so we get the parallelism we need for our workloads for maximum throughput.
- On the same host we also run other management layers like vCenter, the SQL Server DB for vCenter, etc.
- Because all the Veeam connectivity to the virtual iSCSI device etc. is via a loopback connection, we are not constrained by the throughput of the NICs.

It may seem like overkill, but this configuration seems to deliver the performance we need at a budget price point, so we are going to see how well it runs as we start to move our backup jobs to the ReFS file system. If Microsoft can address the shortfalls of storage pools (i.e. lack of global spares, lack of striping over pool sets, lack of equivalent L2ARC and RAM caches), then we'd move from OmniOS in an instant and just switch our storage layer to Windows - but as everything is a VM anyway, it is trivial to switch from one to the other.
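For anyone repeating this, the only Windows-side difference from a normal iSCSI repository is that the target portal is the OmniOS VM's internal address. A rough sketch of preparing the 60 TB zvol inside the Windows 2016 VM (the address and the size filter are placeholders, and depending on the SAN policy the new disk may arrive offline):

Code:
# Point the in-guest iSCSI initiator at the OmniOS COMSTAR target (example address)
New-IscsiTargetPortal -TargetPortalAddress "10.10.10.1"
$target = Get-IscsiTarget | Where-Object { -not $_.IsConnected }
Connect-IscsiTarget -NodeAddress $target.NodeAddress -IsPersistent $true

# The 60 TB zvol may come up offline and read-only depending on the SAN policy;
# clear both before partitioning and formatting it as ReFS (64 KB clusters, as
# discussed earlier in the thread)
$disk = Get-Disk | Where-Object { $_.BusType -eq 'iSCSI' -and $_.Size -gt 50TB }
Set-Disk -Number $disk.Number -IsOffline $false
Set-Disk -Number $disk.Number -IsReadOnly $false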

If there is any chance someone could PM me a link to 9.5 for beta testing (or better, the RTM), it would be great, as I'd be able to properly verify this configuration on 9.5.
ashleyw
Service Provider
 
Posts: 137
Liked: 16 times
Joined: Thu Oct 28, 2010 10:55 pm
Full Name: Ashley Watson

Re: VBR 9.5 - REFS

by sanjaykrk » Wed Nov 02, 2016 11:32 pm

ksl28 wrote: From what I understood from the article, Veeam does not recommend using a VM to act as an off-host proxy node.

ksl28 wrote: I can easily understand why Veeam doesn't want a VM to be an off-host proxy, but it's simply way too pricey,


I think, apart from the implementation and the dependency on Microsoft VSS (Volume Shadow Copy Service), implementing an off-host proxy as a VM would defeat one of its important purposes: keeping the Hyper-V host out of the data-transfer path.
sanjaykrk
Influencer
 
Posts: 11
Liked: 3 times
Joined: Sat Apr 09, 2016 12:12 am
Full Name: Sanjay kumar

Re: VBR 9.5 - REFS

by ashleyw » Thu Nov 03, 2016 2:04 am

One additional question around ReFS: we had to raise the RAM footprint on the 4 Veeam engines we are running to 16 GB RAM each due to weekly roll-ups causing RAM bloat and failures on the transformation jobs.
Will the ReFS target mean we'll be able to drop the RAM footprint, or will the RAM still be used in the same way as before (I'd expect RAM requirements to be much lower due to the ReFS pointers to the previous incrementals)?
ashleyw
Service Provider
 
Posts: 137
Liked: 16 times
Joined: Thu Oct 28, 2010 10:55 pm
Full Name: Ashley Watson

Re: VBR 9.5 - REFS

by Mike Resseler » Thu Nov 03, 2016 6:29 am

Ashley, from what we can see, fewer resources are indeed needed. We always talk about the lower I/O and the shorter time needed to perform the synthetic full thanks to the block-cloning API, but it will also require less RAM and CPU. Unfortunately, I can't give you a number (like 3x less or something) on those resources. It will be (certainly in the beginning) a matter of monitoring and baselining. The technology is rather new (but heavily tested by MSFT, so no worries there...), so we will get better insight and numbers (thanks to you all :-)) in the future.

It is a very good question though; as I said, we focused heavily on I/O and transformation speed, but this is certainly worth knowing also!

Thanks for the question

Mike
Mike Resseler
Veeam Software
 
Posts: 2795
Liked: 343 times
Joined: Fri Feb 08, 2013 3:08 pm
Location: Belgium, the land of the fries, the beer, the chocolate and the diamonds...
Full Name: Mike Resseler

Re: VBR 9.5 - REFS

by andyg » Thu Nov 03, 2016 11:27 am

Nice post, we have similar hardware specs but use ZFS on Linux, so I'm curious to see what you think of Win 2016 and ReFS as a replacement for ZFS. (Do you have a blog or any guides you follow to get your ZFS setup?)

ashleyw wrote: Well, there are some problems with using Windows Storage Spaces as a backup target...
Windows does not allow striping across multiple storage pools within the same OS instance, so performance is limited to a single storage pool.
Also, the concept of global hot spares shared across multiple storage pools is missing from Windows Storage Spaces - the spares need to be dedicated to a storage pool.
ZFS allows one or more SSDs to be used as an L2ARC cache on a zpool (i.e. a read cache), and allows striping across multiple vdevs (each of which is in effect a RAID group).
-= VMCE v9 certified =-
andyg
Service Provider
 
Posts: 54
Liked: 4 times
Joined: Wed Apr 23, 2014 9:51 am
Full Name: Andy Goldschmidt
