-
- Service Provider
- Posts: 453
- Liked: 30 times
- Joined: Dec 28, 2014 11:48 am
- Location: The Netherlands
- Contact:
NFS datastore or CIFS as a target repository
Hi there,
We are currently reviewing best practices for using CIFS or NFS as a backup repository. Our virtual backup environment runs on a high-performance storage array. All other test and dev virtual machines on this array are backed up to another storage device (Qumulo), which in turn takes another copy to a second storage device (also Qumulo).
We are considering a gateway server for a CIFS repository, but we are also rethinking this and weighing deploying a repository server on the high-performance tier, giving it a ReFS disk backed by a virtual disk of this repo server on the NFS datastore within VMware.
What would be the best fit for this scenario?
thanks
-
- Veeam Software
- Posts: 3626
- Liked: 608 times
- Joined: Aug 28, 2013 8:23 am
- Full Name: Petr Makarov
- Location: Prague, Czech Republic
- Contact:
Re: NFS datastore or CIFS as a target repository
Hello,
In general, a Windows-based repository with a ReFS disk would definitely be the better choice from a performance perspective; it also supports the Fast Clone feature, which drastically increases the speed of synthetic operations.
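For context, Fast Clone builds on ReFS block cloning, which Windows exposes through the documented FSCTL_DUPLICATE_EXTENTS_TO_FILE control code: a synthetic full references existing extents instead of copying their data, so it becomes a metadata-level operation. Below is a minimal illustrative sketch of that primitive (not Veeam's code; paths are hypothetical, both files must be on the same ReFS volume, and offset/length must be cluster-aligned):

```python
# Illustrative only: clone an extent between two files on one ReFS volume
# via FSCTL_DUPLICATE_EXTENTS_TO_FILE, the primitive behind Fast Clone.
import ctypes
import msvcrt
from ctypes import wintypes

kernel32 = ctypes.WinDLL("kernel32", use_last_error=True)
kernel32.DeviceIoControl.argtypes = [
    wintypes.HANDLE, wintypes.DWORD, wintypes.LPVOID, wintypes.DWORD,
    wintypes.LPVOID, wintypes.DWORD, ctypes.POINTER(wintypes.DWORD),
    wintypes.LPVOID,
]

FSCTL_DUPLICATE_EXTENTS_TO_FILE = 0x00098344  # documented file-system FSCTL

class DUPLICATE_EXTENTS_DATA(ctypes.Structure):
    _fields_ = [
        ("FileHandle", wintypes.HANDLE),
        ("SourceFileOffset", ctypes.c_longlong),
        ("TargetFileOffset", ctypes.c_longlong),
        ("ByteCount", ctypes.c_longlong),
    ]

def clone_extent(src_path, dst_path, offset, length):
    # Offset and length must be multiples of the cluster size, and the
    # destination file must already be at least offset + length bytes long.
    with open(src_path, "rb") as src, open(dst_path, "r+b") as dst:
        data = DUPLICATE_EXTENTS_DATA(
            msvcrt.get_osfhandle(src.fileno()), offset, offset, length)
        returned = wintypes.DWORD(0)
        ok = kernel32.DeviceIoControl(
            msvcrt.get_osfhandle(dst.fileno()),
            FSCTL_DUPLICATE_EXTENTS_TO_FILE,
            ctypes.byref(data), ctypes.sizeof(data),
            None, 0, ctypes.byref(returned), None)
        if not ok:
            raise ctypes.WinError(ctypes.get_last_error())
```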
Thanks!
-
- VP, Product Management
- Posts: 7081
- Liked: 1511 times
- Joined: May 04, 2011 8:36 am
- Full Name: Andreas Neufert
- Location: Germany
- Contact:
Re: NFS datastore or CIFS as a target repository
Special circumstances here. Qumulo is a scale-out NAS that scales over multiple nodes. If I understand it correctly, load balancing across nodes is done via DNS.
The Veeam repository (or extent) gets the node to work with from DNS and then processes everything over that node. This is the case for both NFS and CIFS.
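A quick way to sanity-check that DNS behavior from a given gateway is to resolve the cluster name repeatedly and see which node IPs come back first. A small sketch (the cluster FQDN is hypothetical, and OS-side resolver caching can mask the round-robin rotation):

```python
# Check which node IPs a round-robin DNS name hands out over repeated
# lookups. Hypothetical cluster name; OS resolver caching may hide rotation.
import socket
from collections import Counter

CLUSTER_FQDN = "qumulo.example.local"  # replace with your cluster's DNS name

seen = Counter()
for _ in range(20):
    # getaddrinfo returns all records; the first entry is what a client
    # would typically connect to.
    infos = socket.getaddrinfo(CLUSTER_FQDN, 445, proto=socket.IPPROTO_TCP)
    seen[infos[0][4][0]] += 1

for ip, hits in seen.most_common():
    print(f"{ip}: returned first {hits}/20 times")
```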
I don't know about their VMware implementation for a single datastore, and I'm not sure whether they spread the load across multiple nodes for a single VMDK disk. You would need to check with Qumulo or read through their VMware best practices.
v12 will also bring the option to select multiple gateway servers that can interact with NFS/CIFS repositories; this means multiple servers work with the share, so multiple nodes will be used.
Currently I would use the following:
- Create multiple folders on an NFS share.
- Create multiple servers and use each of them as the gateway server for a SOBR extent that points to one of the folders. This way each server will likely get a different Qumulo node to work with, and you spread the load.
- Create a SOBR out of the extents and use it with active full + incremental, or potentially synthetic full + incremental. By the nature of scale-out storage, synthetic operations cause massive east-west traffic within the cluster and increase latency. On the other hand, the cluster is really fast for active full + incremental processing (spread the active fulls across 7 days on different jobs; see the scheduling sketch after this list).
- NFS is used here because the protocol itself is more reliable (nothing to do with Qumulo).
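To illustrate the staggering idea from the third bullet (just a sketch with hypothetical job names, not Veeam tooling), spreading active fulls over the week means the cluster ingests roughly one seventh of the full load per day:

```python
# Sketch: stagger active fulls across the week (hypothetical job names)
# so that only a fraction of the jobs writes a full backup on any day.
from itertools import cycle

jobs = [f"backup-job-{i:02d}" for i in range(1, 15)]
weekdays = ["Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun"]

schedule = {day: [] for day in weekdays}
for job, day in zip(jobs, cycle(weekdays)):
    schedule[day].append(job)

for day in weekdays:
    print(f"{day}: active full for {', '.join(schedule[day])}")
```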
Compare this approach with using an NFS datastore under VMware on the same storage and placing a VM disk there. Monitor what happens on the Qumulo cluster during backup and where the traffic goes, to see whether the load is spread. That approach would have the advantage of using fast cloning to avoid synthetic operations and active full processing => incremental I/O on the target disk only.
-
- Veeam Software
- Posts: 688
- Liked: 150 times
- Joined: Jan 22, 2015 2:39 pm
- Full Name: Stefan Renner
- Location: Germany
- Contact:
Re: NFS datastore or CIFS as a target repository
If I read that correctly, your repo server would use a VMDK on an NFS datastore. Please keep in mind that if your whole vSphere environment goes down for whatever reason, access to the repository is lost as well.
As Petr wrote, using ReFS (or, even better, a Linux-based XFS repository with immutability) would be best from a performance perspective, but running it in the same environment as the source and depending on the same technology (VMware) may lead to issues over time.
Andreas already outlined the process for NFS. Nevertheless, please run a careful POC, as those large-scale NAS devices (built to store lots of NAS data, but not primarily built to be a backup target) sometimes don't meet the SLA requirements (nothing specific to Qumulo, just in general).
I also agree NFS would be the better choice from a protocol standpoint.
Feel free to share your findings and final decision, as others may have the same question.
Stefan Renner
Veeam PMA
-
- Enthusiast
- Posts: 47
- Liked: 6 times
- Joined: Feb 05, 2022 11:16 am
- Contact:
Re: NFS datastore or CIFS as a target repository
If Qumulo supports full SMB 3.0 CA (continuous availability), you might want to look into this as a "proper" SMB repository.
I am in a similar situation with Dell Isilon. NFS-based repositories throw errors at us from time to time (root cause not found yet, but my gut feeling points toward locking errors in connection with the multi-node scale-out).
SMB 3.0 CA delivers almost the same throughput (about 3% less, irrelevant in a multi-GB/s environment) and works without flaws. Even a cold node reset during a backup write is survived thanks to CA.
An NFS datastore on the shared/central production vSphere (where the to-be-backed-up VMs reside) used to store the disks of a ReFS/XFS repo VM? Technically it works, flawlessly. But what if your virtual environment breaks/gets hijacked/whatnot and the NFS datastores holding your repo VM are gone? Not my type of risk...
A standalone ESXi host holding only a repo VM (immutable XFS?), where the ESXi and repo VM management is in no way entangled with or reachable from the shared/central production vSphere (where the to-be-backed-up VMs reside), with its datastore used to store the disks of said repo VM? That would be a much less risky approach, but still technically more complex than a simple NFS or SMB (CA!) repository...
-
- Enthusiast
- Posts: 47
- Liked: 6 times
- Joined: Feb 05, 2022 11:16 am
- Contact:
Re: NFS datastore or CIFS as a target repository
And a P.S.: If v12 really delivers on multiple gateway servers interacting with one repository (in parallel, I hope), then having to deal with a SOBR (one extent per scale-out cluster node) just to get stream/thread concurrency across the nodes of a scale-out cluster would no longer be necessary... -> One ordinary NFS/SMB repository where every backup proxy also acts as a gateway server: that would be the KISS principle at its best, without compromising on maximum possible performance...