We have a 5-node S2D Hyper-Converged Cluster with 200TB of storage as CSV1...
On this cluster, only 4 VMs are running: Veeam Cloud Connect CG1, CG2, SPC and Veeam VBR itself.
We're on v10a.
The VBR VM has 2 VHDXs dedicated to the repository, drives D: and E:. Inside VBR, I added them as extents of a SOBR.
Even though the first extent is only using ~35TB and "Data Locality" is set on the SOBR, I see incoming jobs (~7.5TB each) getting spread over both D: and E:. This results in "partial Fast Clone" operations.
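For illustration, here is a toy model (Python, NOT Veeam's actual placement logic) of how a Data Locality policy is supposed to keep a backup chain on one extent, and why a restore point that spills onto the other extent can no longer be fast-cloned against the rest of the chain:

# Toy model of extent selection under a "Data Locality" placement policy.
# This is NOT Veeam's algorithm, just an illustration: ReFS fast clone only
# works against files on the same volume, so once a restore point of a chain
# lands on the other extent, that point cannot be fast-cloned.
from dataclasses import dataclass, field

@dataclass
class Extent:
    name: str
    capacity_tb: float
    used_tb: float = 0.0
    chains: set = field(default_factory=set)   # backup chains with data here

    def free_tb(self) -> float:
        return self.capacity_tb - self.used_tb

def place_restore_point(extents, chain_id, size_tb):
    # Data locality: prefer an extent that already holds this chain.
    local = [e for e in extents if chain_id in e.chains and e.free_tb() >= size_tb]
    candidates = local or [e for e in extents if e.free_tb() >= size_tb]
    if not candidates:
        raise RuntimeError("no extent has enough free space")
    target = max(candidates, key=lambda e: e.free_tb())
    target.used_tb += size_tb
    target.chains.add(chain_id)
    split = any(chain_id in e.chains and e is not target for e in extents)
    return target.name, "partial fast clone" if split else "fast clone ok"

extents = [Extent("D:", 64.0), Extent("E:", 64.0)]
for point in range(10):                        # ~7.5 TB per restore point
    print(point, *place_restore_point(extents, "job1", 7.5))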
Since the Data Locality setting doesn't seem to work properly, could I do the following to get rid of the limitations of 64TB VHDXs + SOBR?
Please note that all nodes have 2x 40Gb InfiniBand connectivity with SMB Multichannel/RDMA capability.
- I create a SOFS (Scale-Out File Server) role on the cluster.
- I share a folder on CSV1 with the Continuously Available flag.
- Inside the Cloud Connect VBR, I create an SMB repository pointing to that share (a rough sketch of these steps follows below).
This way, the VBR instance gets a repository of almost the entire 200TB at once. Also, using guest RDMA inside the VBR VM, connectivity to that repository would be roughly 2x 40Gb via SMB Multichannel/vRDMA (if I make the VBR server the gateway server for the repository).
I would still set up the repository as a SOBR: each node in the cluster now has a JBOD of 12 disks, but in the future we could connect a second JBOD to each, effectively doubling capacity. I would then simply create another CA share as a second extent, joined into one repository using SOBR.
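A rough sketch of the share setup I have in mind, driven from Python via standard Windows cmdlets; the SOFS name, CSV path and service account are placeholders, and whether Veeam supports the resulting repository is exactly my question below:

# Sketch of the proposed SOFS + continuously available share on CSV1,
# driven from Python via PowerShell. "SOFS01", the CSV path and the
# service account are placeholders for this example.
import subprocess

def ps(command: str) -> None:
    """Run one PowerShell command and raise if it fails."""
    subprocess.run(["powershell.exe", "-NoProfile", "-Command", command], check=True)

# 1. Add the Scale-Out File Server role to the existing failover cluster.
ps("Add-ClusterScaleOutFileServerRole -Name 'SOFS01'")

# 2. Create and share a folder on CSV1 with the Continuously Available flag.
ps(r"New-Item -ItemType Directory -Force -Path 'C:\ClusterStorage\CSV1\VeeamRepo'")
ps(r"New-SmbShare -Name 'VeeamRepo' -Path 'C:\ClusterStorage\CSV1\VeeamRepo' "
   r"-ContinuouslyAvailable $true -FullAccess 'DOMAIN\svc_veeam'")

# 3. In the Cloud Connect VBR console, add \\SOFS01\VeeamRepo as a shared
#    folder (SMB) repository and pick the VBR VM itself as the gateway server.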
The question, though, is: would it be supported by Veeam to do it this way?
Or, I read somewhere else here on the forum that you could install the backup proxy role onto each node and then make them highly available on the failover cluster. This way, I could present the CSV to VBR as DAS storage. However, I can't find any information on how to do this.
Also, I guess that since the proxy would then be on the node itself, SMB Multichannel and vRDMA would not be used, just one 40Gbit NIC over TCP.
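As a side note, whether SMB Multichannel and RDMA are actually in use can be checked from inside the VBR VM with standard Windows cmdlets (sketched here from Python, nothing Veeam-specific):

# Check from inside the VBR VM whether SMB Multichannel and RDMA are in use
# towards the repository share. Standard Windows cmdlets; run the checks
# while a backup or large copy to the share is active.
import subprocess

for check in (
    "Get-NetAdapterRdma",             # is RDMA enabled on the guest NICs?
    "Get-SmbClientNetworkInterface",  # which client NICs SMB considers usable
    "Get-SmbMultichannelConnection",  # several rows per server = multichannel active
):
    print("###", check)
    subprocess.run(["powershell.exe", "-NoProfile", "-Command", check], check=False)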
What would be the best recommended way, if any of this is possible at all to begin with?
Bart van de Beek (Service Provider)
Hannes Kasparick (Product Manager)
Re: Hyper-V S2D SOFS as Repository
Hello,
We had long discussions about SOFS + S2D with no final outcome, as far as I remember. I'm on the "keep it simple and go with single server installations" side. From the Microsoft side, SOFS is only supported for Hyper-V and SQL Server workloads, so a normal SMB share would be a workaround. Performance has also been a long-term issue with S2D (not to be confused with Storage Spaces) when the cache is not large enough.
I don't know what the rest of your infrastructure looks like, but I would put the VBR roles into VMs and use the S2D cluster as normal physical Windows machines. Again, I'm a fan of "keep it simple", and S2D is the opposite of "simple".
Actually, I'm not sure about your proxy question. As far as I can see, you are only receiving backups from your customers? For Hyper-V backups, yes, it is recommended to go with the on-host proxy. The on-host proxy is installed automatically when you add a Hyper-V server: https://helpcenter.veeam.com/docs/backu ... ml?ver=100
"Since the Data Locality setting doesn't seem to work properly, could I do the following [...]"
There are reg keys that can help in several situations. Did you already talk to support (if yes, which case number) to fix that issue?
Best regards,
Hannes
PS: I deleted the duplicate post in the service provider forum
Bart van de Beek (Service Provider)
Re: Hyper-V S2D SOFS as Repository
Yes, I did talk to support; it's not an issue and it's resolved already. Maybe I shouldn't have mentioned it. In all fairness, I think I'm asking a valid question here, so "keep it simple" is neither a solution nor an answer to it. A "normal" SMB share isn't even supported by Veeam, as you can clearly read when creating an SMB repository (it says "Only recommended for Continuously Available shares"). A normal SMB share is not CA; only a SOFS share is.
The infrastructure I'm referring to here is a 4-node cluster DEDICATED to Cloud Connect for offsite backups of our customers. And yes, the only VMs they run are the CC VMs: 2 gateways, one SPC and VBR itself. My question here relates to the VBR. Right now we're running these 4 VMs on a stand-alone host. This host was just an intermediate step in preparation for the cluster. Now that the cluster is ready, I cannot get the VBR VM onto it.
The VBR has 2 VHDXs, both 64TB in size, and it is this size that prevents them from moving over. Hyper-V Replica fails, and shared-nothing migration fails. Stopping the VM and moving the files manually works, except for the 2 VHDXs that have actually grown as large as 57TB and 17TB (the VBR repository data). These VHDXs make all of the aforementioned methods fail. If I copy the files from Explorer or with robocopy/xcopy, you name it, I get: "There's not enough space on the destination." DOS commands say: "The parameter is incorrect." This has something to do with the file sizes!
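For what it's worth, a quick sanity check (Python against standard Win32 APIs; paths are placeholders) of whether the destination volume can hold a single file of that size at all, since these errors look more like a file-system limit than a real lack of free space:

# Sanity check: can the destination volume hold a single file this large at all?
# Uses standard Win32 APIs via ctypes; the source path and destination drive
# below are placeholders. On NTFS the maximum file size is roughly 2^32 clusters,
# so a 4 KB cluster size caps single files near 16 TB.
import ctypes
import os
import shutil

def volume_info(root: str):
    """Return (file system name, cluster size in bytes) for a root like 'E:\\'."""
    fs_name = ctypes.create_unicode_buffer(64)
    ctypes.windll.kernel32.GetVolumeInformationW(
        root, None, 0, None, None, None, fs_name, len(fs_name))
    sectors, bps, free_cl, total_cl = (ctypes.c_ulong() for _ in range(4))
    ctypes.windll.kernel32.GetDiskFreeSpaceW(
        root, ctypes.byref(sectors), ctypes.byref(bps),
        ctypes.byref(free_cl), ctypes.byref(total_cl))
    return fs_name.value, sectors.value * bps.value

source_vhdx = r"D:\Repo\backups.vhdx"   # placeholder
dest_root = "E:\\"                      # placeholder

size_tb = os.path.getsize(source_vhdx) / 2**40
fs, cluster = volume_info(dest_root)
free_tb = shutil.disk_usage(dest_root).free / 2**40
max_file_tb = cluster * (2**32 - 1) / 2**40   # approx. NTFS implementation limit
print(f"file: {size_tb:.1f} TiB, destination: {fs}, cluster {cluster} B, "
      f"free {free_tb:.1f} TiB, approx. max file size {max_file_tb:.1f} TiB")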
So, since I will probably have to start over with all my customers' offsite backups, I'd prefer to avoid this in the future. Having the VBR VM access its repository directly from the CSV would solve both the data locality issue and the size limitations going forward, hence the question.
While I know a single-server setup is best in many cases, for Cloud Connect you would at least want redundancy, both for the VMs as well as for the data. S2D provides both.
Hannes Kasparick (Product Manager)
Re: Hyper-V S2D SOFS as Repository
Hello,
Agreed, we recommend CA shares. But recommended and supported are two different things. And continuous availability is also available on classic file server clusters (or NetApp, etc.); it's not a "SOFS-only" feature.
Sure, your question is valid, and I just answered it. And it's okay for me if you disagree with my opinion on this topic.
"What would be the best recommended way [...]"
Many people have had ideas similar to yours: SOFS as a repository with SMB 3 and ReFS block cloning. As I said, it has been an endless discussion, and I gave you my opinion (SQL and Hyper-V are the only applications Microsoft supports on SOFS, and a different design causes fewer headaches).
If standalone servers are not good enough, maybe Storage Replica (figure 3: "Server-to-server storage replication using Storage Replica") could be something to look at. I have no experience with that setup; I have just heard about it.
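For reference, a server-to-server Storage Replica partnership is created with standard Windows cmdlets. A minimal sketch via Python/PowerShell; all computer names, replication group names and data/log volumes are placeholders, and Storage Replica has its own prerequisites (log volume sizing, supported Windows Server editions, etc.):

# Minimal, hedged sketch of server-to-server Storage Replica. Placeholders only;
# check Microsoft's Storage Replica documentation before using anything like this.
import subprocess

cmd = (
    "New-SRPartnership "
    "-SourceComputerName 'REPO01' -SourceRGName 'rg01' "
    "-SourceVolumeName 'D:' -SourceLogVolumeName 'L:' "
    "-DestinationComputerName 'REPO02' -DestinationRGName 'rg02' "
    "-DestinationVolumeName 'D:' -DestinationLogVolumeName 'L:'"
)
subprocess.run(["powershell.exe", "-NoProfile", "-Command", cmd], check=True)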
Best regards,
Hannes