-
- Novice
- Posts: 9
- Liked: never
- Joined: May 30, 2014 10:45 pm
- Contact:
Direct SAN - Production SAN to --> Backup Repository SAN
Hi,
I'm pretty well read on Direct SAN configuration, and I believe I fully understand how the proxy obtains the data from production for backup, but I have some confusion in my setup that I need some clarification on, please. I cannot find any documentation on this; much of what exists is vague or described as "Veeam Magic".
I'll simplify the deployment while still getting across the same message.
ESXi 5.5 Host, Running two separate VMs
- One VM (1) Veeam Backup Server + Repository Roles
- 1 vNIC; the physical switch it connects to is 1GbE, used for internal .local domain join / networking - IP 172.27.1.100
Originally, this single vNIC was all I had. I then mounted LUNs in ESXi, carved out a VMFS volume, attached a virtual disk (.VMDK) to the backup server, and mounted it in the guest OS as a drive letter - call it D:\Backup - as the final backup resting place.
Question 1: I was told this was not ideal - that if I am mounting drive letters through ESXi, the Veeam backup server has to go through vCenter and a 1GbE connection to send data to D:\Backup. Can someone elaborate? I was told it's better to add another vNIC to this VM on the 10GbE network, similar to the proxy: set up an iSCSI initiator targeting 192.168.2.20 (the backup SAN), then mount those LUNs in the backup server OS and create a drive letter.
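For what it's worth, a minimal sketch of that in-guest initiator approach on the repository VM (PowerShell on Server 2012 or later; 192.168.2.20 is the backup SAN portal from this thread, while the target selection, drive letter, and volume label are assumptions to adapt):
Code:
# Register the backup SAN's iSCSI portal and log in persistently
# (assumes the Microsoft iSCSI Initiator service is running and the SAN
# presents a LUN to this initiator)
New-IscsiTargetPortal -TargetPortalAddress 192.168.2.20
Get-IscsiTarget | Connect-IscsiTarget -IsPersistent $true

# Initialize the new LUN and surface it in the guest as D: for the repository
Get-Disk | Where-Object PartitionStyle -Eq 'RAW' |
    Initialize-Disk -PartitionStyle GPT -PassThru |
    New-Partition -DriveLetter D -UseMaximumSize |
    Format-Volume -FileSystem NTFS -AllocationUnitSize 65536 -NewFileSystemLabel 'Backup'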
- One VM (1) Veeam Proxy
- 1 vNIC; the physical switch it connects to is 1GbE, used for internal .local domain join / networking - IP 172.27.1.200
- 1 vNIC; the physical switch it connects to is 10GbE. I have an iSCSI initiator configured to point to the production SAN (192.168.1.10), and the VMFS volumes are visible in diskmgmt.msc - "offline", of course.
My understanding is that this is a "Direct SAN" connection: the proxy can now traverse the 10GbE fabric and pull VM data for backup.
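One caution worth spelling out at this step (a sketch, not anything from the Veeam docs): when production VMFS LUNs are visible inside a Windows proxy, make sure Windows can never auto-online them, because a disk signature write would corrupt VMFS. On Server 2012+ the SAN policy covers this:
Code:
# Keep disks arriving on a shared bus (iSCSI/FC) offline by default, so the
# proxy can read the VMFS LUNs for Direct SAN without ever writing to them
Set-StorageSetting -NewDiskPolicy OfflineShared

# The equivalent on older systems, via diskpart:
#   DISKPART> san policy=OfflineShared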
One (1) Production SAN (Where VMs that are being backed up are located) Target iSCSI IP: 192.168.1.10
One (1) Storage SAN (Where VM backups will ultimately end up) Target iSCSI IP: 192.168.2.20
Question 2: I understand how Veeam pulls VM data directly from the 10GbE storage fabric (via the proxy); I do not understand how to keep this data on the 10GbE storage fabric when the proxy sends it to its final resting place, a separate backup SAN.
If I set it up this way, would a backup work as follows?
Backup starts --> proxy reaches out to the production SAN (192.168.1.10) via its direct 10GbE connection --> proxy sends data to the backup repository (192.168.2.20) via its direct 10GbE connection. I don't get how the proxy sends backup data to the backup repository when they are two separate servers.
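To put rough numbers on why the 10GbE path matters end to end, a back-of-envelope calculation (PowerShell; the 2 TB job size and the ~70% effective wire efficiency are assumptions):
Code:
# Rough time for a 2 TB full backup at an assumed 70% of line rate
$bytes = 2TB
'{0:N1} h over 1GbE'  -f ($bytes / (0.70 * 125MB) / 3600)    # ~6.7 hours
'{0:N1} h over 10GbE' -f ($bytes / (0.70 * 1250MB) / 3600)   # ~0.7 hours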
Thanks so much for the help!
-
- VeeaMVP
- Posts: 6166
- Liked: 1971 times
- Joined: Jul 26, 2009 3:39 pm
- Full Name: Luca Dell'Oca
- Location: Varese, Italy
- Contact:
Re: Direct SAN - Production SAN to --> Backup Repository SAN
1: what they told you makes no sense. The disk is attached to the VM via the ESXi storage stack, not via network connections; the network configuration of a VM has nothing to do with its storage configuration. Mounting the external iSCSI storage as in-guest storage can give you two fewer layers of file system, since you can format the external volume directly with NTFS instead of using VMFS + VMDK + NTFS, but the overhead is not caused by the network connection. Another solution could be to use an RDM and mount the iSCSI volume with it, so you do not have to use an additional NIC in the VM to mount the backup storage.
2: if you mount the backup SAN directly inside the virtualized Veeam server and configure it as the D: drive and as a repository, then the same server is also acting as the repository; there is no separate server.
Luca.
Luca Dell'Oca
Principal EMEA Cloud Architect @ Veeam Software
@dellock6
https://www.virtualtothecore.com/
vExpert 2011 -> 2022
Veeam VMCE #1
-
- Novice
- Posts: 9
- Liked: never
- Joined: May 30, 2014 10:45 pm
- Contact:
Re: Direct SAN - Production SAN to --> Backup Repository SAN
Thank you very much for the response; this is all an attempt to create the most efficient design.
What I don't understand is how the proxy ultimately gets the data from the production SAN holding the VMs to the backup repository connected to a separate backup SAN. By what means is this data transferred between these two systems? I was trying to ensure it was all done over the fastest mediums.
dellock6 wrote: 1: what they told you makes no sense. The disk is attached to the VM via the ESXi storage stack, not via network connections; the network configuration of a VM has nothing to do with its storage configuration.
Believe it or not, it was Veeam Support who told me not to use drive letters mounted from ESXi on the guest repository VM... that's why I wanted to come on here for a second opinion.
dellock6 wrote: Mounting the external iSCSI storage as in-guest storage can give you two fewer layers of file system, since you can format the external volume directly with NTFS instead of using VMFS + VMDK + NTFS, but the overhead is not caused by the network connection. Another solution could be to use an RDM and mount the iSCSI volume with it, so you do not have to use an additional NIC in the VM to mount the backup storage.
Sounds to me like I'd be fine just using the ESXi storage stack, unless you think an RDM or mounting external iSCSI storage would be more efficient?
dellock6 wrote: 2: if you mount the backup SAN directly inside the virtualized Veeam server and configure it as the D: drive and as a repository, then the same server is also acting as the repository; there is no separate server.
I'm confused on this. I will have two servers: 1) a Veeam proxy with the production SAN mounted directly, which pulls the VM data, and 2) a Veeam backup server + repository server, which above you said was fine just using the ESXi storage stack. But here you're talking about mounting my backup SAN directly inside the repository server, which is what was originally suggested to me.
-
- VeeaMVP
- Posts: 6166
- Liked: 1971 times
- Joined: Jul 26, 2009 3:39 pm
- Full Name: Luca Dell'Oca
- Location: Varese, Italy
- Contact:
Re: Direct SAN - Production SAN to --> Backup Repository SAN
Wait, there is some confusion here, I never said to add a second NIC...
Let's try step by step: you are now saying you are going to have two Veeam servers, a dedicated proxy and the Veeam server also acting as repository. First, are they both going to be virtual machines?
Luca Dell'Oca
Principal EMEA Cloud Architect @ Veeam Software
@dellock6
https://www.virtualtothecore.com/
vExpert 2011 -> 2022
Veeam VMCE #1
-
- Novice
- Posts: 9
- Liked: never
- Joined: May 30, 2014 10:45 pm
- Contact:
Re: Direct SAN - Production SAN to --> Backup Repository SAN
dellock6 wrote: Wait, there is some confusion here, I never said to add a second NIC... Let's try step by step: you are now saying you are going to have two Veeam servers, a dedicated proxy and the Veeam server also acting as repository. First, are they both going to be virtual machines?
Hey dellock, yes, that's correct: two Veeam servers, both VMs running on ESXi. If I want to connect to my 10GbE switches, that requires a second vNIC. For internet connectivity and connectivity to my internal .local domain, each VM by default has one vNIC on a 1GbE physical network.
I'm trying to figure out how to keep backup data on 10GbE from start to finish between the proxy and repository VMs.
Would it be....
Server 1: Proxy
vNIC0: 1GbE 172.x.x.x
vNIC1: 10GbE, 192.168.1.x - iSCSI initiator, offline VMFS volumes from the production SAN
Server 2: Repository/Backup Server
vNIC0: 1GbE 172.x.x.x
vNIC1: 10GbE, 192.168.2.x (different subnet than the production SAN) - iSCSI initiator, online volumes from the backup SAN, mounted as drive letters D:\Backup, E:\Backup, F:\Backup, etc.
Then the confusion sets in: if this is right, by what means is the proxy sending this data from the production SAN to the backup SAN?
-
- VeeaMVP
- Posts: 6166
- Liked: 1971 times
- Joined: Jul 26, 2009 3:39 pm
- Full Name: Luca Dell'Oca
- Location: Varese, Italy
- Contact:
Re: Direct SAN - Production SAN to --> Backup Repository SAN
Honestly, the first level of confusion is trying to use Direct SAN with a virtual proxy, since the data uses the same network connection multiple times, going from the storage to the proxy and back out again to the repository if the second VM is not on the same host.
Anyway, the best way to keep all the traffic on 10GbE is to have a VM network on that link and give every Veeam component a network connection only on that link. If you need to connect additional systems to the Veeam components, you will have to use routing rules between the "veeam network" and the management network (see the sketch below).
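For example (purely illustrative: the 192.168.10.0/24 "veeam network" subnet, the 172.27.1.254 router address, and the interface name are all assumptions), a host on the management network could reach those components with a persistent static route:
Code:
# On a management-network host: reach the 10GbE-only "veeam network"
# through whatever router joins the two segments
New-NetRoute -DestinationPrefix '192.168.10.0/24' -NextHop '172.27.1.254' -InterfaceAlias 'Ethernet0'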
I don't get your last line; the data flow is:
retrieve data from the production SAN to the proxy -> send compressed/deduplicated data to the repository -> save the data over the network to the backup iSCSI SAN.
So there is indeed some traffic going around; I would not expect anything else.
Luca Dell'Oca
Principal EMEA Cloud Architect @ Veeam Software
@dellock6
https://www.virtualtothecore.com/
vExpert 2011 -> 2022
Veeam VMCE #1