foggy wrote: Regarding the vPower service, it does not need to run on the repository server itself. You simply select any Windows server (typically the Veeam server or a Windows proxy server) to act as the vPower NFS frontend for the Linux repository.

I've built this and it works quite well. I'm just wondering how the data flows through the various components. I've set up the following:
- Veeam B&R 6.5
- A VMware-virtualized Veeam master server, acting only as the master, with the Veeam vPower NFS service enabled (2 vCPU, 2 GB vRAM).
- A number of virtualized, Windows-based proxies using HotAdd transport.
- A powerful physical ZFS-based NFS appliance exporting a multi-TB NFS share to the Linux repository VM. ZFS compression and dedup are enabled on this exported volume.
- A Linux VM with 1 vCPU and 4 GB vRAM, acting as the central repository for all proxies. It has the aforementioned NFS export mounted with the exact same mount options as would be used with a DataDomain machine.
- Gigabit Ethernet connectivity
- VMs that are backed up are not compressed or deduplicated by Veeam (I want to benefit from dedup across all VMs, not just the VMs inside a single job).
- Backup speeds for large, running VMs with thick-provisioned VMDKs are around 65 MB/s.
- Restore speeds using "Entire VM" are about the same.
- Storage vMotion speeds of a running VM that is "instant restored" and then moved to a production SAN is about 40 to 45 MB/s.
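For reference, the repository VM's NFS mount of the ZFS appliance export might look like the sketch below. The hostname (`zfs-nas`), export path, and mount point are placeholders, and the option values shown are just common dedup-appliance guidance (NFSv3 over TCP, hard mounts, large transfer sizes) rather than the exact DataDomain-recommended set, so adjust them to your appliance vendor's documentation:

```shell
# Illustrative only: zfs-nas, /export/veeam and /mnt/veeamrepo are placeholders.
# Option values follow common dedup-appliance guidance; verify against your
# appliance's recommended NFS client settings before using in production.
mount -t nfs -o rw,hard,intr,tcp,nfsvers=3,rsize=1048576,wsize=1048576 \
    zfs-nas:/export/veeam /mnt/veeamrepo
```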
Performance is not bad. A VM that is "Instant Restored" performs acceptably: much slower than normal, but not sluggish.
It is clear to me that the 4 GB of memory in the Linux repository machine holds a nice cache, acting as a performance boost.
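That caching effect is easy to observe on the repository VM itself, since any otherwise-free RAM ends up as Linux page cache. A quick check (Linux-only, reading the standard /proc/meminfo):

```shell
# Report how much of the repository VM's RAM is currently page cache.
# The "Cached:" line in /proc/meminfo is the page cache size in kB.
awk '/^Cached:/ {printf "page cache: %d MB\n", $2/1024}' /proc/meminfo
```

If most of the 4 GB shows up here during restores, the cache is doing its job.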
The Veeam master VM that runs the Veeam NFS service is the one the ESXi servers mount, and it is where the "instant restored" VM actually runs from, from ESXi's point of view. But...
...the question I have is: how does the traffic flow, exactly, end-to-end? Through which machines, and using which protocols?
It's clear that the data goes from the physical NFS appliance to the Linux repository ("Repo") VM, which mounted the NFS export. But then? How do the disk blocks get to the Veeam NFS service machine, and over which protocol?
Using netstat on the Linux repository VM, I see 8 TCP sessions going from the Repo VM to the Veeam NFS VM, all SSH.
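For anyone wanting to reproduce that observation, the session count can be pulled out of the netstat output with a small filter. The addresses below are placeholders (10.0.0.5 standing in for the Veeam NFS VM); in practice you would pipe the live output of `netstat -tn` through the same awk filter instead of the sample text:

```shell
# Placeholder sample of "netstat -tn" output; field 5 is the remote endpoint,
# field 6 the state. Real usage: netstat -tn | awk '...'
sample='tcp 0 0 10.0.0.10:51234 10.0.0.5:22 ESTABLISHED
tcp 0 0 10.0.0.10:51235 10.0.0.5:22 ESTABLISHED
tcp 0 0 10.0.0.10:51236 10.0.0.5:443 ESTABLISHED'

# Count established sessions whose remote port is 22 (SSH).
echo "$sample" | awk '$6 == "ESTABLISHED" && $5 ~ /:22$/ {n++} END {print n+0}'
# → 2
```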
In short, how does it work exactly? (please be technical).
Can the performance be enhanced by increasing the number of those SSH connections? Can I do anything else? (10 Gbit Ethernet is not available to us.) The individual components can each saturate a Gigabit pipe easily.