I was previously backing up to a Drobo NAS, and while this worked, I felt it was too slow, especially for large restore jobs. So I decided to try other options to speed up my backups, and I have succeeded; however, I wanted to get opinions on whether my implementation is the best approach.
Currently, using the RDM connection, I am averaging backup speeds of 70-150 MB/s, with occasional bursts to several hundred MB/s (at times > 250 MB/s).
I have a single Veeam B&R server running as a VM in vSphere 6. All of my VMs run on a Dell VRTX server, which has 24 slots available for 2.5" storage disks. I use four of these slots for the main VM storage RAID. I added six more disks in available slots and configured them as a RAID 5 LUN that will be used only as a Veeam backup repository. So in my Veeam VM I sought out the best/fastest way to connect to this storage on my server.
What I ended up doing is adding the RAID 5 LUN "directly" to the Veeam VM as an RDM drive (Raw Device Mapping) using vSphere. This adds a new, separate SCSI controller to the VM and uses it to connect directly to the Dell VRTX RAID LUN. With this configuration I'm consistently getting at least ~100 MB/s throughput for backups, which is a *huge* improvement for me. However, I'm wondering if this RDM setup is the best approach. As far as I know, the VM sees the backup disk as "Local Storage," since I'm not using iSCSI or any other protocol to connect. However, looking at the table of Transport Modes here: https://helpcenter.veeam.com/docs/backu ... tml?ver=95
It seems to suggest that in order to use Direct Storage Access mode (the fastest), I should be using NFS or perhaps iSCSI, since Veeam is hosted in a VM. Right now I'm inclined to just leave my current config as is, because throughput seems really fast, but I do want to stick with best practices. Furthermore, I'm unsure how to tell which transport mode my proxy is using: I have it set to choose automatically, and I can't figure out where in the logs it reports which mode it actually picks for my RDM drive.
Does anyone have thoughts on optimizing a similar setup? What kind of throughput are you getting using NFS or iSCSI? (and yes I'm aware this depends heavily on hardware).