OH! Before I forget, direct to tape - AT SPEED - would be a necessity as well, if that's possible please. I don't know if that expectation is realistic, because we found Backup Exec's direct-to-tape speed was abysmal even though B2D2T ran great (remote storage --> tape vs. tape server --> tape). I'm finding similar results w/ Veeam as well: if we run a file to tape job that copies files from storage in the same rack, the speed is slow (~50-something MBps), but if a file to tape or backup to tape job pulls the source data from the tape server's own disks, it's nice and fast (LTO-6, ~140-155MBps).
When you have a 10Gbps storage fabric, a 4Gbps LACP connection to your tape backup server (even with a single data stream going over one of the 1Gbps NICs), and LTO-6 (~1.2Gbps) on 6Gbps SAS, end user expectations on throughput are high. In theory Veeam should be able to pipe data from the storage fabric all the way through to that tape drive at full speed. For whatever reason Backup Exec couldn't do it - and support did confirm that - and while Veeam v8 does do better (~50MBps vs. BE's ~40MBps pulling from remote storage direct to tape), it should be able to do ~140-150MBps "pass-through", since it runs at full speed when the tape server is piping data from its own disks to tape. Of course we expect some overhead somewhere, but ~50MBps vs. LTO-6's 160MBps becomes unusable when there's no local LZ big enough to temporarily hold the data, we need a tape cut fast, etc.
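To put some rough numbers on why ~50MBps feels so far off, here's a quick back-of-envelope sketch using the figures above (the helper name and print format are my own, not anything from Veeam); note that even one 1Gbps NIC tops out around 125MBps on its own, so a single-stream LACP path can't quite feed LTO-6's 160MBps native rate, but ~50MBps still leaves most of even one NIC idle:

```python
# Back-of-envelope throughput math for the setup described above.
# All rates are from the post; the helper function is illustrative.

def gbps_to_MBps(gbps):
    """Convert a link speed in gigabits/s to megabytes/s (decimal units)."""
    return gbps * 1000 / 8

single_nic = gbps_to_MBps(1)    # one 1Gbps NIC in the 4Gbps LACP bundle
fabric = gbps_to_MBps(10)       # 10Gbps storage fabric
lto6_native = 160               # MB/s, LTO-6 native (uncompressed) spec

print(f"single 1Gbps NIC ceiling: {single_nic:.0f} MB/s")
print(f"10Gbps fabric ceiling:    {fabric:.0f} MB/s")
print(f"LTO-6 native drive rate:  {lto6_native} MB/s")

observed = 50   # MB/s, remote storage -> tape through Veeam v8
print(f"observed remote-to-tape rate uses {observed / single_nic:.0%} of one NIC")
```

So the ~50MBps result isn't a wire-speed limit on any link in the path; the pipeline itself is leaving roughly 60% of even a single 1Gbps NIC unused.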
One other thing I haven't seen mentioned: if the Veeam team isn't already aware of shoe-shining, back-hitching, etc., please do read up on them. Getting maximum throughput to your tape drive is vitally important for a number of reasons!
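For anyone unfamiliar with the terms: a tape drive has to keep the tape moving, so when the incoming data rate drops below the drive's minimum streaming rate, the drive overshoots, stops, rewinds, and repositions (the "shoe-shine" / back-hitch cycle), which craters effective throughput and wears the drive and media. Modern LTO drives mitigate this by speed-matching down to a floor. A tiny sketch of the idea follows; the 54 MB/s floor is my assumed illustrative value for LTO-6, not a spec I'm certain of, so check your drive's documentation for the real minimum streaming rate:

```python
# Rough illustration of tape drive behavior at different sustained
# source rates. The speed-matching floor is an ASSUMED value for
# illustration only; consult the drive spec for the real figure.

LTO6_MAX = 160   # MB/s, LTO-6 native rate (from the post)
LTO6_MIN = 54    # MB/s, assumed speed-matching floor (illustrative)

def drive_behavior(source_MBps):
    """Classify what the drive does at a given sustained source rate."""
    if source_MBps >= LTO6_MAX:
        return "streams at full speed"
    if source_MBps >= LTO6_MIN:
        return "speed-matches (keeps streaming, below full speed)"
    return "shoe-shines (stop/rewind/reposition cycles)"

for rate in (160, 150, 50, 40):
    print(f"{rate:>3} MB/s source -> {drive_behavior(rate)}")
```

The relevant point: if the ~50MBps we're seeing sits at or below the drive's speed-matching floor, the job isn't just slow, it's actively making the drive back-hitch.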