Thanks to all of you for sharing your performance reports and support cases - they helped us to identify the bottlenecks in Veeam B&R code. As a result, now we have a new optimized engine in v9.5!
File to Tape performance has improved by up to 50x when processing large numbers of very small files, making it up to 50% faster than leading legacy tape backup solutions on the same workload. The new engine was tested with 20 million files per job, enabling users to efficiently protect unstructured data to tape or VTL targets.
Other optimizations. Additional under-the-hood enhancements improve the stability and performance of GFS archival, parallel processing, tape encryption, file-level recovery and catalog operations.
That said, all tape job types (file to tape, backup to tape and GFS to tape) were optimized to work as fast as possible. Feel free to test the tape performance enhancements, and don't forget to share your results.
What is the estimated time for building the file tree for the 20 million files used in the test? I've been sitting here waiting for half an hour for about 200k files.
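For rough context, here is a back-of-the-envelope extrapolation from the numbers in this post. It assumes the tree build scales linearly with file count, which is an assumption on my part; real scaling may be worse due to database lookups:

```python
# Hypothetical extrapolation, assuming the file-tree build scales
# linearly with the number of files (real behavior may differ).

observed_files = 200_000      # files processed so far
observed_seconds = 30 * 60    # half an hour of waiting

rate = observed_files / observed_seconds        # ~111 files/second
target_files = 20_000_000                       # the vendor's test size

estimated_hours = target_files / rate / 3600    # ~50 hours at this rate
print(f"{rate:.0f} files/s -> ~{estimated_hours:.0f} h for 20M files")
```

At the observed rate, a 20-million-file tree would take roughly two days to build, which is why the question matters.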
Will try to update this thread with the results tomorrow. Meanwhile, please clarify: are you using the bundled SQL Express for the Veeam B&R database, and what are the tape proxy server specifications (RAM/CPU)? Thanks.
Bundled SQL Express, yes.
Tape server has 64GB RAM and 1 Xeon E5-2620.
The tree build for an incremental backup of 1.4 million files takes about 2.5 hours. The 250 changed files are then backed up in 40 seconds.
In the initial full backup, the build only took 30 minutes.
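Comparing the two runs reported above makes the gap concrete. The per-file rates below are simple division of the posted figures, not separate measurements:

```python
# Per-file tree-build rates derived from the figures reported above.
files = 1_400_000

full_seconds = 30 * 60     # initial full backup: 30 minutes
incr_seconds = 2.5 * 3600  # incremental run: ~2.5 hours

full_rate = files / full_seconds  # ~778 files/s
incr_rate = files / incr_seconds  # ~156 files/s

slowdown = full_rate / incr_rate  # incremental tree build is ~5x slower
print(f"full: {full_rate:.0f}/s, incremental: {incr_rate:.0f}/s, "
      f"slowdown: {slowdown:.0f}x")
```

So the incremental tree build runs about five times slower than the initial full, even though far fewer files actually change.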
Anything new on performance comparisons? I would like to know if I can improve something, and whether this is a viable way of keeping one year of daily incrementals from a file server on tape for faster restores.
The general recommendation is to use SQL Server Standard edition, because the default local Express database can be a bottleneck for file to tape jobs due to its 1 GB per-instance memory limit. Another piece of advice is to use locally attached storage.
That's the complete backup time, I assume? I have 2.75h alone just for building the file tree with 1.4 million files on the daily incrementals; the initial full build was much faster. Target and source storage are all 8Gb FC, but I don't see any bottleneck. It's just slow. There is some CPU load on the file server while the backup agent collects data, but not that much.