I'm Mike with AWS support and I'm here to help.
From the previous communications I have read through, let me clear up some of the details and then add a new recommendation to help your VTL gateway perform well.
First, I've attached a Veeam/AWS VTL deployment white paper. This is the most recent version of the document and should help you change your 'unknown medium changer' over to the proper 'IBM ULT3580-TD5 SCSI Seq. Device' with an up-to-date driver. The details start on page 23 of the document.
Fixing the driver, however, will probably have little effect on performance. You mentioned creating a bigger upload buffer, which can be helpful, but you need to understand the relationship between the cache and the upload buffer before making changes. The cache disk is just as important to good performance as the upload buffer. On page 5 of the white paper there is a formula for determining the proper upload buffer size. What the document doesn't state as clearly is that the cache should be 10% bigger than the upload buffer. The reason is that all data traffic goes through the cache, both upload and download. The upload buffer is sized to "catch" the initial rush of data that your clients send, but the data is then copied to the cache for upload. Because the gateway is rapidly copying data from the buffer to the cache, we recommend keeping these two resources on separate disk spindles to maximize performance.
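To make the 10% relationship concrete, here is a minimal sizing sketch. The helper name and the 150 GiB example input are illustrative, not from the white paper; the upload buffer size itself should come from the formula on page 5.

```python
def recommended_cache_gib(upload_buffer_gib, headroom=0.10):
    """Return a cache size ~10% larger than the upload buffer, per the
    guidance above. Rounded to one decimal for readability."""
    return round(upload_buffer_gib * (1 + headroom), 1)

# Example: a 150 GiB upload buffer calls for a 165 GiB cache.
print(recommended_cache_gib(150))  # 165.0
```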
You can also get more performance by adding vCPUs to the gateway. You currently have the minimum (4), but 16 vCPUs are usually needed to maximize upload throughput. You can see this and other performance recommendations here: https://docs.aws.amazon.com/storagegate ... ommon.html.
Lastly, the way you configure your jobs in Veeam can have a major impact on upload performance. Each virtual tape is limited to 30 MB/s of upload throughput. To reach the gateway maximum of 120 MB/s, you need 4 virtual tapes running simultaneously. These can be 4 jobs each pointing to a single tape, or one job with multiple threads hitting multiple tapes. We have found that 3 tapes is the sweet spot for good performance, but feel free to experiment to find what works best in your environment.
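The arithmetic behind those numbers is simple enough to sketch. This assumes the per-tape and gateway limits quoted above (30 MB/s and 120 MB/s); the function name is mine, not an AWS or Veeam API.

```python
PER_TAPE_MBPS = 30      # per-virtual-tape upload limit noted above
GATEWAY_CAP_MBPS = 120  # gateway-wide maximum noted above

def aggregate_upload_mbps(active_tapes):
    """Aggregate upload throughput: scales per tape, capped by the gateway."""
    return min(active_tapes * PER_TAPE_MBPS, GATEWAY_CAP_MBPS)

for tapes in (1, 3, 4, 6):
    print(tapes, aggregate_upload_mbps(tapes))
```

As the loop shows, a fifth or sixth simultaneous tape buys nothing once the gateway cap is reached, which is why 3-4 tapes is the range worth experimenting in.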
Thank you for contacting us again.
Looking into this further, I see that under ideal conditions (robust hardware, ample bandwidth, good network flow, and large sequential reads of the data being downloaded) you could typically get up to 20 MB/s. If the reads are random and in small amounts, you could see degraded performance. Robust hardware would include the following:
1) You could try adding high-performance disks such as solid-state drives (SSDs) and an NVMe controller to help you achieve the maximum download rate. NVMe helps with latency and IOPS, and improved disk performance generally results in better throughput and more input/output operations per second.
2) As mentioned by the previous engineer, you could try adding more vCPUs to the gateway; 16 vCPUs are usually needed to maximize upload throughput.
3) The default block size for the tape drive is 64 KB. You could also try optimizing I/O by increasing the block size to 128 KB, 256 KB, or 512 KB, up to the block-size limit of your backup software. Increasing this value can improve I/O performance.
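The benefit of a larger block size comes from issuing fewer I/O operations for the same amount of data. A quick sketch of that trade-off (the 100 GiB transfer size is just an example):

```python
def io_operations(total_bytes, block_bytes):
    """Number of sequential I/O operations needed to move total_bytes,
    using ceiling division so a partial final block still counts."""
    return -(-total_bytes // block_bytes)

GIB = 1024 ** 3
# Moving 100 GiB at each candidate block size:
for kib in (64, 128, 256, 512):
    print(f"{kib} KB blocks -> {io_operations(100 * GIB, kib * 1024):,} I/Os")
```

Going from 64 KB to 512 KB blocks cuts the operation count by a factor of 8, which is where the per-operation overhead savings come from.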
4) Lastly, you could also try spreading the data across multiple tapes and dedicating a storage gateway to each tape. When you restore, you can then read simultaneously from all the tapes the data has been uploaded to, which improves performance because the data streams in from several tapes at once.
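The restore fan-out in point 4 can be sketched as a simple thread pool, one worker per tape. Here `read_tape` is a hypothetical placeholder for whatever your backup software does to pull one tape through its dedicated gateway; the tape names are made up.

```python
from concurrent.futures import ThreadPoolExecutor

def read_tape(tape_id):
    """Stand-in for the per-gateway restore of a single tape."""
    return f"restored {tape_id}"

tapes = ["TAPE01", "TAPE02", "TAPE03"]

# One worker per tape, so each gateway's download stream runs concurrently.
with ThreadPoolExecutor(max_workers=len(tapes)) as pool:
    results = list(pool.map(read_tape, tapes))

print(results)
```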
Should you have more questions, please feel free to reach back to us.