Discussions related to exporting backups to tape and backing up directly to tape.
Geniek.73
Influencer
Posts: 15
Liked: 4 times
Joined: Sep 16, 2016 6:43 am
Full Name: Dariusz Tyka

AWS VTL and Veeam B&R 9.5 U3 speed problem

Post by Geniek.73 »

Hi all,

I'm currently evaluating AWS VTL with Veeam 9.5. Everything was configured according to Veeam best practices. The VTL is detected as an 'unknown medium changer' in Windows Device Manager.
Backup to Tape jobs were configured and are working fine. The problem is that the maximum upload speed I see is around 30 MB/s. We have a 1 Gbps Direct Connect link to AWS, and when I tested the maximum upload speed to an AWS EC2 instance it was around 110-120 MB/s. According to the AWS documentation (https://docs.aws.amazon.com/storagegate ... ape-limits), the maximum upload speed to a tape gateway is 120 MB/s. No upload/download limits are configured on the tape gateway.
What upload speeds can I expect? What upload speeds do you see in your environments?

Dariusz
Dima P.
Product Manager
Posts: 14415
Liked: 1576 times
Joined: Feb 04, 2013 2:07 pm
Full Name: Dmitry Popov
Location: Prague

Re: AWS VTL and Veeam B&R 9.5 U3 speed problem

Post by Dima P. »

Hello Dariusz,

Since virtual tapes are cached locally and then uploaded from the gateway to AWS, this looks like a connectivity issue. I suggest raising a case with the Amazon folks to identify the root cause.
Geniek.73
Influencer
Posts: 15
Liked: 4 times
Joined: Sep 16, 2016 6:43 am
Full Name: Dariusz Tyka

Re: AWS VTL and Veeam B&R 9.5 U3 speed problem

Post by Geniek.73 »

Hi,

I managed to get 50 MB/s upload speed per tape to the AWS gateway by assigning the local cache/upload buffer disks to a mix of HDD/SSD drives.

Also see the reply below that I got from AWS support:
Hello,

I'm Mike with AWS support and I'm here to help.

From the previous communications that I have read through, please let me clear up some of the details and then add a new recommendation to get your VTL gateway to perform well.

First, I've attached a VEEAM/AWS VTL deployment white paper. This is the most recent version of the document and should help you get your 'unknown medium changer' changed over to the proper 'IBM ULT3580-TD5 SCSI Seq. Device' with an up-to-date driver. The details start on page 23 of the document.

Fixing the driver, however, will probably have little effect on performance. You mentioned creating a bigger upload buffer, which can be helpful, but you need to understand the relationship between the cache and the upload buffer before making changes. The cache disk is equally important to good performance as the upload buffer. On page 5 of the whitepaper, there is a formula for determining proper upload buffer size. What the document doesn't state as clearly is that the cache should be 10% bigger than the upload buffer. The reason for this is that all data traffic goes through the cache, both upload and download. The upload buffer is sized to "catch" the initial rush of data that your clients send, but then the data is copied to the cache for upload. Because the gateway is rapidly copying data from the buffer to the cache, we recommend keeping these two resources on separate disk spindles to maximize performance.
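To illustrate that sizing rule, here is a minimal Python sketch. It only encodes the 10% rule of thumb quoted in this reply; the 150 GiB upload buffer in the example is a made-up figure, not a recommendation from AWS or Veeam.

# Rough sizing helper based on the rule of thumb above:
# keep the cache about 10% larger than the upload buffer,
# and place the two on separate disk spindles.
def recommended_cache_gib(upload_buffer_gib, headroom=0.10):
    # Cache sized `headroom` larger than the upload buffer.
    return upload_buffer_gib * (1 + headroom)

# Example: a hypothetical 150 GiB upload buffer would call for ~165 GiB of cache.
print(round(recommended_cache_gib(150)))  # 165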

You can also get more performance by adding vCPUs to the gateway. You currently have the minimum (4), but 16 vCPUs is usually needed to maximize upload throughput. You can see this and other performance recommendations here: https://docs.aws.amazon.com/storagegate ... ommon.html.

Lastly, the way you configure your jobs in Veeam can have a major impact in upload performance. Each virtual tape is limited to 30MB/s of upload throughput. To get up to the gateway maximum of 120MB/s, you need to have 4 virtual tapes running simultaneously. These can be 4 jobs each pointing to a single tape, or one job with multiple threads hitting multiple tapes. We have found that 3 tapes is the sweet spot for good performance, but feel free to experiment to find what works best in your environment.
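As a back-of-the-envelope check of those numbers, the sketch below assumes only the figures quoted in this reply (30 MB/s per tape, 120 MB/s per gateway) and ignores compression and protocol overhead:

# Aggregate upload throughput for N tapes streaming in parallel,
# capped at the per-gateway maximum quoted by AWS support.
PER_TAPE_MBPS = 30
GATEWAY_MAX_MBPS = 120

def aggregate_mbps(parallel_tapes):
    return min(parallel_tapes * PER_TAPE_MBPS, GATEWAY_MAX_MBPS)

for tapes in (1, 2, 3, 4):
    print(f"{tapes} tape(s): {aggregate_mbps(tapes)} MB/s")
# 1 tape(s): 30 MB/s ... 4 tape(s): 120 MB/s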
Unfortunately, the mentioned whitepaper does not have any info related to the 'unknown medium changer', so I suspect I should leave it as it is.
My second question was about restore speed, which according to the AWS documentation is limited to 20 MB/s. One more answer from AWS support below:
Thank you for contacting us back again.

Looking into this further, I see that under ideal conditions (robust hardware, robust bandwidth, good network flow, and large sequential reads being performed for the download) you could typically get up to 20 MB/s. Additionally, if the data is read randomly and in small amounts, you could see degraded performance. Robust hardware would include the following:

1) You could try adding high-performance disks such as solid-state drives (SSDs) and an NVMe controller, which could help you achieve the maximum download rate. NVMe helps with latency and IOPS. Improved disk performance generally results in better throughput and more input/output operations per second (IOPS) [1].

2) As mentioned by the previous engineer, you could try adding more vCPUs to the gateway. 16 vCPUs is usually needed to maximize upload throughput [1].

3) The default block size for the tape drive is 64 KB. You could also try to optimize the I/O by increasing the block size for the tape drive, for example to 128 KB, 256 KB, or 512 KB. The maximum usable value depends on the block size limit of your backup software. Changing this value could improve I/O performance [2].

4) Lastly, you could also try uploading the data to different tapes and dedicating a storage gateway to each tape. When you restore the data, you could then read simultaneously from all the tapes to which the data has been uploaded. This gives optimized performance, as you can read the data from different tapes at once.

Should you have more questions, please feel free to reach back to us.

So their suggestion was to deploy multiple tape gateways and restore data from multiple tapes at once to saturate the link speed. That is not so convenient when you are in a DR situation and have to restore a large amount of data :-)
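To put that in perspective, here is a rough restore-time estimate. It is only a sketch assuming the 20 MB/s per-tape restore limit mentioned above, one dedicated gateway per tape, and a hypothetical 5 TB restore; it ignores tape retrieval latency and any other overhead.

# Estimated restore time when reading several tapes in parallel,
# each through its own gateway, at ~20 MB/s per tape.
PER_TAPE_RESTORE_MBPS = 20

def restore_hours(data_gb, parallel_tapes):
    total_mb = data_gb * 1024
    return total_mb / (parallel_tapes * PER_TAPE_RESTORE_MBPS) / 3600

for tapes in (1, 2, 4):
    print(f"{tapes} tape(s): {restore_hours(5 * 1024, tapes):.1f} hours")
# 1 tape(s): 72.8 hours, 2 tape(s): 36.4 hours, 4 tape(s): 18.2 hours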

rgrds

Dariusz
patrickwilson412
Enthusiast
Posts: 38
Liked: 5 times
Joined: Apr 18, 2018 8:29 pm
Full Name: Patrick Wilson

[MERGED] Full Backup

Post by patrickwilson412 »

I ran my first tape backup to AWS yesterday. It was only a 6 GB job, but it took five hours to complete. Does anyone have suggestions for how I can speed that up?
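For comparison with the limits discussed earlier in this thread, a quick calculation of that job's effective throughput (a rough estimate only, ignoring metadata and session overhead):

# Effective throughput of a 6 GB job that took five hours.
data_mb = 6 * 1024      # ~6,144 MB
elapsed_s = 5 * 3600    # 18,000 seconds
print(f"{data_mb / elapsed_s:.2f} MB/s")  # ~0.34 MB/s, far below the 30 MB/s per-tape limit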
DGrinev
Veteran
Posts: 1943
Liked: 247 times
Joined: Dec 01, 2016 3:49 pm
Full Name: Dmitry Grinev
Location: St.Petersburg

Re: Full Backup

Post by DGrinev »

Hi Patrick,

Please review the discussion and the answers from AWS support above; they should answer your question. Thanks!