Hey,
maybe I missed it somewhere (hopefully not), but is it possible to cache the tape content on the tape proxy before it's written to the tape itself?
Situation:
We have two backup servers, one on-site and the other at a remote site. The tape library is connected to the remote backup server. I currently run tape jobs from the first backup server to the second one, writing directly to tape. The servers are connected with an 850 Mbit link, so the backup speed is slow compared to what the tape drive could do. And it's not healthy for the tape drive either - waiting for data, writing, waiting for data, writing...
And there are other jobs that also go over the site-to-site link and reduce the available bandwidth (storage replication, copy jobs of Veeam backups).
So, is it possible to cache an amount of data on the remote backup server large enough that the full tape can be written in one go?
LTO6 = 2.5 TB (160 MB/s)
line speed ~ 0.5 Gbit/s (50 MB/s)
cache? ~ 1.5 TB
So the tape job could start writing once the cache is full, and during the write process the cache would keep refilling continuously.
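For what it's worth, here is a rough back-of-the-envelope check of those numbers as a small Python sketch (my own assumptions: decimal units, the drive streaming a full tape at its native 160 MB/s, and the link delivering a steady ~50 MB/s during the write):

```python
# Rough cache sizing: how much data must be staged on the remote backup
# server so the drive can stream one full LTO6 tape without waiting for
# the site-to-site link. Decimal units assumed (1 TB = 1,000,000 MB).
TAPE_CAPACITY_MB = 2_500_000  # LTO6 native capacity, 2.5 TB
DRIVE_SPEED_MBS = 160         # LTO6 native write speed, MB/s
LINK_SPEED_MBS = 50           # effective site-to-site throughput, MB/s

write_time_s = TAPE_CAPACITY_MB / DRIVE_SPEED_MBS   # ~15,625 s (~4.3 h)
arrives_mb = write_time_s * LINK_SPEED_MBS          # ~781,250 MB delivered while writing
cache_needed_mb = TAPE_CAPACITY_MB - arrives_mb     # ~1,718,750 MB (~1.7 TB)

print(f"write time: {write_time_s / 3600:.1f} h")
print(f"cache needed up front: {cache_needed_mb / 1_000_000:.2f} TB")
```

So the cache would probably need to be closer to 1.7 TB than 1.5 TB to keep the drive streaming for a whole tape, and more if the link drops below 50 MB/s.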
regards
kl_rul
Re: Feature Request: Tape proxy for caching data
by Harvey Carel
Hey kl_rul (King K. Rool is how I want to read it)
I think you need to make a Backup Copy to the remote site first, then do a Tape Job from the Backup Copies. As long as the Tape Server and the Backup Copy Repository share the same server, it will negate the need to move data around servers.
Otherwise, I don't think Veeam currently has a way to accomplish what you're looking for, but I'm happy to be proven wrong.
