- Posts: 1
- Liked: never
- Joined: Nov 29, 2019 12:55 pm
- Full Name: Emanuele Gnali
We are copying a local 24 TB filesystem, consisting of about 12M files and 700K folders, to tape.
While the daily writing of increments to tape takes just a few minutes, it is always preceded by a "building protected object list" phase that usually lasts about 16 hours, despite reading from disk at a sustained 450 MB/s the entire time...
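For scale, the figures quoted above imply a very low enumeration rate. A quick back-of-the-envelope calculation (using the post's own numbers; the 16-hour figure is taken as exact for illustration):

```python
# Estimate the enumeration rate of the "building protected object list" phase,
# based on the figures in the post: ~12M files, ~700K folders, ~16 hours.
files = 12_000_000
folders = 700_000
scan_seconds = 16 * 3600  # 16 hours in seconds

objects_per_second = (files + folders) / scan_seconds
print(round(objects_per_second))  # roughly 220 objects enumerated per second
```

Around 220 objects per second is orders of magnitude below what a local filesystem can deliver for metadata enumeration, which suggests the bottleneck is in the job's list-building logic rather than in disk throughput.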
The same dataset is backed up from a NetApp storage appliance to local disk (at 120 MB/s, i.e. full GbE network throughput) with normal File Backup Job increments in under 30 minutes, and with no "building lists" phase whatsoever.
Unfortunately, for our archival purposes it is useless to copy the latter to tape, since that would record vblobs rather than directly restorable files, while a direct NAS to Tape backup is simply unfeasible, since it involves no intermediate "caching" of content to disk (for this reason we sync the NAS content to the backup server's local disks before the File to Tape job).
Are we hitting some misconfiguration?
Is some kind of optimization or option needed to achieve a working D2D2T procedure, as already exists for VMs?
- Chief Product Officer
- Posts: 31006
- Liked: 6413 times
- Joined: Jan 01, 2006 1:01 am
- Location: Baar, Switzerland
The current File to Tape engine is simply not designed for file backup at a large scale. It is basically a by-product of Backup to Tape jobs, which were designed to export a small number of very large files (image-level Veeam backups) to tape. But then we thought, why not enable users to copy some other [large] backups to tape too, and do it for free? This is how the current File to Tape jobs were born.
In V12, the File to Tape engine is completely redesigned for scalable file-level backup. Moreover, V12 also natively supports NAS Backup to Tape (directly restorable files, not vblobs), so there is no need for workarounds in the first place.