tsightler
VP, Product Management
Posts: 6011
Liked: 2843 times
Joined: Jun 05, 2009 12:57 pm
Full Name: Tom Sightler

Import backup without catalog data

Post by tsightler »

Is there any way to import a backup without importing the catalog data? We recently had to import a fairly large (>1TB) backup with 23 VMs and 45 rollbacks. The actual import took maybe 10-15 minutes, but the system then spent the next 2-3 hours "stuck" on "Creating DB entries"; in reality it appeared to be importing all of the catalog information from the backups (I watched both the CPU usage of the catalog service and the directories being slowly created in VBRCatalog/Import). For a DR recovery, the catalog information is of no value; we just need to get the backup imported as quickly as possible so we can start the restores.
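For what it's worth, this is roughly how I was watching the catalog import progress -- just a quick, throwaway Python sketch, nothing official. The path is an assumption on my part; point it at wherever your VBRCatalog\Import folder actually lives:

```python
import os
import time

# Hypothetical path -- adjust to your actual VBRCatalog\Import location
IMPORT_DIR = r"C:\VBRCatalog\Import"

def dir_stats(path):
    """Best-effort count of files and total bytes under path."""
    count, size = 0, 0
    for root, _dirs, files in os.walk(path):
        for name in files:
            try:
                size += os.path.getsize(os.path.join(root, name))
                count += 1
            except OSError:
                pass  # files can appear/disappear mid-scan
    return count, size

prev = dir_stats(IMPORT_DIR)
while True:
    time.sleep(60)  # sample once a minute
    cur = dir_stats(IMPORT_DIR)
    print(f"files: {cur[0]} (+{cur[0] - prev[0]}), "
          f"MB: {cur[1] / 2**20:.0f} (+{(cur[1] - prev[1]) / 2**20:.1f})")
    prev = cur
```

Watching the growth rate there made it obvious the remaining hours were all catalog extraction rather than the backup import itself.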

If there is no other way, perhaps a checkbox in the Import wizard that simply allows you to disable the import of catalog data would be useful.
Gostev
Chief Product Officer
Posts: 31522
Liked: 6700 times
Joined: Jan 01, 2006 1:01 am
Location: Baar, Switzerland

Re: Import backup without catalog data

Post by Gostev »

Good idea! Thank you, Tom, for bringing this issue to our attention.
tsightler
VP, Product Management
Posts: 6011
Liked: 2843 times
Joined: Jun 05, 2009 12:57 pm
Full Name: Tom Sightler

Re: Import backup without catalog data

Post by tsightler »

Another option might be to simply "background" that portion of the process. This came up during a DR test over the holidays. We attempted to import the offsite backup into a newly created Veeam instance, and it worked fine, but it took far too long before we were able to start the restore process, which caused a significant finding in our DR plan audit. I wondered whether simply killing the Veeam Shell process and then starting it back up would have allowed us to continue.

We're actually now having to consider disabling the indexing altogether, because not only is the VBRCatalog HUGE (even with compression we're well over 80GB), but it also causes this import issue. The catalog is occasionally useful, but because of its size it's too slow to be of great value. The search server helps, but it adds a lot of overhead and we still find we don't really use it all that much. Really, the design of this component needs to be improved. I'd suggest storing the indexed files directly in the SQL database, linked to the backup jobs (most other backup software uses some form of database for the file index). That way duplicates could be removed: any given file would simply be linked to the backup jobs that contain it, and once those jobs were deleted, the file would be removed from the index.
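To make that concrete, here is a rough sketch of the sort of deduplicated index I have in mind -- purely hypothetical and heavily simplified (sqlite with table and column names I made up, not anything Veeam actually uses today):

```python
import sqlite3

# Hypothetical, simplified schema -- illustrative only
con = sqlite3.connect("file_index.db")
con.executescript("""
CREATE TABLE IF NOT EXISTS files (
    file_id INTEGER PRIMARY KEY,
    path    TEXT UNIQUE               -- each indexed file stored exactly once
);
CREATE TABLE IF NOT EXISTS jobs (
    job_id  INTEGER PRIMARY KEY,
    name    TEXT
);
CREATE TABLE IF NOT EXISTS job_files (  -- link table: which jobs contain which files
    job_id  INTEGER REFERENCES jobs(job_id),
    file_id INTEGER REFERENCES files(file_id),
    PRIMARY KEY (job_id, file_id)
);
""")

def delete_job(job_id):
    """Remove a job, then purge any files no longer referenced by any job."""
    con.execute("DELETE FROM job_files WHERE job_id = ?", (job_id,))
    con.execute("DELETE FROM jobs WHERE job_id = ?", (job_id,))
    con.execute(
        "DELETE FROM files WHERE file_id NOT IN (SELECT file_id FROM job_files)"
    )
    con.commit()
```

Deleting a job just drops its links and cleans up any file rows that no longer belong to anything, so the index only ever holds one row per unique file regardless of how many restore points reference it.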

Yes, this would require a lot of effort, but the current setup just doesn't scale very well at all.
