-
- Enthusiast
- Posts: 32
- Liked: never
- Joined: Nov 10, 2010 7:52 pm
- Full Name: RyanW
File to tape limitation
What's the file count limitation on this? I ran a tape job that backs up a bunch of small files from my SAN and eventually hit the 10 GB limit on the SQL Express DB. I ended up uninstalling Veeam, blowing away the bloated DB, and reloading Veeam just to get back on track.
What I find odd about this is that Backup Exec backed these same datasets up to tape no problem, using SQL Express as its DB as well. It had run for years and years, and when I retired that server I figured I'd just switch my tape jobs to Veeam. Not looking promising at the moment. My best guess is that cataloging all the files to the VeeamBackup DB the way it's being done is causing excessive bloat.
Any guidance here?
-
- Veteran
- Posts: 643
- Liked: 312 times
- Joined: Aug 04, 2019 2:57 pm
- Full Name: Harvey
Re: File to tape limitation
Hey Ryan,
OOC, what kind of file counts are you seeing for your job? We've been trying for months to nail down the expected usage for our F2T jobs (mostly we've just been living with Disk backups for File Restores), and we spawned a few million tiny files on some of our Linux servers for testing.
With our lab testing, we couldn't really get the DB to budge in a dramatic way, and we were dealing with about 6 million files.
I don't think the file size matters (maybe it does?), but we want to dive into this after getting some actual production numbers.
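Roughly, the generator looked like this (a minimal Python sketch of what we actually ran as shell commands; the directory and count are made up for illustration):

```python
# Sketch: spawn millions of tiny files in one flat directory for a
# file-to-tape lab test. Path and count are hypothetical.
import os

TEST_DIR = "/srv/f2t-lab"        # hypothetical Linux test directory
FILE_COUNT = 6_000_000           # roughly our lab scale

os.makedirs(TEST_DIR, exist_ok=True)
for i in range(FILE_COUNT):
    with open(os.path.join(TEST_DIR, f"f{i:08d}.dat"), "wb") as fh:
        fh.write(b"testdata")    # tiny 8-byte payload per file
```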
-
- Enthusiast
- Posts: 32
- Liked: never
- Joined: Nov 10, 2010 7:52 pm
- Full Name: RyanW
Re: File to tape limitation
For this particular job, I think it was a couple million files, but TBH I'm not completely sure now, as it never finished: it filled VeeamBackup.mdf to 10 GB and basically killed my Veeam installation. I had to uninstall Veeam, delete the MDF and LDF, and reinstall. I tried to prune the dbo.Tape* tables myself and it really went sideways after that. LOL
-
- Product Manager
- Posts: 20415
- Liked: 2302 times
- Joined: Oct 26, 2012 3:28 pm
- Full Name: Vladimir Eremin
Re: File to tape limitation
In cases like yours (millions of source files) we recommend using a full-blown SQL Server. Thanks!
-
- Veteran
- Posts: 643
- Liked: 312 times
- Joined: Aug 04, 2019 2:57 pm
- Full Name: Harvey
Re: File to tape limitation
@Mindflux Thanks for the data point, though there must be something else that affects this, because our "lab" tests are not consistent with it. I'm well over a "couple" million files and SQL Express didn't bat an eye.
@veremin, does directory depth affect this calculation? I checked with my colleagues and we're 10 million files in without substantial impact on SQL Express. We'd like to move this to our production file shares going to tape, but the recommendations are completely inconsistent with our testing.
Otherwise, can you theorycraft why I'm only seeing DB growth in the MB range for 10 million files? We just used a bash one-liner to produce millions of 8-byte files in a flat directory; is it the single directory that keeps our growth so modest? I looped over all of these files with touch and ran incremental backups a few times, but I couldn't get the DB to budge in a way I'd consider significant (a few hundred MB at most).
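For reference, the touch pass between incremental runs was essentially this (a Python sketch of the loop, same hypothetical test directory as in my earlier post):

```python
# Bump every file's mtime so the next incremental run sees a new file
# version; the Python equivalent of looping over the files with touch.
import os

TEST_DIR = "/srv/f2t-lab"        # same hypothetical test directory
for name in os.listdir(TEST_DIR):
    os.utime(os.path.join(TEST_DIR, name))   # set atime/mtime to now
```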
-
- Product Manager
- Posts: 20415
- Liked: 2302 times
- Joined: Oct 26, 2012 3:28 pm
- Full Name: Vladimir Eremin
Re: File to tape limitation
File size doesn't matter, nor does directory depth. What really matters is the number of files and the number of file versions.
Say you back up 10 files today, 5 of which change daily; by the end of the week you will have 40 records in the product database:
10 (Monday) + 5 (changing files) × 6 (remaining days, i.e. file versions) = 40 entries
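In code form, the bookkeeping is simply this (a toy illustration of the formula above, not the actual product schema):

```python
# Toy illustration: one catalog record per file version.
initial_files = 10      # files backed up on Monday
changing_daily = 5      # files that change every day
later_days = 6          # Tuesday through Sunday

records = initial_files + changing_daily * later_days
print(records)          # 40
```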
It's impossible for 10 million files to have no impact on the product database; 10 million files occupy roughly 10 GB (the SQL Express limit).
Thanks!
-
- Enthusiast
- Posts: 32
- Liked: never
- Joined: Nov 10, 2010 7:52 pm
- Full Name: RyanW
Re: File to tape limitation
veremin wrote: ↑Nov 05, 2019 7:37 pm In cases like yours (millions of source files) we recommend using a full-blown SQL Server. Thanks!
That's not really a fix, though. It adds more to my infrastructure, and Backup Exec, as much as I dislike it, handled file to tape gracefully within SQL Express's limitations. Hopefully Veeam v10 fixes this.
-
- Enthusiast
- Posts: 32
- Liked: never
- Joined: Nov 10, 2010 7:52 pm
- Full Name: RyanW
Re: File to tape limitation
Just for the record, I did a file count on the data I was trying to back up... I was way off.
Looks like 24,519,684 files at this point. But this data is pretty static, so it's not like Veeam has to keep track of incremental changes or anything.
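Going by veremin's "10 million files occupy ~10 GB" figure above, which works out to roughly 1 KB of catalog data per file record (my assumption, not an official number), even a single pass over this data would blow past the cap:

```python
# Back-of-envelope catalog size for one full pass, assuming ~1 KB per
# file record (derived from the "10 million files ~ 10 GB" figure).
files = 24_519_684
bytes_per_record = 1024          # assumed average record size
print(files * bytes_per_record / 1024**3)   # ~23.4 GB vs. the 10 GB cap
```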
-
- Product Manager
- Posts: 14726
- Liked: 1707 times
- Joined: Feb 04, 2013 2:07 pm
- Full Name: Dmitry Popov
- Location: Prague
Re: File to tape limitation
Hello Ryan,
Unfortunately, we won't have any optimizations related to the amount of data processed by file to tape jobs when SQL Express is in use. A possible workaround for now would be to keep multiple file to tape jobs, each with a smaller amount of data (fewer than 1 million files each), but that would not solve the problem completely. I'll add your vote to this feature request, but for now the recommendation remains the same. Thank you!
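If it helps with the splitting, here is a rough sketch of grouping top-level folders into batches of fewer than 1 million files each, one batch per tape job (the share path is hypothetical, and a single folder larger than the limit would still need manual splitting):

```python
# Sketch: group top-level folders into batches of < 1M files, so each
# batch can be the source of its own file-to-tape job.
import os

ROOT = r"\\san\share"     # hypothetical source share
LIMIT = 1_000_000         # files per tape job, per the workaround above

def count_files(path):
    # Count all files under a folder, recursively.
    return sum(len(files) for _, _, files in os.walk(path))

batches, current, current_count = [], [], 0
for entry in sorted(os.listdir(ROOT)):
    folder = os.path.join(ROOT, entry)
    if not os.path.isdir(folder):
        continue
    n = count_files(folder)
    if current and current_count + n > LIMIT:
        batches.append(current)          # close the batch, start a new one
        current, current_count = [], 0
    current.append(folder)
    current_count += n
if current:
    batches.append(current)

for i, batch in enumerate(batches, 1):
    print(f"Tape job {i}: {len(batch)} folders")
```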