Hello community,
We have recently started backing up our file shares with "FileBackup" to a Linux hardened repository. The backup files are then written to tape. So far I have set up file shares with around 120 TB and 210 million files.
The Veeam index is written to the configuration database, which has grown to 250 GB in a relatively short time.
Our file share backup will ultimately cover 1 PB and around 800-900 million files. Since we have to keep our backups on tape for several years, the index will have to contain entries in the double-digit billion range. Our configuration database will therefore grow to several TB in the medium term. I don't know whether we will then run into any SQL Server limitations.
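To make the projection concrete, here is a rough back-of-envelope calculation. The 250 GB / 210 million figures are our own measurements from above; the number of retained tape copies per file is purely a guess for illustration:

```python
# Back-of-envelope estimate of catalog growth, assuming the catalog
# size scales roughly linearly with the number of indexed file versions.

observed_files = 210e6   # files indexed so far (measured)
observed_db_gb = 250     # resulting configuration DB size in GB (measured)

# Average catalog footprint per indexed file version (~1.2 KB).
bytes_per_entry = observed_db_gb * 1e9 / observed_files

target_files = 850e6     # midpoint of the expected 800-900 million files
copies_on_tape = 12      # ASSUMPTION: retained tape generations per file

# Total index entries across all retained tape copies (~10 billion).
index_entries = target_files * copies_on_tape

# Projected configuration database size in TB.
projected_db_tb = index_entries * bytes_per_entry / 1e12

print(f"{bytes_per_entry:.0f} bytes per indexed file")   # 1190
print(f"{index_entries / 1e9:.1f} billion index entries")  # 10.2
print(f"~{projected_db_tb:.1f} TB configuration database")  # ~12.1
```

With these assumptions the catalog lands around 12 TB, which is why I am worried about database limits.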
There are certainly some people out there who back up similarly large file shares. What are your experiences? What problems did you have and how did you solve them?
Veeam is a great product for VM backups, but at the moment I can't see how to make our file share backup work with Veeam at this scale.
Thanks and best regards
Andreas
- Novice
- Posts: 5
- Liked: 1 time
- Joined: Apr 24, 2024 7:51 am
- Full Name: Andreas Eder
- Product Manager
- Posts: 14816
- Liked: 1771 times
- Joined: Feb 04, 2013 2:07 pm
- Full Name: Dmitry Popov
- Location: Prague
Re: Issues with backing up huge File Shares
Hello Andreas,
Thank you!
To make sure we are on the same page: this applies only to file to tape and NAS backup to tape jobs. NAS backup to disk does not store any index in the catalog; instead it uses its own metadata files as index storage.

> The Veeam index is written to the configuration database, which has grown to 250 GB in a relatively short time.

This should not be an issue, but I'd recommend moving to PostgreSQL as the production database, ideally on a remote machine with SSD storage for the database itself.

> I don't know whether we will then run into any SQL Server limitations.

It's already possible to make the database more lightweight on the go, but it comes with a price. You can remove some tape media from the catalog, say, tapes that were archived under long-term retention. This removes all of their data from the catalog, but you lose the ability to quickly access the metadata for data stored on such media (i.e. it won't be possible to see the files on those tapes). The next time you restore from tapes removed from the catalog, you would need to import them back and catalog them again, which is a time-consuming procedure.

> What problems did you have and how did you solve them?

Thank you!
Re: Issues with backing up huge File Shares
Hello Dima,
- Yes, we are on the same page. I'm talking about "NAS backup to tape jobs".
- We considered moving the configuration database to a remote machine. What are the advantages of Postgres over MS SQL (apart from license costs)?
- Removing old tapes from the catalog sounds like a good idea.
Best regards
Andreas