For the past few years we backed up our file servers using standard VMware backup, which worked well enough. However, the periodic health checks took days to complete, overlapping with the daily backups and causing us to miss those runs. We also ran into issues trying to locate individual files to restore from backups archived to tape without restoring the whole multi-TB job, which was very time-consuming.
We recently added the NAS Backup option to better handle our file backup situation, on the recommendation of a Veeam health check. I didn't know at the time that we couldn't send these jobs to tape, but I saw other forum discussions saying that's coming, which is good to hear.
We host multiple large Windows VM file servers. Right now there are 8, with anywhere from 5TB to 20TB each, around 90TB total. Our data consists of tens of millions of general office-type files. All VMs share the same Dell Compellent 8Gb fiber storage. The connection from vSphere to the physical PowerEdge R740xd repository is only 1Gb Ethernet.
At the moment, I've configured each VM to have its own file share backup job. Most file servers have one object (or folder), but we do have one VM with 10 objects/folders. Is this the recommended approach, or should I be combining these file shares into consolidated jobs?
I ask because the settings don't offer the granularity to stagger multiple backups during the day. If I set the jobs to run every 6 hours, all 8 jobs will run at the same time every 6 hours. It seems like it would be better to stagger them so they aren't all running at once and stressing the source and network.
Veeam reports that the bottleneck is currently the source. Also, I've noticed that since migrating to NAS Backup, I have had to increase the VM resources on the file servers. Generally we run 4 vCPU and 4GB RAM, but I have needed to increase the RAM to 6-8GB, and now I'm thinking about testing an increase to 6 vCPUs.
Re: Multiple file shares and performance. Do I keep the jobs separate or join them? Best practices?
For ease of management and reduced backup server resource consumption, it is best to group them all together. This will not change anything in terms of performance or source/backup storage load, because each file share will be processed with its own dedicated task either way.
There's no scheduling granularity because the Veeam scheduler controls this automatically and intelligently based on available resources (concurrent task slots on proxies and repositories). So with Veeam you should never worry about all jobs starting at once; in fact, that is the recommended approach. The scheduler will never run all the backups at the same time, but rather one by one (or a few at a time) depending on available task slots. And you control this concurrency by defining the available task slots.
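To make the task-slot idea concrete, here is a rough Python sketch (purely illustrative, not Veeam's actual code). All the "jobs" are queued at the same moment, but a fixed worker pool, standing in for the concurrent task slots on proxies and repositories, is what actually limits how many shares run at once. TASK_SLOTS and process_share are made-up names for the example.

```python
# Illustrative only: slot-based scheduling, not Veeam's implementation.
from concurrent.futures import ThreadPoolExecutor
import time

TASK_SLOTS = 4  # assumption: number of concurrent task slots available

def process_share(share: str) -> str:
    """Stand-in for backing up one file share; real durations vary per day."""
    time.sleep(1)
    return f"{share} done"

# e.g. one share per file server, all submitted at the same time
shares = [f"share-{n}" for n in range(1, 9)]

# The executor plays the scheduler: it drains the queue with only
# TASK_SLOTS workers, so nothing needs to be manually staggered.
with ThreadPoolExecutor(max_workers=TASK_SLOTS) as pool:
    for result in pool.map(process_share, shares):
        print(result)
```

Raising or lowering max_workers in the sketch is the analogue of changing the task slot count: concurrency is controlled in one place rather than through per-job start times.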
As for high file server load during backup, this is where the Backup I/O Control setting on the registered file share comes into play. You can lower it to reduce the file server load, at the cost of slower backups.
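The exact mechanics are internal to the product, but the general shape of such a control is a latency-driven backoff loop. A minimal sketch, assuming a configurable latency threshold and hypothetical current_source_latency_ms and read_next_chunk helpers:

```python
# Illustrative only: a latency-driven throttle in the spirit of
# Backup I/O Control, not its actual implementation.
import random
import time

LATENCY_LIMIT_MS = 20  # hypothetical threshold set on the file share

def current_source_latency_ms() -> float:
    """Stand-in for a real latency probe against the source storage."""
    return random.uniform(5.0, 40.0)

def read_next_chunk() -> bytes:
    """Stand-in for reading the next block of file data from the share."""
    return b"\x00" * 4096

chunks_remaining = 10
while chunks_remaining:
    if current_source_latency_ms() > LATENCY_LIMIT_MS:
        # Source is busy serving users: back off, then re-check.
        # Lowering the threshold throttles harder (slower backup).
        time.sleep(0.05)
        continue
    chunk = read_next_chunk()  # ...and ship it to the repository
    chunks_remaining -= 1
```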
While manual staggering is indeed the typical first reaction from folks coming from legacy backup solutions, it does not work well in real-world environments because it is impossible to say for certain how long each job will run on a given day (today a file share may have no changed files, and tomorrow thousands). With manual staggering you end up either wasting your backup window or with overlapping jobs, because the real world is never as ideal as a nicely staggered schedule.