Host-based backup of VMware vSphere VMs.
sogapex
Enthusiast
Posts: 78
Liked: 12 times
Joined: Aug 21, 2018 5:33 am

Best way to achieve backup scenario

Post by sogapex »

Hello,

Here is what I would like to set up to back up our infrastructure (ideally):

3 backups during the day (10 am, 12 pm, 4 pm)
1 backup during the night
Keep these backups for 2 weeks

Keep one daily backup for 150 days
Keep monthly backups for 12 months
Keep yearly backups for 2 years


What is the best way to achieve this?
It seems I can't achieve this with only one job. If I need 2 jobs, I suppose the setup is "tricky" as far as CBT resets are concerned (maybe disable "reset CBT" for the second job and adjust the settings so that both jobs make their respective full backups on the same day?).

Thanks in advance
Mildur
Product Manager
Posts: 8735
Liked: 2294 times
Joined: May 13, 2017 4:51 pm
Full Name: Fabian K.
Location: Switzerland

Re: Best way to achieve backup scenario

Post by Mildur » 1 person likes this post

Hi sogapex

- CBT won't be an issue with multiple jobs. Multiple jobs can use the same CBT information stored in vSphere.
- CBT is only reset when doing an active full. Synthetic fulls won't cause a CBT reset.

You can solve your scenario with a backup job and a backup copy job.
3 backups during the day (10 am, 12 pm, 4 pm)
1 backup during the night
Keep these backups for 2 weeks
You can create one backup job and run it periodically every 2 hours. Use the schedule wizard to limit run times to 10 am-2 pm, 4 pm-6 pm and 10 pm-12 am; the job will only run during the windows you allow. That gives you the three daytime backups plus the one per night at 10 pm.
Configure the retention to two weeks.
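
If you want to sanity-check which start times such a schedule produces, here is a minimal sketch in Python (a simplified model of periodic scheduling restricted to permitted windows; the product's actual slot alignment may differ):

```python
# A job set to "run periodically every 2 hours" attempts to start on even hours;
# only attempts that fall inside a permitted window actually run.
windows = [(10, 14), (16, 18), (22, 24)]  # 10am-2pm, 4pm-6pm, 10pm-midnight

attempts = range(0, 24, 2)
runs = [h for h in attempts if any(start <= h < end for start, end in windows)]
print(runs)  # [10, 12, 16, 22] -> 10am, 12pm, 4pm, plus the 10pm night run
```
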
Keep one daily backup for 150 days
Keep monthly backups for 12 months
Keep yearly backups for 2 years
For this requirement, create a backup copy job running in "periodic copy" mode. Schedule the copy job to run after the 10 pm backup job session; periodic copy jobs only copy the latest restore point.
Configure the short-term retention to 150 days and the GFS retention to 12 monthly and 2 yearly. I also recommend considering weekly synthetic fulls.

Best,
Fabian
Product Management Analyst @ Veeam Software
sogapex
Enthusiast
Posts: 78
Liked: 12 times
Joined: Aug 21, 2018 5:33 am

Re: Best way to achieve backup scenario

Post by sogapex »

I already love your solution.
I just discovered that with V12 we can specify the retention in days instead of restore points, which is great.
I didn't know we could run a "periodic copy": this is also great.

Can you explain what this means: "Read the entire restore point from source backup instead of synthesizing it from increments"?

To be more specific about our case:
I am currently setting up 2 new backup servers to replace our current backup infrastructure:
- B&R server with 4x 20 TB HDDs (Windows Storage Spaces, "simple"), Windows Server 2019, NTFS
- the same physical machine but as a "hardened" Linux repository this time (XFS)

One full backup = about 1.5 TB.
With the new server, it takes less than 1 hour to back up all our VMs (less than 10 minutes for an incremental).
All production VMs are running on SSD datastores.

So it takes less time to do an active full backup (SSD to HDD, sequential write) than a synthetic full backup (random read/write on HDD only).
Currently we run one active full backup per week (we also run one full backup per night onto rotating removable drives with another job).
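
To put rough numbers on that intuition, here is the back-of-the-envelope model I have in mind (the throughput figures below are assumptions, not measurements):

```python
# An active full reads from fast SSD and writes sequentially to the HDD array;
# a synthetic full both reads and writes the same HDD array with random I/O.
full_gb = 1536     # ~1.5 TB full backup
seq_write = 450    # MB/s sequential write onto the HDD array (assumed)
rand_rw = 150      # MB/s effective random read+write on the same array (assumed)

active_min = full_gb * 1024 / seq_write / 60
synth_min = full_gb * 1024 * 2 / rand_rw / 60  # blocks are read and rewritten
print(f"active full ~ {active_min:.0f} min, synthetic full ~ {synth_min:.0f} min")
```
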

So, in the main job, can I keep doing one active full backup per week?
And in the copy job, do I have to enable something to get a "full backup"?
Another point: it seems we can't point the copy job at the same repo as the main job? In that case, can I create another repo on the same Windows volume especially for the copy job?

Finally: I would like to back up all of this to the hardened repo (maybe only the same data as the copy job; I suppose that would be sufficient).
So, is it possible to create a second "copy job" from the first "copy job"?
Mildur
Product Manager
Posts: 8735
Liked: 2294 times
Joined: May 13, 2017 4:51 pm
Full Name: Fabian K.
Location: Switzerland

Re: Best way to achieve backup scenario

Post by Mildur »

I just discovered that with V12 we can specify the retention in days instead of restore points, which is great.
I believe we have had that since version 10 for backup jobs, and version 11 for backup copy jobs.
Read the entire restore point from source backup instead of synthesizing it from increments
It makes an active full copy instead of a synthetic full copy.
- B&R server with 4x 20 TB HDDs (Windows Storage Spaces, "simple"), Windows Server 2019, NTFS
- the same physical machine but as a "hardened" Linux repository this time (XFS)
The Windows server should use ReFS. Then you can do spaceless and fast synthetic fulls (FastClone).
On NTFS, a synthetic full reads and copies blocks to create a new full backup file. On ReFS, existing blocks are only referenced, not copied to new blocks: less space is required, and the synthetic full is faster because no existing blocks have to be copied.
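
As a rough illustration of the space difference (the sizes are assumptions loosely based on the ~1.5 TB full mentioned in this thread, and a FastClone full still consumes some real space for changed blocks):

```python
full_tb, incr_tb = 1.5, 0.1      # assumed size per full / per increment
fulls_kept, incrs_kept = 4, 24   # e.g. ~4 weekly fulls with daily increments

ntfs = fulls_kept * full_tb + incrs_kept * incr_tb  # every synthetic full is materialized
refs = 1 * full_tb + incrs_kept * incr_tb           # later fulls mostly reference existing blocks
print(f"NTFS ~ {ntfs:.1f} TB, ReFS + FastClone ~ {refs:.1f} TB")  # ~8.4 TB vs ~3.9 TB
```
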
So, in the main job, can I keep doing one active full backup per week?
Yes; it doesn't matter as long as you are on NTFS. On ReFS, I would do synthetic fulls.
And in the copy job, do I have to enable something to get a "full backup"?
When you configure GFS retention, those restore points will automatically be full backup files.
They will be synthetic full restore points if you leave "Read the entire restore point from source backup instead of synthesizing it from increments" disabled.
I would do synthetic fulls if you copy them to the hardened repository. Again, FastClone with XFS :) It gives you spaceless full backups.
Another point: it seems we can't point the copy job at the same repo as the main job? In that case, can I create another repo on the same Windows volume especially for the copy job?
Correct, copy jobs copy to other repositories. You can create a new repository on the same server; it just has to point to a different folder.
Finally: I would like to back up all of this to the hardened repo (maybe only the same data as the copy job; I suppose that would be sufficient).
So, is it possible to create a second "copy job" from the first "copy job"?
V12 won't allow you to do a backup copy from a backup copy. We will bring that back in a future patch/version, but there is no ETA yet.

Best,
Fabian
Product Management Analyst @ Veeam Software
sogapex
Enthusiast
Posts: 78
Liked: 12 times
Joined: Aug 21, 2018 5:33 am

Re: Best way to achieve backup scenario

Post by sogapex »

Does the copy job always use the original job's repo as its source? Or does it sometimes read from the production datastore (for example, to make an active full)?

ReFS: I have read in several places that it is not as reliable as NTFS. Maybe this is no longer relevant (or not relevant in our case)? For example: performance degradation over time? Restore performance compared to NTFS?

I could set up the server again if needed.
What would be the better way to make use of our 2 new physical backup servers,
taking into account the needs from the first post of this topic (4 backups per day for 2 weeks, keep one backup per day for 150 days, monthly backups for 12 months and yearly for 2 years)?
Mildur
Product Manager
Posts: 8735
Liked: 2294 times
Joined: May 13, 2017 4:51 pm
Full Name: Fabian K.
Location: Switzerland

Re: Best way to achieve backup scenario

Post by Mildur »

Does the copy job always use the original job's repo as its source? Or does it sometimes read from the production datastore (for example, to make an active full)?
Copy jobs don't read data from production storage. They read restore points from the source backup repository.
ReFS: I have read in several places that it is not as reliable as NTFS. Maybe this is no longer relevant (or not relevant in our case)? For example: performance degradation over time? Restore performance compared to NTFS?
I remember some bigger issues years ago; they were fixed by Microsoft. We have a long topic about it somewhere in the forums.
If you have an enterprise-grade RAID controller in your server, I recommend testing out ReFS.
What would be the better way to make use of our 2 new physical backup servers?
If you want to go with immutability, use at least one of them as a hardened repository with XFS.
I would set up the other server with Windows and a ReFS volume (64 KB cluster size) for the backup storage.

Best,
Fabian
Product Management Analyst @ Veeam Software
sogapex
Enthusiast
Posts: 78
Liked: 12 times
Joined: Aug 21, 2018 5:33 am

Re: Best way to achieve backup scenario

Post by sogapex »

This is a lot of questions, but:

How does the "copy job" make an "active full backup" from the source backup repo?
We don't have an enterprise-grade RAID controller (this is a consumer platform, using the Windows Storage Spaces feature to aggregate HDDs in simple mode, which is essentially a software RAID 0).
I am doing some tests on backup and restore times before making a definitive choice and replacing the old backup infrastructure.

If ReFS is OK, it would look like this:
main backup = what you said = 4 backups per day, retention set to 15 days (active full backup once per week; we are still a little conservative about synthetic fulls, because if one corruption occurs, everything after it is corrupt).
Then, on the same server: periodic backup copy (weekly synthetic full, 150 days, 12 monthly and 2 yearly GFS).
Then, on the Linux hardened repo: same as above = another backup copy job of the main job.
=> Is it possible to set up 2 backup copy jobs from the same main job?

Thanks a lot
sogapex
Enthusiast
Posts: 78
Liked: 12 times
Joined: Aug 21, 2018 5:33 am

Re: Best way to achieve backup scenario

Post by sogapex »

Here are some results from my tests:
NTFS file system
VM size = 1 TB (our largest VM)

restore with NBD mode from full backup = 38 min
restore with hot-add mode from full backup = 39 min

restore with NBD mode from a chain of 10 backup files = 49 min
restore with hot-add mode from a chain of 10 backup files = 39 min

I re-created the repository and job with ReFS yesterday.
I will post the ReFS times once I have enough data.
Mildur
Product Manager
Posts: 8735
Liked: 2294 times
Joined: May 13, 2017 4:51 pm
Full Name: Fabian K.
Location: Switzerland

Re: Best way to achieve backup scenario

Post by Mildur »

Hello sogapex

Thank you.
I really appreciate your testing. We are always glad to hear about real-world results and comparisons.
How does the "copy job" make an "active full backup" from the source backup repo?
It creates a new full backup from the blocks available on the source repository.
We don't have an enterprise-grade RAID controller (this is a consumer platform, using the Windows Storage Spaces feature to aggregate HDDs in simple mode, which is essentially a software RAID 0).
For ReFS you should use a stable storage configuration that can survive a power outage. A battery-backed write cache is a must. It works without one, but the chance of ending up with a corrupted ReFS volume is much higher.
main backup = what you said = 4 backups per day, retention set to 15 days (active full backup once per week; we are still a little conservative about synthetic fulls, because if one corruption occurs, everything after it is corrupt).
Then, on the same server: periodic backup copy (weekly synthetic full, 150 days, 12 monthly and 2 yearly GFS).
If you only do active fulls, you don't need ReFS. ReFS is for synthetic fulls.
=> Is it possible to set up 2 backup copy jobs from the same main job?
Yes :)

Best,
Fabian
Product Management Analyst @ Veeam Software
sogapex
Enthusiast
Posts: 78
Liked: 12 times
Joined: Aug 21, 2018 5:33 am

Re: Best way to achieve backup scenario

Post by sogapex »

It creates a new full backup from the blocks available on the source repository
So this is the same as a synthetic full then? (in the case of the copy job)
For ReFS you should use a stable storage configuration that can survive a power outage. A battery-backed write cache is a must. It works without one, but the chance of ending up with a corrupted ReFS volume is much higher.
This server is protected by its own UPS. And this is also why I want to set up the second backup server (the Linux one = both hardened and a "backup of the backup" in case something goes wrong with the first one).
If you only do active fulls, you don't need ReFS. ReFS is for synthetic fulls.
The "main" job's retention time is short, so there is no real benefit in using synthetic fulls to save space (we are talking about 3 full backups at most here).
But the copy job has a long retention time (2 yearly + 12 monthly + 150 daily = a lot of full backups if I set up one full backup per week => 35 fulls? I don't know if a weekly backup and a monthly backup can share the same file here).
I hope I understood correctly and that I will benefit from ReFS as far as the copy job is concerned (synthetic fulls here, if I don't tick the copy job's "Read the entire restore point from source backup instead of synthesizing it from increments" checkbox)?
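
My napkin math for the number of fulls on the copy repo (assuming weekly synthetic fulls; in reality a weekly full that also serves as a monthly GFS point would be counted once, so the real number should be a bit lower):

```python
import math

weekly_fulls_in_chain = math.ceil(150 / 7)  # fulls inside the 150-day short-term window
gfs_fulls = 12 + 2                          # monthly + yearly GFS points
print(weekly_fulls_in_chain + gfs_fulls)    # 36, close to my estimate of ~35
```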

If everything goes well, I will create another copy job to copy from the first backup server to the second one.

regards,
Michel
sogapex
Enthusiast
Posts: 78
Liked: 12 times
Joined: Aug 21, 2018 5:33 am

Re: Best way to achieve backup scenario

Post by sogapex »

Hi again,

here is some more data.

My hardened repository is up (it was a little complicated to set up with Ubuntu 22.04 LTS "minimized": Ethernet interface bonding was not working at installation time, and there was "no way" to set it up after that).
My new Veeam B&R server has replaced 90% of the old one.

I just "migrate" the "air proof" backup job from the old the new server yesterday.
This is a simple backup job doing a full backup on an sata 3.5" drive connected with usb to the B&R server (rotated backup = new drive everyday, formatted with default Windows server 2019 ntfs settings each time)
It worked "normally" this night with the new server except I get "wrong figures" in the email report
Here is the last "old" report (Windows Server 2012R2 / B&R V12):
[screenshot of the email report]

And here is the new one (Windows Server 2019 / B&R V12):
[screenshot of the email report]

When I look directly at the drive, the backup files weigh in at 1.36 TB (nothing else is on the drive; it was formatted yesterday).
How is it possible to read a "backup size" of 17.9 GB when the "Transferred" size is 1.4 TB?

I think the only difference between the 2 repos/jobs is "use per-machine backup files" for the new one.
sogapex
Enthusiast
Posts: 78
Liked: 12 times
Joined: Aug 21, 2018 5:33 am
Contact:

Re: Best way to achieve backup scenario

Post by sogapex »

Here are some results from my ReFS tests:
ReFS file system (64 KB cluster size)
VM size = 1 TB (our largest VM)

restore with NBD mode from full backup = 36 min
restore with hot-add mode from full backup = 34 min

restore with NBD mode from a chain of 10 backup files = 50 min
restore with hot-add mode from a chain of 10 backup files = 41 min

So far there is no significant change compared to NTFS, as far as I can see.

---------
HARDENED REPO

restore with hot-add mode from full backup = 46 min
restore with hot-add mode from full backup (W) = 60 min

Question: what does the "(W)" after "full" mean when I choose a restore point from the hardened repository?
Mildur
Product Manager
Posts: 8735
Liked: 2294 times
Joined: May 13, 2017 4:51 pm
Full Name: Fabian K.
Location: Switzerland

Re: Best way to achieve backup scenario

Post by Mildur »

Hello

Thanks for sharing your results.
When I look directly at the drive, the backup files weigh in at 1.36 TB (nothing else is on the drive; it was formatted yesterday).
How is it possible to read a "backup size" of 17.9 GB when the "Transferred" size is 1.4 TB?
That doesn't look right. You may want to open a support case; the logs should tell us why you see this huge difference.
Question: what does the "(W)" after "full" mean when I choose a restore point from the hardened repository?
"W" means it's tagged as a weekly backup. You must have configured GFS retention in your backup/backup copy job.
So this is the same as a synthetic full then? (in the case of the copy job)
Correct.
But the copy job has a long retention time (2 yearly + 12 monthly + 150 daily = a lot of full backups if I set up one full backup per week => 35 fulls?)
I hope I understood correctly and that I will benefit from ReFS as far as the copy job is concerned (synthetic fulls here, if I don't tick the copy job's "Read the entire restore point from source backup instead of synthesizing it from increments" checkbox)?
Yes, such a job will benefit from the ReFS or XFS filesystems.
I don't know if a weekly backup and a monthly backup can share the same file here
Weekly and monthly backups scheduled for the same day will use the (M) flag; there is no need to flag a point as weekly when you write a monthly backup that is kept much longer. But in your example you haven't enabled the weekly flag, so there won't be any weekly restore points to share a file with the monthlies anyway.
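
A toy illustration of that precedence (I am assuming yearly outranks monthly in the same way; the product may not implement it exactly like this):

```python
def gfs_flag(weekly, monthly, yearly):
    # The longest-lived flag wins when several GFS schedules hit the same day.
    for flag, hit in (("Y", yearly), ("M", monthly), ("W", weekly)):
        if hit:
            return flag
    return None

print(gfs_flag(weekly=True, monthly=True, yearly=False))  # 'M', not 'W'
```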

Best,
Fabian
Product Management Analyst @ Veeam Software
sogapex
Enthusiast
Posts: 78
Liked: 12 times
Joined: Aug 21, 2018 5:33 am

Re: Best way to achieve backup scenario

Post by sogapex »

That doesn't look right. You may want to open a support case; the logs should tell us why you see this huge difference.
This is a known visual bug; it will be fixed in the next patch.
"W" means it's tagged as a weekly backup. You must have configured GFS retention in your backup/backup copy job.
Understood. And yes, on the hardened repository I added some weeklies to the GFS retention (100 days, 25 weekly, 12 monthly).
sogapex
Enthusiast
Posts: 78
Liked: 12 times
Joined: Aug 21, 2018 5:33 am

Re: Best way to achieve backup scenario

Post by sogapex »

Hi, here is some more data: today I ran a restore test to get my main server + SQL server + 2x AD servers "online" (isolated network).
4 VMs, 9 virtual disks
Restore size = 299.4 GB + 391.4 GB + 144.1 GB + 54.5 GB + 40.5 GB + 195.8 GB + 108.4 GB + 22.5 GB + 23.3 GB = 1279.9 GB
One target server and 2 datastores (both NVMe SSD)
Restore point selected from a full backup to get the maximum restore speed
Backup source = 4x 20 TB Seagate Exos (soft RAID 0, ReFS)
Job started at 07:35
Job finished at 08:10
Avg restore speed = 1279.9 GB in 35 min = 624 MB/s

This is really great. It means we could restore the whole organisation in less than 1 hour (PROD = 3 target servers, 7 VMs, 1885 GB).
sogapex
Enthusiast
Posts: 78
Liked: 12 times
Joined: Aug 21, 2018 5:33 am

Re: Best way to achieve backup scenario

Post by sogapex »

Second test:
Same data, but the restore source = hardened repository (Linux, Ubuntu, XFS, 4x 20 TB Seagate Exos, soft RAID 0)
Restore point selected = most recent (17 "increments" away from the true full backup; there are some synthetic full backups in between)
Job started at 11:22
Job finished at 12:50 (not all VMs finished at the same time; this is the latest one)
Avg restore speed = 1279.6 GB in 88 min = 248 MB/s
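
For anyone double-checking my averages (I am treating GB as binary gigabytes here):

```python
def mb_per_s(gb, minutes):
    return gb * 1024 / (minutes * 60)

print(round(mb_per_s(1279.9, 35)))  # 624 -> full-backup restore from ReFS
print(round(mb_per_s(1279.6, 88)))  # 248 -> chain restore from the hardened repo
```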

This is also pretty good. Let's say less than 2 hours to restore all the VMs (and this is a "worst case" scenario).
sogapex
Enthusiast
Posts: 78
Liked: 12 times
Joined: Aug 21, 2018 5:33 am

Re: Best way to achieve backup scenario

Post by sogapex »

After some time fiddling with the jobs and parameters, our backup schedule is almost final now.
I guess I can say there is room for improvement on Veeam's side:
* Being able to trigger a "copy job" after another "copy job" would be great (next major Veeam version?)
* Since I have a job that I want to run 3 times a day and then once at night, it would be really, really great to be able to trigger a copy job after it, but only at night (currently we have to finely adjust launch times to reduce the total backup window, which is not really convenient: we can't know the exact duration of a particular job, and when there is a full backup the duration is not the same at all).
* And finally, for GFS there are yearly, monthly and weekly backups: I would like a "daily" GFS backup. In that case there would be no need to set up a "copy job" of the main job to achieve our objective (4 backups per day but keep only one daily backup for XX days).

With these 3 possibilities, my job list and configuration would be far simpler (no headache dispatching all the jobs), it would be more robust to changes in data volume, and my total backup window would be optimal (reduced to its minimum).
Currently I have 10 jobs and only 3 are chained (job2 is triggered after job1, and job3 after job2).
With the 3 suggestions it would be 9 jobs, 6 of them chained (3+3), and a total backup window reduced by 1h30 in the longest case (Saturday).

Thanks for providing us with the best tool to meet all our backup needs, the best way.
sogapex
Enthusiast
Posts: 78
Liked: 12 times
Joined: Aug 21, 2018 5:33 am

Re: Best way to achieve backup scenario

Post by sogapex »

Hi,
new improved scheduling (after reading @Hannes' comment on this topic: veeam-backup-replication-f2/v13-wish-list-t86471.html):
3x prod backups during working hours (job "prod1") => every day, but only once on Saturday and Sunday => keep 15 days
1x prod backup during the night (job "prod2") => every day except Monday (no backup for Sunday) => keep for 150 days + GFS = 30 weekly, 12 monthly, 2 yearly
1x CopyHardened => mirrors the "prod2" job => keep for 100 days + GFS = 25 weekly, 12 monthly, 2 yearly
1x "tape" job (SATA drive on a USB3 interface) => runs after the "prod1" job (it would be better after "CopyHardened", but it is currently not possible to trigger a job after a copy job). With this trigger, the Friday "tape" receives 2 backups (the second one is not necessary, but it is not possible to tell the job not to run on specific days).

The "chaining" of jobs is far better like this.
First run this night =
prod2 (full) during 37min
CopyHardened launched 20min after the start of prod2, duration = 29min
TapeJob launched right after prod2 finished, duration = 59min (cheating here = sata ssd instead of sata hdd)
Total duration of the 3 jobs = 1h37min (2 full backups from production esxi + one copy from main backup infra to secondary backup infra)
So, during some time, Prod2 and CopyHardened run concurrently and then, TapeJob and CopyHardened run concurrently. (not a big deal in our case, but would be better with the possibility to run a job after a backup copy job. In such a case, it would be ideal to run CopyHardened after prod2 and then TapeJob after CopyHardened)
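
From those figures, the overall window works out like this (start offsets in minutes after prod2 begins; the reported 1h37 includes sub-minute rounding):

```python
# job name -> (start offset, duration) in minutes
jobs = {"prod2": (0, 37), "CopyHardened": (20, 29), "TapeJob": (37, 59)}
window = max(start + dur for start, dur in jobs.values())
print(window)  # 96 min, i.e. roughly the 1h37 total backup window
```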

PS: some people may wonder what the point is of chasing better times when these are already good. My answer: I can have people working from 6 am to 11 pm, and I need to find time for maintenance, software updates and so on. The shorter the backup window, the longer I can work without preventing users from working.
Mildur
Product Manager
Posts: 8735
Liked: 2294 times
Joined: May 13, 2017 4:51 pm
Full Name: Fabian K.
Location: Switzerland

Re: Best way to achieve backup scenario

Post by Mildur »

Hi Sogapex

I apologize for the late answer.
First, let me say thanks for your additional tests and for providing us feedback.
I will have to read through those numbers soon :)
* Being able to trigger a "copy job" after another "copy job" would be great (next major Veeam version?)
It's an open feature request. Unfortunately I cannot provide any ETA yet.
* Since I have a job that I want to run 3 times a day and then once at night, it would be really, really great to be able to trigger a copy job after it, but only at night (currently we have to finely adjust launch times to reduce the total backup window, which is not really convenient: we can't know the exact duration of a particular job, and when there is a full backup the duration is not the same at all).
Generally speaking, we don't recommend chaining backup jobs. If one job in the chain fails, subsequent jobs will fail too.
You can use chaining, but you need to monitor it closely :)
* And finally, for GFS there are yearly, monthly and weekly backups: I would like a "daily" GFS backup. In that case there would be no need to set up a "copy job" of the main job to achieve our objective (4 backups per day but keep only one daily backup for XX days).
Let's see if we get other requests for it. I don't remember any requests for daily GFS restore points before yours.

Best,
Fabian
Product Management Analyst @ Veeam Software
sogapex
Enthusiast
Posts: 78
Liked: 12 times
Joined: Aug 21, 2018 5:33 am

Re: Best way to achieve backup scenario

Post by sogapex »

Hi Fabian,

with the 2 "prod" backup jobs scheduling I set up (day and night), I think this is quite acceptable and it adresses most (all) "problems" :
- daily GFS backup = ok, this is the night job retention time
- the change of transport mode according to day/night = ok, day job = nbd and night job = hot-add.
- there is only one copy job now = no need to be able to trigger a copy job after another one
- the copy job only works by night since it copies the night job
Generally speaking, we don't recommend chaining backup jobs. If one job in the chain fails, subsequent jobs will fail too.
What? 😮
You mean that if my night prod job fails, the tape job will not be triggered at all? Is that really how it works?
I suppose this is not so serious, since both rely on the same production datastores: if the first job fails, there is a high chance the second would fail too.

Data from this night:
prod2 (inc): 7 min 32 s
CopyHardened launched 3 min after the start of prod2, duration = 14 min 15 s
TapeJob launched right after prod2 finished, duration = 2h07 (SATA HDD)
Total duration of the 3 jobs = 2h15 (1 full + 1 incremental backup from the production ESXi hosts + one copy from the main backup infra to the secondary backup infra)

I started working for this company 5 years ago: it has been a long road to get here. Back then, the backup window started at 8:30 pm and finished at 8 am! (only one full "tape" backup and one incremental backup to a NAS). And it was a lot longer at the weekend, when the NAS received the full backup (20+ hours...).
The Veeam backup server featured a Celeron CPU (a Dell T130 server). Roughly one in three backups failed each day...
Now the backup size has increased by 50%, there are more backups per day, more retention and better protection, and it takes far less time to process.
The restore options are also far better: before, it was only about being able to restore a file; now we can restore a whole VM if needed, and we can also restore all the VMs onto a "spare" server to test the whole thing (in a comfortable time = less than 4 hours).

Veeam B&R does the job well (I can even say very well, looking at the stats and job history).
But the job scheduling and the hardware have to be chosen and configured adequately.
When choosing my new Veeam server parts, I thought an 8c/16t CPU would be overkill, but in fact, for Veeam Backup for Microsoft 365 it is rather a requirement, and more importantly: when doing restores, the more horsepower you can provide, the better (single-thread and multi-thread). An AMD 7900X would suit this role perfectly.
1. Do not skimp on the Veeam server's CPU (and RAM: 32 GB if you also run Veeam Backup for Microsoft 365). A "server" CPU with a lot of cores but weak single-thread performance is not recommended (a physical server with a CPU turbo above 4.5 GHz is recommended; this is one reason why it is not always the best plan to make the Veeam server a VM on one of the production hosts).
2. One Linux proxy VM on each physical host (my Word "guide" for setting up a new Linux proxy is only 2 pages; it takes less than 30 min). It's free, it consumes only 24 GB of datastore space (put it on the fastest one), and it can increase your backup/restore speed by a lot.
3. 10 Gbps network between the hosts, the Veeam server and any physical device involved in the backup/restore process (1 Gbps is a "waste of time" when things get serious, even for a small company like ours with fewer than 100 people).
4. Spend some time fiddling with job scheduling and settings to find the best combo to achieve your backup objectives in the least amount of time. You will not regret it the day you have to restore urgently, or when you have to do maintenance outside working hours: it can be the difference between having to come in at the weekend or not 😄
