Host-based backup of VMware vSphere VMs.
backupquestions
Expert
Posts: 186
Liked: 21 times
Joined: Mar 13, 2019 2:30 pm
Full Name: Alabaster McJenkins
Contact:

Questions on my setup

Post by backupquestions »

My repository server is a physical machine running Windows Server 2016. The local disks on this system form a single volume that serves as the main (primary) Veeam repository. Backups for all VMs are kept here with 6 weeks of retention.

Next, I have a volume connected to this server via iSCSI from a SAN in another datacenter. This is my second Veeam repository, and it is formatted with ReFS. I have a backup copy job that contains just two VMs, both file servers. The job keeps 2 simple restore points in this repository, but it is also set for GFS with 6 monthly restore points, for the long-term file retrieval requests we get.

So here are my thoughts....

1. In my local disk repository, all my VMs, including these file servers, have only 6 weeks of retention. The 6 monthly GFS points for those file servers do not exist in this repo, so that specific data lives in only one place. I'm now thinking my setup may not be great and I should have the full retention in both repositories, which brings me to my next question.
2. Aside from not having the space for this on my main local disk repo, a regular Veeam backup job (not a backup copy job) has no GFS settings for keeping archival points. It seems you are only expected to keep a small window of backups on your first repo, OR you would literally have to set something like 180 restore points in the simple retention policy, and then I'd think you would need frequent synthetic fulls to keep the incremental chain from becoming huge (see the rough sizing sketch after this list)??
3. So is it typical for a company to set things up as I have, where the main repo holds only, say, 1 to 6 weeks of retention, and the GFS points go only to a dedupe appliance or whatever other repository they have? In other words, certain data is stored in only one place at a time?
4. Last question, and this is a different topic I guess. My main internal disk repo is formatted NTFS; it was set up before we got Veeam. I'd really love to change it to ReFS, but we are talking about 14TB of data, and I'd basically have to find some temporary storage, robocopy all of that data to it, reformat the drive, and then copy it all back... Yet we run daily backups to this repo, so during all the hours this project would take we would not be able to run our regular backup jobs (see the rough time estimate after this list).
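To put rough numbers on the concern in point 2, here is a minimal sizing sketch. The full backup size and daily change rate below are assumptions for illustration only, not measurements from this environment. The point it illustrates: on NTFS every synthetic or active full is stored as a separate full-size file, so keeping months of restore points on the primary repo multiplies the space needed, whereas on ReFS block cloning makes periodic fulls far cheaper.

```python
# Back-of-envelope repository sizing for point 2.
# All numbers are assumptions for illustration, not measurements.
full_tb = 2.0      # assumed size of one full backup (VBK) for the job
incr_ratio = 0.05  # assumed daily incremental (VIB) as a fraction of the full

def simple_chain_tb(restore_points: int, synthetic_full_every: int) -> float:
    """Rough on-disk footprint of a forward-incremental chain on NTFS,
    where every synthetic/active full occupies full size on disk."""
    fulls = max(1, restore_points // synthetic_full_every)
    incrementals = restore_points - fulls
    return fulls * full_tb + incrementals * full_tb * incr_ratio

# Current scheme: ~6 weeks of daily points, weekly synthetic fulls
print(f"42 points, weekly fulls  : {simple_chain_tb(42, 7):.1f} TB")
# Hypothetical 'keep everything on the primary' scheme from point 2
print(f"180 points, weekly fulls : {simple_chain_tb(180, 7):.1f} TB")
print(f"180 points, monthly fulls: {simple_chain_tb(180, 30):.1f} TB")
```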
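And for point 4, a quick estimate of how long the copy-out/copy-back round trip would keep the repository out of service. The throughput figure is purely an assumption; a test copy of a few hundred GB would give a real number for this environment.

```python
# Rough downtime estimate for the NTFS -> ReFS migration in point 4.
# The throughput figure is an assumption; measure your own with a test copy.
data_tb = 14
throughput_mbps = 400  # assumed sustained copy rate in MB/s

data_mb = data_tb * 1024 * 1024
one_way_hours = data_mb / throughput_mbps / 3600
print(f"Copy out : ~{one_way_hours:.0f} h")
print(f"Copy back: ~{one_way_hours:.0f} h")
print(f"Total    : ~{2 * one_way_hours:.0f} h (~{2 * one_way_hours / 24:.1f} days)")
```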
foggy
Veeam Software
Posts: 21069
Liked: 2115 times
Joined: Jul 11, 2011 10:22 am
Full Name: Alexander Fogelson
Contact:

Re: Questions on my setup

Post by foggy »

Hi Derek, your observations are mostly correct. Regarding the first three points in your post, keeping a smaller set of restore points on the primary repository and sending the archive to a dedupe device is actually a best practice described in our reference architecture. If your policies require a different RPO, you need to consider having two backup copy jobs, for example. As for the fourth point, I do not see a question there: yes, you would need to migrate the backups to reformat the disks, and you would not be able to run jobs to that storage while the procedure is underway. But for the ReFS benefits to take effect you would need to perform active full backups anyway, so you could temporarily re-point your jobs to another repository and then start everything from scratch on ReFS.
backupquestions
Expert
Posts: 186
Liked: 21 times
Joined: Mar 13, 2019 2:30 pm
Full Name: Alabaster McJenkins
Contact:

Re: Questions on my setup

Post by backupquestions »

Thanks foggy. Could I ask one more quick question?

My primary repo (the internal DAS on the server) has Windows deduplication enabled on the volume that holds Veeam's VBK and VIB files. I'm not sure it is helping me, because the space I'm saving is not enough to fit more than one full VBK anyway, and given the best practice you describe it seems I should not actually want to dedupe the Veeam backup files on this primary repo at all. Dedupe would be more useful on the secondary repo (but in my case that one is ReFS on Server 2016, so Windows dedupe doesn't apply there anyway).

So would I be better off just un-deduplicating all the Veeam files on my primary repo? I'm assuming the file-level restore browser would open much faster, and Instant VM Recovery would be faster as well.

*EDIT* One more note: would I see any difference in CBT, or in my backup copy jobs to the Cloud Connect repository, if I turn off Windows dedupe on the volume holding the Veeam backup files? I'm thinking it should have no effect on anything other than the speed of restores from the first repo, since the guest VMs themselves do not have Windows dedupe enabled.
foggy
Veeam Software
Posts: 21069
Liked: 2115 times
Joined: Jul 11, 2011 10:22 am
Full Name: Alexander Fogelson
Contact:

Re: Questions on my setup

Post by foggy »

Any read operations (especially random reads, like FLR/IR) will benefit from un-deduplicated backups.
backupquestions
Expert
Posts: 186
Liked: 21 times
Joined: Mar 13, 2019 2:30 pm
Full Name: Alabaster McJenkins
Contact:

Re: Questions on my setup

Post by backupquestions »

Thanks. OK, just to be sure, could you address my last note? If I turn off dedupe on the volume containing the Veeam backup files themselves (VBK, VIB), then I should not expect any change in my next backup job, or in my next backup copy job to the cloud, right? No huge CBT changes where my next cloud backup suddenly takes days, that kind of thing. Essentially the only expected difference is speedier restores.
foggy
Veeam Software
Posts: 21069
Liked: 2115 times
Joined: Jul 11, 2011 10:22 am
Full Name: Alexander Fogelson
Contact:

Re: Questions on my setup

Post by foggy » 1 person likes this post

Right, your understanding is correct.
backupquestions
Expert
Posts: 186
Liked: 21 times
Joined: Mar 13, 2019 2:30 pm
Full Name: Alabaster McJenkins
Contact:

Re: Questions on my setup

Post by backupquestions »

Thank you Foggy, appreciate the help!!
backupquestions
Expert
Posts: 186
Liked: 21 times
Joined: Mar 13, 2019 2:30 pm
Full Name: Alabaster McJenkins
Contact:

Re: Questions on my setup

Post by backupquestions »

OK, so I started an "unoptimize" job on the volume with my VBK/VIB files to begin rehydrating them, but it has caused a problem. Our Veeam backup copy job that copies this data to the cloud failed because the Windows dedupe service holds a lock on the files. I have no idea how long Windows will take to finish rehydrating; it is 14TB before dedupe and 9TB after. This could take hours or even days, and that would mean I couldn't even take regular daily backups to this primary repo during that time....

From googling so far, I'm reading that I can stop the rehydration job and start it again later and it will resume where it left off, but that means I only have a few hours per day, when neither a backup copy nor a regular backup job is running, during which I could let it rehydrate (rough estimate below).... So I might be stuck with dedupe on, I guess, not sure? Any ideas?
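A rough way to estimate how many of those nightly windows the unoptimization will need, assuming (purely as illustration) a rehydration rate and an idle window per day; both figures are guesses, not measured values:

```python
# Rough estimate of how many idle windows the dedup unoptimization needs.
# Rehydration rate and nightly idle window are assumptions for illustration.
logical_tb = 14          # size of the data once fully rehydrated
rate_mbps = 150          # assumed rehydration throughput in MB/s
idle_hours_per_day = 4   # assumed window when no backup/copy job runs

to_rehydrate_mb = logical_tb * 1024 * 1024  # worst case: rewrite all logical data
hours_total = to_rehydrate_mb / rate_mbps / 3600
days_needed = hours_total / idle_hours_per_day
print(f"~{hours_total:.0f} h of rehydration -> ~{days_needed:.0f} nightly windows")

# Note: rehydration also requires the volume to hold the full 14TB
# un-deduplicated, so free space, not just time, can be the limiting factor.
```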