
Best practice for ESXi Backup to Synology

Post by nwbc »

Hello,
I've been browsing the forum for a while and am still uncertain how to configure my backup the intended way. It works, but could probably be improved upon.

- Two ESXi servers with about 3 TB of VMs each
- A Windows Server 2016 machine that Veeam is installed on (8700K)
- A Synology NAS (non-ARM)

All hooked up to the same switch with a 1 Gb NIC each.

I use two separate daily incremental backup jobs, one for each ESXi server, saving all the VMs directly to the Synology (with the Veeam machine acting as proxy). Speeds are fine, but I would like to store the backups in a grandfather-father-son (GFS) pattern. Currently I have "Restore points to keep on disk" set to 14, which gives me a daily backup of the last 14 days, as expected.

Browsing for a solution, I found out about the backup copy job and ended up setting up a second repository on the same Synology, as I could not copy within the same repository. The copy job now indeed pulls the whole backup over to the Windows Server 2016 machine, only to send it back to the Synology. :roll:

I do not actually want to "copy" older restore points to a second destination; I just want to "keep" one full backup per month for six months in the original repository. I guess I can somehow add configuration to the original backup job to make this happen (like adding a secondary target?), but I have not yet figured out exactly how to do that correctly.

Can someone give me a hint, or a link where something like this is explained? The Veeam Help Center does not really help much, as the goal there is a different one, and the result, while working, does not seem like a viable solution for me.

Thanks for your time

Re: Best practice for ESXi Backup to Synology

Post by nwbc »

Since I can't edit my original post anymore:
Actually, after searching other topics, I guess I am already doing it the intended way. A GFS pattern is not going to be implemented in the backup job itself, as the primary backup job is not considered long-term backup storage, and it would go against the design, since having multiple copies of the backup is encouraged. Which, imho, is hiding the issue behind illogical arguments. Why? If I had the resources for a copy of my whole backup repository, I'd want two repositories with ALL the backups anyway, i.e. keeping backups from six months ago on my "local temporary" backup storage as well.

Not having an external secondary backup has nothing to do with being able to produce a GFS pattern in the first place. By that logic, you would not be allowed to keep 14 instead of 13 daily restore points, because you would have to have an external backup repository first. I don't want to be rude, I just totally can't accept the reasoning behind that.

...and the fact that this leads to admins moving files around manually or per script (robocopy etc.), bypassing the Veeam backup software, is not really how things should be done anyway...

Re: Best practice for ESXi Backup to Synology

Post by agoldenlife »

Hey nwbc,

I have been digging into Veeam for the last several months and have learned quite a bit. I don't think I am an expert, but hopefully I can help a bit.

Overall, the methodology I see in keeping with the 3-2-1 best practice (3 copies, 2 media types, 1 offsite) is that Veeam backup jobs are considered the 2nd copy, and to qualify for the 3rd copy, 2nd media type, and 1 offsite copy you need another target for archiving that data. That could be cloud, tape, rotated hard drives, etc. I hope this is clear.

So in the case of Veeam, you would use a backup copy job to make that 3rd copy. You might consider using the Windows Server (I am assuming it is a physical server) as your primary backup storage and then use a backup copy job to the Synology for the 3rd copy, though that still does not meet the offsite requirement.

Some other notes that I hope help:
* Synology non-ARM... yes! I wish I had that. Mine is the ARM version.
* Think of the restore point count as a minimum, so you can easily end up with more, but you will always have at least 14 good restore points.
* You can also add your Synology as a Linux backup repository server, which *may* help with performance.
* Be sure that you format the drives properly, as that also impacts backup performance. I am having to deal with a 4K cluster size and it is making things quite slow. :)

Hope that helps some.

Re: Best practice for ESXi Backup to Synology

Post by nwbc »

Hey agoldenlife,

As for the Synology, it is just a DS415+ (with 4 drives in RAID 5), but we wanted something that would allow us to do all sorts of stuff later on. I also played around with the Synology OS as a VM with passed-through drives/controllers for evaluation. Neat little boxes.

As for the Backup:
I am familiar with the 3-2-1 methodology. However, I would argue that a copy which does not contain the full backup history can't really be considered to match these best practices to the full extent. To each their own, but GFS is a well-known, widespread, much-requested backup pattern available in almost every single backup product.

I would want to set up a true, hassle-free version of 3-2-1:

1. The actual data.
2. The backup itself, directly produced by a backup job in Veeam, with monthly backups going back half a year.
3. A copy of that backup on a second destination, sent over the internet to another Synology.

However, I cannot create backup no. 2 without messing around and wasting space, network, and CPU resources. I can create a second repository on the same hardware, make a copy job, and in the end I will have data that somewhat resembles the GFS scheme, as instructed in this forum for everyone requesting GFS.

Like this:

1. The actual data.
"Not quite 2": a backup job with recent restore points (maybe about 14), but no older data.
"Not quite 3": another backup repository containing the older backups, which had to be moved there in a resource-wasting fashion (even though I did not want to move anything yet) and which lacks the newer backups.
4. An offsite copy that, by our requirements, would have to consist of 2 AND 3, requiring even more hassle, duplicate backup files, etc.

- This isn't any safer than what I proposed.
- All the data is still contained on the same drives.
- And in the same location (despite sitting in another folder/repository).

Why not just send the copy job to the Synology then? Because I want ALL of the backup local, and ALL of the backup offsite, and why wouldn't I if I have the space for it? I have not yet come across even one valid reason why there is no option to do this. The copy job to destination no. 3 should have nothing to do with achieving the desired backups in a configurable retention scheme anyway.

All this hassle is due to:
A: Copying files ≠ keeping files as they are.
B: Not being able to set a reasonable retention scheme in the backup job itself.
C: By no means would a newbie know how to set up GFS-like behaviour with Veeam without consulting manuals and forums. This is something most backup products can teach within 10 seconds via a single GUI panel built into the product itself.

In the end it is my fault for not validating that the software has the features we require. I set up a basic backup, played around with it, checked the restore process and was happy. Searching for GFS, I assumed it was "just a step ahead" and would only require some configuration but otherwise work as expected. Now I need to either fiddle around with files, creating scripts to move the desired backup files manually, or explain to my boss why I would need to buy several hundred euros' worth of network equipment just because I want to keep/copy a file within the same location.

Re: Best practice for ESXi Backup to Synology

Post by csydas »

Hey nwbc,

I don't recall it off the top of my head as I'm not in my office, but I believe Veeam has a registry key to allow you to use the same source/target repository for backup copies, and I believe they've promised more GFS-style retention on normal backup jobs in future releases. Ask support for the key.

As for your intended scheme, yeah, Veeam has its own way of doing things in its current state, and it's mostly due to its heavy reliance on transformative backups -- incremental backups and copies of those backups fall apart pretty quickly when the chain is constantly being updated at its base, the full backup (which is what reverse incremental and/or forever forward incremental do), so from a logistics perspective it makes sense why they do it the way they do.

However, if you just want a periodic monthly backup, why not a VeeamZip backup triggered by a simple PowerShell cmdlet in Task Scheduler? I know it's not ideal, but it gives you exactly what you're looking for without the additional resources you're talking about.
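
Just as a rough sketch of what I mean (not a tested config; the VM name and destination folder are placeholders you'd swap for your own, and it assumes the Veeam PowerShell snap-in is available on the backup server):

Code: Select all

# Rough sketch: a once-a-month VeeamZip of a single VM, run on the Veeam server.
# The VM name and destination folder are placeholders.
Add-PSSnapin VeeamPSSnapin

# Change this to the name of the VM to back up
$vm = Find-VBRViEntity -Name "MyImportantVM"

# Compression 5 = optimal; -AutoDelete Never keeps the resulting full until you remove it yourself
Start-VBRZip -Entity $vm -Folder "\\synology\backups\monthly" -Compression 5 -AutoDelete Never

Drop that into Task Scheduler with a monthly trigger and you get your one full per month without touching the main jobs.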

Backup copies are sort of weird, I'll grant you, but once you understand the design goal (move as little data as possible while still maintaining full backups off-site), it starts to make sense, what with the synthetic full creation voodoo.

Re: Best practice for ESXi Backup to Synology

Post by nwbc »

I like the backup copy job, as it makes it possible to upload backups with fairly limited upload speeds; Veeam uses the given resources well for this scenario, only needing to upload the changed data, even when creating new "full backups". I will certainly use it to move the backup to an off-site repository.

The VeeamZip option does not sound too thrilling. It would work, but it is no less of a hassle than messing with the copy job, it would require staff to know PowerShell if I am not around, and it would use CPU and network anyway to create the backup. At that point, it would be more useful to just use a script that copies the oldest full backup to another folder before it is consolidated, so no additional backup has to be created by VeeamZip. Whenever I change the setup of my jobs, I'd have to take care of the scripts and schedules as well.
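
Just to illustrate what I mean by the copy script (the paths are placeholders, and I'm aware that copying files out from under Veeam's retention isn't an officially supported workflow):

Code: Select all

# Sketch only: copy the oldest full backup (.vbk) of a job to an archive folder
# before retention merges or removes it. Both paths are placeholders.
$repo    = "\\synology\backups\Job1"
$archive = "\\synology\backups\Archive"

# Oldest full backup file still sitting in the job folder
$oldestFull = Get-ChildItem -Path $repo -Filter *.vbk | Sort-Object LastWriteTime | Select-Object -First 1

if ($oldestFull) {
    Copy-Item -Path $oldestFull.FullName -Destination $archive
}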

The registry key could indeed be "the thing"; however, I am not quite sure whether Veeam might get confused about which backups belong where... do you have experience with this key? I just fear that it isn't "integrated well enough" and will cause issues, etc. (unusual setup; I'd really want to keep it simple).

On to another idea, because I've been reading about ReFS a lot lately:
Our machines don't change much; day-to-day increments require a whopping 10-20 GB at the moment on NTFS (that is for about 20 VMs). Since I use Windows Server 2016 for Veeam, couldn't I just pop in a drive, create a ReFS repository, and let block cloning do its thing? I'd keep 60 restore points (with some fulls mixed in) and be done. I'd rather spend money on storage than fiddle around with scripts or registry keys that will sooner or later cause issues or be forgotten about. It would be a better use of resources anyway to store the backups on the machine that creates them.
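
I assume preparing such a volume would look roughly like this (disk number, drive letter and label are placeholders on my part; the 64 KB allocation unit is what is generally recommended for ReFS backup repositories):

Code: Select all

# Sketch: prepare a freshly added disk as an ReFS repository volume on the Veeam server.
# Disk number, drive letter and label are placeholders.
Initialize-Disk -Number 2 -PartitionStyle GPT
New-Partition -DiskNumber 2 -UseMaximumSize -DriveLetter R |
    Format-Volume -FileSystem ReFS -AllocationUnitSize 65536 -NewFileSystemLabel "VeeamRepo"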

I would have 32 GB of RAM available for an 8 TB ReFS repository. Any thoughts on that idea?

Re: Best practice for ESXi Backup to Synology

Post by csydas »

nwbc wrote: Sep 14, 2018 8:40 am The VeeamZip option does not sound too thrilling. It would work, but it is no less of a hassle than messing with the copy job, it would require staff to know PowerShell if I am not around, and it would use CPU and network anyway to create the backup.
[...]
The registry key could indeed be "the thing"; however, I am not quite sure whether Veeam might get confused about which backups belong where... do you have experience with this key?
[...]
Couldn't I just pop in a drive, create a ReFS repository, and let block cloning do its thing? I would have 32 GB of RAM available for an 8 TB ReFS repository. Any thoughts on that idea?

Why would it require staff to know PowerShell? You can throw together a script with a "UI" in an afternoon (if you have any interns or juniors needing practice, it's a fine project, though I realize not all shops have such people available). A simple script dropped into Task Scheduler would solve this fine, and an easy comment line of "Change this to the name of the VM to back up" should be enough. I'm also not quite getting the concern about network use, as you'd have that with robocopy too. The impact on production during a full backup, that I get, and I agree it's not ideal.
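
Registering it is a one-off job, something along these lines (script path, schedule and task name are just examples, not a tested config):

Code: Select all

# Sketch: run the VeeamZip script every 4 weeks via Task Scheduler.
# Script path, schedule and task name are placeholders.
$action  = New-ScheduledTaskAction -Execute "powershell.exe" -Argument "-ExecutionPolicy Bypass -File C:\Scripts\Monthly-VeeamZip.ps1"
$trigger = New-ScheduledTaskTrigger -Weekly -WeeksInterval 4 -DaysOfWeek Sunday -At "22:00"
Register-ScheduledTask -TaskName "Monthly VeeamZip" -Action $action -Trigger $trigger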

WRT the key, there is no issue in my experience -- it just suppresses the "target == source" check that the backup copy job does. That, plus the "Read full from source" option, avoids the synthetic process and more or less just does an active full of the backup job. Keep in mind, though, that you are still going to need the gateway server since you have your repository presented as CIFS. This is the major quirk in your setup: with a CIFS share, Veeam won't run its data movers on the repository itself.

Since you have a non-ARM Synology, you have the option of adding it as a Linux server, but you should be very sure you meet the system requirements, as moving data gets heavy at times, and in my experience the system requirements are not just a recommendation. If you configure your jobs to avoid synthetic operations, you can naturally lower the required resources quite a bit.

But right now you're running into a configuration issue more than anything -- Veeam has options to do what you want, but it will require a bit of restructuring of your setup.

On ReFS: if you're able and willing to put an 8 TB drive in the Veeam server, go for it, imo. That puts you much closer to the "ideal" way of doing backups, and from the ReFS repo you can backup-copy to the Synology. If you're willing to go this route (and I think it's a better idea than trying to put everything on one Synology device), definitely do it. Your available RAM should be more than enough for your data load.

Re: Best practice for ESXi Backup to Synology

Post by nwbc »

As for the extra PowerShell person, it was not meant as someone to just execute or keep the system running; if we use a schedule, no one would in theory need to touch it. We follow a model, however, where each task, even administrative ones, can be completely taken over by at least a second person. So if, for whatever reason, we need to change hardware or whatever, someone else has to be able to recreate everything that's needed to configure the backup the way it was before. Obviously that can be done with instructions and saved configs and commands, but then the person does not really know what they are doing and can't be held responsible or take over completely, etc. Just to mention, Veeam isn't the only software we use for backing up our machines, and every bit of simplification helps. This really is about keeping the system clean without too much customizing or unusual practices.

The physical location where I want the GFS copies is the same as the backup repository Veeam is saving the backups to. If I schedule a script that moves files, I would make it so that the machine that stores these files moves them, so there is no networking involved.

csydas wrote: Sep 16, 2018 8:31 am But right now you're running into a configuration issue more than anything -- Veeam has options to do what you want, but it will require a bit of restructuring of your setup.

Even with other hardware or another setup, the major problems, and the way GFS has to be set up in the first place, would stay the same; from what I know, resources would still not be used as they should.

csydas wrote: Sep 16, 2018 8:31 am On ReFS: if you're able and willing to put an 8 TB drive in the Veeam server, go for it, imo. That puts you much closer to the "ideal" way of doing backups, and from the ReFS repo you can backup-copy to the Synology. If you're willing to go this route (and I think it's a better idea than trying to put everything on one Synology device), definitely do it. Your available RAM should be more than enough for your data load.

I'll probably go that route, using either a backup job keeping about 60 days of restore points or a copy job (GFS), and finally moving this data over the WAN to an off-site location. It will work, but I expected more.

Thx for the help, the forum is very informative.