We have 4 servers (Hyper-V hosts, referred to as "bare metal"). Each of these servers hosts a number of VMs (between 1 and …) and has an 8TB NVMe disk with live data and an 8TB SSD for backups. We also have Backblaze as an S3-compatible repository.
We want to keep the ability to move these VMs around. Ideally, each VM should be backed up to the backup disk on the same bare metal and afterwards uploaded to Backblaze.
Currently only one of the servers is configured. It uses a local repository and an external repository, both added to a scale-out repository. The console is installed within a VM, and an agent is installed on the bare metal.
Questions:
1) If I add two local repositories to the performance tier of a scale-out repository (placement policy "data locality") and a remote repository to the capacity tier, will both local repositories receive a copy of the data, or will only one hold the data?
2) What is the recommended architecture in our case? Our requirements:
- For errors within the VMs, local restore is preferred.
- For loss of a bare metal, restore from backblaze is allowed.
- Local backups should not be duplicated across multiple servers.
Re: Multiple servers - how to arrange backups
Hi CoachR
Moving the VMs around will lead to new VM IDs. Each time you move a VM, the job will create a new backup chain for that moved VM, which will affect storage usage on your local disks and on Backblaze. If you use a Hyper-V cluster or manage your hosts with SCVMM, the VM ID stays the same and no new backup chain will be created.
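If you want to sanity-check this in your environment, here is a rough Python sketch (not a Veeam API, just the Hyper-V PowerShell module called from Python) that compares VM IDs across two hosts; the host names are placeholders and it assumes admin rights on both hosts:

Code:
# Sketch: compare Hyper-V VM IDs before/after a move. Assumes admin rights
# and the Hyper-V PowerShell module on the hosts; host names are placeholders.
import subprocess

def get_vm_ids(host: str) -> dict:
    """Return {vm_name: vm_id} as reported by Get-VM on the given host."""
    ps = (
        f"Get-VM -ComputerName {host} | "
        "ForEach-Object { $_.Name + '|' + $_.VMId }"
    )
    out = subprocess.run(
        ["powershell", "-NoProfile", "-Command", ps],
        capture_output=True, text=True, check=True,
    ).stdout
    ids = {}
    for line in out.splitlines():
        if "|" in line:
            name, vm_id = line.split("|", 1)
            ids[name.strip()] = vm_id.strip()
    return ids

before = get_vm_ids("HV-HOST1")  # placeholder: host before the move
after = get_vm_ids("HV-HOST2")   # placeholder: host after the move
for name, vm_id in after.items():
    if name in before and before[name] != vm_id:
        print(f"{name}: VM ID changed, the job will start a new backup chain")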
Regarding question 1: only one of them. Data locality means that the full backup and its incremental backups are stored on the same extent.
Regarding question 2: restore from object storage is always possible in case you lose your backup server. Just power on a new backup server and add Backblaze again as a repository or capacity tier.
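Purely as an illustration, a minimal Python/boto3 sketch like the one below can confirm that the Backblaze S3-compatible endpoint and bucket are reachable before you re-add them in the console. This is plain S3 access, not a Veeam API, and the endpoint URL, bucket name, and keys are placeholders for your own values:

Code:
# Sketch: verify the Backblaze B2 S3-compatible endpoint is reachable before
# re-adding it as a repository / capacity tier. Plain boto3, not a Veeam API;
# endpoint, bucket, and credentials below are placeholders.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.us-west-004.backblazeb2.com",  # placeholder: your B2 region endpoint
    aws_access_key_id="YOUR_B2_KEY_ID",
    aws_secret_access_key="YOUR_B2_APPLICATION_KEY",
)

# List a few objects to confirm the bucket holding the existing backups is visible.
resp = s3.list_objects_v2(Bucket="veeam-offsite-backups", MaxKeys=5)
for obj in resp.get("Contents", []):
    print(obj["Key"], obj["Size"])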
My recommendation is to use a single dedicated repository server outside of your Hyper-V hosts. Managing the jobs so that they only write backups to the disks on the same host will work, but it will lead to a lot of active fulls, and with a new active full backup chain after each move you may run out of storage sooner rather than later.
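To put rough numbers on that, here is a back-of-the-envelope calculation; the 1 TB full size, 5% daily change rate, and one move per week are purely assumed figures:

Code:
# Back-of-the-envelope sketch: storage used over a 30-day retention window when
# every VM move forces a new active full, versus a single chain of incrementals.
# All figures (1 TB full, 5% daily change, a move every week) are assumptions.
FULL_TB = 1.0               # size of one full backup
CHANGE_RATE = 0.05          # daily incremental as a fraction of the full
RETENTION_DAYS = 30
MOVES = RETENTION_DAYS // 7  # one VM move (= one extra active full) per week

one_chain = FULL_TB + (RETENTION_DAYS - 1) * FULL_TB * CHANGE_RATE
with_moves = one_chain + MOVES * FULL_TB  # each move adds another full to keep

print(f"single chain:           {one_chain:.2f} TB")
print(f"with a move every week: {with_moves:.2f} TB")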
If a single repository server is not an option because you have already invested in the hardware, then I suggest creating a SOBR with data locality and adding the local disk from each of the Hyper-V hosts to it.
Manage the Hyper-V hosts as a cluster or with SCVMM to keep the same VM ID. When you move a VM to another host, its backups will still go to the original repository. With ReFS (a FastClone-aware repository) and synthetic full backups, you can save a lot of storage.
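As a quick check that the repository volume can actually use FastClone, a small sketch like this (Windows only; the drive letter is a placeholder) reports the file system of the backup drive:

Code:
# Sketch: confirm the backup volume is ReFS so synthetic fulls can use FastClone.
# Assumes a Windows repository server; "E:\\" below is a placeholder drive.
import ctypes

def filesystem_of(root: str) -> str:
    """Return the file system name (e.g. 'NTFS', 'ReFS') of a volume root like 'E:\\'."""
    fs_name = ctypes.create_unicode_buffer(64)
    ok = ctypes.windll.kernel32.GetVolumeInformationW(
        ctypes.c_wchar_p(root),
        None, 0,           # volume name buffer (not needed)
        None, None, None,  # serial number, max component length, FS flags
        fs_name, len(fs_name),
    )
    if not ok:
        raise ctypes.WinError()
    return fs_name.value

root = "E:\\"  # placeholder: the backup repository volume
fs = filesystem_of(root)
print(f"{root} file system: {fs}"
      + ("" if fs == "ReFS" else " (FastClone for synthetic fulls needs a block-clone capable file system such as ReFS)"))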
Best,
Fabian
Product Management Analyst @ Veeam Software