-
- Enthusiast
- Posts: 30
- Liked: never
- Joined: Jul 04, 2024 9:21 pm
- Contact:
Very small deployment
I have (1) Windows Server 2022 running (1) Hyper-V VM. That 1 VM is a critical production workload.
There are also 5 or 6 standard office computers that I would backup using the Veeam agent.
The Hyper-V VM is around 300GB, and the office computers are each around 150GB.
So, compared to a lot of you with hundreds of VMs and hundreds of TB of data, this setup here is tiny. I don't need blazing fast networking, or huge amounts of storage, or de-duplication, because I'm backing up so little that it shouldn't matter. Probably a monthly full, and a daily incremental for the VM and each workstation. I'm looking at less than 2TB for an entire month of backups.
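For what it's worth, the back-of-envelope math behind that "less than 2TB" estimate looks roughly like this (a minimal sketch; the ~50% data reduction and ~2% daily change rate are assumptions on my part, not measurements):

```python
# Back-of-envelope monthly backup size (all of these figures are assumptions)
source_gb = 300 + 6 * 150      # Hyper-V VM plus six office workstations
reduction = 0.5                # assume ~50% compression/dedup inside the backup job
daily_change = 0.02            # assume ~2% of the source data changes per day

full_gb = source_gb * reduction                        # one monthly full
incr_gb = source_gb * daily_change * reduction * 30    # ~30 daily incrementals

print(f"Full ~{full_gb:.0f} GB + incrementals ~{incr_gb:.0f} GB "
      f"= ~{(full_gb + incr_gb) / 1000:.1f} TB per month")
```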
Here is my plan, please let me know if this is a sane plan based on the information I provided above:
* Use Veeam Essentials with a 5-pack VUL
* Install Veeam as an all-in-one installation on Windows 11 Pro on a Dell Optiplex Workstation
* OS and Veeam applications will be on the built-in NVMe SSD
* The Veeam backup storage will be on (2) 22TB Seagate Exos SATA drives, mirrored as RAID-1 NTFS for a total of 22TB, inside the same workstation
* Standard 1Gb network
Again, I realize that this is far below what most of you are doing, but for my description above, is this a sane plan?
Thank you.
-
- Veeam Software
- Posts: 2741
- Liked: 630 times
- Joined: Jun 28, 2016 12:12 pm
- Contact:
Re: Very small deployment
Hi joloo,
Doing some quick napkin math on the data transfer, it looks like you'd have a backup window of about 4 hours, so if that works for you I think it should be pretty alright. Nothing seems too unusual here -- NTFS means no Fast Clone, so maybe consider reviewing whether you can work ReFS (or a small Linux machine for XFS) into your plan. The main benefits of XFS and ReFS are Fast Clone, which helps with space savings and also merge/synthetic full times. The merge/synthetic full times are where I would suggest testing in advance on the repository to get an idea of the random IO performance, just so you have a reasonable expectation of what you'll see with the repository.
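For reference, roughly the math I did (a minimal sketch; the ~110 MB/s effective throughput on 1 GbE and the overhead factor are assumptions, not measurements):

```python
# Rough backup-window estimate for a full pass over everything (assumptions, not measurements)
source_gb = 300 + 6 * 150     # VM plus six agent-backed workstations
throughput_mb_s = 110         # assume ~110 MB/s effective on a 1 Gbps link
overhead = 1.3                # assume ~30% for snapshots, compression, per-job startup

seconds = source_gb * 1000 / throughput_mb_s * overhead
print(f"Estimated window: ~{seconds / 3600:.1f} hours")
```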
You can use Diskspd to get an idea of the performance of your proposed repository. The Active Full test, the Synthetic Full/Merge test, and the Restore test should give you a pretty good preview of performance; test with about 50 GiB of data to get a "good" check.
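If it helps, here's a rough sketch of what those three test runs could look like (just an illustration -- the switch values are my assumptions for a setup this size, not official recommendations, and D:\Backups\testfile.dat is a placeholder path on the proposed repository volume):

```python
import subprocess

# Illustrative Diskspd runs against the proposed repository volume (path is a placeholder).
# -c50G  create a 50 GiB test file   -b512K block size   -d600 run for 10 minutes
# -Sh    disable software/hardware caching   -L collect latency statistics
target = r"D:\Backups\testfile.dat"

tests = {
    # Active Full: mostly sequential writes to the repository
    "active_full":     ["diskspd.exe", "-c50G", "-b512K", "-w100", "-d600", "-Sh", "-L", target],
    # Synthetic Full / Merge: mixed random read/write, roughly 50/50
    "synthetic_merge": ["diskspd.exe", "-c50G", "-b512K", "-w50", "-r", "-d600", "-Sh", "-L", target],
    # Restore: sequential reads back off the repository
    "restore":         ["diskspd.exe", "-c50G", "-b512K", "-w0", "-d600", "-Sh", "-L", target],
}

for name, cmd in tests.items():
    print(f"--- {name} ---")
    subprocess.run(cmd, check=True)   # review the MB/s, IOPS, and latency summary Diskspd prints
```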
Otherwise not sure I see any major issues immediately.
David Domask | Product Management: Principal Analyst
-
- Service Provider
- Posts: 598
- Liked: 150 times
- Joined: Apr 03, 2019 6:53 am
- Full Name: Karsten Meja
- Contact:
Re: Very small deployment
It is not 3-2-1. Where is the backup copy going? I hope offsite.
And why do you plan to back up your CRITICAL production workload only once a day? Is it possible to fill in a day of lost data by hand, and how much would that cost?
-
- Enthusiast
- Posts: 30
- Liked: never
- Joined: Jul 04, 2024 9:21 pm
- Contact:
Re: Very small deployment
david.domask wrote (Aug 15, 2024 9:41 am): NTFS means no Fast Clone, so maybe consider reviewing whether you can work ReFS (or a small Linux machine for XFS) into your plan. The main benefits of XFS and ReFS are Fast Clone, which helps with space savings and also merge/synthetic full times.
That would be a worthwhile upgrade. I will look into replacing Windows Pro with either Windows Server or a supported Linux distro to get ReFS/XFS while still keeping everything as an all-in-one setup.
Thank you!
-
- Enthusiast
- Posts: 30
- Liked: never
- Joined: Jul 04, 2024 9:21 pm
- Contact:
Re: Very small deployment
karsten123 wrote (Aug 15, 2024 10:46 am): It is not 3-2-1. Where is the backup copy going? I hope offsite. And why do you plan to back up your CRITICAL production workload only once a day?
The critical data on the VM is in MS-SQL. I am backing up SQL (using SQL backup scripts) every 2 hours and sending that backup to a local USB drive, to a network file share, and to an online storage provider. So data loss (at the time of failure) could potentially be 2 hours of data. That isn't terrible, but I might revisit it in the future.
Is it practical to run Veeam Backup multiple times per day? Could I run an incremental Veeam backup every 15 minutes? Or would that be way too many restore points?
-
- Veeam Software
- Posts: 2741
- Liked: 630 times
- Joined: Jun 28, 2016 12:12 pm
- Contact:
Re: Very small deployment
> Is it practical to run Veeam Backup multiple times per day? Could I run an incremental Veeam every 15 minutes? Or would that be way too many restore points?
Sure, it's quite fine and normal. The limiting factor here is more the backup environment itself (hardware, network, etc). I don't think anything except testing will tell you the answer on this, but if the environment can handle it, then there's no restriction from Veeam.
However, remember backups require snapshots, so every 15 minutes means a lot of workload for the host. With SQL, you can configure Transaction Log Backups with 15 minute intervals (and even lower if you like), and this will give you the Point in Time restore capabilities without having tons of snapshots. The log backups can be further copied with Backup Copy jobs as well to meet 3-2-1. Basically, the image level (VM backup) will ensure there's a full copy of the database backed up that the Transaction Log backups will be associated with.*
So I would consider that as opposed to doing backups every 15 minutes, even if they're likely to be fast incremental ones.
*Note: All Microsoft SQL database backups are done in Full mode, regardless of whether the Veeam backup is Full or Incremental. It might be a little confusing, but just understand that any backup of the VM itself with Application Aware Processing enabled will have a full _database_ backup within the backup file.
David Domask | Product Management: Principal Analyst
-
- Enthusiast
- Posts: 30
- Liked: never
- Joined: Jul 04, 2024 9:21 pm
- Contact:
Re: Very small deployment
david.domask wrote: remember backups require snapshots, so every 15 minutes means a lot of workload for the host.
Ok, that is a good point.
david.domask wrote: you can configure Transaction Log Backups with 15 minute intervals (and even lower if you like), and this will give you the Point in Time restore capabilities without having tons of snapshots.
So, Veeam would do this without a Hyper-V snapshot, instead of me scripting the SQL backup. That's good to know.
However, what about databases that are in simple mode? Simple mode doesn't use Transaction Logs. Can those also be backed up without a snapshot?
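For my own reference, here's a quick sketch of how I could check which recovery model each database is actually using (assuming Python with pyodbc; the server name and ODBC driver version are placeholders for my setup):

```python
import pyodbc

# Placeholder connection details; adjust server, driver, and authentication as needed.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=SQLVM01;DATABASE=master;Trusted_Connection=yes;"
)

# recovery_model_desc is FULL, SIMPLE, or BULK_LOGGED; only databases not in
# SIMPLE mode keep transaction logs that a log backup can capture.
for name, model in conn.execute(
    "SELECT name, recovery_model_desc FROM sys.databases ORDER BY name"
):
    print(f"{name}: {model}")
```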
david.domask wrote: Note: All Microsoft SQL database backups are done in Full mode ... any backup of the VM itself with Application Aware Processing enabled will have a full _database_ backup within the backup file
Ok, so just to confirm: every time I do an incremental VM backup, Veeam will trigger my SQL server to do an entire full backup, correct? Therefore, every incremental VM backup will be at least as large as the entire SQL data, correct?
-
- Veeam Software
- Posts: 2741
- Liked: 630 times
- Joined: Jun 28, 2016 12:12 pm
- Contact:
Re: Very small deployment
1. Aha, your idea isn't necessarily bad and I have seen it done in larger environments, but they usually have pretty heavy resources, and backing up the actual application data directly is typically the better choice.
2. Correct, Simple mode databases would not be eligible for Transaction Log backups. If the data is quite important, consider discussing with the database administrator about how they're handling log backups now and consider letting Veeam manage it.
3. Correct that it will be a full _database_ backup, but it will not necessarily be the same size as an Active Full backup. When I talk about database backups, I mean the types of backups SQL can natively perform. You will still only be backing up the data returned by CBT from the hypervisor, but for SQL purposes it's a Full backup. I'd suggest discussing this with your DBA a bit more and giving the linked article a read, but to be clear, no, you should _not_ expect SQL-related incremental backups to always be big.
David Domask | Product Management: Principal Analyst
-
- Enthusiast
- Posts: 30
- Liked: never
- Joined: Jul 04, 2024 9:21 pm
- Contact:
Re: Very small deployment
Ok, that is great information. I will continue backing up the application data throughout the day and then do daily VM backups with Veeam.
Now, regarding XFS: I have read up on it a bit today and it seems great. However, the Veeam Backup Server and the Console cannot be installed on Linux, right? Therefore an all-in-one install on Linux for the purpose of using XFS is not possible. I would have to put the Server and Console on Windows and the repository on Linux (if I wanted to use XFS). Or, I assume, I could use 1 single physical server with Proxmox (because Proxmox does physical disk passthrough), and install Windows Server in a VM for the backup server and Linux in a VM for the repo. Or I could just use 2 physical servers. Either way, I do like what I have read about XFS.
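If I do go the Linux repo route, from what I've read Fast Clone requires the XFS filesystem to be created with reflink enabled. A minimal sketch of how I'd prepare the mirrored volume (the device name and mount point are placeholders for my setup, and this would need to run as root):

```python
import subprocess

# Placeholder device and mount point for the mirrored 22TB pair (e.g. an mdraid mirror).
device = "/dev/md0"
mountpoint = "/mnt/veeam-repo"

# Fast Clone on Linux repositories relies on XFS reflinks, so the filesystem
# has to be created with reflink (and crc) enabled.
subprocess.run(["mkfs.xfs", "-b", "size=4096", "-m", "reflink=1,crc=1", device], check=True)
subprocess.run(["mkdir", "-p", mountpoint], check=True)
subprocess.run(["mount", device, mountpoint], check=True)
```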
-
- Enthusiast
- Posts: 30
- Liked: never
- Joined: Jul 04, 2024 9:21 pm
- Contact:
Re: Very small deployment
So, after the feedback I have received and some additional research, here is my new tentative plan. It jumped up quite a bit, but this seems to be the minimum way to get an XFS-formatted repository and ECC memory. Unfortunately, there seems to be no way to get ECC RAM without an enterprise-grade CPU.
1 single physical server:
* Veeam Essentials
* 12 Core (24 Thread) Xeon with 32GB ECC RAM running the Proxmox hypervisor.
**** Inside Proxmox, 2 VMs:
**** VM1 = 12GB RAM, 8 vCPU - Veeam Backup and Console on either Windows Pro or Windows Server Essentials
**** VM2 = 12GB RAM, 8 vCPU - Veeam Proxy and Repository on Linux with (2) 22TB Seagate Exos SATA drives in RAID-1, passed through to the VM and formatted as XFS
* Standard 1Gb network
Is this reasonable for my small workload? Is this overkill?
-
- Enthusiast
- Posts: 49
- Liked: 3 times
- Joined: Oct 24, 2018 6:15 pm
- Contact:
Re: Very small deployment
I'm also interested in this topic, as I am also managing Veeam installations with at most 5 VMs and a few workstations.
I currently use a single Windows Server for Backup, Console, Proxy and Repository. (In very small installations, just Windows 11 Pro as in the first post.)
I just had the idea that an alternative might be to install Linux on the physical server and use it directly as Proxy and Repo, then run a virtual machine inside of it (using virsh, i.e. libvirt, qemu and KVM), install Windows Pro in that VM, and use it for the Backup server and Console.
The Repo would then run "bare metal". Also, managing a single VM from the command line using virsh is not much of a hassle.
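Just to illustrate the idea, a minimal sketch of creating and then managing that single Windows VM from the Linux host (the VM name, sizes, ISO path, and os-variant are placeholders; assumes libvirt and virt-install are installed):

```python
import subprocess

# Placeholder values for the Windows VM that would host the Veeam server and console.
vm_name = "veeam-vbr"
windows_iso = "/var/lib/libvirt/images/windows.iso"

# One-time creation of the VM with virt-install (run as root or a libvirt-enabled user).
subprocess.run([
    "virt-install",
    "--name", vm_name,
    "--memory", "12288",       # MiB
    "--vcpus", "8",
    "--disk", "size=200",      # GiB, on the default storage pool
    "--cdrom", windows_iso,
    "--os-variant", "win11",   # adjust to an id your osinfo database knows
], check=True)

# Day-to-day management is then just a couple of virsh calls.
subprocess.run(["virsh", "start", vm_name], check=True)
subprocess.run(["virsh", "list", "--all"], check=True)
```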
-
- Influencer
- Posts: 12
- Liked: 5 times
- Joined: Nov 03, 2020 1:29 pm
- Full Name: Ryan
- Contact:
Re: Very small deployment
@joloo,
I'm not Proxmox proficient, so can't comment on that part. I would just add the following:
1) If your DR plan includes running the VM on the VBR server while the original hardware is repaired/replaced, make sure the proposed storage has enough IOPS for that (e.g. if the server has SSD and hits it hard and you plan to recover to the RAID 1 w/7200 RPM spinners you may have a problem).
2) Consider Wasabi as part of a SOBR for the VM, or, if you just want the SQL data offsite, back up the relevant part of the network file share you are already sending the SQL dumps to. Wasabi is cheap and can be configured as immutable.
3) Not sure if it's less expensive or not, but the W680 chipset supports unbuffered ECC with supported desktop chips. I have a few running specialised workloads with E cores disabled. You can get 8 fast "P" cores and ECC support and IPMI that way. Enough for a small server and the cheaper Xeons (unless you go used) tend to have very low frequencies which can hurt performance if you don't need all the cores or mass amounts of RAM (192 GB max) or RAM bandwidth (limited to 2 channels). See, e.g. https://www.supermicro.com/en/products/ ... ard/x13sae
-
- Enthusiast
- Posts: 30
- Liked: never
- Joined: Jul 04, 2024 9:21 pm
- Contact:
Re: Very small deployment
Entropy wrote: 1) If your DR plan includes running the VM on the VBR server while the original hardware is repaired/replaced, make sure the proposed storage has enough IOPS for that (e.g. if the server has SSD and hits it hard and you plan to recover to the RAID 1 w/7200 RPM spinners you may have a problem).
I never considered using the VBR server as a temporary VM hypervisor. That's an interesting idea though; it at least forces me to think about what I would want to do in the case of a hardware outage. I've been mainly focusing on virus/software outages.
Entropy wrote: 2) Consider Wasabi as part of a SOBR for the VM, or, if you just want the SQL data offsite, back up the relevant part of the network file share you are already sending the SQL dumps to. Wasabi is cheap and can be configured as immutable.
I've been using Backblaze B2, but I will definitely look into Wasabi... especially if VBR has built-in support for it.
Entropy wrote: 3) ...the W680 chipset supports unbuffered ECC with supported desktop chips. ... You can get 8 fast "P" cores and ECC support and IPMI that way.
Nice! Thank you for that. I thought that consumer (non-Xeon) CPUs did not support ECC. This is very interesting to find out; this would certainly be significantly less expensive than a Dell PowerEdge. I am going to look into it.
-
- Enthusiast
- Posts: 30
- Liked: never
- Joined: Jul 04, 2024 9:21 pm
- Contact:
Re: Very small deployment
...continued
Entropy wrote: I have a few running specialised workloads with E cores disabled.
Curious why you disable the E-cores. My understanding is that the hypervisor's kernel will assign P/E cores as needed automatically, so disabling the E-cores would simply remove additional processing power that could otherwise be used.
-
- Influencer
- Posts: 12
- Liked: 5 times
- Joined: Nov 03, 2020 1:29 pm
- Full Name: Ryan
- Contact:
Re: Very small deployment
joloo wrote: Curious why you disable the E-cores. It seems that disabling E cores would simply remove additional processing power that could potentially be used as needed.
The box is dedicated to a specialized CFD workload that scales across cores very poorly -- on a 2-channel RAM system, not past 6-8 cores. So, to ensure best performance, I disable the E-cores (which frees up some power budget for the P cores too).
This is the "Veeam in a box" approach. I don't have any statistics, but pretty common from what I gather @ small businesses.I never considered using the VBR server as a temporary VM Hypervisor. That's an interesting idea though, it at least forces me to think about what I would want to do in the case of a hardware outage. I've been mainly focusing on virus/software outages.
joloo wrote: I thought that consumer (non-Xeon) CPUs did not support ECC.
Historically there have been specialized cases within Intel's lineup where specific desktop-class chips support ECC on specific chipsets, e.g. I have an old Dell T330 with an i3-6100 running a small physical workload with unbuffered ECC.